Journal of Artificial General Intelligence (Sciendo RSS feed)
Journal page: https://sciendo.com/journal/JAGI
Publisher: https://www.sciendo.com
Cover image: https://sciendo-parsed.s3.eu-central-1.amazonaws.com/6472063b215d2f6c89db92b2/cover-image.jpg

What’s Next if Reward is Enough? Insights for AGI from Animal Reinforcement Learning
https://sciendo.com/article/10.2478/jagi-2023-0002 (published 2023-12-15)
Abstract: There has been considerable recent interest in the “Reward is Enough” hypothesis: the idea that agents can develop general intelligence even with simple reward functions, provided the environment they operate in is sufficiently complex. While this is an interesting framework for approaching the AGI problem, it also raises new questions: what kind of RL algorithm should the agent use? What should the reward function look like? How can the agent quickly generalize its learning to new tasks? This paper looks to animal reinforcement learning, both individual and social, to address these questions and more. It evaluates existing computational models and neural substrates of Pavlovian conditioning, reward-based action selection, intrinsic motivation, attention-based task representations, social learning, and meta-learning in animals, and discusses how insights from these findings can influence the development of animal-level AGI within an RL framework.
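
Note: the Rescorla-Wagner rule is the textbook error-driven model of Pavlovian conditioning of the kind surveyed above. The Python sketch below is a generic illustration of that rule, not code from the article.

    # Illustrative sketch only: the Rescorla-Wagner rule, a classic computational
    # model of Pavlovian conditioning in which learning is driven by the
    # prediction error between received and expected reward.
    def rescorla_wagner(trials, alpha=0.1):
        """trials: list of (stimuli_present, reward) pairs, e.g. (["light"], 1.0)."""
        V = {}  # associative strength per stimulus
        for stimuli, reward in trials:
            prediction = sum(V.get(s, 0.0) for s in stimuli)
            error = reward - prediction              # prediction error drives learning
            for s in stimuli:
                V[s] = V.get(s, 0.0) + alpha * error
        return V

    # Example: a light repeatedly paired with food acquires predictive value.
    print(rescorla_wagner([(["light"], 1.0)] * 20))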

Learning and decision-making in artificial animals
https://sciendo.com/article/10.2478/jagi-2018-0002 (published 2018-07-27)
Abstract: A computational model for artificial animals (animats) interacting with real or artificial ecosystems is presented. All animats use the same mechanisms for learning and decision-making. Each animat has its own set of needs and its own memory structure that undergoes continuous development and constitutes the basis for decision-making. The decision-making mechanism aims at keeping the needs of the animat as satisfied as possible for as long as possible. Reward and punishment are defined in terms of changes to the level of need satisfaction. The learning mechanisms are driven by prediction error relating to reward and punishment and are of two kinds: multi-objective local Q-learning, and structural learning that alters the architecture of the memory structures by adding and removing nodes. The animat model has the following key properties: (1) autonomy: it operates in a fully automatic fashion, without any need for interaction with human engineers; in particular, it does not depend on human engineers to provide goals, tasks, or seed knowledge, though it can operate either with or without human interaction; (2) generality: it uses the same learning and decision-making mechanisms in all environments (e.g. desert environments and forest environments) and for all animats (e.g. frog animats and bee animats); and (3) adequacy: it is able to learn basic forms of animal skills such as eating, drinking, locomotion, and navigation. Eight experiments are presented. The results obtained indicate that (i) dynamic memory structures are strictly more powerful than static ones; (ii) it is possible to use a fixed generic design to model basic cognitive processes of a wide range of animals and environments; and (iii) the animat framework enables a uniform and gradual approach to AGI, by successively taking on more challenging problems in the form of broader and more complex classes of environments.
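
Note: the Python sketch below illustrates the two ideas highlighted in this abstract, reward as the change in need satisfaction and a per-need (multi-objective) Q-learning update. It is a simplified, hypothetical rendering, not the paper's implementation; the class, method, and parameter names are invented.

    # Hypothetical, simplified sketch: each need drives its own Q-learning update,
    # and reward for a need is the change in that need's satisfaction level.
    import random

    class Animat:
        def __init__(self, needs, actions, alpha=0.1, gamma=0.9, eps=0.1):
            self.needs = dict(needs)            # e.g. {"energy": 0.5, "water": 0.8}
            self.actions = actions
            self.alpha, self.gamma, self.eps = alpha, gamma, eps
            self.Q = {}                          # (need, state, action) -> value

        def q(self, need, state, action):
            return self.Q.get((need, state, action), 0.0)

        def act(self, state):
            if random.random() < self.eps:
                return random.choice(self.actions)
            # serve the currently least-satisfied need
            need = min(self.needs, key=self.needs.get)
            return max(self.actions, key=lambda a: self.q(need, state, a))

        def learn(self, state, action, new_needs, next_state):
            for need, new_level in new_needs.items():
                reward = new_level - self.needs[need]   # change in need satisfaction
                best_next = max(self.q(need, next_state, a) for a in self.actions)
                td_error = reward + self.gamma * best_next - self.q(need, state, action)
                self.Q[(need, state, action)] = self.q(need, state, action) + self.alpha * td_error
            self.needs = dict(new_needs)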

Towards General Evaluation of Intelligent Systems: Lessons Learned from Reproducing AIQ Test Results
https://sciendo.com/article/10.2478/jagi-2018-0001 (published 2018-03-07)
Abstract: This paper attempts to replicate the results of evaluating several artificial agents using the Algorithmic Intelligence Quotient test originally reported by Legg and Veness. Three experiments were conducted: one using default settings, one in which the action space was varied, and one in which the observation space was varied. While the performance of freq, Q0, Qλ, and HLQλ corresponded well with the original results, the resulting values differed when using MC-AIXI. Varying the observation space seems to have no qualitative impact on the results as reported, while (contrary to the original results) varying the action space seems to have some impact. An analysis of the impact of modifying parameters of MC-AIXI on its performance in the default settings was carried out with the help of data mining techniques used to identify highly performing configurations. Overall, the Algorithmic Intelligence Quotient test seems to be reliable; however, as a general artificial intelligence evaluation method it has several limits. The test is dependent on the chosen reference machine and is also sensitive to changes in its settings. It brings out some differences among agents; however, since they are limited in size, the test setting may not yet be sufficiently complex. A demanding parameter sweep is needed to thoroughly evaluate configurable agents, which, together with the test format, further highlights the computational requirements of an agent. These and other issues are discussed in the paper along with proposals suggesting how to alleviate them. An implementation of some of the proposals is also demonstrated.
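
Note: the Algorithmic Intelligence Quotient of Legg and Veness is, in essence, a Monte Carlo estimate of an agent's expected reward over environments sampled from a reference machine, with simpler environments weighted more heavily. The sketch below shows that scoring loop schematically; sample_environment, program_length, and run_episode are placeholders, and this is not the reference implementation.

    # Schematic sketch of the AIQ scoring idea (not the reference implementation).
    def aiq_score(agent, sample_environment, program_length, run_episode, n=1000):
        total, weight_sum = 0.0, 0.0
        for _ in range(n):
            env = sample_environment()              # random program on the reference machine
            w = 2.0 ** (-program_length(env))        # Occam-style simplicity weight
            total += w * run_episode(agent, env)     # episode return in this environment
            weight_sum += w
        return total / weight_sum if weight_sum else 0.0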

Homeostatic Agent for General Environment
https://sciendo.com/article/10.1515/jagi-2017-0001 (published 2018-03-07)
Abstract: One of the essential aspects of biological agents is dynamic stability. This aspect, called homeostasis, is widely discussed in ethology and neuroscience, and was widely discussed during the early stages of artificial intelligence. Ashby’s homeostats are general-purpose learning machines for stabilizing essential variables of the agent in the face of general environments. However, despite their generality, the original homeostats could not be scaled up because they searched their parameters randomly. In this paper, we first redefine the objective of homeostats as the maximization of a multi-step survival probability, from the viewpoint of sequential decision theory and probability theory. We then show that this optimization problem can be treated using reinforcement learning algorithms with special agent architectures and theoretically derived intrinsic reward functions. Finally, we empirically demonstrate that agents with our architecture automatically learn to survive in a given environment, including environments with visual stimuli. Our survival agents can learn to eat food, avoid poison, and stabilize essential variables through a single, theoretically derived intrinsic reward formulation.
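
Note: the sketch below shows one simple way to turn survival probability into a per-step intrinsic reward: if multi-step survival is a product of per-step survival probabilities, maximizing it is equivalent to maximizing the sum of their logarithms. This is an illustration under that assumption, not necessarily the paper's theoretically derived formulation; the toy survival model is invented.

    # Illustration only: log survival probability as an intrinsic reward.
    import math

    def survival_prob(essential_vars, safe_ranges):
        """Toy model: survival chance shrinks as essential variables leave their safe range."""
        p = 1.0
        for name, value in essential_vars.items():
            lo, hi = safe_ranges[name]
            if lo <= value <= hi:
                continue
            deviation = min(abs(value - lo), abs(value - hi))
            p *= math.exp(-deviation)      # further outside the range -> lower survival chance
        return p

    def intrinsic_reward(essential_vars, safe_ranges):
        return math.log(max(survival_prob(essential_vars, safe_ranges), 1e-12))

    # Example: body temperature slightly out of range yields a negative reward.
    print(intrinsic_reward({"temperature": 39.5}, {"temperature": (36.0, 38.5)}))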

Learning and Reasoning in Unknown Domains
https://sciendo.com/article/10.1515/jagi-2016-0002 (published 2017-01-23)
Abstract: In the story Alice in Wonderland, Alice fell down a rabbit hole and suddenly found herself in a strange world called Wonderland. Alice gradually developed knowledge about Wonderland by observing, learning, and reasoning. In this paper we present the system Alice in Wonderland, which operates analogously. As a theoretical basis of the system, we define several basic concepts of logic in a generalized setting, including the notions of domain, proof, consistency, soundness, completeness, decidability, and compositionality. We also prove some basic theorems about those generalized notions. Then we model Wonderland as an arbitrary symbolic domain and Alice as a cognitive architecture that learns autonomously by observing random streams of facts from Wonderland. Alice is able to reason by means of computations that use bounded cognitive resources. Moreover, Alice develops her belief set by continuously forming, testing, and revising hypotheses. The system can learn a wide class of symbolic domains and challenge average human problem solvers in such domains as propositional logic and elementary arithmetic.

The Sigma Cognitive Architecture and System: Towards Functionally Elegant Grand Unification
https://sciendo.com/article/10.1515/jagi-2016-0001 (published 2017-01-23)
Abstract: Sigma (Σ) is a cognitive architecture and system whose development is driven by a combination of four desiderata: grand unification, generic cognition, functional elegance, and sufficient efficiency. Work towards these desiderata is guided by the graphical architecture hypothesis: that the key to progress on them is combining what has been learned from over three decades’ worth of separate work on cognitive architectures and graphical models. In this article, these four desiderata are motivated and explained, and then combined with the graphical architecture hypothesis to yield a rationale for the development of Sigma. The current state of the cognitive architecture is then introduced in detail, along with the graphical architecture that sits below it and implements it. Progress in extending Sigma beyond these architectures and towards a full cognitive system is then detailed in terms of both a systematic set of higher-level cognitive idioms that have been developed and several virtual humans that are built from combinations of these idioms. Sigma as a whole is then analyzed in terms of how well the progress to date satisfies the desiderata. This article thus provides the first full motivation, presentation, and analysis of Sigma, along with a diversity of more specific results that have been generated during its development.

Tra-la-Lyrics 2.0: Automatic Generation of Song Lyrics on a Semantic Domain
https://sciendo.com/article/10.1515/jagi-2015-0005 (published 2015-12-30)
Abstract: Tra-la-Lyrics is a system that generates song lyrics automatically. In its original version, the main focus was to produce text whose stresses matched the rhythm of given melodies. There was no concern about whether the text made sense or whether the selected words shared some kind of semantic association. In this article, we describe the development of a new version of Tra-la-Lyrics, where text is generated on a semantic domain defined by one or more seed words. This effort involved the integration of the original rhythm module of Tra-la-Lyrics in PoeTryMe, a generic platform that generates poetry with semantically coherent sentences. To measure our progress, the rhythm, the rhymes, and the semantic coherence of lyrics produced by the original Tra-la-Lyrics were analysed and compared with lyrics produced by the new instantiation of this system, dubbed Tra-la-Lyrics 2.0. The analysis showed that, in the lyrics generated by the new system, words have higher semantic association among themselves and with the given seeds, while the rhythm is still matched and rhymes are present. The previous analysis was complemented by a crowdsourced evaluation, where contributors answered a survey about relevant features of lyrics produced by the previous and the current versions of Tra-la-Lyrics. Though the margin was tight, the survey results confirmed the improvements of the lyrics produced by Tra-la-Lyrics 2.0.
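
Note: the following toy Python sketch illustrates the kind of constraint a rhythm module like the one above enforces, choosing words whose stress patterns line up with the strong beats of a melody. The tiny hand-coded lexicon and the greedy matching are invented for illustration and are not the system's actual algorithm.

    # Toy illustration of stress-to-beat matching; 1 marks a stressed syllable
    # or a strong beat, 0 an unstressed one.
    LEXICON = {
        "moonlight": [1, 0], "river": [1, 0], "away": [0, 1],
        "tonight": [0, 1], "golden": [1, 0], "believe": [0, 1],
    }

    def fill_line(beat_pattern, lexicon=LEXICON):
        """Greedily cover the beat pattern with words whose stresses match it."""
        line, i = [], 0
        while i < len(beat_pattern):
            for word, stresses in lexicon.items():
                if beat_pattern[i:i + len(stresses)] == stresses:
                    line.append(word)
                    i += len(stresses)
                    break
            else:
                i += 1  # no word fits here; skip this beat
        return " ".join(line)

    print(fill_line([1, 0, 0, 1, 1, 0]))  # prints a word sequence whose stresses follow the beats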

Choosing the Right Path: Image Schema Theory as a Foundation for Concept Invention
https://sciendo.com/article/10.1515/jagi-2015-0003 (published 2015-12-30)
Abstract: Image schemas are recognised as a fundamental ingredient in human cognition and creative thought. They have been studied extensively in areas such as cognitive linguistics. With the goal of exploring their potential role in computational creative systems, we study the viability of formalising image schemas as a set of interlinked theories. We discuss in particular a selection of image schemas related to the notion of ‘path’, and show how they can be mapped to a formalised family of microtheories reflecting the different aspects of path following. Finally, we illustrate the potential of this approach in the area of concept invention, namely by providing several examples illustrating in detail how formalised image schema families support the computational modelling of conceptual blending.

On Mathematical Proving
https://sciendo.com/article/10.1515/jagi-2015-0007 (published 2015-12-30)
Abstract: This paper outlines a logical representation of certain aspects of the process of mathematical proving that are important from the point of view of Artificial Intelligence. Our starting point is the concept of proof-event, or proving, introduced by Goguen, instead of the traditional concept of mathematical proof. The reason behind this choice is that, in contrast to the traditional static concept of mathematical proof, proof-events are understood as processes, which enables their use in Artificial Intelligence in contexts in which problem-solving procedures and strategies are studied. We represent proof-events as problem-centered spatio-temporal processes by means of the language of the calculus of events, which adequately captures certain temporal aspects of proof-events (i.e. that they have history and form sequences of proof-events evolving in time). Further, we suggest a “loose” semantics for proof-events by means of Kolmogorov’s calculus of problems. Finally, we present the intended interpretations of our logical model from the fields of automated theorem proving and Web-based collective proving.

From Distributional Semantics to Conceptual Spaces: A Novel Computational Method for Concept Creation
https://sciendo.com/article/10.1515/jagi-2015-0004 (published 2015-12-30)
Abstract: We investigate the relationship between lexical spaces and contextually-defined conceptual spaces, offering applications to creative concept discovery. We define a computational method for discovering members of concepts based on semantic spaces: starting with a standard distributional model derived from corpus co-occurrence statistics, we dynamically select characteristic dimensions associated with seed terms, and thus a subspace of terms defining the related concept. This approach performs as well as, and in some cases better than, leading distributional semantic models on a WordNet-based concept discovery task, while also providing a model of concepts as convex regions within a space with interpretable dimensions. In particular, it performs well on more specific, contextualized concepts; to investigate this, we move beyond WordNet to a set of human empirical studies in which we compare output against human responses on a membership task for novel concepts. Finally, a separate panel of judges rates both model output and human responses, showing similar ratings in many cases, along with some commonalities and divergences that reveal interesting issues for computational concept discovery.
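
Note: the Python sketch below is a minimal rendering of the general recipe this abstract describes: start from a term-by-context co-occurrence matrix, select the dimensions most characteristic of the seed terms, and rank other terms within that subspace. It is not the authors' implementation; the selection and scoring heuristics are simplified assumptions.

    # Minimal sketch of concept discovery via a seed-selected subspace.
    import numpy as np

    def concept_members(matrix, vocab, seeds, n_dims=50, n_members=20):
        """matrix: (len(vocab), n_contexts) co-occurrence counts; seeds: seed words."""
        index = {w: i for i, w in enumerate(vocab)}
        seed_rows = matrix[[index[s] for s in seeds]]
        # characteristic dimensions = contexts with the highest total seed weight
        dims = np.argsort(seed_rows.sum(axis=0))[::-1][:n_dims]
        # score every term by its mass inside the selected subspace
        scores = matrix[:, dims].sum(axis=1)
        ranked = [vocab[i] for i in np.argsort(scores)[::-1]]
        return [w for w in ranked if w not in seeds][:n_members]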

A Play on Words: Using Cognitive Computing as a Basis for AI Solvers in Word Puzzles
https://sciendo.com/article/10.1515/jagi-2015-0006 (published 2015-12-30)
Abstract: In this paper we offer a model, drawing inspiration from human cognition and based upon the pipeline developed for IBM’s Watson, which solves clues in a type of word puzzle called syllacrostics. We briefly discuss its place with respect to the greater field of artificial general intelligence (AGI) and how this process and model might be applied to other types of word puzzles. We present an overview of a system that has been developed to solve syllacrostics.

Unnatural Selection: Seeing Human Intelligence in Artificial Creations
https://sciendo.com/article/10.1515/jagi-2015-0002 (published 2015-12-30)
Abstract: As generative AI systems grow in sophistication, so too do our expectations of their outputs. The more automated systems acculturate themselves to ever larger sets of inspiring human examples, the more we expect them to produce human-quality outputs, and the greater our disappointment when they fall short. While our generative systems must embody some sense of what constitutes human creativity if their efforts are to be valued as creative by human judges, computers are not human, and need not go so far as to actively pretend to be human to be seen as creative. As discomfiting objects that reside at the boundary of two seemingly disjoint categories, creative machines arouse our sense of the uncanny, or what Freud memorably called the Unheimlich. Like a ventriloquist’s doll that finds its own voice, computers are free to blend the human and the non-human, to surprise us with their knowledge of our world, and to discomfit us with their detached, other-worldly perspectives on it. Nowhere is our embrace of the unnatural and the uncanny more evident than in the popularity of Twitterbots, automatic text generators on Twitter that are followed by humans precisely because they are non-human, and because their outputs so often seem meaningful yet unnatural. This paper evaluates a metaphor generator named @MetaphorMagnet, a Twitterbot that tempers the uncanny with aptness to yield results that are provocative but meaningful.

Editorial: Computational Creativity, Concept Invention, and General Intelligence
https://sciendo.com/article/10.1515/jagi-2015-0001 (published 2015-12-30)
Abstract: Over the last decade, computational creativity as a field of scientific investigation and computational systems engineering has seen growing popularity. Still, the levels of development diverge between projects aiming at systems for artistic production or performance and endeavours addressing creative problem-solving or models of creative cognitive capacities. While the former have already seen several great successes, the latter still remain in their infancy. This volume collects reports on work trying to close the accrued gap.

The Action Execution Process Implemented in Different Cognitive Architectures: A Review
https://sciendo.com/article/10.2478/jagi-2014-0002 (published 2014-12-30)
Abstract: An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent’s high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.

Artificial General Intelligence: Concept, State of the Art, and Future Prospects
https://sciendo.com/article/10.2478/jagi-2014-0001 (published 2014-12-30)
Abstract: In recent years a broad community of researchers has emerged, focusing on the original ambitious goals of the AI field: the creation and study of software or hardware systems with general intelligence comparable to, and ultimately perhaps greater than, that of human beings. This paper surveys this diverse community and its progress. Approaches to defining the concept of Artificial General Intelligence (AGI) are reviewed, including mathematical formalisms, engineering, and biology-inspired perspectives. The spectrum of designs for AGI systems includes systems with symbolic, emergentist, hybrid, and universalist characteristics. Metrics for general intelligence are evaluated, with the conclusion that, although metrics for assessing the achievement of human-level AGI may be relatively straightforward (e.g. the Turing Test, or a robot that can graduate from elementary school or university), metrics for assessing partial progress remain more controversial and problematic.

Black-box Brain Experiments, Causal Mathematical Logic, and the Thermodynamics of Intelligence
https://sciendo.com/article/10.2478/jagi-2013-0005 (published 2014-04-25)
Abstract: Awareness of the possible existence of a yet-unknown principle of physics that explains cognition and intelligence exists in several projects of emulation, simulation, and replication of the human brain currently under way. Brain simulation projects define their success partly in terms of the emergence of non-explicitly programmed biophysical signals such as self-oscillation and spreading cortical waves. We propose that a recently discovered theory of physics known as Causal Mathematical Logic (CML), which links intelligence with causality and entropy and explains intelligent behavior from first principles, is the missing link. We further propose the theory as a roadmap to understanding more complex biophysical signals and to explaining the set of intelligence principles. The new theory applies to information considered as an entity by itself. The theory proposes that any device that processes information and exhibits intelligence must satisfy certain theoretical conditions irrespective of the substrate where the information is being processed; the substrate can be the human brain, a part of it, a worm’s brain, a motor protein that self-locomotes in response to its environment, or a computer. Here we propose to extend the causal theory to systems in neuroscience, because of its ability to model complex systems without heuristic approximations and to predict emerging signals of intelligence directly from the models. The theory predicts the existence of a large number of observables (or “signals”), all of which emerge and can be directly and mathematically calculated from non-explicitly programmed detailed causal models. This approach aims at a universal and predictive language for neuroscience and AGI based on causality and entropy, detailed enough to describe the finest structures and signals of the brain, yet general enough to accommodate the versatility and wholeness of intelligence. The experiments focus on a black box, one of the devices described above, of which both the input and the output are precisely known, but not the internal implementation. The same input is separately supplied to a causal virtual machine, and the calculated output is compared with the measured output. The virtual machine, described in a previous paper, is a computer implementation of CML, fixed for all experiments and unrelated to the device in the black box. If the two outputs are equivalent, then the experiment has quantitatively succeeded and conclusions can be drawn regarding details of the internal implementation of the device. Several small black-box experiments were successfully performed and demonstrated the emergence of non-explicitly programmed cognitive function in each case.

Causal Mathematical Logic as a guiding framework for the prediction of “Intelligence Signals” in brain simulations
https://sciendo.com/article/10.2478/jagi-2013-0006 (published 2014-04-25)
Abstract: A recent theory of physical information, based on the fundamental principles of causality and thermodynamics, has proposed that a large number of observable life and intelligence signals can be described in terms of Causal Mathematical Logic (CML), which is proposed to encode the natural principles of intelligence across any physical domain and substrate. We attempt to expound the current definition of CML, the “action functional”, as a theory, in terms of its ability to provide superior explanatory power for the current neuroscientific data used to measure the mammalian brain’s “intelligence” processes at their most general biophysical level. Brain simulation projects define their success partly in terms of the emergence of “non-explicitly programmed” complex biophysical signals such as self-oscillation and spreading cortical waves. Here we propose to extend the causal theory to predict and guide the understanding of these more complex emergent “intelligence signals”. To achieve this, we review whether causal logic is consistent with, and can explain and predict, the function of complete perceptual processes associated with intelligence, primarily the range of Event Related Potentials (ERP), which include their primary subcomponents, Event Related Desynchronization (ERD) and Event Related Synchronization (ERS). This approach aims at a universal and predictive logic for neurosimulation and AGI. The result of this investigation is a general “information engine” model derived from translation of the ERD and ERS. The CML algorithm, run in terms of action cost, predicts ERP signal contents and is consistent with the fundamental laws of thermodynamics. A working substrate-independent natural information logic would be a major asset. An information theory consistent with fundamental physics can be an AGI. It can also operate within genetic information space and provides a roadmap to understanding the live biophysical operation of the phenotype.

Editorial: Whole Brain Emulation seeks to Implement a Mind and its General Intelligence through System Identification
https://sciendo.com/article/10.2478/jagi-2013-0012 (published 2014-04-25)

Is Brain Emulation Dangerous?
https://sciendo.com/article/10.2478/jagi-2013-0011 (published 2014-04-25)
Abstract: Brain emulation is a hypothetical but extremely transformative technology which has a non-zero chance of appearing during the next century. This paper investigates whether such a technology would also have any predictable characteristics that give it a chance of being catastrophically dangerous, and whether there are any policy levers which might be used to make it safer. We conclude that the riskiness of brain emulation probably depends on the order of the preceding research trajectory. Broadly speaking, it appears safer for brain emulation to happen sooner, because slower CPUs would make the technology’s impact more gradual. It may also be safer if brains are scanned before they are fully understood from a neuroscience perspective, thereby increasing the initial population of emulations, although this prediction is weaker and more scenario-dependent. The risks posed by brain emulation also seem strongly connected to questions about the balance of power between attackers and defenders in computer security contests. If economic property rights in CPU cycles are essentially enforceable, emulation appears to be comparatively safe; if CPU cycles are ultimately easy to steal, the appearance of brain emulation is more likely to be a destabilizing development for human geopolitics. Furthermore, if the computers used to run emulations can be kept secure, then it appears that making brain emulation technologies “open” would make them safer. If, however, computer insecurity is deep and unavoidable, openness may actually be more dangerous. We point to some arguments that suggest the former may be true, tentatively implying that it would be good policy to work towards brain emulation using open scientific methodology and free/open source software codebases.

Will We Hit a Wall? Forecasting Bottlenecks to Whole Brain Emulation Development
https://sciendo.com/article/10.2478/jagi-2013-0009 (published 2014-04-25)
Abstract: Whole brain emulation (WBE) is the possible replication of human brain dynamics that reproduces human behavior. If created, WBE would have significant impact on human society, and forecasts frequently place WBE as arriving within a century. However, WBE would be a complex technology with a complex network of prerequisite technologies. Most forecasts only consider a fraction of this technology network. The unconsidered portions of the network may contain bottlenecks: slowly-developing technologies that would impede the development of WBE. Here I describe how bottlenecks in the network can be non-obvious, and the merits of identifying them early. I show that bottlenecks may be predicted even with noisy forecasts. Accurate forecasts of WBE development must incorporate potential bottlenecks, which can be found using detailed descriptions of the WBE technology network. Bottleneck identification can also increase the impact of WBE researchers by directing effort to those technologies that will immediately affect the timeline of WBE development.
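
Note: the sketch below illustrates why an unconsidered prerequisite can dominate a WBE forecast: if WBE needs every prerequisite technology, its projected completion date is set by the slowest dependency path. This is a generic critical-path illustration with invented numbers, not the paper's forecasting model.

    # Illustrative critical-path sketch: the latest prerequisite on any
    # dependency path is the bottleneck that sets the WBE timeline.
    def completion_date(tech, forecasts, prerequisites, memo=None):
        """forecasts: tech -> years of work once prerequisites are ready;
        prerequisites: tech -> list of prerequisite techs."""
        memo = {} if memo is None else memo
        if tech not in memo:
            ready = max((completion_date(p, forecasts, prerequisites, memo)
                         for p in prerequisites.get(tech, [])), default=0.0)
            memo[tech] = ready + forecasts[tech]
        return memo[tech]

    # Hypothetical numbers purely for illustration (years of development effort).
    forecasts = {"scanning": 15, "neuron models": 10, "compute": 8, "WBE": 5}
    prerequisites = {"WBE": ["scanning", "neuron models", "compute"]}
    print(completion_date("WBE", forecasts, prerequisites))  # 20: scanning is the bottleneck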