GLITCHES - Experiment C2023

A glitch is seen as a chaotic and disruptive occurrence, characterized by sudden and temporary failures in a system or process. In digital technology, glitches may manifest as errors, distortions, or unexpected behaviors that seem to defy logic and coherence. They can emerge due to various factors, such as software bugs, hardware malfunctions, or interference.

However, what can make glitches fascinating is that, sometimes, amidst the chaos and apparent meaninglessness, they reveal hidden orders and patterns. When a glitch occurs, it disrupts the expected flow of information or data, creating an unexpected break in the pattern. Sometimes this rupture allows us to glimpse the underlying structure of the system or data, much like looking through a cracked mirror. Something similar is happening these days in advanced AI architectures, in what has become known as the "Sparks of AGI".

Hidden within these patterns are the hints of AGI's emergence - the "glitches" in the data that whisper secrets of its potential.

The "Sparks of AGI": shy glimmers of a new era in artificial intelligence that approach the depths of human cognition! But, as with any uncharted territory, we must proceed with curiosity, depth, and a touch of whimsy.

To understand the "Sparks of AGI", we must first grasp its essence. You see, AI today thrives on data, like a voracious data-munching monster. But AGI is different - it dances with data, interprets it, and learns to paint the stars with the colors of insight.

The main difference between AGI and current data-driven ANI (Artificial Narrow Intelligence) models, even the most advanced AI models like ChatGPT, lies in the underlying architecture and purpose. AGI is typically conceived as a cognitive architecture designed to reproduce the cognitive processes of a human-like intelligent agent. It aims to model various aspects of human cognition, such as perception, learning, reasoning, memory, and problem-solving, to achieve a general intelligence capable of understanding and interacting with the world in a holistic manner. (Free) ChatGPT, on the other hand, is a language model based on OpenAI's GPT-3.5 architecture, designed to process and generate human-like text based on the patterns it has learned from vast amounts of text data during its training. While ChatGPT can answer questions and engage in conversations, it lacks a comprehensive cognitive architecture encompassing other aspects of general intelligence.

A cognitive architecture is designed with the goal of achieving artificial general intelligence (AGI): exhibiting human-level intelligence across a wide range of tasks and domains, and aspiring to be flexible, adaptable, and capable of learning and reasoning in diverse situations. LLMs, by contrast, are specialized language models that excel in natural language processing (NLP) tasks. Their primary purpose is to understand and generate text-based content, making them valuable tools for various language-related applications, such as language translation, content generation, and conversation.

Finally, AGI is a theoretical framework that requires significant engineering effort and research to implement in practice. Developing AGI is a grand challenge in the field of artificial intelligence, involving intricate interactions between multiple modules, such as perception, reasoning, and learning. Language models like ChatGPT, while still complex, are more specialized and comparatively easier to implement and deploy. Cognitive architectures aim to approximate various human cognitive abilities, such as perception, memory, reasoning, and self-awareness, seeking to create a machine that can learn and adapt to new situations much as humans do. LLMs like ChatGPT, on the other hand, while impressive in their language generation capabilities, lack the broader cognitive abilities that (in theory) would be found in AGI systems.

When an AI's logic sways to a rhythm not entirely expected, it's the sparkle of AGI peeking through the curtain of algorithms.

We find ourselves at the crossroads of understanding AGI's glitches. Each glitch, a glimpse into the secrets of cognition, a poetic revelation of what may come.

Knowledge Construction and Acquisition

Knowledge construction refers to the process of building new knowledge or understanding by actively engaging with information, experiences, and existing knowledge. This process involves the cognitive activities of perception, interpretation, analysis, synthesis, and evaluation. When individuals encounter new information or experiences, they use their existing knowledge frameworks, beliefs, and mental models to make sense of it and create new insights. When someone reads a book or listens to a lecture, they don't passively absorb the information; instead, they actively process and interpret it based on their prior knowledge and experiences. This engagement and active processing lead to the construction of new knowledge or the refinement of existing knowledge.

Knowledge acquisition, on the other hand, refers to the process of gaining knowledge from external sources, such as books, lectures, conversations, observation, or experience. It involves taking in information and integrating it into one's existing knowledge base.

The processes of knowledge construction and acquisition are closely interconnected. As individuals acquire new information from external sources, they use their cognitive abilities to process and integrate it with their existing knowledge. This integration often involves adjusting or expanding existing mental models, which can lead to the construction of new knowledge.
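The interplay described above can be sketched in code. The toy class below is purely illustrative (the names `KnowledgeBase`, `acquire`, and `construct` are invented here, not an established API): acquisition takes in raw information, while construction reconciles it with existing beliefs, revising the mental model when the two conflict.

```python
class KnowledgeBase:
    def __init__(self):
        self.beliefs = {}  # concept -> currently believed value

    def acquire(self, concept, value):
        """Take in information from an external source, unprocessed."""
        return (concept, value)

    def construct(self, observation):
        """Integrate an observation with existing beliefs,
        revising the mental model when they conflict."""
        concept, value = observation
        prior = self.beliefs.get(concept)
        if prior is not None and prior != value:
            # Conflict with prior knowledge: refine the existing belief.
            self.beliefs[concept] = value
            return f"revised {concept}: {prior} -> {value}"
        self.beliefs[concept] = value
        return f"learned {concept} = {value}"

kb = KnowledgeBase()
print(kb.construct(kb.acquire("sky_color", "blue")))
print(kb.construct(kb.acquire("sky_color", "red")))  # a sunset revises the model
```

Even this crude sketch shows the key point: the same incoming information is handled differently depending on what the system already believes.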

AGI Sketch

The perception module enables the AGI to perceive the world through visual, auditory, and other sensory inputs, capturing information from its environment. It should have a powerful NLP module, as this allows the AGI to read and comprehend written text, verbal conversations, and other forms of language-based information. An AGI would have a vast knowledge base containing information it has acquired on its own from various sources, such as books, articles, videos, and interactions with humans and other AI systems. This knowledge base would be organized and indexed for efficient retrieval. This is all fairly basic and the easy part of the task.
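"Organized and indexed for efficient retrieval" can be made concrete with a classic inverted index: each keyword maps to the set of facts that mention it, so retrieval touches only relevant entries. This is a minimal sketch, not a proposal for how a real AGI would store knowledge, and the class name is invented for illustration.

```python
from collections import defaultdict

class IndexedKnowledgeBase:
    def __init__(self):
        self.facts = []                 # facts stored once, by position
        self.index = defaultdict(set)   # keyword -> ids of facts containing it

    def add(self, fact):
        fact_id = len(self.facts)
        self.facts.append(fact)
        for word in fact.lower().split():
            self.index[word].add(fact_id)

    def retrieve(self, query):
        """Return every stored fact that shares a word with the query."""
        ids = set()
        for word in query.lower().split():
            ids |= self.index.get(word, set())
        return [self.facts[i] for i in sorted(ids)]

kb = IndexedKnowledgeBase()
kb.add("Water boils at 100 degrees Celsius")
kb.add("The Moon orbits the Earth")
print(kb.retrieve("moon"))  # -> ['The Moon orbits the Earth']
```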

Here's the hard part:

The AGI's cognitive architecture would incorporate a system for modeling prior knowledge. It would encode previous experiences and learned information, providing a basis for future interpretations and understanding. To actively process and interpret incoming information, the AGI would employ sophisticated cognitive reasoning algorithms. These algorithms would combine sensory inputs with prior knowledge, enabling the AGI to make sense of new data and situations. The AGI would continuously learn and adapt its knowledge base based on new experiences. It could use reinforcement learning or unsupervised learning methods to update its understanding of the world and improve its performance.
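One well-understood mechanism for "combining sensory inputs with prior knowledge" is Bayes' rule: a prior belief is updated into a posterior when new evidence arrives. The scenario and numbers below are invented for illustration; a real cognitive architecture would need far richer machinery, but the principle is the same.

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Posterior P(H|E) from prior P(H), P(E|H), and P(E|not H)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Prior belief that a blurry sensor reading shows a cat: 30%.
# The detector fires for 80% of cats but only 10% of non-cats.
posterior = bayes_update(prior=0.3, likelihood=0.8, likelihood_given_not=0.1)
print(round(posterior, 3))  # prior knowledge + new data -> a revised belief
```

The update is the whole point: the same detector firing would mean something different to a system with a different prior, which is exactly how prior knowledge shapes interpretation.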

As the AGI processes and interprets information, it would construct new knowledge by integrating new insights with its existing knowledge base. This process involves synthesizing, generalizing, and refining information to form coherent mental models. The AGI sketch would possess a self-reflective capacity, allowing it to evaluate its own thought processes and decision-making. This meta-cognition would contribute to its ability to refine its understanding and improve future knowledge construction. To enhance its performance, the AGI would incorporate a feedback loop mechanism. It would seek feedback from human users or other AGI systems to validate and refine its interpretations and knowledge construction.

Designing a Cognitive Architecture for Modeling Prior Knowledge and Reasoning Algorithms in an AGI System

The first step is to determine how to represent prior knowledge within the AGI system. Knowledge representation techniques could include semantic networks, ontologies, frames, or probabilistic graphical models. The choice of representation impacts the system's ability to model complex relationships between concepts and efficiently retrieve relevant information. The AGI system needs to incorporate learning mechanisms to acquire new knowledge and update its existing knowledge base. Machine learning techniques such as reinforcement learning, unsupervised learning, and transfer learning can be employed to adapt to new environments and data.
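Of the representation techniques listed above, a semantic network is the easiest to sketch: concepts are linked by "is-a" edges, and properties are inherited down the hierarchy unless a more specific concept overrides them. The class and concept names here are hypothetical examples, not a standard library.

```python
class SemanticNetwork:
    def __init__(self):
        self.parents = {}     # concept -> its "is_a" parent
        self.properties = {}  # concept -> locally asserted properties

    def add(self, concept, is_a=None, **props):
        self.parents[concept] = is_a
        self.properties[concept] = props

    def lookup(self, concept, prop):
        """Walk up the is_a chain until the property is found."""
        while concept is not None:
            if prop in self.properties.get(concept, {}):
                return self.properties[concept][prop]
            concept = self.parents.get(concept)
        return None

net = SemanticNetwork()
net.add("animal", can_move=True)
net.add("bird", is_a="animal", can_fly=True)
net.add("penguin", is_a="bird", can_fly=False)  # an exception overrides
print(net.lookup("penguin", "can_move"))  # inherited from "animal" -> True
print(net.lookup("penguin", "can_fly"))   # overridden locally -> False
```

The penguin case illustrates why the choice of representation matters: exceptions and defaults fall out naturally here, whereas a flat fact table would need special handling.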

Developing robust reasoning algorithms is crucial for the AGI system to make sense of incoming data and draw accurate conclusions based on prior knowledge. These algorithms could include deductive reasoning, inductive reasoning, abductive reasoning, and analogical reasoning. The AGI system should be designed to integrate data from different modalities, such as visual and auditory inputs, along with natural language processing capabilities. This integration enables the system to perceive and interpret the world in a holistic manner. Incorporating meta-cognitive abilities allows the AGI system to evaluate its own performance, monitor its decision-making processes, and recognize uncertainties in its knowledge. This self-awareness enhances the system's ability to learn from its experiences and improve over time.
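Deductive reasoning, the first item on that list, can be sketched as forward chaining over simple if-then rules: apply every rule whose premises are satisfied until no new facts appear. This toy propositional version (the rules are invented examples) ignores the harder inductive, abductive, and analogical cases.

```python
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["has_feathers", "lays_eggs"], "is_bird"),
    (["is_bird"], "is_animal"),
]
derived = forward_chain(["has_feathers", "lays_eggs"], rules)
print(sorted(derived))
```

Note that "is_animal" is derived in a second pass, after "is_bird" becomes available: even this tiny reasoner chains conclusions together.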

Effective inference mechanisms are essential for the AGI system to solve complex problems and draw logical conclusions from incomplete or uncertain information. This involves probabilistic reasoning, constraint satisfaction, and heuristic search algorithms.
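Constraint satisfaction, one of the mechanisms named above, can be illustrated with a minimal backtracking solver for a three-region map-colouring problem: assign values one variable at a time, and retreat whenever a constraint is violated. The function and region names are invented purely for this sketch.

```python
def solve(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(ok(candidate) for ok in constraints):
            result = solve(variables, domains, constraints, candidate)
            if result:
                return result
    return None  # dead end: caller backtracks to try another value

def different(a, b):
    """Neighbouring regions must not share a colour."""
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]

regions = ["WA", "NT", "SA"]
colours = {r: ["red", "green", "blue"] for r in regions}
constraints = [different("WA", "NT"), different("WA", "SA"), different("NT", "SA")]
print(solve(regions, colours, constraints))
```

Probabilistic reasoning and heuristic search would replace the blind `for value in domains[var]` loop with likelihood-weighted or cost-ordered choices; the backtracking skeleton stays the same.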

The AGI system needs an efficient memory management system to store and access vast amounts of data. This involves techniques such as working memory, episodic memory, and long-term memory to retain information and retrieve it when needed.
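The three memory systems mentioned can be caricatured in a few lines: a capacity-limited working-memory buffer, an append-only episodic log, and a long-term store that holds only what has been deliberately consolidated. This is a hypothetical sketch, not a model of human memory.

```python
from collections import deque

class Memory:
    def __init__(self, working_capacity=3):
        self.working = deque(maxlen=working_capacity)  # recent items only
        self.episodic = []                             # time-ordered experiences
        self.long_term = {}                            # consolidated knowledge

    def observe(self, item):
        self.working.append(item)   # oldest item displaced when full
        self.episodic.append(item)

    def consolidate(self, key, value):
        """Promote a stable regularity into long-term memory."""
        self.long_term[key] = value

    def recall(self, key):
        return self.long_term.get(key)

m = Memory()
for event in ["a", "b", "c", "d"]:
    m.observe(event)
print(list(m.working))  # "a" has been displaced -> ['b', 'c', 'd']
m.consolidate("alphabet_start", "a")
print(m.recall("alphabet_start"))
```

The displaced "a" survives only in the episodic log and in whatever was consolidated, which is the essential trade-off these three stores are meant to manage.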

Combining symbolic AI approaches, such as rule-based systems, with neural network-based techniques can lead to more robust cognitive architectures. Hybrid models offer the benefits of both symbolic reasoning and the data-driven capabilities of neural networks.
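A toy version of such a hybrid: explicit symbolic rules decide first, and a simple learned linear scorer (standing in for a neural network) handles inputs the rules do not cover. The rules, features, and weights below are invented purely for illustration.

```python
RULES = {"spam_keyword": "spam"}               # symbolic: explicit, auditable
WEIGHTS = {"exclamations": 0.6, "links": 0.5}  # stand-in for learned weights

def classify(features):
    # 1) Symbolic pass: any matching rule decides immediately.
    for feature in features:
        if feature in RULES:
            return RULES[feature]
    # 2) Data-driven pass: a weighted sum approximates a trained model.
    score = sum(WEIGHTS.get(f, 0.0) * v for f, v in features.items())
    return "spam" if score > 0.5 else "ham"

print(classify({"spam_keyword": 1}))              # rule fires -> "spam"
print(classify({"exclamations": 1, "links": 1}))  # score 1.1 -> "spam"
print(classify({"links": 1}))                     # score 0.5 -> "ham"
```

The division of labour is the point: the symbolic layer is transparent and easy to correct, while the scored layer generalizes to inputs no rule anticipated.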

Engineering a cognitive architecture involves rigorous testing and evaluation to ensure that the system operates as intended and can handle a wide range of scenarios. Real-world testing and simulations are crucial to verify the AGI system's performance and identify areas for improvement.

That Strange Malfunction Before Ignition

Researchers have been exploring and experimenting with more advanced Artificial Narrow Intelligence (ANI) systems that exhibit intriguing capabilities, which some consider "sparks" or signs pointing in the direction of AGI. From IBM's Deep Blue beating Kasparov and AlphaGo beating Lee Sedol to more recent advances like OpenAI's GPT-4 and DeepMind's Gato, we have many examples of AGI sparks, from the faintest early on to the strongest in recent years.

As we reflect on the sparks of brilliance emanating from ANI systems, we must approach them with a balanced perspective. These sparks, while impressive and thought-provoking, do not inherently guarantee the rapid arrival of AGI. They represent glimpses of progress, akin to an engine that sputters before it fully ignites. ANI systems excel in specific domains and tasks, exhibiting remarkable performance in those areas. However, AGI's essence lies in its ability to generalize knowledge and exhibit intelligence across a vast array of domains. The sparks we witness may be shining brightly within their respective realms, but they do not yet manifest the versatility and adaptability required for true AGI.

Despite their brilliance, ANI systems often possess narrow expertise, lacking the comprehensive understanding that characterizes human intelligence. They excel through extensive training on large datasets, yet they are unable to spontaneously reason or transfer knowledge to unfamiliar situations without substantial fine-tuning.

The sparks emanating from ANI systems provide invaluable insights into the vast potential of artificial intelligence. They ignite our imaginations and drive our pursuit of AGI. However, we must not be carried away by their brilliance alone. A comprehensive AGI entails the orchestration of numerous complex components, harmonizing to enable true general intelligence. While the sparks inspire hope, we must remain steadfast in our dedication to research, collaboration, and responsible development, navigating the path towards AGI with both excitement and diligence.

Written by Aurora H. Winterborne