The key to achieving this is cracking the 'neural code': the way the brain encodes sensory information and uses it to carry out cognitive tasks such as learning, problem-solving, internal dialogue, and visualizing thoughts.
Azoff discusses this in his new book 'Towards Human-Level Artificial Intelligence: How Neuroscience Can Inform the Pursuit of Artificial General Intelligence', where he argues that one of the essential milestones on the path to human-level AI is creating systems capable of simulating consciousness.
Simulating consciousness in computers
Azoff notes that there are various types of consciousness, and even simpler creatures like bees exhibit some degree of consciousness, though without self-awareness. He compares this to the human experience of being "in the flow," or deeply focused on a task. According to Azoff, the initial step towards conscious AI would involve developing a system that simulates this type of consciousness without self-awareness.
He believes that consciousness helps animals, and could help AI, plan actions, predict outcomes, and recall past experiences to improve decision-making. Azoff also highlights visual thinking as a critical element in understanding and replicating consciousness. He suggests that while current AI operates through large language models (LLMs), human visual thinking predates language and could provide a crucial framework for human-level AI.
Azoff stated, "Once we crack the neural code, we will engineer faster and superior brains with greater capacity, speed, and supporting technology that will surpass the human brain. We will achieve this first by modeling visual processing, which will allow us to emulate visual thinking. I speculate that in-the-flow-consciousness will emerge from that. I do not believe that a system needs to be alive to have consciousness."
Cautionary advice
Despite his optimism, Azoff also cautions society to ensure the responsible use of AI technology. "Until we have more confidence in the machines we build, we should ensure two key safeguards. First, humans must retain sole control of the off switch. Second, AI systems should be designed with built-in behavioral safety rules," he warned.
Research Report: Toward Human-Level Artificial Intelligence
Related Links
Kisaco Research
Taylor and Francis Group