Artificial intelligence (AI) is commonly defined as the science of making computers do things that would require intelligence when done by humans. Five years ago, IBM’s computer system, Watson, defeated “Jeopardy” champions Ken Jennings and Brad Rutter by using algorithms and relevant search terms to explore more than 200 million pages of archived material and promptly produce a correct answer – time after time. The man-against-machine tournament illustrated how AI promises a future that is both awe-inspiring and eerie. Two years later, IBM announced that the same AI software system employed by Watson would be used by Memorial Sloan Kettering Cancer Center to help make management decisions in lung cancer treatment.

In March, Google DeepMind’s artificially intelligent computer system “AlphaGo” claimed victory over one of the world’s top masters of Go, a game developed in Asia more than 2,500 years ago and considered more complex and intuitive than chess. This AI milestone happened years sooner than experts had predicted. And as early as this summer, Google plans to introduce prototypes of its self-driving cars on the roads in California. The AI pioneer expects that autonomous vehicles will soon enable drivers to safely send texts and sip coffee during their daily commute. At the same time, warnings of AI-created doomsday scenarios flourish. Some scientists predict super-intelligent machines will eliminate millions of jobs, while others fear smart robots could one day develop beyond their original mission and decide to wipe out mankind.

We reached out to three UC Irvine professors deeply involved in AI research to get their insight into this technology.

Prior to joining UCI as a professor in the Department of Cognitive Sciences and the Department of Computer Science, Jeff Krichmar worked with Nobel Laureate Gerald Edelman for nearly 10 years on the Darwin robot series of brain-based devices (BBDs). A BBD is a realistic brain model that controls a robot performing a behavioral task. That led to Krichmar’s interest in using robots to understand the brain and intelligence. His current lab, the Cognitive Anteater Robotics Laboratory, or CARL, continues this tradition of understanding through building.

Pierre Baldi is a UCI Chancellor’s Professor of computer science and the director of the university’s Institute for Genomics and Bioinformatics. His research includes artificial intelligence, statistical machine learning, data mining and their applications to problems in the natural sciences. His main interest is understanding intelligence in both brains and machines.

In January, Kai Zheng joined the faculty of the Donald Bren School of Information and Computer Sciences, Department of Informatics. He is responsible for advancing biomedical informatics – also known as health informatics – research at UCI. Zheng has used AI primarily to process large volumes of patient care data in an effort to identify patterns that could point to important but overlooked underlying mechanisms, such as associations among seemingly unrelated diseases.

 

Q: How does machine intelligence differ from human intelligence?

KZ: Generally speaking, machine intelligence is based on mathematical reasoning, whereas human intelligence is largely driven by heuristics (estimates and past experience). For example, when determining if it is safe to cross through an intersection when the traffic light turns yellow, human drivers largely rely on heuristic judgment, while AI would base the prediction on a precise calculation with a range of factors, such as travel speed, distance from the crossing and weight of the vehicle. Another key distinction is human intelligence is much more adaptive, while machine intelligence often has difficulties handling unfamiliar situations. 
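To make that contrast concrete, here is a minimal sketch (not part of the interview) of the kind of rule-based calculation an automated driver might run at a yellow light. The constant-deceleration physics, the 3.5-second yellow phase and every numeric value are illustrative assumptions, not figures from any real system.

```python
# Illustrative only: a rule-based yellow-light decision built on simplified
# constant-deceleration physics. All parameter values are assumptions.

def stopping_distance_m(speed_mps: float, decel_mps2: float) -> float:
    """Distance needed to brake to a stop from the current speed: v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)


def yellow_light_decision(speed_mps: float, distance_m: float,
                          decel_mps2: float = 5.0, yellow_s: float = 3.5) -> str:
    """Decide whether to stop or proceed when the light turns yellow.

    A heavier, fully loaded vehicle brakes more gently, which is modeled here
    simply as a smaller decel_mps2.
    """
    if stopping_distance_m(speed_mps, decel_mps2) <= distance_m:
        return "stop"                    # the car can halt before the stop line
    if distance_m / speed_mps <= yellow_s:
        return "proceed"                 # the car can clear before the light turns red
    return "slow down and reassess"      # neither option is clearly safe


if __name__ == "__main__":
    # 15 m/s (about 34 mph), 30 m from the intersection
    print(yellow_light_decision(speed_mps=15.0, distance_m=30.0))
```

The point of the sketch is simply that the machine reduces the decision to explicit quantities and thresholds, whereas a human driver would typically judge the same situation from experience.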

JK: I believe our bodies, the materials they’re made of, our goals, internal drives and needs critically shape our intelligence. I think we will be able to create machines that mimic many aspects of human intelligence, but they will never be the same. 

PB: In computers, processing and storage are separate. In brains, processing and storage are intimately intertwined. In part for this reason, brains are roughly four orders of magnitude more efficient than computers from an energy standpoint. The IBM Watson supercomputer that surpassed human performance in “Jeopardy” consumes 80,000 watts. A human brain consumes 20-40 watts. Today, machine intelligence is still largely confined to well-defined, specialized domains and tasks. It is far from having the universal quality of human intelligence. However, the gap is rapidly shrinking.

 

Q: Is the goal of AI to simulate human intelligence?

PB: No. The main goal of AI is to build intelligent machines that can emulate and sometimes surpass human intelligence. The goal of AI is also to understand intelligence at a deeper, more unified level, beyond issues of hardware implementations. 

JK: That is the goal of many in the field of AI. My primary goal is to understand the brain, and that may ultimately lead to understanding human intelligence. A secondary goal is to make smarter robots, based on principles of the mammalian nervous system.

KZ: To a degree, yes. There are two distinctive efforts: strong AI versus weak AI. The former intends to simulate human intelligence – to create machines that can think and function the same as human beings. Weak AI, on the other hand, develops AI systems that imitate specific human behaviors, such as voice recognition and natural language processing. It should be noted that we are currently unable to build machines that truly replicate human intelligence because we don’t yet fully understand how human intelligence works.

 

Q: What is the origin of AI?

JK: Many attribute the beginning of AI research to a meeting at Dartmouth College in 1956 that included John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon. They originally thought that human problem-solving capabilities could be achieved within a summer by a select group of researchers.

PB: The source of AI is the human desire to understand, emulate and surpass human intelligence. It is the human quest to “forge the gods” and build machines that behave intelligently, like humans and beyond.

KZ: AI reflects mankind’s aspiration to build mechanical or computing devices that can function like the human brain. It can be traced back to classical philosophers’ attempts to describe human thinking as a symbolic system. In his 1950 paper, “Computing Machinery and Intelligence,” British mathematician Alan Turing for the first time posed the question: “Can machines think?”

  

Q: What types of tasks is AI ultimately not capable of achieving?

KZ: None. I think ultimately AI is capable of achieving everything that human intelligence has to offer. Again, machines may not truly replicate nervous systems and human brains (nor do they need to), but they can arrive at the same or similar outcomes using different mechanisms, such as fast processing of large volumes of data.

JK: I think most tasks are achievable if you think of AI in the broad sense. That is, any algorithm, method or machine can be used to create an intelligent system. Some tasks will be achieved in the near term, while others may take decades.

PB: From our current understanding, there are many tasks that cannot be solved by current computers, or Turing machines. Besides the issue of capability, there is also the issue of efficiency. The classic example here is the traveling salesman problem: finding the shortest tour through a set of ‘n’ cities. A program that finds the shortest tour certainly exists; in fact, it requires no intelligence at all. You just list all possible tours, measure their lengths and pick the shortest one. This procedure, however, takes exponential time as a function of the number of cities and cannot be carried out for, say, 100 cities. So the question is whether there exists a clever AI program that finds the shortest tour efficiently, in time that is polynomial in the number of cities. Most computer scientists believe that no such program exists, although proving that it does not remains one of the main open challenges in computer science and mathematics.
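As a concrete illustration (not from the interview), here is a short brute-force sketch of the approach Baldi describes: enumerate every tour, measure each one, keep the shortest. The random coordinates are made up, and the number of tours grows factorially with the number of cities, which is why the method collapses long before 100 cities; whether a clever polynomial-time alternative exists is tied to the famous P versus NP question.

```python
# Illustrative brute-force traveling salesman search: try every possible tour,
# compute its length, and keep the shortest. All coordinates are made up.
import itertools
import math
import random


def tour_length(tour, coords):
    """Length of the closed tour that visits the cities in the given order."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )


def brute_force_tsp(coords):
    """Fix city 0 as the start (tours are rotations of one another) and try the rest."""
    others = range(1, len(coords))
    best = min(
        ((0,) + perm for perm in itertools.permutations(others)),
        key=lambda t: tour_length(t, coords),
    )
    return best, tour_length(best, coords)


if __name__ == "__main__":
    random.seed(0)
    coords = [(random.random(), random.random()) for _ in range(9)]  # 8! = 40,320 tours
    tour, length = brute_force_tsp(coords)
    print(f"Shortest tour: {tour}  length: {length:.3f}")
    # With 100 cities there are 99! orderings to check -- astronomically more
    # than any computer could ever enumerate.
```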

 

Q: What are some examples of how machine learning is used today?

PB: Today machine learning is already everywhere. You use it constantly without even being aware of it. It is hidden in the search engines that you use to search the Internet. It is hidden in your cell phone, your car, your appliances, your house and so forth. It is used in all the computer-vision systems, speech-recognition systems, natural language understanding and translation systems, etc. that are being deployed everywhere. It is used in robots, drones, self-driving cars and many other control systems. The list goes on. As scientists, we use it every day to analyze complex data to make new discoveries in physics, chemistry and biology. We use machine learning to analyze data from complex instruments ranging from particle colliders to telescopes to genome sequencers, and to predict the properties of molecules and materials, or the outcome of chemical reactions.

KZ: IBM Deep Blue, IBM Watson and Google AlphaGo are probably the most prominent examples of AI-based technologies; all three made their fame by outperforming human contenders in competitions. However, AI-based applications can be much more commonly found in our everyday lives. For example, voice assistants such as Siri and Cortana, and the voice commands widely used in modern vehicles, are all applications of AI.

 

Q: What are the remaining big challenges for AI?

PB: While there has been a lot of progress, our theoretical understanding of AI and machine learning is still very incomplete. For those of us interested in understanding natural intelligence, the ways in which intelligence is implemented in the wetware of the human brain remains a major challenge that will take a long time. To understand intelligence in the brain, you must first understand how the brain stores information, and this is still a very messy story, in spite of all the progress being made.

KZ: Poor adaptability will continue to be a key challenge for AI. While machines can learn from training data and their past performance, this process is often cumbersome, and they are not yet versatile enough to transfer knowledge acquired in one setting to solve new problems in unfamiliar territory.

JK: AI systems are very good at doing one thing, but are not flexible enough to adapt to change and context. As an example, take a look at the movie “Ex Machina.” We are nowhere near the point shown in the movie where artificial systems are self-sufficient, have real understanding of what they are sensing and doing, and can interact with humans in a natural way. But what really struck me as the biggest challenge was making an AI that moved and manipulated objects as fluidly as Ava did in that movie. We are not close to seeing that level of sophistication in our robots and AI systems. And I think this is crucial for an AI that mimics nature. This again brings up the importance of coupling brain, body and behavior with the environment.

 

Q: Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have all expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking cautioning that AI could “spell the end of the human race.” What do you think about dystopian warnings like these?

JK: Like any new technology, we need to be careful how it is applied. But I think the benefits outweigh the risks. AI has great potential for benefiting society in healthcare, elderly care, disaster relief and high-risk or mundane tasks, just to name a few. We are a long way from the point where these systems are truly intelligent, but I think it is important to have discussions now about how AI technology should be applied.

PB: I think they are healthy and justified. AI is an extremely powerful technology, far more powerful than anything we have seen before, and the human race should be careful about its use and deployment. There is no immediate danger and no reason to panic in the short term. But it is wise for us to monitor the situation and carefully think through possible future scenarios.

KZ: It will not happen in the foreseeable future, obviously, but I believe that it will ultimately become a reality that future generations will confront. What concerns me more is the degradation of human intelligence due to the increased popularity of AI. For example, more and more we rely on GPS for navigation, and consequently, many of us are no longer able to read and process maps. This generational degradation of intelligence, in contrast to rising machine intelligence, may one day create a severe problem threatening the existence of mankind. We may eventually destroy ourselves due to our overreliance on machines.

 

Q: Are you eager to own a self-driving car?

JK: Absolutely! I think the roadways will be safer and more efficient once self-driving cars become prevalent. And I think this will be a great game-changer. Most of us want to multitask. I’d love to be reading or corresponding while in the car. Also, self-driving cars could be a great societal benefit for the elderly and handicapped.

KZ: Yes. While it requires a lot of engineering to achieve perfection, the AI techniques utilized in a self-driving car are actually relatively simple (compared to the ambitious task that the field of AI sets out to accomplish: creating machines that can function like the human brain). With more advanced sensors and optics, increased computing power and more accurate object-recognition algorithms, I think self-driving cars can be highly reliable and can easily surpass the performance of average human drivers. I look forward to owning one.

PB: Not particularly. I actually like driving cars with manual transmissions.

  

Q: What does the future hold for AI?

JK: I think the next big breakthroughs will come from the neurosciences. One specific area I am excited about is neuromorphic engineering, where the goal is to design computers based on the brain’s architecture. Using this approach, we are constructing machines that use very little energy but have more computing power than conventional computers. These cutting-edge computers have the potential to create AI that thinks more like humans and is efficient enough to be self-sufficient.

KZ: I am very optimistic about the future of AI. Thanks to advances in computing technology, we now have unprecedented computing power that can be delivered via high-speed networks to very small devices such as cell phones, watches and pacemakers. Many AI techniques that were only theoretically possible in the past are now commonly used in practice and will soon make their way into consumer electronics.

PB: You are going to see more and more of it, everywhere, from smart houses to smart grids, from self-driving cars to smart drones, from advanced speech recognition to personal assistants, and so forth. In addition, there is going to be a trend toward increasingly more general forms of intelligence – intelligent systems that can integrate information from, and perform in, multiple domains. There also is going to be progress toward “universal intelligence,” using techniques where agents can interact with any environment and progressively learn how to behave intelligently. What is much more opaque is the more distant future of AI, and while I think about it, I would rather not make any predictions.


-Sharon Hendry, Calit2

 
