Yesterday evening, Julia Gwilt and Parminder Lally attended the Cambridge Wireless event on "Narrowing the Intelligence Gap", hosted at Amazon's site in Cambridge. The event featured an excellent talk by Neil Lawrence (IPC Machine Learning at Amazon and Professor of Machine Learning at the University of Sheffield) on whether the latest AI is "more human".
Neil began by stating that answering the question of whether artificial intelligence will ever match human intelligence requires us to first ask what we mean by "intelligence". Is intelligence merely using information to reach a specific goal, or is it something more?
Currently, many of us have adapted our behaviours to suit AI and machines. For example, we enter queries into Google in a particular way because we think this will help the search engine to find the answer. Similarly, driverless cars are tested in countries such as the US where there are strict rules about what can be on the road - we have adapted the environment to enable the driverless cars to run. However, Neil argued that the AI should be smart enough to adapt to our behaviour too. At the moment, it would be unthinkable for driverless cars to operate in cities like Delhi, where the roads are used equally by cars, bikes, pedestrians and animals, because the AI could not adapt to the environment, and changing the environment to suit the AI is highly unlikely. However, a human visitor to Delhi could relatively quickly learn the rules of the road.
Humans also possess emotional intelligence. We are able to model each other and adapt our communication based on what our model of the other person tells us they can understand. We can understand each other without using words (e.g. we can tell when a close friend is upset by the smallest changes in behaviour or speech). We use context to help us understand the meaning of particular words. Currently, AI is unable to do any of this, or at least not very well. For example, a computer cannot understand the depth of Ernest Hemingway's six-word story: "For sale: baby shoes, never worn".
Neil discussed how, while Boston Dynamics' robot dog (SpotMini) impresses audiences with the various tricks it can perform, a real dog is still far more impressive. The SpotMini analyses its surroundings and then works out which of its very many pre-programmed routines it should perform. In contrast, real dogs can communicate with us and respond to cues, both physical and emotional, from their human owners. In other words, Neil explained that AI is still a long way from human intelligence because computer intelligence is simply different to human intelligence. Perhaps we should be worrying about what happens if a computer doesn't understand humans well enough, rather than worrying about computers becoming smarter than humans. After all, if a computer cannot understand human interactions, human emotions and context as well as we can, then can we rely on AI to provide us with relevant or useful solutions to our human problems?