The people of Chandler, Arizona, slashed tires, threw rocks and braked abruptly, all in an effort to deter testing by Waymo, the self-driving car company spun out of Google, The New York Times reported last December.
Then, in August, Charles Pinkham decided to block a car with his body.
Waymo pulled out of his neighborhood.
Driverless cars sound like something out of a science fiction movie, which might explain why sentiment against them runs so negative. For all the ways artificial intelligence improves lives, the downsides become apparent as the technologies progress – discriminatory hiring algorithms, facial recognition used against ethnic minorities – and with them come anxieties.
To understand how warranted our fears about AI might be, UMSL Daily sat down with University of Missouri–St. Louis faculty members Badri Adhikari, an assistant professor of computer science studying machine learning; Cezary Janikow, the chair of the Department of Mathematics and Computer Science, who specializes in AI through evolutionary computation; and Gualtiero Piccinini, a Curators’ Distinguished Professor of Philosophy investigating the mind and computation.
What’s your greatest fear about AI as it’s being used now?
BA: I would split the question in two: what are the threats of AI in the next few years, and what are the threats beyond that? Five years out and beyond, the way AI is progressing so fast, things are difficult to predict.
Facial recognition, that’s one of the fears. Another that many people in the field of AI fear a lot is autonomous weapons – for example, a machine gun mounted on a drone. Those have already been developed. The U.S. Army has a lot of such AI-enabled devices and uses them actively in war. This is one of the immediate fears.
After five or six years, there are many fears about AI. One is the potential loss of the uniqueness of human identity. Once we start to have robots sitting beside us and chatting with us, how do we differentiate humans as unique beings in the universe?
CJ: If you look past the five-year horizon or more, it’s hard to imagine. You have to ask filmmakers and others. Speaking of technology, the cell phone today is very powerful – huge computing power – but if you look at a car, for example, or a drone, it can be much bigger. So, the computing power is far more than anyone could have predicted even 30 years ago.
Do our current fears have to do with the uncanny valley theory, which holds that humans respond positively to human-like robots until they become nearly, but not quite, perfect facsimiles of humans – at which point they provoke unease?
CJ: No, I think it’s a little bit different. Most artificial intelligence is basically sitting somewhere on a computer, so it’s not really in a human form.
BA: There are many examples from the past of things that people couldn’t imagine at the beginning but later got used to. Right now, most of us feel very uncomfortable chatting with a robot sitting next to us, but maybe, years down the line, that will be the norm.
CJ: We should start with the question, what is artificial intelligence? Today, it is basically a set of technologies that work in very specific areas – take a self-driving car or facial recognition – a computer program that does something, yet we speak of this as artificial intelligence. What I think your question refers to, and what some people refer to as artificial intelligence, is really an artificial mind, which is a different story.
There is a famous book by (Marvin) Minsky, “The Society of Mind,” from more than 30 years ago. He was one of those people who were trying to work out how the mind really works and, from there, how to reproduce those processes. Today, people are getting closer, so I think that will be the real artificial intelligence.
Do you think we can reproduce the mind’s processes in a computer?
GP: There are people now making claims that we will have a singularity, where computers become more intelligent than human beings, and that we will be able to upload our minds into a computer and thereby gain some kind of digital immortality. Some people are scared by these prospects and some people are excited, but all of this presupposes that we can replicate all the powers of the mind, or at least its significant powers, in an artifact like a computer.
There are plenty of mental powers and abilities that we don’t understand very well, and because we don’t understand them well, we don’t even know how to begin to replicate them in a machine. Things like deep learning neural networks are really more powerful than previous generations of machine learning systems. Not that they do things exactly the way people do, but there are similarities. In fact, they can do things that people cannot do, so there has always been this interesting tradeoff between artificial methods and human methods. Once we figure out some things about how humans do something, we can throw a lot of computing power at it in a machine and go beyond the human. But there are other things we don’t understand – how a mind works, how a mind constructs and creates ideas. Some people have tried to get computers to do creative things and succeeded to a degree, but not to the point of really constructing the kind of complex creative products that people come up with.
Then, there’s the biggest mystery – consciousness. It’s very difficult to understand how these conscious feelings relate to the neural machinery that is running inside of us. There’s a big debate in philosophy and beyond about the nature of consciousness. Is it something physical? If it’s something nonphysical, that makes it very difficult to replicate in a machine, but even if it is something physical, we really don’t understand it very well, so I don’t think anybody really has a good idea of how to try to replicate that in a machine.
Can you explain the concept of the singularity?
GP: I think it can be understood in two ways. One would be that machines develop a full-blown conscious mind at the same time that they become more intelligent than humans. Then there’s a weaker singularity, in which machines become more intelligent than humans but not conscious.
I think it’s interesting that some people are afraid of the singularity because of the classic “robots take over the world,” and some people feel very optimistic – the singularity is going to save the planet from climate change – and then some think it’s so far off that it’s not worth considering.
BA: We tend to think that as deep learning or AI makes progress, “Oh, it looks like we’re getting closer to the singularity or we’re getting closer to simulating thinking.” However, the research toward simulating the mind’s consciousness is far behind compared to the progress we have made in solving many other problems.
If you check recent deep learning papers and articles, you will see that they are mostly in the areas of human mental labor. AI algorithms and methods have done very well in areas where humans perform the same mental task over and over – for example, driving cars or diagnosing the same disease again and again. AI can perform very well at those tasks because we have a lot of data in such areas. One of the very powerful abilities of humans is to learn from very few examples. Machines cannot learn from one particular failure or one particular instance. From that perspective, we are still very far from simulating consciousness.
CJ: Also, the definition of intelligence has changed. What happens if your phone calls you and says, “Oh, I’m Joan, how are you doing?” It’s possible today, so the definition of what really counts as artificial intelligence, and what’s on the horizon, has shifted.
What do you think about things like Alexa or Siri? We’re pretty ready adopters of having this AI technology in our homes.
CJ: I have no problems using them; those are tools. Once they become conscious and start lying to you for some specific purpose, then it becomes a problem.
BA: From an AI developer and AI scientist perspective, these technologies are AI. But if you look at it technically, compared to what Cezary pointed out – human consciousness and our mental abilities to solve problems – it’s nothing. Alexa or Siri can have full conversations with us, but if you look into these algorithms, you will find that they are not extremely complicated. In many instances, it is as simple as writing a computer program that says, “When somebody says this, fetch a random commonly answered sentence and spit it out as an answer.”
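To make that concrete, here is a minimal sketch of the kind of canned-response logic Adhikari describes. The trigger phrases and replies are hypothetical, and commercial assistants are far more sophisticated, but the core pattern can be this simple:

```python
import random

# Hypothetical trigger phrases mapped to stock replies, illustrating the
# "fetch a random commonly answered sentence" pattern described above.
CANNED_RESPONSES = {
    "how are you": ["I'm doing well, thanks!", "Great, and you?"],
    "what time is it": ["Sorry, I don't have a clock yet.", "Time to chat!"],
}

def reply(utterance: str) -> str:
    """Return a random stock answer if a known phrase appears in the input."""
    text = utterance.lower()
    for phrase, answers in CANNED_RESPONSES.items():
        if phrase in text:
            return random.choice(answers)  # spit out a random canned sentence
    return "Sorry, I didn't catch that."

print(reply("Hey, how are you?"))  # prints one of the two stock replies
```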
So, you’re saying we shouldn’t be afraid of our Alexas because they don’t have judgment.
BA: Of course, yeah, but we should be very careful not to be fooled. These systems could be so well trained, or even hacked, that they can automatically call you with the voice of someone you know and fool you into transferring money. In particular, we should stay alert when we are talking over the phone because we could always be talking to a machine.
GP: I think what we have to be afraid of right now is how people, especially not well-intentioned people, can use this technology to exploit or manipulate us on a collective level. We now know a little bit about what was done in the 2016 election in the United States. We need to put some kind of barriers in place to protect people, but it’s still an embryonic conversation at the political level. The last thing we want is a society in which elections are won by the party with the greatest control over AI technology designed to manipulate the population.
Is regulation or other control over AI technology something we should want?
CJ: That is very problematic because you have to put a check on something that you don’t understand yet, because it’s still evolving, and to put proper regulations in place you have to be ahead of the technology. The second problem is that regulations, in general, always slow down development and new ideas.
If we don’t have official regulation, is there anything developers should be considering in regard to AI ethics? How deeply are those issues discussed?
BA: With the power of AI, we can do so many things now. I could create an entire movie replacing somebody’s face with yours. It is possible. So, at some point, it does come down to AI developers to decide what projects they want to work on and what projects they don’t. I ask my students to consider AI ethics and focus on things that are good for humanity.
CJ: To follow on this, let me pose a question to you. Today, if you cross the street and someone driving a car hits you and breaks your leg, you sue that driver, right? Suppose it’s a self-driving truck. Do you sue the truck, the manufacturer, the programmer or the government that allowed it?
BA: That’s a question that’s being heavily discussed in many courts right now. To take it further, one of the questions in AI is what if a human-like robot commits a crime. Last year, Saudi Arabia gave citizenship to a robot named Sophia. Now, if Sophia commits a crime, do we sue the robot? The problem is that we are far behind in AI ethics compared to how fast the technology is moving.
Is there anything we didn’t cover?
BA: We could talk about the difficulty of building these AI systems. If you are interested in building such systems in order to understand their potential and limitations, it is worth noting that doing so is not difficult. Once someone knows computer programming, they are not very far from building AI-based software.
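As a rough illustration of that point, here is a minimal sketch, assuming the widely used scikit-learn library and its bundled iris dataset rather than anything the panelists built, showing that a working machine-learning classifier fits in about ten lines of Python:

```python
# Minimal machine-learning example: train a classifier on scikit-learn's
# bundled iris dataset and report its accuracy on held-out data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # 150 flower measurements, 3 species
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Building a robust product is, of course, a much larger job, but the entry barrier is as low as Adhikari suggests.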
GP: And then there are a lot of positive things that come out of AI. It may be obvious, and we didn’t really emphasize that, but a lot of the things that AI enables make our lives better.