There is a stark dividing line between artificial intelligence (AI) enthusiasts, who passionately believe the technology will take us all to new heights of problem-solving and innovation, and skeptics who consider AI a potential enabler of real-life Terminators.
However, it’s fair to say that most business decision-makers fall somewhere in the middle, holding both expectations and reservations about this rapidly evolving technology. They are curious about how it might transform their organizations for the better, but they also wonder whether it might be the job-killer that many workers fear.
AI’s potential is vast, but the technology still has a long way to go before it lives up to either the hopes or the fears surrounding it. Of course, that’s what makes this all so exciting. While commercially available AI-powered products are already changing the way we live and work, they still have real limitations. Here’s a quick look at six things AI can’t do, at least not yet.
The first limitation is multitasking. Most AI systems are narrowly trained to solve specific problems, although AI innovators are trying to evolve the technology so that it can perform different types of tasks at once. Google recently made significant strides on this front with its Routines capability for Google Assistant. But that type of multitasking is really the AI executing a preset series of actions (e.g., turning on lights, sharing reminders) after receiving a single command from a user. So, let’s just say that today’s AI is not capable of monitoring a sales analytics dashboard while also responding to a customer’s email and taking notes in a quarterly sales meeting teleconference.
Second, AI has an “explainability” problem. It provides answers and predictions based on the algorithms and data models it uses to learn, but consumers of AI-powered products aren’t likely to know exactly what information the AI relies on for its decision-making. For example, when determining whether an image depicts a human being, does the AI make a judgment based on “seeing” a mouth, an eye, or a nose in the image? And, as a recent McKinsey Quarterly article noted, the larger and more complex data models become, the more difficult it becomes to explain, in human terms, why the AI reached a certain decision.
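To make the gap concrete, here is a minimal sketch using scikit-learn (a stand-in choice; the article names no tools, so the dataset, model, and post-hoc technique below are all illustrative assumptions):

```python
# A minimal sketch of the "explainability" gap.
from sklearn.datasets import load_digits
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(model.predict(X_test[:1]))  # e.g. [2] -- but *why* did it answer 2?

# The "reasoning" lives in thousands of learned weights, none of which
# maps to a human concept like "a loop" or "a straight stroke":
print(sum(w.size for w in model.coefs_), "weights")

# Post-hoc tools can only estimate which inputs mattered, after the fact:
result = permutation_importance(model, X_test, y_test, n_repeats=5,
                                random_state=0)
print(result.importances_mean.round(3))  # per-pixel importance estimates
```

Even permutation importance, a standard post-hoc technique, only approximates which pixels influenced the answer; it cannot reconstruct the model’s reasoning in human terms.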
Here’s another, deeper question to ponder about AI’s decision-making capabilities: How can end users know that the answers AI technologies provide are fair and unbiased? Again, humans feed AI the data and code that shape and inform the technology’s decision-making, and if that data reflects human bias, the model will faithfully learn it. So, “garbage in, garbage out” is certainly a risk with any AI.
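A toy example makes the risk tangible. In the sketch below, the screening scenario is hypothetical and the data entirely synthetic, but the mechanism is real: a model trained on biased historical decisions reproduces the bias.

```python
# A toy "garbage in, garbage out" demonstration with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
skill = rng.normal(size=n)           # the signal that *should* matter
group = rng.integers(0, 2, size=n)   # an irrelevant group attribute

# Historical approvals were biased: group 1 was held to a higher bar.
approved = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

print(model.coef_)  # a large negative weight has been learned on `group`

# Two equally skilled applicants now receive different scores:
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

Nothing in the code tells the model to discriminate; it simply learns the pattern it is handed.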
Third, AI can’t make moral judgments. AI technologies simply don’t know whether they are doing anything “right” or “wrong” when making a decision, including judgments that could result in a living thing’s death or injury. Should a driverless car swerve to miss a concrete barrier and mow down children and dogs in a crosswalk, or should it stay on course, slam into the barrier, and kill its four passengers?
What is the right answer? Is there a right answer? Probably not, seeing as either outcome is terrible. Still, MIT’s Moral Machine project aims to teach AI systems, such as self-driving cars, how to make the “right” judgments based on what most human beings consider a morally acceptable decision in a given situation.
Just as AI can’t make a moral judgment, it cannot understand a person’s feelings. An AI chatbot can inform an irate customer, “I’m sorry to hear about the issue with your order. I understand your frustration.” But the bottom line is that the technology is not really sorry, and it has no idea what frustration, or any other emotion, feels like, because it’s a machine.
So, while AI can be a fantastic helper for customer service operations, there is no substitute for the human element in many types of customer interactions. (And there is no doubt that most customers will continue to expect, and even demand, the ability to “talk to a real person” when they want to, no matter how helpful and efficient an AI chatbot may be.)
However, there is call center technology that can pick up on callers’ emotions, helping call center employees become more aware of, and empathetic toward, the emotional state of the person on the line. The same technology can also help call center workers project the right attitude: it can detect when they sound grumpy and send a notification that they might need a break. But the AI still isn’t “feeling” anything; it’s just letting you know that someone else is.
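To see what such tools are actually doing under the hood, here is a minimal sketch of text-based sentiment scoring using NLTK’s VADER analyzer (an illustrative assumption; the article names no product, and real call center systems also analyze vocal tone, not just words):

```python
# A minimal sketch of sentiment scoring on a call transcript.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

caller = "This is the third time my order has been lost. I am furious."
scores = analyzer.polarity_scores(caller)
print(scores)  # e.g. a strongly negative 'compound' value

# The system can flag the call, but the score is just arithmetic over
# a word list; nothing here "feels" the caller's anger.
if scores["compound"] < -0.5:
    print("Alert: caller appears upset; consider routing to a senior agent.")
```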
Even IBM, which is at the forefront of AI innovation, sees creativity as “the ultimate moonshot” for the technology. On this topic, the company writes, “While advancements in AI mean that computers can be coached on some parameters of creativity, experts question the extent to which AI can develop its own sense of creativity. Can AI be taught how to create without guidance?”
Like many things with AI, the answer is maybe. In fact, there have already been instances of AI doing creative things, like producing songs and paintings. But those feats were only possible with the guidance of human programmers. So, for now at least, truly spontaneous creativity remains a decidedly human trait.
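A toy text generator shows how much of that guidance there really is. The sketch below (deliberately simple and purely illustrative) produces novel-seeming output, yet every creative ingredient, from the corpus to the sampling rule, was supplied by a person:

```python
# A toy Markov-chain "writer": novelty by recombination, not creativity.
import random
from collections import defaultdict

corpus = "the sun rose and the birds sang and the city slowly woke".split()

# Build a first-order word model from the human-supplied corpus.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

random.seed(7)
word, output = "the", ["the"]
for _ in range(8):
    choices = transitions.get(word)
    if not choices:
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))  # new-looking text, assembled entirely from the corpus
```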
The five limitations listed above help to underscore the sixth: AI cannot replace people. It’s true that AI can do many things exponentially faster than humans, and that it can handle data-related tasks that are impossible for the human brain. And, yes, AI’s application in the workplace can result in the elimination of roles that can be automated, like pure data entry jobs.
At the same time, however, AI can free workers to take on entirely new tasks and more meaningful, interesting work. So, one could argue that AI’s greatest limitation is human reluctance to experiment with the technology and explore its potential as an invaluable helper in the workplace and beyond.