There is a stark line between artificial intelligence (AI) enthusiasts who passionately believe the technology will take us all to new heights of problem-solving and innovation, and those who consider AI to be a potential enabler of real-life Terminators.
However, it’s fair to say that most business decision-makers likely fall somewhere in the middle with their expectations and reservations about this rapidly evolving technology. They are curious about how it might transform their organization for the better, but they also wonder if it might be the job-killer that many workers fear.
AI’s potential is vast, and some amazing strides have been made recently with the release of OpenAI’s ChatGPT, Google’s tease of Bard, and art generators creating their own versions of stock photos like the “women laughing alone with salad” image below. Truly the stuff of nightmares!
Like most other advancements in technology, there are pros and cons, questions and concerns, those who would use it for good and others who would take advantage of its capabilities to do harm. We’ve created a list of AI capabilities and inabilities to answer some questions you may have and squash any lingering fears… for now.
AI has been generating a lot of buzz recently, and with good reason. Here are just a handful of its incredible capabilities:
This is a hot topic in the news right now, and, depending on which side you’re on, the fortunate or unfortunate answer is yes, AI is currently capable of writing academic essays. They may not be literary masterpieces, but they’re doing JUST well enough to pass undergraduate-level assignments! When we hopped onto OpenAI’s ChatGPT to see for ourselves, we asked it directly whether it was capable of writing academic essays, and this is how it responded:
“Yes, I am capable of writing academic essays. As a language model, I have been trained on a vast corpus of academic texts and have access to a wide range of information and knowledge that can be used to produce high-quality academic writing.
However, it's important to note that my responses are generated based on the input I receive, and I don't have personal opinions or biases. So while I can provide academic writing on a given topic, I don't have the ability to form original ideas or arguments.
Furthermore, it's important to recognize that academic essays often require a specific structure and format, and I can certainly adhere to those guidelines. However, if there are specific requirements for an assignment or a particular style guide that needs to be followed, it's important to make that clear to me so that I can produce writing that meets those requirements.”
Beyond essays, it can even create stories, tell jokes, and offer advice. South Park recently produced an episode about this called “Deep Learning” and even had ChatGPT co-write part of the script to poke fun at what it can do! Despite all its capabilities, it does have limitations and will tell you so if it can’t answer a question. OK, we know what you’re thinking: if we’re writing an article about AI’s writing capabilities, how do you know we didn’t use it to write THIS particular blog you’re reading right now? The best we can offer is a written pinky swear that a human being wrote this!
This is another topic that stirs up controversy. Does it count as art if it’s created by AI? We would call it art with an asterisk. AI isn’t capable of creating art all on its own. For one thing, it has to be commanded to create art, and the inspiration it draws from is fed to it through machine learning. Once instructions have been given, though, we’ve seen everything from beautiful visuals to some dark and creepy creations. And yes, it gets worse than the women laughing alone with their salads.
Another issue surrounding human reliance on AI: will it stunt our own creativity? If we continue down this path, what will become of the quality of art forms like music lyrics, literature, and poetry? We’ve seen how quickly smartphones became a necessity in most of our lives, so it’s not too far-fetched to conclude that AI will become one as well.
There are different subsets of AI: one that is programmed and one that learns through experience. Programmed AI is fed a set amount of data and can evaluate all the possible outcomes of a decision before it makes a move. A machine learning algorithm, by contrast, starts out knowing nothing and collects data as it learns from experience, much like we do. The more exposure it gets, the better it becomes at that particular function.
Boston Dynamics uses this particular type of AI for its robots. Much like a baby learns to crawl, walk, and then run, Boston Dynamics’ robots learned to balance on one leg, climb difficult terrain, and even jump!
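The “learning through experience” idea above can be sketched in a few lines of Python. This is a toy illustration only (not how Boston Dynamics or any real system is actually trained): an agent repeatedly tries actions, receives noisy reward feedback, and its estimate of each action’s value improves with every experience.

```python
import random

def learn_by_experience(true_rewards, trials=1000, epsilon=0.1, seed=0):
    """Toy learning-from-experience loop: the agent starts knowing nothing,
    tries actions, and refines its value estimates as data accumulates."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each action
    counts = [0] * len(true_rewards)       # how often each action was tried
    for _ in range(trials):
        # Explore occasionally; otherwise exploit the current best estimate
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rewards))
        else:
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # Noisy feedback from the environment
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # Incremental average: each new experience nudges the estimate
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# With enough exposure, the agent's estimates approach the true values
est = learn_by_experience([0.2, 0.8])
```

The more trials the loop runs, the closer its estimates get to the true rewards, which is the essence of “the more exposure it gets, the better it gets.”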
If you think about the MASSIVE amount of data available on finances and investments, AI’s predictive capabilities are only going to improve every day. Is it legal? Is it “fair”? The answer is YES to both! At least 52% of US financial firms rely on artificial intelligence for stock market trading and developing investment strategies. AI has allowed stock traders and investors to make more informed decisions with greater accuracy and speed. It uses the machine learning we mentioned before to analyze and identify trends we could never spot ourselves, and can even adjust its trading strategies based on the outcomes of previous trades.
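To make the trend-identification idea concrete, here is a deliberately simplified Python sketch (not any firm’s actual trading system): a moving-average crossover, one of the most basic trend signals, where a short-term price average rising above a long-term one suggests an uptrend.

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def trend_signal(prices, short=3, long=5):
    """Return 'buy' when the short-term average rises above the
    long-term one, 'sell' when it falls below, 'hold' otherwise."""
    if len(prices) < long:
        return "hold"  # not enough price history yet
    s, l = sma(prices, short), sma(prices, long)
    if s > l:
        return "buy"
    if s < l:
        return "sell"
    return "hold"

# Steadily rising prices: the short-term average leads the long-term one
signal = trend_signal([10, 11, 12, 13, 14])  # -> "buy"
```

Real systems layer far more data and machine-learned models on top of signals like this, but the underlying pattern is the same: compute a statistic over recent data, compare it to a baseline, and act on the difference.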
Invoca’s own “Signal Discovery” has some pretty amazing AI capabilities! It will automatically analyze and categorize all the thousands of phone calls your company receives. You’ll have access to data that will educate you and your team on what strategies, terms, topics, or phrases are contributing to — or hindering — your success. You can then use these insights to improve your sales scripts, punch up your marketing messaging, and even inform your research and development strategy so that you can keep your customers happy. Learn more about Signal Discovery here!
Though AI has made leaps and bounds in the past few years, it still faces some limitations. In this section, we’ll break down what AI can’t do — at least, not quite yet.
Most AI systems are highly trained to solve specific problems, although AI innovators are trying to evolve the technology so that it can perform different types of tasks at once. Google recently made significant strides on this front with its Routines capability for Google Assistant. But that type of multitasking is the AI doing a series of select things (e.g., turning on lights, sharing reminders) after receiving one command from a user. So, let’s just say that today’s AI is not capable of monitoring a sales analytics dashboard while also responding to a customer’s email and taking notes in a quarterly sales meeting teleconference.
AI has an “explainability” problem. It provides answers and predictions based on the algorithms and data models that it uses to learn, but consumers of AI-powered products aren’t likely to know exactly what information the AI uses for decision-making. For example, when determining if an image depicts a human being, does the AI make a judgment based on “seeing” a mouth, or an eye, or a nose in the image? And, as a recent McKinsey Quarterly article noted, the larger and more complex data models become, the more difficult it becomes to explain, in human terms, why AI reached a certain decision.
Here’s another, deeper question to ponder about AI’s decision-making capabilities: How can end users know that the answers AI technologies provide are fair and unbiased? Again, humans feed AI data and code to shape and inform the technology’s decision-making. So, “garbage in, garbage out” is certainly a risk with any AI.
AI technologies simply don’t know if they are doing anything “right” or “wrong” when making a decision, including judgments that could result in a living thing’s death or injury. Should the driverless car swerve to miss a concrete barrier and mow down children and dogs in a crosswalk, or should the car stay on course, slam into the barrier and kill four passengers?
What is the right answer? Is there a right answer? Not exactly, seeing as either outcome is terrible. Still, MIT’s Moral Machine Project aims to teach AI, like self-driving cars, how to make the “right” judgments based on what most human beings think a morally acceptable decision would be in a given situation.
Just as AI can’t make a moral judgment, it cannot understand a person’s feelings. An AI chatbot can inform an irate customer, “I’m sorry to hear about the issue with your order. I understand your frustration.” But the bottom line is that the technology is not really sorry, and it has no idea what frustration — or any other emotion — feels like because it’s a robot.
So, while AI can be a fantastic helper for customer service operations, there is no substitute for the human element in many types of customer interactions. (And there is no doubt that most customers will continue to expect, and even demand, the ability to “talk to a real person” when they want to, no matter how helpful and efficient an AI chatbot may be.)
However, there is call center technology that can pick up on the emotions of callers, helping call center employees become more aware of and empathetic to the emotional state of the person on the line. This same technology can also help call center workers project the right attitude: it can detect if they sound grumpy and send a notification that they might need a break. But the AI still isn’t “feeling” anything; it’s just letting you know that someone else is.
Even IBM, which is at the forefront of AI innovation, sees creativity as “the ultimate moonshot” for the technology. On this topic, the company writes, “While advancements in AI mean that computers can be coached on some parameters of creativity, experts question the extent to which AI can develop its own sense of creativity. Can AI be taught how to create without guidance?”
Like many things with AI, the answer is maybe. In fact, there already have been instances of AI doing creative things, like producing songs and paintings. But those feats were possible with the guidance of human programmers. So, for now, at least, truly spontaneous creativity remains a decidedly human trait.
The five limitations listed above help to underscore the sixth: AI cannot replace people. It’s true that AI can do many things exponentially faster than humans. It can also perform data-related tasks that are impossible for the human brain to perform. And, yes, AI’s application in the workplace can result in the elimination of roles that can be automated, like pure data entry jobs.
However, at the same time, AI can free workers to perform entirely new tasks — and more meaningful and interesting work. So, one could argue that AI’s greatest limitation is human reluctance to experiment with the technology and explore its potential as an invaluable helper in the workplace and beyond.
Invoca’s AI capabilities automatically capture insights from phone conversations at scale to help marketing and sales teams target customers more effectively and generate more conversions. Check out these resources to learn more: