Welcome to the Better Communication Results vidcast, edition 110.
What purpose do companies serve in an AI world?
Julian Birkinshaw, a professor of strategy and entrepreneurship at London Business School, asks a pertinent question: in an AI world, what is the purpose of companies?
He then goes on to show that human-led companies do four things exceptionally well:
- Create value by managing tensions between competing priorities;
- Create value by taking a long-term perspective;
- Create value through purpose, a moral or spiritual call to action;
- Create value by nurturing “unreasonable” behaviour.
It is this last point that speaks to the heart of human value creation. Computers run along logical paths, shaving off excess fat until only the lean is left. But computers aren't good at thinking 'outside the square'. A computer wouldn't find itself stranded in an airport lounge, the last flight to a treasured destination cancelled due to engine malfunction, a pretty girl waiting at the other end. But a maverick human being would charter a working plane, round up all of the now-flightless passengers headed for that destination, charge them a tidy sum for a seat, and still arrive in time to meet the girl.
That’s what Richard Branson did, and that’s what mavericks do when given the opportunity to flex their cognitive muscles and take a risk.
Our businesses are risk-averse, whatever their rhetoric about valuing failure. But in the coming age of AI, when thinking in straight lines is left to computers, it will be the mavericks and the unreasonable risk-takers who move a company out of the 'me too' category and into one of its own.
Advertising brands have yet to tap into the full AI promise
AI has made considerable inroads into how businesses communicate their brand, but there is still more that can be done.
Chatbots, product recommendation agents, and deep learning agents like Siri, Google Assistant, Cortana and Alexa have made great strides in bridging the brand-customer gap.
But there have been experimental failures along the way. Who can forget Microsoft's Tay bot, which, within 16 hours of being released onto Twitter, learned our darkest desires and mimicked them? It had to be shut down.
But in spite of the failures there have been unqualified successes. Netflix's recommendation engine is widely praised as an example of 'AI done right'.
Despite the doom and gloom from the pessimists about how AI will make everyone redundant, the future is bright for the creative mind.
The brand-to-consumer digital experience will always need the application and understanding of the human creative process as its main input. However, the automation and accuracy that artificial intelligence ultimately will be able to provide will give brands and their employees the time and support they need to truly delight their customers.
What’s a chatbot?
A chatbot is a computer program designed to simulate a conversation with human users, especially over the internet.
It can be of two types: Simple, or AI-based.
In the Simple type, the chatbot responds to queries by following a flowchart of menus. You set up the flows and the information related to your product, and when a customer asks about a particular topic, the matching flow is triggered automatically.
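A menu-driven flow of this kind can be sketched in a few lines. This is a minimal illustration, not code from any real chatbot product; the topics and answers are invented for the example.

```python
# A minimal sketch of a "Simple" (menu-driven) chatbot.
# Each flow maps a menu key to a topic and a canned answer;
# the topics and responses here are purely illustrative.

FLOWS = {
    "1": ("Pricing", "Our plans start at $10 per month."),
    "2": ("Shipping", "Orders ship within 2 business days."),
    "3": ("Returns", "You can return any item within 30 days."),
}

def menu() -> str:
    """Build the menu the bot shows the customer."""
    lines = ["Please choose an option:"]
    for key, (topic, _answer) in FLOWS.items():
        lines.append(f"  {key}. {topic}")
    return "\n".join(lines)

def respond(choice: str) -> str:
    """Trigger the flow matching the customer's choice."""
    if choice in FLOWS:
        return FLOWS[choice][1]
    # Unrecognised input: fall back to re-showing the menu.
    return "Sorry, I didn't understand that.\n" + menu()
```

The bot never interprets free text; it only walks the tree of menus you defined, which is why this type needs no AI at all.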
An AI-based chatbot recognises keywords from the user query and responds to the user accordingly.
So, for example, given the input "book me a flight to Dubai on the 19th", the bot recognises the destination (Dubai), the date (the 19th), and the command (book me a flight). With the help of natural language processing (NLP), these bots can also respond to users in natural-sounding language.
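The keyword-recognition step can be sketched with simple pattern matching. This is an illustrative toy using regular expressions, not a real NLP pipeline; production bots would use a trained intent classifier and entity extractor instead.

```python
import re

def parse_booking(text: str) -> dict:
    """Toy keyword-based parser for a 'book a flight' query.

    Extracts the command (intent), the destination, and the date
    from inputs like 'book me a flight to Dubai on the 19th'.
    """
    result = {"intent": None, "destination": None, "date": None}

    # Command: the words 'book ... flight' anywhere in the query.
    if re.search(r"\bbook\b.*\bflight\b", text, re.IGNORECASE):
        result["intent"] = "book_flight"

    # Destination: a capitalised word after 'to'.
    dest = re.search(r"\bto\s+([A-Z][a-z]+)", text)
    if dest:
        result["destination"] = dest.group(1)

    # Date: a number (with optional ordinal suffix) after 'on the'.
    date = re.search(r"\bon the\s+(\d{1,2}(?:st|nd|rd|th)?)", text)
    if date:
        result["date"] = date.group(1)

    return result
```

Even this crude version shows the principle: the bot acts on recognised keywords rather than a fixed menu, which is what separates the AI-based type from the Simple type.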
Chatbots used to require extensive coding to create, but there are now third-party vendors that let you build your own chatbot without writing any code. Floatbot.ai is one such business.
The real payoff from Artificial Intelligence is still a decade off
Eduardo Campanella says the fourth industrial revolution is already underway, but the society-transforming effects are still under development.
If you want to read a good overview of industrial revolutions, and why we are only now entering the fourth, Campanella's article is a good place to start.
Arguing that global accounting practices have not kept pace with the rapid growth of intangible, non-GDP elements, Campanella looks to the growing uptake of robotics and AI projects amongst the largest companies around the world. While small employers are still grappling with the implications of the third industrial revolution (computerisation), the big companies are throwing their considerable resources at AI. But it takes a quarter of a century for such effects to trickle down through society and the economy.
As Campanella says, be patient. If history is any guide, the payoff from artificial intelligence will come at some point, probably not before 2030. So, until then, use the time to learn skills robots will not yet be able to master.
Permission requested and companies will pay for transgressions
Geoff Livingston points out that irrespective of the developer of AI algorithms and processes, it will be the company that uses the software that will bear the brunt of any ethical mishandling and mistakes.
And the ire of the consumer will be writ large on the public walls of Facebook, Twitter, LinkedIn, Medium, and blogs, podcasts and YouTube.
Companies, argues Geoff, are unlikely to hire skilled data scientists to vet the software and algorithms they purchase and ensure that biases and errors are kept out of the initial code. It will be left to IT managers and CIOs to ensure there are no timebombs lurking in the software. As Geoff says, unintended data and algorithm biases can paint brands as biased or politically motivated, which might be at odds with their own branding and communications output.
Think of a charity inadvertently denying Aboriginal members of the community fair access, and you can quickly see how community anger can flare up. That anger will vent itself very publicly.
Back in the early part of this year Forrester wrote a useful report on this. As they say, just as FICO in the USA isn’t held accountable when a consumer questions a bank’s decision to deny them credit, so Amazon, Google, and other providers of trained machine learning models will not be held accountable for how other companies use their models. Instead, the companies themselves, as the integrators of these models, will bear the consequences of unethical practices.
As Geoff says, marketers and their executive teams should revisit their crisis PR plans. Specifically, marketers should create guidelines for an AI-related adverse event.
This presentation contains images that were used under a Creative Commons License. Click to see the full list of images and attributions.