AI as a human right
Marie Johnson has written powerfully about the rights we have as humans to the potent application of AI.
Speaking particularly of the disabled, she highlights a case study from a project in which she was involved.
‘Nadia’ is a government interface program that learns and speaks in a language that the listener can understand. Too often the interface with government and professional service businesses is either face-to-face or via dense paperwork, but such an approach is not suitable for everyone, or for all of the time. There are times when the disabled cannot make it to in-person meetings, or write yet another personal medical history, as required by a seemingly endless stream of health professionals. Nadia aims to be a computer-face that the disabled can interact with while feeling safe and understood.
Soon to be given haptic sensors that will enable her to help the deaf, Nadia has been described by those she serves as a liberator.
“The Web is fundamentally designed to work for all people, whatever their hardware, software, language, culture, location, or physical or mental activity. When the web meets this goal, it is accessible to people with a diverse range of hearing, movement, sight, and cognitive ability.” — Sir Tim Berners-Lee
“For cultures where traditions and meaning are passed through story-telling, AI digital humans can enable the young to have conversations with past elders. And significantly, it enables elders to tell their stories and have conversations with future generations.
“It is my hope that very soon, my grandsons with dyslexia and communication disabilities, can interact with a digital human AI reading coach whenever and wherever they want: a life-long learning companion to build their confidence, stimulate their imaginations and unlock their immense potential.” — Marie Johnson
What concerns me is AI falling into a two- or three-speed category, like the rest of the economy has. The rich and powerful will be able to access all the fruits of AI as they happen; the rest of us play catch-up or get left behind completely because we don’t have the resources to pay.
This, of course, is what is happening already in the corporate use of AI. Only the well-funded can afford to engage with the technology; it could be five or more years before the fruits of their learnings become available to the next tier of organisation.
The rise of the empathy economy
Computerworld’s Mike Elgan reckons it’s time to get emotional about the empathy economy.
Artificial intelligence can now detect emotions better than we can. Deep-learning systems can reliably identify 21 of the purported 27 major and minor emotions we humans express.
Which is great news for those elements of business that rely on human empathy to get the job done. Like Customer Service and Healthcare, for example. Or call centres, transport, factory safety, and virtual assistants.
As with ‘Nadia’ in our previous story, empathetic responses from entities such as government departments, customer service teams and healthcare professionals will shortly lead to fewer feelings of exclusion and distrust.
“Another major and obvious application for emotion detection is in cars and trucks. By using biometric sensors and cameras in dashboards, seats or seat belts, onboard AI can tell if a driver is stressed out or too tired to drive. Emotion-detecting cars could reduce accidents and lower insurance premiums. Ford, for example, is working with the EU to develop such a system” — Mike Elgan
IBM’s Watson is so finely tuned that it can detect emotion and even sarcasm in text.
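At its simplest, text emotion detection can be sketched as a lexicon lookup. The toy example below is purely illustrative — it is not IBM’s Watson API, and real systems like Watson use deep learning trained on large labelled corpora, not hand-picked word lists:

```python
# Toy lexicon-based emotion scorer (illustrative only -- not Watson).
# It counts how many cue words for each emotion appear in the text.
EMOTION_LEXICON = {
    "joy": {"great", "love", "wonderful", "delighted"},
    "anger": {"furious", "hate", "outraged", "terrible"},
    "sadness": {"sad", "miserable", "lonely", "lost"},
}

def detect_emotions(text: str) -> dict:
    """Return a hit count per emotion for the given text."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return {emotion: len(words & cues)
            for emotion, cues in EMOTION_LEXICON.items()}

print(detect_emotions("I love this wonderful, great service!"))
# {'joy': 3, 'anger': 0, 'sadness': 0}
```

The gap between this sketch and a production system is exactly where deep learning earns its keep: picking up sarcasm, negation and context that no word list can capture.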
Of course, it will be the well-heeled enterprise that will be in the first wave of early adopters of empathy AI, but don’t expect to have to wait too long for empathy-reading software to be widely available. Facebook has just been granted patents for ‘emotion-detecting selfie generators’ that auto-select filters depending on the mood Facebook detects in the user.
Imagine: software that will be able to console you and say, “There, their, they’re.”
Not being fired by AI
Do you remember the story of Ibrahim Diallo, the programmer who was fired by an automated system that offered no avenue for human input or appeal? See bcr 104 and also Shel Holtz’s take on the story.
Adrian Hopgood, Professor of Intelligent Systems and Director of Future & Emerging Technologies at the University of Portsmouth, argues that the system that fired Diallo was not an AI one, precisely because there was no ability to intervene in its decision-making process.
Hopgood feels that a properly designed AI program would have had multiple checks and balances built in, allowing Diallo’s manager and director to halt the firing process and discover the cause of the program’s decision.
Instead, the very absence of those checks and balances shows there was little real AI involved in Diallo’s firing.
So AI got a black eye when it wasn’t even its fight.
Hopgood also argues that now is the time to sort out the ethical issues that this case has raised.
“A sacked employee may feel that they have been wronged and may wish to challenge the decision through a tribunal. That situation raises the question of who was responsible for the original decision and who will defend it in law. Now is surely the moment to address the legal and ethical questions posed by the rise of AI, while it is still in its infancy” — Adrian Hopgood
The EU is looking closely at the ethics of AI and, like the infamous GDPR, may shortly have something that makes people sit up and take notice.
Source: https://thenextweb.com/syndication/2018/07/08/this-man-was-fired-by-a-computer-real-ai-could-have-saved-him/ and https://www.linkedin.com/pulse/fir-podcast-143-fired-mistake-artificial-intelligence-shel-holtz/
Reasons to be cheerful, one, two, three…
And finally, a bonus fourth story.
AI gets a lot of press as a job destroyer, but eight members of the Young Entrepreneur Council give their reasons for thinking the future with AI is a positive one. Here are three of the reasons they gave:
Firstly, while AI will take away highly repetitive jobs, it will create more opportunities for the highly creative, who will be valued for their creativity.
Secondly, as advertising, marketing and PR agencies come to use AI more and more, they will win more clients, which in turn means more human staff to manage the human aspects of the job, such as relationship management.
Thirdly, artists are starting to explore the potential of the new technologies. Generative poetry, for example, is already appearing in literary magazines. I don’t know that we’ll see another Iliad or Odyssey any time soon, but I don’t see why AI can’t write long-form stories that are coherent in their narrative.
If entrepreneurs are thinking the future is bright, are some of us guilty of holding on to prognostications of loss and turmoil for reasons of fear and avoidance of change?
Last week’s vidcast: bcr 105: On machine learning, AI and customer service, and ethics in AI