Accountability and responsibility in the AI age
Vikram Mahidhar reminds us that AI is only as good as the humans who program and supervise it. The biases and artefacts that emerge from the processing reflect the biases built in at the start. A program trained to recognise totalled car bodies for insurance purposes, for example, will need close supervision of its decision-making outputs if regulators and consumers are to trust and accept its decisions. So too with loan applications. Hence the call for, and growth of, a new class of AI: one that is explainable, and that builds trust by providing evidence for its decisions.
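To make the idea of "explainable" concrete, here is a minimal sketch of a loan-scoring model that reports the evidence behind each decision. The feature names, weights and approval threshold are all hypothetical illustrations, not any real lender's model:

```python
# Toy "explainable" loan scorer: alongside the yes/no decision, it
# returns the per-feature contributions that produced it, so a human
# reviewer can see exactly which factors drove the outcome.

WEIGHTS = {"income": 0.5, "credit_history": 0.3, "existing_debt": -0.4}
THRESHOLD = 0.6  # hypothetical approval cut-off

def score_with_evidence(applicant):
    """Return (approved, contributions) for a dict of normalised features."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, evidence = score_with_evidence(
    {"income": 1.2, "credit_history": 0.9, "existing_debt": 0.5}
)
print(approved)   # the decision
print(evidence)   # the evidence behind it
```

The point is the shape of the output, not the arithmetic: a black-box model would return only the boolean, while an explainable one also surfaces the contributions a regulator or customer can inspect.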
Vikram also reminds us that a governance strategy is key to engendering trust in our organisation, processes and people. In needing one, AI is no different from any other facet of the organisation. Traceability sheds light on machine reasoning and logic, the right controls and human intervention remain paramount, and there is an ever-present need to beware of unintentional human biases within data.
The EU has shaken up global companies with its GDPR requirements; with the news that the EU is now working on AI governance and ethics issues, it won’t be long before an equally rigorous set of regulations makes its way to our doorsteps.
What are AI and Machine Learning?
Terence Mills of Moonshot and AI.io has written an excellent little introduction to the distinct crafts of machine learning (ML) and artificial intelligence (AI). The two are not the same: ML is a branch of AI.
Giving an ultra-brief overview of where AI and ML came from, Terence goes on to explain the differences between the two disciplines, and what they are each good for.
“AI means that machines can perform tasks in ways that are “intelligent.” These machines aren’t just programmed to do a single, repetitive motion — they can do more by adapting to different situations. Machine learning is technically a branch of AI, but it’s more specific than the overall concept. Machine learning is based on the idea that we can build machines to process data and learn on their own, without our constant supervision.”—Terence Mills
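Mills's description of machines that "process data and learn on their own" can be illustrated with a tiny sketch. Rather than hard-coding a rule, the program below adjusts its own parameter to fit example data by gradient descent; the data and learning rate are invented for illustration:

```python
# Minimal machine learning: fit y = w * x to example data by
# gradient descent, with no rule programmed in advance.

data = [(x, 3.0 * x) for x in range(1, 6)]  # (input, target) pairs

w = 0.0              # the model's single learned parameter
learning_rate = 0.01

for _ in range(1000):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(round(w, 3))  # close to 3.0, recovered from the data alone
```

The "learning" is nothing mystical: the parameter is repeatedly nudged in whichever direction reduces the error on the examples, which is the core loop behind far larger models.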
Types of AI
There are two types of AI: ‘applied’ and ‘generalised’. Applied is specific and is the most common type, being found in everything from stock trading to automated driving. Generalised is less common because it’s more difficult to build. A generalised AI system would be able to handle multiple tasks at once, in the same way that a human can.
What can Machine Learning do?
One major application of ML can be seen in human-AI interfaces, such as Apple’s Siri, Google’s Assistant and Amazon’s Alexa.
Google has just updated its ‘Translate’ app so that translations can take place anywhere in the world, whether there’s internet connectivity or not.
And online chatbots are becoming increasingly prevalent, aiming eventually to replace the human customer-service agent with a digital one that is just as useful.
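At their simplest, such chatbots can be sketched as keyword matching against canned replies. Production systems use ML-driven intent recognition rather than this lookup, and the intents and responses below are hypothetical:

```python
# A toy customer-service chatbot: match keywords in the user's
# message against canned responses, with a fallback for anything
# it doesn't recognise.

RESPONSES = {
    "refund": "I can help with refunds. Could you share your order number?",
    "hours": "We're open 9am to 5pm, Monday to Friday.",
    "human": "Connecting you to a human agent now.",
}

def reply(message):
    """Return the first canned response whose keyword appears in
    the message, or a fallback asking the user to rephrase."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("What are your opening hours?"))
```

The gap between this sketch and a useful digital agent is exactly where the ML comes in: recognising intent from free-form language rather than from a fixed keyword list.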
“Both AI and ML can have valuable business applications. Determining which one is best for your company depends on what your needs are”—Terence Mills
AI will take longer to change the world than we expect
If history teaches us anything, it is that things take much longer to arrive than we expect.
There is an initial flurry of anticipation when some new discovery emerges from the lab, but wholesale adoption by society and government takes far longer, though when adoption finally comes, it is transformative in more ways than the futurists dreamed.
This is the view of Milton Ezrati over at Forbes. In his essay, he argues that history has shown a considerable lag between the arrival of a development, say, the affordable motor car, and the building of a national road system for the cars to travel on. Or the development of nuclear technology and the safe harnessing of it in power plants.
With this in mind, and with further examples of computer miniaturisation and space travel to pepper his argument, Ezrati suggests that some utopian state of AI-induced ease will probably not arrive, at least not how the dreamers are dreaming it. What will happen, he argues, is that jobs will be lost, as predicted, but replaced by new ones (reflecting the lifelong need to constantly pivot and learn), and that the eventual adoption of AI by societies and governments will require a considerable period of adjustment, prototyping and assimilation.
The future is bright, but it’s still some way off.