The impact of AI
Concern exists on both sides of the Atlantic that Artificial Intelligence (AI) is playing a harmful role in our democracies, with messages targeted to resonate with specific voters and sway the polls.
We increasingly hear, for example, of Cambridge Analytica and its ability to collect data from individuals’ Facebook, Twitter, Instagram and other social media accounts. Using this data with advanced AI technology to influence an outcome – in the sphere of national elections at least – is largely viewed with disapproval.
AI pervades much more than our politics. If, for instance, you fail to be the winning bidder on eBay, recommendation algorithms suggest similar items. Algorithms recommend a movie to watch, a person to date or an apartment to rent. If you use a smartphone or a computer, AI is interacting with you.
Away from our personal screens, AI continues to play a major role in industrial automation; in transport, where the autonomous, or driverless, car is a prime illustration; in the service industries, where algorithms can determine insurance premiums, mortgages and loans; and even in the field of mental health and counselling. Companies and institutions increasingly use machine learning models to make decisions that directly or indirectly affect people’s lives, such as selecting candidates for job interviews.
These are just a few mainstream examples of the applications of AI. Let’s not go too near Korean Killer Robots, or Matt McMullen’s Sex Robot, Harmony. The links are there though, if you must.
The ethical challenge of AI
Notwithstanding the potentially huge benefits AI could bring to humanity, it is perhaps the single biggest challenge to the moral fabric which underpins civilised society. Ethical considerations fall into two distinct groups. The first concerns the effects that AI and automation will ultimately have on jobs and the economic system in which we live.
The second is the moral aspect: to what extent is AI invading our privacy (through data collection and facial recognition) and covertly influencing our personal decision-making? When it has to, is it making the right decisions for us? How are those decisions arrived at? Has bias been inadvertently or deliberately introduced into algorithms? Is there transparency?
When most people think of AI, they ask what will happen to jobs. In the light of a 2013 Oxford University study, which predicted that 47% of jobs in the US were at risk of automation within 20 years, it’s a justified concern. Some people have compared what is happening with automation and AI to a second Industrial Revolution.
However, at the conference of the Association for the Advancement of Artificial Intelligence (AAAI-16) in Phoenix, Arizona, the economist Erik Brynjolfsson argued that while studies suggest 60% of jobs would be automated to some extent, only 5% were expected to be automated completely.
Nevertheless, it is clear that the world of employment is going to change significantly, and that humans and machines will at least have to work together. And we are not only talking about robots on the factory floor. In data processing, for example, AI can sort through thousands of spreadsheet entries in seconds, far outstripping what any human can do.
Artificial intelligence and jobs
But will this mean a fall in employment? Or will humans simply be released to perform higher-order activities? The consensus seems to be that the impact on human employment will grow as AI becomes more sophisticated. This is where the idea of remodelling the economy comes in, perhaps along the lines of a Universal Basic Income (UBI): every citizen of a country receives a regular, un-means-tested payment from the government, whether or not they work, so that everybody can afford their basic needs.
A brave new world indeed. Though not so new: this economic system has already been tested, and its detractors seemingly proven wrong. Trials have suggested, for example, that the moral fibre of people who receive a UBI does not deteriorate, and that, rather than provoking idleness and the use of drugs and alcohol, a UBI promotes economic activity and enterprise.
The manner in which people interact with AI on a daily basis, and what the results of this interaction are, lies at the core of the ethical considerations of AI. To what extent are people being manipulated by this technology? In the case of involvement in politics, is democracy being undermined? The Cambridge Analytica controversy highlighted these concerns.
AI and ethical principles
The UK’s House of Lords Select Committee on AI has recommended five ethical principles, which it says should be applied nationally and internationally:
- AI should be developed for the common good and benefit of humanity
- AI should operate on principles of intelligibility and fairness
- AI should not be used to diminish the data rights or privacy of individuals, families or communities
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI
- The autonomous power to hurt, destroy or deceive human beings should never be vested in AI
The second point, ‘AI should operate on principles of intelligibility and fairness’, is a particularly challenging notion, given how opaque many AI systems are by nature. It opens up questions of transparency and bias in AI, the final considerations in this piece.
The ‘black box’ problem – the lack of transparency – is a major issue in that burgeoning branch of AI, machine learning (ML), and it becomes especially pressing when decisions about people are involved. ML could be applied, for example, to recommend the top 10 candidates from 100 applicants for a job post. But how can this selection be verified? Had another human made it, there would be notes and summaries to check.
Not so with machines. Likewise, a bank may use AI to assess whether a customer qualifies for a loan, but if the customer asks why they have been turned down, there is no answer from the machine, apart from ‘Computer says no’.
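The opacity is easy to demonstrate. Here is a minimal sketch in Python, using synthetic data and hypothetical feature names rather than any real bank’s system: a boosted-tree model hands back a bare decision, with nothing resembling a loan officer’s notes.

```python
# A toy loan model on synthetic data (hypothetical features: income,
# debt, years at current address) -- illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000

# Synthetic applicants.
X = np.column_stack([
    rng.normal(40_000, 12_000, n),  # annual income
    rng.normal(10_000, 6_000, n),   # existing debt
    rng.integers(0, 20, n),         # years at current address
])
# Historical approvals follow a rule the model is never shown explicitly.
approved = (X[:, 0] - 1.5 * X[:, 1] + 500 * X[:, 2]) > 25_000

model = GradientBoostingClassifier().fit(X, approved)

# A declined applicant asks why. The fitted model is hundreds of small
# decision trees; all it can return is the decision itself.
applicant = np.array([[30_000.0, 15_000.0, 1.0]])
print("approved" if model.predict(applicant)[0] else "declined")
```

The decision logic is spread across hundreds of trees, so there is no single, human-readable reason to report back; producing one requires extra explanation tooling layered on top of the model.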
Issues of transparency become even more pressing when there is bias, even if it is most likely unintended. If AI is used to assess applicants for a senior job in a company, and the data fed into the machine details previous similar appointments, most of whom were white males, it will replicate that pattern in its selections. And if race or gender is removed from the data fed to the model, other seemingly innocent factors, such as postcode or education, can act as proxies and still disadvantage people of a certain background.
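A minimal sketch, again on synthetic data with a hypothetical postcode proxy, shows the mechanism: the protected attribute is never given to the model, yet a correlated feature lets it reconstruct the bias baked into past decisions.

```python
# Proxy bias on synthetic data: the model never sees 'group', but
# 'postcode' matches it 90% of the time -- illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)                                # protected attribute
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)   # proxy feature
skill = rng.normal(0, 1, n)                                  # job-relevant feature

# Historical hiring decisions favoured group 0.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 0.5

# Train only on the 'innocent' features; group itself is excluded.
model = LogisticRegression().fit(np.column_stack([skill, postcode]), hired)

pred = model.predict(np.column_stack([skill, postcode]))
print("predicted hire rate, group 0:", round(pred[group == 0].mean(), 2))
print("predicted hire rate, group 1:", round(pred[group == 1].mean(), 2))
```

The gap in predicted hire rates persists almost unchanged, which is why simply deleting sensitive columns is not, on its own, a fix for biased training data.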
It is the responsibility of software companies to open up the black box of AI, and ensure there is as much transparency as possible.
Here is a final consideration for readers. In the event of an unavoidable accident, when drastic evasive action must be taken, which pedestrian does the driverless car choose to knock over?