
Of Technology Biases and Ethics


How bias can creep into algorithms, and why ethics is important in Artificial Intelligence.

Technology per se may not have feelings, but it can suffer from two human failings: biases and improper ethics. As the Indian government and private sector prepare to use Machine Learning, Natural Language Processing and Text Analytics, Deep Learning platforms, Image Recognition, Decision Management and other technologies often clustered under the umbrella term Artificial Intelligence (AI), they would do well to understand not only the benefits but also the shortcomings and dangers of these technologies.

For some time now, researchers and analysts have understood that any technology that depends on vast quantities of mostly human-generated data can suffer from the same failings that humans do. That is why much of the discussion in the AI world is focused on biases and ethics.

In the US and other countries, government services that have deployed AI have had to confront the issue of bias. Bias creeps into AI applications in two ways. The first is through the raw data used to help the machine take decisions. Shortly after some US police departments and courtrooms took the help of AI for quicker decisions, problems cropped up. The systems often recommended harsher punishments for people of colour from certain neighbourhoods.

That is because the machine learns from the data diet it is fed. Many police departments in the US had historically been biased against certain races, neighbourhoods or income groups. AI did not correct those biases; it used them as the foundation for estimating the risk a person presented. The same was replicated in courtrooms, where the machine analysed past judgements. The historical data convinced the machine that people of colour, or people from some areas of a city, were more likely to be criminals.
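To make the mechanism concrete, here is a minimal sketch in Python, using entirely hypothetical records and field names, of how a "risk score" learned purely from historical data inherits whatever skew the data contains:

```python
# A minimal sketch (hypothetical data and field names) of how a risk score
# estimated from historical records reproduces the bias in those records.
from collections import defaultdict

# Toy historical records from a department that, hypothetically,
# over-policed neighbourhood "B", inflating its recorded rearrest rate.
historical_records = [
    {"neighbourhood": "A", "rearrested": False},
    {"neighbourhood": "A", "rearrested": False},
    {"neighbourhood": "A", "rearrested": True},
    {"neighbourhood": "B", "rearrested": True},
    {"neighbourhood": "B", "rearrested": True},
    {"neighbourhood": "B", "rearrested": True},
]

def train_risk_model(records):
    """Estimate P(rearrest | neighbourhood) exactly as recorded in the past."""
    counts = defaultdict(lambda: [0, 0])  # neighbourhood -> [rearrests, total]
    for r in records:
        counts[r["neighbourhood"]][0] += int(r["rearrested"])
        counts[r["neighbourhood"]][1] += 1
    return {n: rearrests / total for n, (rearrests, total) in counts.items()}

risk = train_risk_model(historical_records)
print(risk)  # {'A': 0.33..., 'B': 1.0} -- the skew in policing becomes the "risk"
```

Note that the model never sees race or income directly; the skew in past enforcement, faithfully recorded, is enough to reproduce the bias.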

The same bias could affect everything, from the approval or rejection of credit card applications to insurance claims processing and even the hiring of minorities. In India, similar biases can crop up in the delivery of services to citizens. A Dalit or a Muslim may get a bad bargain because people from their community were historically denied that service. People from certain poor neighbourhoods would find it difficult to get bank loans or credit cards or even passports because of data bias.

The other way is through the attributes an engineer chooses for the programme to base its decisions on. For example, if students from certain engineering colleges are given a higher weight in a recruitment programme, even deserving candidates from other colleges would automatically be pushed lower down the list of recommendations.
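Here is a minimal sketch, again with hypothetical colleges, weights and test scores, of how a single engineer-chosen attribute can override merit in a recruitment ranking:

```python
# A minimal sketch (hypothetical weights and colleges) of how an
# engineer-chosen attribute biases a recruitment ranking.
PREFERRED_COLLEGES = {"College P", "College Q"}  # hypothetical shortlist

def score_candidate(candidate):
    score = candidate["test_score"]        # merit component
    if candidate["college"] in PREFERRED_COLLEGES:
        score += 20                        # engineer-chosen college bonus
    return score

candidates = [
    {"name": "X", "college": "College P", "test_score": 70},
    {"name": "Y", "college": "Other College", "test_score": 85},
]

# Y outscores X on merit (85 vs 70), yet the bonus pushes X above Y.
ranked = sorted(candidates, key=score_candidate, reverse=True)
print([c["name"] for c in ranked])  # ['X', 'Y']
```

Nothing in the candidates' data is wrong; the ranking is distorted purely by the weight the engineer chose to attach to one attribute.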

Beyond biases, though, technology ethics is also becoming a major discussion point among both companies and policymakers. Ethics discussions are particularly important in two areas, data collection and data sharing, and especially so in the fields of health and biometric data.

For example, many devices today are already capable of measuring crucial details about your health. But should those devices collect that data, store it, and analyse it in order to act on it? There are arguments on both sides. Smartwatches have saved lives by detecting that the wearer was having a heart attack or some other health crisis and alerting others. But a machine that automatically shares data without waiting for permission can lead to data misuse. The same is true of face recognition or obesity data, where technology is both a predictive tool and a danger if the data is freely shared.

Beyond that, there is a second stage of programmable ethics that comes into play when AI is used to develop technologies such as self-driving automobiles. If a driverless car must choose between hitting the curb, which could endanger all the passengers sitting in it, and hitting an old pedestrian or a young child who suddenly steps in front of it, what should it be programmed to choose?

Finally, AI can be misused to manipulate behaviour, as Facebook has been accused of doing in different elections. And the misuse may go beyond elections: it can creep into shaping behaviour in society at large, or in particular sections such as children, nudging people in certain directions.

While no country has yet found a solution, most are actively working on guidelines for AI research and applications. A European Union policy document talks about “trustworthy AI”, which should have human oversight, be technically robust, follow proper data privacy guidelines, be transparent, and be accountable. Other countries are discussing similar guidelines. A large amount of research and discussion is taking place to pin down narrower attributes.

In India, AI is just beginning to be rolled out in all sorts of applications, in the private sector as well as in government. We are at the ground floor, so to speak, and this gives us a wonderful opportunity to avoid many of the mistakes made by countries that adopted AI earlier, to start on a data cleanup process, and to build guidelines that direct AI application development in the right way.

But the first step is always recognising the issues involved, and that is not yet getting enough attention in the country.

Prosenjit Datta

Prosenjit Datta is a former editor of Businessworld and Business Today magazines.
