Artificial intelligence outcomes are influenced by a combination of intent and domain knowledge. Ameen Jauhar from the Vidhi Centre for Legal Policy and Abhishek Gupta, founder of the Montreal AI Ethics Institute and a machine learning engineer at Microsoft where he’s on the CSE AI Ethics Review Board, tell Chandrima Banerjee how much of that is tailored to the Indian ecosystem:

Where is the AI ethics discussion in India at the moment? 

Ameen Jauhar: There is a lot to do in terms of developing our own indigenous understanding of what we mean by AI ethics. As a term, ethics has become too ubiquitous. When European or North American schools of thought talk about it, they have years of research backing up why they’re proposing certain things within their ethical framework. But for India, specifically, I don’t think we have that kind of research backing.

Abhishek Gupta: When we’re talking about bias, for instance, most discussions centre on gender and race. This is from western European and North American perspectives. Gender and race are, yes, very important. But there are far more regional differences in India. I think another problem is the stature of social sciences here. At the discussions that were happening last year on AI ethics, all of the people were sourced from IITs and IIMs. Exclusively. You’d be hard-pressed to find anyone with a background in, say, anthropology or sociology.

How does that gap, not having social sciences on board, affect AI innovation?

Ameen: You need social sciences research to bring out how technology will interact with the people and communities it is deployed for. Facial Recognition Technology (FRT) in law enforcement is a good example. That’s something people are voicing concern over. But nobody is talking about how different skin tones and skin textures would be designed into an algorithm in India. Or what database they are using. When I went to get my passport renewed, I agreed to get my picture taken; my consent was for that alone. There was no informed consent on my side to then allow this dataset to be shared with the NCRB so they could build FRT systems.

Abhishek: There are also other, subtler scenarios which, I think, have a longer-running fuse. Translation services, for instance, and how they are not built appropriately for a place like India. Relying on broken systems will further entrench inequities in access to knowledge and core government services.

Is there work being done on the labour impact of AI?

Abhishek: People tend to think of this in binaries. A good example is call centres. A lot of tier-one customer support can be automated. That said, there’s still a lot of room for those services. This is where discussions on the labour impacts of AI fall apart. We’re talking about labour impact, but we don’t have proper context about our labour markets or structure, how people go through the skilling process, or what the social safety nets are. It’s important to have people who are trained in social sciences. Otherwise, you’re just sitting in an air-conditioned office, theorising about worker conditions.

Ameen: In India, we are also talking about AI in courts. But fundamental to that approach is the fact that you’re not going to remove the human element from the ecosystem.

And AI applications in other sectors?

Ameen: In 2018, the Andhra Pradesh government entered into an agreement with Microsoft to create an algorithm that would predict dropout rates for students. In an objective sense, it sounds good. But as we have seen, predictive tools have their problems, especially with what kind of datasets you put into the algorithm to train it. So, will it put a child from a lower-income background at a higher dropout risk because the historical data is skewed? That’s automation bias coming into play. The thing is, are you going to question it?

AI innovation depends on large datasets. Are privacy concerns being taken into account?

Abhishek: There, again, specific localised evidence is a problem. Everybody has an anecdote about how YouTube or Facebook ads are creepy. But where is the empirical evidence?

Ameen: I feel empirical evidence would actually show the contrary. Not everybody is against the trade-off. Privacy perceptions vary. It’s not just your financial information. If someone is asking what your caste or religion is, you should question that too.

At the same time, my pet peeve with AI ethics discussions in India is that they have struggled by focusing on privacy and data protection. There is a very interesting model called Kingdon’s multiple streams framework, which explains why something becomes a front-running issue and gains policymakers’ attention. One of its elements is a sustained political push. Data protection has had that since Aadhaar. For AI, it’s yet to happen.

Disclaimer

Views expressed above are the author's own.
