Geeta Chauhan, CTO, Silicon Valley Software Group, joined Rishaad Salamat and Bryan Curtis on Daybreak Asia. She discusses the importance of ethical design in AI, how she envisages a future where we all have personal digital assistants and how users will gain ownership over their data.

Listen to Geeta’s interview at Bloomberg.com and look out for Geeta’s upcoming article “Ethical AI is possible, a Postcard from the Future” on the SVSG blog. 

Alright, we're going back to the San Francisco 960 newsroom now because we are joined by Geeta Chauhan; she's the chief technology officer at Silicon Valley Software Group. Geeta, thank you so much for joining us. You are here to discuss the ethical design of artificial intelligence. So, could you tell us what the vision for 2022 is if the movement for ethically aligned AI succeeds?

Hello Rishaad, thanks for having me here. By 2022, if the ethical AI movement succeeds, most of us will have personal digital assistants, like Jarvis in Iron Man, helping us with our routine tasks and with decision making in our professional and personal lives. Jarvis will come with a fake news detection mode: whenever it encounters a false piece of information, it will quietly slip suggestions into our ears or onto our virtual reality goggles to help us verify it. Companies will start to source data ethically and to pay data dividends to people. Many countries will have moved to a national identity on the blockchain, and the blockchain will be there to maintain the full audit trail.

There are so many things here; I think we'd better go through them one by one because these are really interesting ideas. A universal national identity on a blockchain. And that users actually own their own data. That's a big change from today.

Indeed, it is going to be a big policy shift and a big revolution, I would say.

But who enacts that? Who is it that makes that decision or forces everyone to comply?

It will be a combination of things coming from regulations and the government, as well as things coming from organizations themselves. If you take a look at Estonia, they've already put their national identity on the blockchain. And in the past there have been incidents where companies paid people for the ethical use of their data. So such scenarios will become more prevalent. Especially when you're trying to build a society where each one of us can have a better future down the road, you need solutions like this.

Alright Geeta, so people will own their own identity and their own data. So, would I be able to expect a check from the likes of Facebook and Google every time they sell my data?

Yes, definitely, things like that will start to happen. In California, for example, Governor Gavin Newsom recently announced a proposal for data dividends for all the citizens of California. So, if such a law goes through, companies will be regularly paying out dividends to people.

Now, this is also interesting, this Hippocratic Digital Oath that designers and companies will actually have to adhere to. And you know that's the honor system at work, right?

Yes, indeed. It will start as an honor system, but the ownership has to come right from the top of the company. The CEO, the C-suite, as well as the board of directors have to take accountability for the decisions of the AI systems they roll out into the market. We are already seeing this in the space of security, for example: most companies are now doing security by design. Similar things will happen in the AI space, and everybody will be building ethically aligned AI systems by design.

This is it, isn't it? Everybody is going to be ethical about it, et cetera. It seems like a bit of a pipe dream to me. Tell me also about this personal digital assistant that would be able to perform most routine tasks. Is it a bit like the Siri or the Alexa of today, put it that way?

Yes, you can think of Siri and Alexa. Google's auto-suggestions for our emails have already started. There are apps that have come out which help us with our calendar management; x.ai is a good application for that. So for the routine tasks which we would rather not spend our time on, like when we're out of milk and our fridge automatically places an order for us, such kinds of assistants will start to emerge, especially to help us with those routine tasks.

Yeah, I guess a lot of us went through that when we got the dishwasher a long time ago. But Geeta, in terms of how much of this comes in because of regulatory change versus just societal acceptance, what's the balance there, do you think?

I think a lot of it will be disruptive startups coming out with these new types of solutions, and then regulatory bodies will catch up. The IEEE, for example, has already come out with its ethically aligned AI guidelines, and the EU has its trustworthy AI guidelines, so such guidelines will emerge. And at the same time, organizations like the OECD will work with governments across the world to define things like development metrics that will govern the impact organizations have in the future.