Google DeepMind this week announced a national partnership for AI with the Indian government, including a collaboration with the Anusandhan National Research Foundation (ANRF) to widen access to its scientific AI models, and a $50,000 grant to IIT-Bombay to build a novel India-Centric Trait Database using its Gemma models on Indic-language health governance documents. For Owen Larter, senior director and head of frontier policy and public affairs at DeepMind, India's ties to the developing world make it uniquely placed to shape how AI benefits are distributed globally, and the responsibility for making that happen, he argues, lies as much with frontier AI companies as with the governments regulating them. Transparency, he says, is the price of effective governance. Edited excerpts:
Google DeepMind has been vocal about extreme risks from advanced AI. How do you prioritise near-term harms versus longer-term existential concerns in your policy work?
This is a really important conversation, and obviously our mission is to develop advanced AI and put it into the world responsibly. We're excited about how people are using this technology, such as leading Indian scientists using AlphaFold to develop new types of cancer therapies. If we're going to continue to build on this progress, we need to make sure this technology is trustworthy, and we need to continue to build out governance frameworks.
There is a little bit of a danger in segmenting the different risks that we need to address. This is going to be a continuous journey of coming up with durable frameworks. There are a few principles we must work to. We need to continue to build a really solid scientific understanding of the technology: what it can do, its capabilities, and its limitations. It's critical to then work with partners to understand the impact this technology will have when it's used in the real world, and to test mitigations.
That's really an approach that we need to apply across the whole set of risks, whether it's protecting child safety or making sure that our systems are useful in different languages, through to critical risks of advanced frontier systems developing capabilities that could be misused by bad actors to perpetrate a cyberattack or create a biological weapon. We have had a frontier safety framework since 2024, which we iterate over time. This will not be a static issue; AI governance is never going to be a solved problem, it is an ongoing journey.
