Trying to catch up to a rapidly advancing technology, government agencies, industry, and academia are looking for guidance and best practices around AI and ethics. Many applications of AI can be characterized as a form of algorithmic decision making, which raises the question of when we, as a society, should delegate decision-making processes to computer programs.
There is a difference between using a computer-predicted risk score to decide which buildings to inspect and using one to decide a criminal sentence. As we build a theoretical taxonomy of decisions and their contexts, asking how their differences and similarities should be reflected in a society's norms, laws, and policies around automation, we have a pressing practical application in mind: governments and private actors are using AI to make these decisions now.
Working with governments internationally, we are developing best practices around government use and regulation of AI and autonomous vehicles (AVs).
We have designed surveys, conducted interviews, written case studies, directed legal research, created workshops, developed technical and policy tools, written guidance, and facilitated regional and international conversations and collaboration between government leaders and other partners.
Our goal is to help policymakers and practitioners around the world make sound decisions about AI and AV strategy, policy, ethics, governance, and risk.