Designing more ethical Machine Learning algorithms and models: 3 questions to Michael Kearns

Michael Kearns, Professor of Computer and Information Science at the University of Pennsylvania, works in machine learning, algorithmic game theory, and quantitative finance. An elected member of the National Academy of Sciences, the American Academy of Arts and Sciences, the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, and the Society for the Advancement of Economic Theory, he will be speaking at the first virtual Amazon Web Services Machine Learning Summit on June 2, 2021.

The first Amazon Web Services (AWS) Machine Learning Summit will bring together customers, developers and the scientific community on June 2 to learn more about advances in the practice of Machine Learning (ML). The free event will explore four major themes including the science of Machine Learning, which will highlight the work being done by AWS and Amazon scientists to advance ML.

In the coming weeks, Amazon Science will feature interviews with speakers on the topic of Machine Learning Science. For the second edition of the series, we spoke with Michael Kearns, Amazon Scholar and Professor of Computer and Information Science at the University of Pennsylvania.

Michael Kearns is the co-author of the book “The Ethical Algorithm: The Science of Socially Aware Algorithm Design”, published in 2019. This book explores the science of designing algorithms that incorporate social norms such as fairness and privacy into their code to protect humans from unintended impacts caused by the algorithms. He is also the founding director of the Warren Center for Network and Data Sciences, a research center that seeks to understand the role of data and algorithms in shaping interconnected social, economic and technological systems.

What will your talk be about at the AWS Machine Learning Summit?

The ML community has recently published research aimed at designing more “ethical” algorithms and models: that is, approaches that respect important social norms such as fairness, explainability (the obligation to explain algorithmic decisions), and privacy, while still allowing us to reap the benefits of artificial intelligence and ML.

For example, the rich body of algorithmic techniques known as differential privacy provides a powerful way of adding carefully calibrated random noise to computations, so that Machine Learning models can be developed while providing strong individual privacy guarantees.
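To illustrate the idea, here is a minimal sketch of the Laplace mechanism, the simplest building block of differential privacy. The function name `private_mean`, the clipping bounds, and the epsilon value are illustrative choices for this example only, not the specific techniques discussed in the talk.

```python
import numpy as np

def private_mean(values, epsilon, lower, upper):
    """Return a differentially private estimate of the mean of `values`.

    Each value is clipped to [lower, upper]; the sensitivity of the mean
    of n clipped values is (upper - lower) / n, so adding Laplace noise
    with scale sensitivity / epsilon yields epsilon-differential privacy.
    """
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(values)
    sensitivity = (upper - lower) / n
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Example: a private estimate of the average age in a small survey.
ages = [23, 35, 29, 41, 52, 38, 47, 31]
print(private_mean(ages, epsilon=0.5, lower=0, upper=100))
```

Smaller values of epsilon add more noise and give a stronger privacy guarantee, at the cost of accuracy in the released statistic.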

In addition, Machine Learning algorithms based on game theory, the branch of mathematics concerned with strategic interactions and collective outcomes among interacting individuals, have recently been developed; they can be used to enforce notions of group fairness related to ethnicity or gender. Generative Adversarial Networks (GANs) are a well-known example of this game-theoretic framing: two networks are placed in competition, with the first, called the generator, producing samples that resemble the real data set, while the second, the discriminator, tries to detect whether a sample is real or was produced by the generator.
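To make the generator/discriminator game concrete, here is a minimal, self-contained sketch of an adversarial training loop on toy one-dimensional data, assuming PyTorch is available; the network sizes, learning rates, and Gaussian “real” data are arbitrary illustrative choices, not part of any system described in the interview.

```python
import torch
import torch.nn as nn

# Toy setup: the "real" data is drawn from a 1-D Gaussian (mean 3, std 0.5).
def real_batch(n):
    return torch.randn(n, 1) * 0.5 + 3.0

# Generator: maps random noise to a candidate sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: label real data 1, generated data 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: try to make the discriminator output 1 on generated data.
    fake = G(torch.randn(64, 8))
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

# Generated samples should drift toward the real distribution (mean ~3).
print(G(torch.randn(5, 8)).detach().squeeze())
```

Each step alternates between the discriminator, which is rewarded for telling real samples from generated ones, and the generator, which is rewarded for fooling it; this is exactly the competitive, game-theoretic structure described above.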

Why is this topic particularly relevant in the scientific community today?

Everyone has noticed the growing concern in our society about the potential harms and abuses of artificial intelligence and Machine Learning.

While these concerns call for stronger laws and regulations around the technology, the science I will discuss during my talk on June 2 at the Summit opens the door to an alternative and complementary solution: designing more ethical algorithms and models that “behave better” from the start.

Some of this science is relatively mature – including the differential privacy discussed earlier. Other approaches are beginning to emerge, such as efforts to make ML models more “interpretable” or “explainable.” We need to explore these topics further to develop a deeper behavioral understanding of how people use and interpret predictive models.

As we forge a new science of designing more ethical algorithms, what three developments do you find exciting?

The science around ML is really paving the way for a new set of algorithmic techniques that balance our goals of accuracy and utility with some of the key societal concerns around AI and ML.

Furthermore, we are starting to see large-scale adoption of this science in real-world applications: for example, the U.S. Census Bureau’s adoption of differential privacy for the 2020 census, or the new Amazon SageMaker Clarify service, which enables bias detection in ML models and helps explain model predictions.

Finally, over the past decade, we’ve seen a truly interdisciplinary community emerge around these problems, one that includes and needs ML researchers, legal and regulatory experts, legislators, social scientists, civil liberties groups, and even philosophers. This makes working on these topics fun, exciting, educational, and ultimately incredibly impactful.

You can attend Michael Kearns’ talk at Amazon Web Services’ virtual Machine Learning Summit on June 2 by registering for the event. If you want to be notified when registration opens, visit the event website.

Translated from Concevoir des algorithmes et des modèles de Machine Learning plus éthiques : 3 questions à Michael Kearns