Sarah Cen, a graduate of the Laboratory for Information and Decision Systems (LIDS), has explored the interplay between humans and artificial intelligence systems, working to build accountability and trust.
Cen entered LIDS in 2018 after a circuitous path. She first became interested in research during her undergraduate studies in mechanical engineering at Princeton University. She then switched gears for her master's degree at Oxford University, working on radar-based perception for mobile robots (mainly self-driving cars).
There she became interested in AI algorithms, and in particular in when and why they go wrong. As a result, she moved to MIT and LIDS for her Ph.D. research, working with Professor Devavrat Shah in the Department of Electrical Engineering and Computer Science to build a better theoretical understanding of information systems.
At LIDS, Cen has worked with Shah and other collaborators on a variety of projects, many related to her interest in the connections between humans and computational systems. One project looks into ways to regulate social media; her recent research there developed a method for translating human-readable regulations into audits that can actually be implemented.
A version of the famous trolley problem, which poses a philosophical choice between two bad outcomes, came up during a discussion on ethical artificial intelligence.
Imagine a self-driving car traveling down a narrow alley with an elderly woman on one side and a small child on the other, and no way to thread between them without causing a fatality. Which should the car hit?
The speaker then said, "Let's take a step back. Is this even the right question to be asking?"
That reframing changed Cen's thinking. Rather than focusing on the point of impact, the speaker pointed out, a self-driving car could have avoided choosing between two bad options by deciding earlier: while approaching the alley, it could have detected that the space was narrow and slowed to a safe speed.
Recognizing that current AI safety approaches often resemble the trolley problem, focusing on downstream regulation such as liability after someone is already left with no good options, Cen wondered: what if we could design better upstream and downstream safeguards for such problems? This question has shaped much of Cen's work.
As Cen puts it, "engineering systems are not isolated from the social systems on which they intervene." Ignoring this fact can produce tools that are ineffective or, more worryingly, harmful when deployed.
Cen is also investigating whether people can achieve good long-term outcomes when they are not only competing for resources but also don’t know which resources are best for them.
Cen's research examines the relationship between learning and competition, asking whether participants on both sides of a matching market can end up satisfied.
By modeling such matching markets, Cen and Shah found that it is possible to achieve a stable outcome (workers aren't incentivized to leave the matching market), low regret (workers are content with their long-term outcomes), fairness (happiness is evenly distributed), and high social welfare.
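To make "stable outcome" concrete: in a two-sided matching market, a matching is stable when no worker-firm pair would both prefer each other over their assigned partners. The sketch below implements the classical Gale-Shapley deferred-acceptance algorithm, which is the textbook way to compute such a matching. It is an illustrative baseline only, not Cen and Shah's method, and the worker/firm names are hypothetical.

```python
def gale_shapley(worker_prefs, firm_prefs):
    """Compute a stable matching via worker-proposing deferred acceptance.

    worker_prefs: dict mapping each worker to a list of firms, best first.
    firm_prefs:   dict mapping each firm to a list of workers, best first.
    Returns a dict mapping each worker to their matched firm.
    """
    # Precompute each firm's ranking of workers (lower value = more preferred).
    firm_rank = {f: {w: i for i, w in enumerate(ws)}
                 for f, ws in firm_prefs.items()}
    free = list(worker_prefs)                   # workers not yet matched
    next_choice = {w: 0 for w in worker_prefs}  # index of next firm to try
    engaged = {}                                # firm -> currently held worker

    while free:
        w = free.pop()
        f = worker_prefs[w][next_choice[w]]     # propose to best untried firm
        next_choice[w] += 1
        if f not in engaged:
            engaged[f] = w                      # firm tentatively accepts
        elif firm_rank[f][w] < firm_rank[f][engaged[f]]:
            free.append(engaged[f])             # firm trades up; old worker freed
            engaged[f] = w
        else:
            free.append(w)                      # rejected; will try next firm

    return {w: f for f, w in engaged.items()}
```

For example, if two workers both rank firm "X" first but "X" prefers worker "b", the algorithm matches "b" with "X" and "a" with "Y", and neither side can find a partner they would jointly prefer. The deferred-acceptance structure is what guarantees stability, though in the learning setting Cen studies, participants must discover their preferences over time rather than knowing them up front.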
Cen’s work in the MIT community has also been guided by the principles of inclusion.
As one of three co-presidents of the Graduate Women in MIT EECS student group, she helped organize the inaugural GW6 research summit featuring the research of women graduate students, not only to give students positive role models but also to highlight the many successful graduate women at MIT.
On a final note, Cen observed that a system that takes steps to address bias, whether in computing or in the community, earns legitimacy and trust. "Accountability, legitimacy, and trust are all important elements in society, and they will ultimately determine whether institutions survive over time."