Aaron Aupperlee | Thursday, February 24, 2022
Vincent Conitzer expects much to be the same when he returns to Carnegie Mellon University this coming fall.
It will still be the best place in the world for computer science and the technical expertise will still be unmatched. Many of the colleagues, professors and even his Ph.D. advisor will also still be around.
But don't be surprised if the renowned artificial intelligence researcher and ethicist appears lost in the hallways of the Gates and Hillman Centers. When Conitzer was finishing his graduate work in computer science in 2006, he spent his time in Wean Hall. Gates wasn't built yet.
"Once I'm in Gates, I'm lost," Conitzer said of recent returns to campus.
This fall, Conitzer will join CMU's School of Computer Science, where he earned his master's and Ph.D. in computer science. He is currently a professor of new technologies, computer science, economics and philosophy at Duke University.
Tuomas Sandholm, the Angel Jordan University Professor in the Computer Science Department (CSD) and Conitzer's Ph.D. advisor, is excited to have his former student back on campus and looks forward to collaborating and teaching courses with him.
"Vince is a star, and I had a wonderful time working with him back in the early 2000s," Sandholm said. "Since then, he has had a meteoric rise to become one of the leaders in the field. I am thrilled that we were able to recruit him back to CMU."
Conitzer's rise to the top is evidenced by Duke granting him a double promotion in 2011, elevating him straight to full professor from assistant professor, without a stop at associate professor. At the time, he was the youngest full professor at the university.
At CMU, Conitzer's main appointment will be in CSD, where he will lead the new Foundations of Cooperative AI Lab (FOCAL). He will have affiliate and courtesy appointments in the Machine Learning Department, the Department of Philosophy in the Dietrich College of Humanities and Social Sciences, and the Tepper School of Business. Conitzer will also continue his part-time appointment at the Institute for Ethics in AI at the University of Oxford.
FOCAL will research how to make artificial intelligence systems cooperate with each other and with humans. Conitzer’s work with FOCAL will be supported through a $3 million gift from the Center for Emerging Risk Research and a $500,000 gift from the Cooperative AI Foundation.
The Center for Emerging Risk Research, based in Basel, Switzerland, believes that AI will play an increasingly large role in society over the coming decades and that it is essential to make cooperative intelligence a core part of AI systems.
"We're delighted to be supporting the founding of FOCAL and the important work they will do," the center said.
The London-based Cooperative AI Foundation selected FOCAL as the recipient of its first major grant.
“With the increasing ubiquity and capabilities of AI systems, it is more important than ever that we develop firm foundations underlying their interactions with one another, and with humans,” the group said. “We are therefore glad to be supporting the excellent work of Professor Conitzer and his collaborators at FOCAL, whose research on this topic will help to improve the cooperative intelligence of advanced AI systems for the benefit of all humanity.”
Conitzer’s work with FOCAL will become increasingly important as algorithms and AI become more prevalent in society and start to perform more complex tasks or are asked to make complicated decisions. These developments could lead to AI systems in conflict either with each other or with the humans they are intended to support.
"At this point, we don't have too many situations in which independent AI systems interact, but the worry is that we're going to see a lot more in the future," Conitzer said. "And increasingly, we'll see AI have more control in the decisions to be made."
This is where Conitzer's background in ethics and philosophy comes into play. While it may seem that an easy solution is to bar AI from making decisions with ethical concerns, that will not always be practical. The speed, scale and scope of the problems and the decisions to be made will eventually outstrip human capacity. A self-driving car cannot ask for human input before it makes a decision. The scale of content moderation on social media already tests how much human moderators can handle. And complex algorithms and marketplaces are needed to handle the scope of factors that must be considered when matching potential organ donors with recipients.
"We need ethics in our computer science education. It's important, and I think it is something that is missing," Conitzer said. "We need to bring this into our curriculum, but it's hard. The traditional teaching of the high-level principles guiding ethical decisions clashes with the precision computer scientists need for their code."
Conitzer's research spans many thorny areas in artificial intelligence and ethics. He studies questions about the implications of systems that are more intelligent than humans, and weighs the pros and cons of explainable machine learning. Yes, the algorithm used to set someone's bail should be interpretable, but no, the algorithm predicting whether a tumor is malignant doesn't necessarily have to be. What about fairness? What about bias? What about autonomous weapons systems?
"It does keep me up at night sometimes, but that is because I think we can tackle these issues and produce better outcomes," Conitzer said. "These are things worth thinking about, and generally, I'm an optimist."
Ever the optimist, Conitzer is certain he will quickly learn his way around Gates.
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu