News 2021

June 2021

SCS Faculty, Ph.D. Student Named to Innovators Under 35 List

Aaron Aupperlee

Virginia Smith, an assistant professor in the Machine Learning Department, and Priya Donti, a Ph.D. candidate in the Computer Science Department and the Department of Engineering and Public Policy, have been named to MIT Technology Review's prestigious annual list of Innovators Under 35. The list recognizes exceptionally talented technologists whose work has great potential to transform the world.

Smith is a leader in federated learning, which trains models on data stored in private silos like mobile phones, smart home devices, or hospital records. Her work prioritizes privacy while keeping AI accurate, trustworthy, and accessible. She has specifically developed personalized federated learning approaches that allow machine learning models to benefit from a breadth of data while adequately capturing outliers.

Donti focuses on using machine learning to tackle climate change. Her research incorporates the physics and hard constraints associated with power grids into neural networks to guarantee the safety of those systems and to increase their efficiency through machine learning. Donti also co-founded Climate Change AI. The organization has held workshops at top machine learning gatherings, led educational events at the UN Climate Change Conference, briefed local and national policymakers from several countries and runs a mentorship program.

"We get more than 500 nominations for the list every year and getting that list down to 35 — a task not only for the editors at MIT Technology Review but also for our 30+ judges — is one of the hardest things we do each year," said Tim Maher, managing editor of MIT Technology Review. "We love the way the final list always shows what a wide variety of people there are, all around the world, working on creative solutions to some of humanity's hardest problems."

Learn more about Smith, Donti and the rest of this year's honorees on the MIT Technology Review website and in the magazine's July/August issue. The honorees will also be featured at the upcoming EmTech MIT Conference, held online September 28-30, 2021.

About MIT Technology Review: Founded in 1899, MIT Technology Review is a world-renowned, independent media company whose insight, analysis, and interviews explain the newest technologies and their commercial, social, and political impacts. MIT Technology Review derives its authority from its relationship to the world's foremost technology institution and from its editors' deep technical knowledge, capacity to see technologies in their broadest context, and unequaled access to leading innovators and researchers. MIT Technology Review's mission is to bring about better-informed and more conscious decisions about technology through authoritative, influential, and trustworthy journalism.

About the EmTech event series: MIT Technology Review's EmTech series examines emerging technologies that will drive the new global economy. From mainstage keynotes to Q&As and small discussions, these events provide a curated view of the year's most important developments. EmTech gives attendees the opportunity to discover future trends and learn from the most innovative people and companies in the world. Established more than 20 years ago, EmTech events have become a must-attend for entrepreneurs, business leaders, innovators, policy influencers, media, and more. This year's EmTech events in the United States include EmTech Digital, March 23-25; EmTech Next, June 8-10; and EmTech MIT, September 28-30. Learn more.
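
Smith's personalized federated learning methods go well beyond the basics, but the core idea of federated training is easy to illustrate. The sketch below, a toy linear model in NumPy with made-up client data, shows vanilla federated averaging: each client trains locally on data that never leaves the device, and only the model weights are averaged centrally. It is an illustration of the general technique, not Smith's approach.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: plain gradient steps on a toy linear model.
    The raw data stays on the client; only the updated weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """One round of federated averaging: average locally trained weights,
    weighted by each client's data size (the classic FedAvg recipe)."""
    updates, sizes = [], []
    for data, labels in clients:
        updates.append(local_update(global_w, data, labels))
        sizes.append(len(labels))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, dtype=float))

# Toy usage: three "devices" with private, made-up data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = federated_average(w, clients)
print(w)
```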

Finding Support for India During its COVID-19 Surge

Aaron Aupperlee

India and Pakistan have fought four wars in the past few decades, but when India faced an oxygen shortage in its hospitals during its recent COVID-19 surge, Pakistan offered to help.

On Twitter, hashtags like #IndiaNeedsOxygen and #PakistanStandsWithIndia trended. Finding these positive tweets, however, was not as easy as simply browsing the supportive hashtags or looking at the most popular posts. Negative tweets often hijack the supportive hashtags for trolling or fighting with other users. And Twitter's algorithm isn't tuned to surface the most positive tweets during a crisis.

Ashique KhudaBukhsh of Carnegie Mellon University's Language Technologies Institute led a team of researchers who used machine learning to identify supportive tweets from Pakistan during India's COVID crisis. In the throes of a public health crisis, words of hope can be welcome medicine.

"When a country is experiencing a national health crisis, the more supportive messages you see, the better," KhudaBukhsh said. "If you're just randomly searching, you'll find positive tweets about 44% of the time. Our method gives a person positive tweets 83% of the time. That person will have to do a lot less work to find the supportive tweets."

By combining existing language classifiers — algorithms trained by machine learning that determine, for example, whether speech is positive or negative, hopeful or distressing — the team developed a system that identified supportive or positive Pakistani tweets about India 83% of the time, far better than existing methods. The team used their method to conclude that more than 85% of the tweets posted about the COVID crisis in India from Pakistan were supportive.

"We have boundaries but not in our hearts," one tweet detected by the team began.

The team included KhudaBukhsh, Clay H. Yoo and Rupak Sarkar from the LTI and Shriphani Palakodety, an engineer at the blockchain and AI company Onai who earned his master's from the LTI. They published their results in a paper titled "Empathy and Hope: Resource Transfer To Model Inter-country Social Media Dynamics," which was accepted into the Association for Computational Linguistics (ACL) Workshop on Natural Language Processing for Positive Impact. The work was performed in real time as the crisis unfolded and as members of the team worried about the health of loved ones in India.

The research is significant on several fronts. First, the team showed that existing language classifiers could be useful in broad contexts. This is important because a classifier that will be useful in the middle of a crisis must be built quickly. It cannot be built from scratch, and the team wanted to see if existing research into language classifiers could help.

To detect supportive tweets during India's COVID crisis, the team used a hope-speech classifier that KhudaBukhsh and Palakodety built with the late Jaime Carbonell, a distinguished professor in the School of Computer Science who founded the LTI, to identify positive YouTube comments on videos posted about the 2019 escalation of conflict between India and Pakistan over Kashmir. The team then combined the hope-speech classifier with a known empathy-distress classifier.

Despite being built for different reasons and trained on different data, the two classifiers effectively detected positive tweets during India's COVID surge.

"We showed that there is some sort of universality in how we express emotions," KhudaBukhsh said. "And we showed that we can use existing solutions, combine them and attack future crises quickly."

The research was also potentially significant to the crisis in India. KhudaBukhsh and Carbonell envisioned the hope-speech classifier as an alternative way to combat hate speech. Instead of detecting and deleting, downplaying or blocking hate speech — which exists in droves on the internet — the pair sought to use their hope-speech classifier to identify and amplify supportive messages. People are influenced by what they see and read, and if hopeful messages are put in front of them rather than hateful ones, it could affect how they think and act.

The team identified tweets that offered prayers to India, spoke to the common humanity of the two countries, and sent love.

"Heartbreaking to see this situation in our neighbourhood. Send love and prayers from Pakistan. May Almighty Allah help humanity through this pandemic. Stay strong. Stay safe," a tweet found by the team read.

Emphasizing the support between India and Pakistan could make a difference, KhudaBukhsh said. And since so many fights now happen over the internet, maybe that's the place to start.

"These two countries have such an acrimonious past," KhudaBukhsh said. "Any positive behavior from either side can help promote world peace."

 Various graphs and charts representing traffic jams displayed on the screen of a laptop, and a smartphone.

CMU Spinoff Marinus Analytics Awarded AI XPRIZE Third Place

Yana Ilieva

Marinus Analytics, a Carnegie Mellon University spinoff company, won third place and $500,000 in the IBM Watson AI XPRIZE competition.

The company, headquartered on Pittsburgh's North Side, was formed in 2014 by Emily Kennedy (DC 2012) and Cara Jones. Kennedy began working on the project while still an undergraduate in Carnegie Mellon's Dietrich College of Humanities and Social Sciences, and Jones left her job as a staff member of the Robotics Institute to help launch the company.

Nearly 10 years later, Marinus Analytics offers a cutting-edge suite of AI-powered technology that aids law enforcement in finding and protecting victims of human trafficking.

The company's flagship software, Traffic Jam, was developed by the Robotics Institute's Auton Lab. It uses AI to help police efficiently sort through online escort service ads, grouping them by phrases, phone numbers or locations. This grouping helps detectives identify the people who control the victims of sex trafficking. Traffic Jam has over 2,500 registered law enforcement and nonprofit users, tackling the global problem of human trafficking and preventing future exploitation.

Marinus Analytics was one of the three grand prize finalists and the only finalist from the U.S. Zzapp, an Israeli company that uses AI to predict where stagnant water bodies may occur and subsequently prevent malaria outbreaks, won first place and $5 million. Canadian company Aifred Health, which focuses on using AI to help physicians create personalized mental health treatment plans for patients, won second place and $1 million.

Read more about Marinus Analytics and the IBM Watson AI XPRIZE competition in this story on CMU's news page.
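
Marinus Analytics has not published the internals of Traffic Jam, but the grouping idea the article describes can be illustrated with a minimal connected-components sketch: ads that share a phone number end up in the same group. The data, function names and union-find shortcut are all hypothetical simplifications.

```python
from collections import defaultdict

def group_ads_by_phone(ads):
    """Group ad records that share any phone number, as connected components.
    'ads' maps an ad ID to the phone numbers it lists; everything here is a toy
    simplification of the richer linking Traffic Jam performs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each ad to every phone number it lists.
    for ad_id, phones in ads.items():
        for phone in phones:
            union(("ad", ad_id), ("phone", phone))

    groups = defaultdict(list)
    for ad_id in ads:
        groups[find(("ad", ad_id))].append(ad_id)
    return list(groups.values())

ads = {"a1": ["555-0101"], "a2": ["555-0101", "555-0202"], "a3": ["555-0303"]}
print(group_ads_by_phone(ads))  # [['a1', 'a2'], ['a3']]
```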

Computational Biologists Look to Nature for New Drugs

Aaron Aupperlee

Identifying molecules and predicting the identity of unknown substances play huge roles in discovering natural products that can be used in drugs to treat cancers, viral infections and other ailments. Researchers in the Computational Biology Department have recently made important strides to improve and accelerate these processes. In one study, a team developed algorithms that match the signals of a microbe's metabolites with its genomic signals and identify which ones likely correspond to a natural product that could be used in drug development. In another, the team created an algorithm that uses mass spectrometry data from molecules to predict an unknown substance's identity, telling scientists early in their research if they've stumbled on something new.

Potty Talk

Daniel Tkacik

Lorrie Cranor decided to present at this month's Conference on Privacy Engineering Practice and Respect (PEPR) from an extremely private room in her home. "Today, I'm talking to you from my third-floor bathroom," Cranor said at the start of her presentation. "I know this is an unusual place to record a presentation, but hey, this is an unusual presentation." Cranor, the director of CyLab and a professor in the departments of Engineering and Public Policy and the Institute for Software Research, has used potty talk to teach about privacy since 2014, when she conducted a study in which she asked people of all ages to sketch drawings in response to the question, "What does privacy mean to you?" The study's authors analyzed the illustrations to identify privacy themes and define the conceptual metaphors used to convey privacy. Illustrations included a wide range of concepts, including locks, doors, windows, and of course, over two dozen drawings of what might be the most quintessential private space: the bathroom. While children drew bathrooms as a refuge from siblings, adults drew themselves sitting on toilets enjoying a quiet break. You can read more about Cranor's bathroom talk and see examples of drawings from her 2014 study in this story on CyLab's website.

Hewlett Packard Enterprise Acquires AI Company Co-founded by Machine Learning Professor

Aaron Aupperlee

A machine learning technology company co-founded by Ameet Talwalkar, an assistant professor in the Machine Learning Department at Carnegie Mellon University's School of Computer Science, will join Hewlett Packard Enterprise (HPE). Determined AI, a San Francisco-based startup, builds software that trains artificial intelligence models more quickly and at scale using its open-source machine learning platform. Talwalkar is chief scientist at Determined AI, which he co-founded in 2017 with Neil Conway and Evan Sparks. "We are thrilled about the opportunity to partner with HPE to deliver co-designed software and hardware and tackle some of society's most pressing challenges," the founders wrote in a blog post announcing the acquisition. "HPE shares our vision that driving an open standard for AI software infrastructure is the fastest way for the industry to realize the potential of AI." The founders wrote that HPE will continue to expand Determined AI's training platform as an open-source project. The platform allows engineers to easily implement and train machine learning models to provide faster and more accurate insights from data in almost every industry. For example, Determined AI's platform sped up the training of a machine learning model for drug discovery from three days to three hours. "AI-powered technologies will play an increasingly critical role in turning data into readily available, actionable information to fuel this new era," said Justin Hotard, senior vice president and general manager in Hewlett Packard's High Performance Computing and Mission Critical Solutions divisions. "Determined AI's unique open-source platform allows ML engineers to build models faster and deliver business value sooner without having to worry about the underlying infrastructure." Talwalkar received his Ph.D. and master's degree in computer science at New York University's Courant Institute and joined the CMU faculty in 2018. He recently received a Faculty Early Career Development Program (CAREER) award from the National Science Foundation to help automate the design of new deep learning models for a diverse set of tasks in the physical and social sciences.

Computer History Museum Honors Raj Reddy

Aaron Aupperlee

The Computer History Museum (CHM) celebrated a pioneer in robotics, artificial intelligence, and speech recognition as it inducted Raj Reddy, Carnegie Mellon University's Moza Bint Nasser University Professor of Computer Science and Robotics, as a fellow on Thursday, June 24. The honor recognizes extraordinary individuals for a lifetime of achievement in computing and technology. "Being selected to be a fellow of the Computer History Museum seems like you have become an antique," Reddy said after the museum announced the honor. "I guess when you have been working with computers for over six decades you do become ancient!" Reddy was the founding director of CMU's Robotics Institute and a former dean of the School of Computer Science. His achievements include developing the first system capable of recognizing continuous speech, initiating CMU's autonomous vehicle program and creating the Universal Digital Library. The latter — a free, online digital library — now includes more than 1.5 million volumes and book digitization centers in China, India, Egypt and the United States. The CHM brought together Reddy's colleagues and contemporaries to pay tribute to his lifetime of work. The speakers shared short stories about Reddy and showed how his work has impacted the world. Invited speakers included CMU President Farnam Jahanian; CHM CEO Dan'l Lewin; Tata chair Natarajan Chandrasekaran; Microsoft AI Azure CTO Xuedong Huang; and AI experts and pioneers Kai-Fu Lee, Yunhe Pan and Ed Feigenbaum. Reddy also shared his personal reflections on becoming a CHM fellow. More information on the event is available on the Computer History Museum website.

Algorithm Uses Mass Spectrometry Data To Predict Identity of Molecules

Aaron Aupperlee

An algorithm designed by researchers from Carnegie Mellon University's Computational Biology Department and St. Petersburg State University in Russia could help scientists identify unknown molecules. The algorithm, called MolDiscovery, uses mass spectrometry data from molecules to predict the identity of unknown substances, telling scientists early in their research whether they have stumbled on something new or merely rediscovered something already known. This development could save time and money in the search for new naturally occurring products that could be used in medicine. "Scientists waste a lot of time isolating molecules that are already known, essentially rediscovering penicillin," said Hosein Mohimani, an assistant professor and part of the research team. "Detecting whether a molecule is known or not early on can save time and millions of dollars, and will hopefully enable pharmaceutical companies and researchers to better search for novel natural products that could result in the development of new drugs." The team's work, "MolDiscovery: Learning Mass Spectrometry Fragmentation of Small Molecules," was recently published in Nature Communications. The research team included Mohimani; CMU Ph.D. students Liu Cao and Mustafa Guler; Yi-Yuan Lee, a research assistant at CMU; and Azat Tagirdzhanov and Alexey Gurevich, both researchers at the Center for Algorithmic Biotechnology at St. Petersburg State University. Mohimani, whose research in the Metabolomics and Metagenomics Lab focuses on the search for new, naturally occurring drugs, said after a scientist detects a molecule that holds promise as a potential drug in a marine or soil sample, for example, it can take a year or longer to identify the molecule with no guarantee that the substance is new. MolDiscovery uses mass spectrometry measurements and a predictive machine learning model to identify molecules quickly and accurately. Mass spectrometry measurements are the fingerprints of molecules, but unlike fingerprints there's no enormous database to match them against. Even though hundreds of thousands of naturally occurring molecules have been discovered, scientists do not have access to their mass spectrometry data. MolDiscovery predicts the identity of a molecule from the mass spectrometry data without relying on a mass spectra database to match it against. The team hopes MolDiscovery will be a useful tool for labs in the discovery of novel natural products. MolDiscovery could work in tandem with NRPminer, a machine learning platform developed by Mohimani's lab, that helps scientists isolate natural products. Research related to NRPminer was also recently published in Nature Communications. 
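
MolDiscovery itself learns a probabilistic model of how small molecules fragment; the toy sketch below only illustrates the surrounding idea of scoring an observed spectrum against predicted fragment masses for candidate structures. The masses and the simple hit-counting score are made up for illustration.

```python
def match_score(observed_peaks, predicted_masses, tolerance=0.01):
    """Fraction of predicted fragment masses found in the observed spectrum.
    A toy stand-in for MolDiscovery's learned probabilistic scoring."""
    if not predicted_masses:
        return 0.0
    hits = sum(
        any(abs(mass - peak) <= tolerance for peak in observed_peaks)
        for mass in predicted_masses
    )
    return hits / len(predicted_masses)

def rank_candidates(observed_peaks, candidates):
    """Rank candidate molecules (name -> predicted fragment masses) against a spectrum."""
    scored = [(name, match_score(observed_peaks, masses)) for name, masses in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical masses, purely for illustration.
spectrum = [72.04, 101.07, 175.12, 243.13]
candidates = {
    "known_compound_A": [72.04, 101.07, 175.12],
    "known_compound_B": [60.02, 131.09, 200.10],
}
print(rank_candidates(spectrum, candidates))  # A scores 1.0, B scores 0.0
```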

CMU Team Will Put Expert Knowledge at the Fingertips of Frontline Medics

Aaron Aupperlee

The Defense Advanced Research Projects Agency (DARPA) has selected Carnegie Mellon University as one of five teams to develop artificial intelligence that will help field medics better use portable ultrasound devices to diagnose and treat injuries on the battlefield. DARPA's Point-of-Care Ultrasound Automated Interpretation program will challenge the teams to create an extensible AI model that can be trained to identify injuries and assist with interventions using limited data — 15 to 30 images or video clips instead of thousands. "Because we cannot train the AI on large datasets, we are going to incorporate knowledge straight from doctors," said John Galeotti, director of the Biomedical Image Guidance Laboratory in the Robotics Institute and head of the CMU team. "We are going to collect information from clinical experts and put it on top of the AI system so the model does not have to learn as many new concepts on its own for each new application." Portable point-of-care ultrasound devices could help frontline medics quickly capture images of injuries and confirm whether interventions to temporarily treat them or alleviate pain were administered properly or should be tried again. These devices could increase the speed and accuracy of the care provided on the battlefield or in other scenarios where evacuations could take time. But frontline medical personnel often lack significant training with these instruments, hindering their deployment. AI promises to bridge that gap. DARPA selected five research teams to create an AI model for the 18-month challenge: CMU, Drexel University, Netrias, Novateur Research Solutions and Kitware Inc. The CMU team, which includes Artur Dubrawski, Alumni Research Professor of Computer Science and head of the Auton Laboratory, will work to train an AI model that combines computer vision and machine learning to help medics identify what they see through the ultrasound. They'll also incorporate clinical rules and best practices from medical experts to guide and evaluate the interventions when assessing for traumatic brain injury. DARPA requires the system to diagnose a life-threatening pneumothorax condition, which prevents the lungs from inflating, and measure the diameter of the optic nerve sheath to detect high intracranial pressure. The system must also tell a medic whether a nerve block injection needle was administered in the correct place and if a breathing tube was inserted correctly. The value of the technology extends far beyond the military and battlefield, Galeotti said. It could be used with devices in ambulances to provide better treatment to roadside accident victims and be carried by paramedics, EMTs and other first responders to offer more effective aid outside hospital settings.  "This could help first responders provide better aid earlier, which would lead directly to not only saving more lives but also to alleviating pain and preventing long-lasting injuries," Galeotti said.
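
Details of the CMU system are not public; purely as a loose illustration of layering expert rules on top of a model's output, the sketch below gates a hypothetical optic-nerve-sheath measurement with an image-quality check and a diameter threshold. The numbers and rules are invented for illustration and are not clinical guidance.

```python
def assess_optic_nerve_sheath(diameter_mm, image_quality, threshold_mm=5.0):
    """Combine a (hypothetical) model measurement with simple expert rules.
    The quality gate and threshold are invented for illustration only."""
    if image_quality < 0.5:
        return "re-scan: image quality too low for a reliable measurement"
    if diameter_mm >= threshold_mm:
        return "flag: measurement above threshold, escalate for possible high intracranial pressure"
    return "measurement within the illustrative normal range"

print(assess_optic_nerve_sheath(diameter_mm=5.6, image_quality=0.8))
print(assess_optic_nerve_sheath(diameter_mm=4.2, image_quality=0.3))
```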

 Portraits of Jason Wu and Jeff Bigham.

CMU, Apple Team Improves iOS App Accessibility

Aaron Aupperlee

A team at Apple analyzed nearly 78,000 screenshots from more than 4,000 apps to improve the screen reader function on its mobile devices. The result was Screen Recognition, a tool that uses machine learning and computer vision to automatically detect and provide content readable by VoiceOver for apps that would otherwise not be accessible.

Jason Wu, a Ph.D. student in Carnegie Mellon University's Human-Computer Interaction Institute (HCII), was part of the team, whose work, "Screen Recognition: Creating Accessibility Metadata for Mobile Applications From Pixels," won a Best Paper award at the recent Association for Computing Machinery (ACM) Computer-Human Interaction (CHI) conference. His advisor, Jeffrey Bigham, an associate professor in the HCII and the Language Technologies Institute and head of the Human-Centered Machine Learning Group at Apple, was also among the paper's authors.

Apple's VoiceOver uses metadata supplied by app developers that describes user interface components. Without this metadata, VoiceOver may not be able to read a section of text aloud; describe a photo; or assist with using a button, slider or toggle. This could leave an entire app inaccessible to a user with a disability.

"We saw an opportunity to apply machine learning and computer vision techniques to automatically generate this metadata for many apps from their visual appearance, which greatly improved the VoiceOver experience for previously inaccessible apps," Wu said.

The team first collected and annotated screenshots to train the Screen Recognition model to identify user interface elements. They then worked with blind quality assurance engineers to improve the way the screen reader relayed information. Finally, the team grouped items, such as photos and their subtext; created a way for users to know if an item was clickable; and enabled elements on the screen to be read in a logical order.

Users who tested Screen Recognition found it made apps more accessible. One person told the researchers that they could play a game with Screen Recognition enabled that had been completely unusable without it.

"Guess who has a new high score? I am in AWE! This is incredible," the person wrote.

Screen Recognition was released as a feature in iOS 14 last September, and the team continues to improve on it. The model created for Screen Recognition could also be used to help developers create apps that are more accessible to begin with.

Read more about the work behind Screen Recognition and the team's hopes for the feature's future in a post on the Machine Learning Research at Apple page.
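
Apple's actual pipeline and APIs are described in the paper and blog post; the sketch below only illustrates the last steps the article mentions: taking hypothetical detected elements, ordering them top to bottom and left to right, and turning them into spoken labels with a clickable hint.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedElement:
    kind: str        # e.g. "text", "button", "image", from a hypothetical detector
    text: str        # recognized or generated description
    x: float
    y: float
    clickable: bool

def reading_order(elements: List[DetectedElement], row_height: float = 40.0) -> List[DetectedElement]:
    """Sort detected elements top to bottom, then left to right within a row."""
    return sorted(elements, key=lambda e: (round(e.y / row_height), e.x))

def to_spoken_labels(elements: List[DetectedElement]) -> List[str]:
    """Turn detections into spoken labels, a toy stand-in for real accessibility metadata."""
    return [
        f"{e.text}, button" if e.clickable else e.text
        for e in reading_order(elements)
    ]

screen = [
    DetectedElement("button", "Sign in", x=20, y=200, clickable=True),
    DetectedElement("text", "Welcome back", x=20, y=10, clickable=False),
    DetectedElement("image", "Photo of a sunset", x=20, y=90, clickable=False),
]
print(to_spoken_labels(screen))  # ['Welcome back', 'Photo of a sunset', 'Sign in, button']
```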

Societal Computing Ph.D. Works To Improve Online Experiences

Katy Rank Lev

The choice to study societal computing, an interdisciplinary program blending computer science and policy, felt logical for Aurelia Augusta, who embodies the intersection of multiple identities.  "Being Black, a Nigerian immigrant, and then coming out as trans and understanding my identity as a trans woman has really informed my methodological rigor," said Augusta, a Ph.D. candidate in the School of Computer Science's Institute for Software Research. "Intersectionality is important to me in all areas." Augusta said she sees the world through multiple lenses and keeps a central question in mind when she conducts her research: what does this mean, and who is this for? In the tech sector, she worked on trust and safety projects that ranged from implementing spam filters to developing tools to review scripts that volunteers use when phone or text banking in political campaign work. She lives the tensions and contradictions marginalized people experience online, and realized graduate research would best equip her to make positive changes. "I know what it's like to feel out of place," she said. "I find it very easy to sympathize and empathize with a lot of people." Through her research, Augusta has found that marginalized people, specifically Black women and trans people, are systematically harassed online, and people in intersectional groups are much more frequently and severely harassed than others. "The field just is not talking about this," Augusta said. Read more about Augusta and her research in this student profile.

 A portrait of Graham Neubig.

Neubig Named Finalist for Blavatnik National Awards for Young Scientists

Aaron Aupperlee

Voice-controlled virtual assistants, like Alexa and Siri, work great in English. And they work pretty well in other languages, like Japanese. But they don't understand speakers of most of the world's 7,000 languages. Graham Neubig, an associate professor in the Language Technologies Institute at Carnegie Mellon University, wants to change that by making natural language processing — the technology underpinning virtual assistants, instant translation tools and autocomplete functions in text messages and email — work for everyone. "I want to allow people to speak to computers in their own language," Neubig said. "We can make this technology an equalizing factor instead of it broadening the gap between the haves and the have-nots." For his work to democratize voice technologies, Neubig was named a finalist for the Blavatnik National Awards for Young Scientists by the Blavatnik Family Foundation and the New York Academy of Sciences. Three winners — one each in life sciences, chemistry, and physical sciences and engineering — will be announced on July 20. Each receives $250,000. The award is the world's largest unrestricted prize honoring early-career scientists and engineers. Neubig, among the 10 finalists in the physical sciences and engineering category, is CMU's first finalist for the award. Neubig has shaped the fundamental science behind applications such as machine translation, speech recognition and question answering. The open-source software stemming from his research implements techniques to process words, grammatical structure and semantics, and is widely used by research organizations and leading technology companies. His research may one day make it possible for speakers of all the world's languages to communicate directly with each other, and with computers, in their own words. "Graham has been recognized as a finalist for his important work in natural language processing. His rich understanding of language and his mastery of methods in machine learning has led to a number of extraordinary scientific contributions," said Nicholas B. Dirks, president and CEO of the New York Academy of Sciences. The 31 finalists were selected from 298 nominations by 157 U.S. research institutions across 38 states. Their discoveries include the neuroscience of addiction, the development of gene-editing technologies, designing next-generation battery storage, understanding the origins of photosynthesis, making improvements in computer vision, and pioneering new frontiers in polymer chemistry. "Each day, young scientists tirelessly seek solutions to humanity's greatest challenges," said Len Blavatnik, founder and chairman of Access Industries and head of the Blavatnik Family Foundation. "The Blavatnik Awards recognize this scientific brilliance and tenacity as we honor these 31 finalists. We congratulate them on their accomplishments and look forward to their continued, future discoveries and success." Read more about the finalists at the Blavatnik Awards website.

 A portrait of Jessica Hammer.

Jessica Hammer Named HCII Interim Associate Director

Aaron Aupperlee

Award-winning game designer Jessica Hammer will soon level up when she takes on the role of interim associate director of the Human-Computer Interaction Institute in Carnegie Mellon University's School of Computer Science. "We are lucky to do exceptional research and teaching at the HCII. Even better, we get to combine the two to shape the future of human-computer interaction," said Hammer, the HCII's Thomas and Lydia Moran Assistant Professor of Learning Science. "I want to make sure that all members of our community — from our first-year undergraduates to the most senior faculty — can contribute to this mission." Hammer, who has often learned the hard way how to develop a game to meet its vision, said that her training will help her ensure the department's practices and organizations are best suited to meet its goals. "I want to help create a department where our work is not just impactful, but also joyful," Hammer said. Hammer will take on the new role while the HCII's current director, Jodi Forlizzi, transitions to her position as the associate dean for diversity, equity and inclusion in SCS and reduces her duties as department head. Forlizzi will continue to oversee faculty hiring; the reappointment, promotion and tenure process; staff hiring and budgetary matters. Hammer will lead faculty meetings, attend faculty leadership meetings, and oversee the curriculum and awards committees. Forlizzi will supervise the HCII master's programs; Hammer the Ph.D. and undergraduate programs.   "I have enormous respect for everything Jodi has accomplished as director of the HCII. She has helped us get through globally challenging times and has empowered others in the department to learn to lead from her," Hammer said. "We will be conducting a search for a new director, and part of my role as associate director will be providing leadership continuity to the department. I'd like to help the new director quickly master the basics of the job, so they can move on to reinforcing what's already great at the HCII and figuring out where they can help us grow." Hammer holds a joint appointment with CMU's Entertainment Technology Center. She earned her B.A. at Harvard University, her M.S. from the NYU Interactive Telecommunications Program and her Ph.D. in cognitive studies at Columbia University.

Forlizzi Named Associate Dean of Diversity, Equity and Inclusion for School of Computer Science

Aaron Aupperlee

Jodi Forlizzi will lean on her background as a designer to address what she calls "a truly wicked problem" in her role as the inaugural associate dean of diversity, equity and inclusion in the School of Computer Science. "I'm trained as a designer, and many systemic design practices apply to this work," said Forlizzi, the Charles M. Geschke Director of the Human-Computer Interaction Institute since 2017 and a faculty member since 2000. "We can start by making small changes that could have great impact." Forlizzi helped initiate some of those changes in her work as the diversity, equity and inclusion lead for the school. Under her guidance, SCS has started documenting important processes including those for hiring, onboarding, offboarding, reviews, promotions, and award and committee selections. The school has also gathered and centralized outreach activities to better coordinate and update them as needed. Forlizzi has also led the charge to create a Broadening Participation in Computing Plan and expand the number of GEM fellowships to increase the participation of underrepresented groups at graduate levels in engineering and science. "We've been chipping away at it steadily, but DEI work is slow. It's incremental, and sometimes it takes a long time for any one action to have an impact," Forlizzi said. Among her first tasks will be to prioritize the work of Carol Frieze, the former director of the Women@SCS and SCS4All programs. Frieze recently retired, and Forlizzi does not want her work to fade. Additionally, Forlizzi sees a need to expand mentorship opportunities for the diverse cadre of faculty, staff and students the school hopes to attract. Forlizzi's appointment comes at a time when the spotlight is on institutions to respond to racism. Wanda Heading-Grant recently started as CMU's chief diversity officer and vice president for Diversity, Equity and Inclusion, and schools across campus are appointing associate deans of DEI. Forlizzi said the unsettled nature of the U.S. — with protests, rallies and calls for social justice — suggests that conditions are ripe for radical change. "Now is the time for action," Forlizzi said. Martial Hebert, the dean of the School of Computer Science, said the unsettled nature of the country and the calls for action are reminders of the importance of this work. He said improving diversity, equity and inclusion is a major priority for both the school and the university, and Hebert is confident Forlizzi is up to the task.

Three SCS Faculty Members Receive Amazon Research Awards

Aaron Aupperlee

Amazon selected five Carnegie Mellon University faculty members to receive funding in its latest round of Amazon Research Awards. Of the five selected, three are School of Computer Science faculty members: Katerina Fragkiadaki, Ruben Martins and Heather Miller. David Danks and Sivaraman Balakrishnan from the Dietrich College of Humanities and Social Sciences also received awards. The awards provide funding, access to Amazon public datasets, and the use of artificial intelligence and machine learning services and tools. Each award is intended to support one year of work for one to two graduate students or postdoctoral students with faculty supervision. "The 2020 Amazon Research Awards recipients represent a distinguished array of academic researchers who are pursuing research across areas such as machine learning algorithms and theory, fairness in AI, computer vision, natural language processing, edge computing, and medical research," said Bratin Saha, vice president of AWS Machine Learning Services. "We are excited by the depth and breadth of their proposals, as well as the opportunity to advance the science through strengthened connections among academic researchers, their institutions and our research teams." Fragkiadaki, an assistant professor in the Machine Learning Department, received an award to support her group's ongoing work on intelligent manipulation in conjunction with Robotics Institute faculty member Chris Atkeson. Their research will focus on helping artificial agents quickly adapt to new environments through continual learning from a library of reusable behaviors. This library will help AI agents generalize manipulation skills across different objects, their configurations, scene clutter and camera viewpoints. Martins, a systems scientist in the Computer Science Department who will start as an assistant research professor in July, will use his award to study optimization problems where the variables, like the number of students and courses at CMU, put ideal solutions out of the reach of existing algorithms. Instead, Martins will look to solve these problems efficiently but not perfectly to save money, human resources and computation time. Miller, an assistant professor in the Institute for Software Research, received an award for work with her Ph.D. candidate, Christopher Meiklejohn, on a project called Filibuster. The tool systematically identifies bugs in applications like Netflix or Audible before they can reach production and potentially bring down large web services. The CMU researchers said that access to Amazon datasets and the use of its artificial intelligence and machine learning services and tools will be a great help in furthering their research. Amazon provided 101 awards to recipients from 59 universities. Read more about the awards and their recipients on the Amazon Science site.

Shoot Better Drone Videos With a Single Word

Aaron Aupperlee

The pros make it look easy, but making a movie with a drone can be anything but.

First, it takes skill to fly the often expensive pieces of equipment smoothly and without crashing. And once you've mastered flying, there are camera angles, panning speeds, trajectories and flight paths to plan.

With all the sensors and processing power onboard a drone and embedded in its camera, there must be a better way to capture the perfect shot.

"Sometimes you just want to tell the drone to make an exciting video," said Rogerio Bonatti, a Ph.D. candidate in Carnegie Mellon University's Robotics Institute.

Bonatti was part of a team from CMU, the University of Sao Paulo and Facebook AI Research that developed a model that enables a drone to shoot a video based on a desired emotion or viewer reaction. The drone uses camera angles, speeds and flight paths to generate a video that could be exciting, calm, enjoyable or nerve-wracking — depending on what the filmmaker tells it.

The team presented their paper on the work at the 2021 International Conference on Robotics and Automation this month. The presentation can be viewed on YouTube.

"We are learning how to map semantics, like a word or emotion, to the motion of the camera," Bonatti said.

But before "Lights! Camera! Action!" the researchers needed hundreds of videos and thousands of viewers to capture data on what makes a video evoke a certain emotion or feeling. Bonatti and the team collected a few hundred diverse videos. A few thousand viewers then watched 12 pairs of videos and gave them scores based on how the videos made them feel.

The researchers then used the data to train a model that directed the drone to mimic the cinematography corresponding to a particular emotion. If fast moving, tight shots created excitement, the drone would use those elements to make an exciting video when the user requested it. The drone could also create videos that were calm, revealing, interesting, nervous and enjoyable, among other emotions and their combinations, like an interesting and calm video.

"I was surprised that this worked," said Bonatti. "We were trying to learn something incredibly subjective, and I was surprised that we obtained good quality data."

The team tested their model by creating sample videos, like a chase scene or someone dribbling a soccer ball, and asked viewers for feedback on how the videos felt. Bonatti said that not only did the team create videos intended to be exciting or calming that actually felt that way, but they also achieved different degrees of those emotions.

The team's work aims to improve the interface between people and cameras, whether that be helping amateur filmmakers with drone cinematography or providing on-screen directions on a smartphone to capture the perfect shot.

"This opens the door to many other applications, even outside filming or photography," Bonatti said. "We designed a model that maps emotions to robot behavior."
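
The team's model learns the mapping from a requested feeling to camera motion from viewer ratings; the sketch below fakes that mapping with a hard-coded lookup table and a simple blend, just to show the kind of interface the work enables. Every parameter value is invented.

```python
from typing import Dict, Optional

# Toy mapping from a requested feeling to camera-motion parameters.
# The real system learns this mapping from thousands of viewer ratings;
# every number here is made up for illustration.
SHOT_STYLES = {
    "exciting":  {"speed_m_s": 8.0, "distance_m": 3.0,  "pan_rate_deg_s": 45.0},
    "calm":      {"speed_m_s": 1.5, "distance_m": 12.0, "pan_rate_deg_s": 5.0},
    "revealing": {"speed_m_s": 3.0, "distance_m": 20.0, "pan_rate_deg_s": 10.0},
}

def plan_shot(emotion: str, blend: Optional[Dict[str, float]] = None) -> Dict[str, float]:
    """Return camera parameters for an emotion, optionally blending in a second style,
    e.g. plan_shot("calm", {"revealing": 0.5}) for an interesting-and-calm shot."""
    params = dict(SHOT_STYLES[emotion])
    for other, weight in (blend or {}).items():
        for key, value in SHOT_STYLES[other].items():
            params[key] = (1 - weight) * params[key] + weight * value
    return params

print(plan_shot("exciting"))
print(plan_shot("calm", {"revealing": 0.5}))
```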

Two Alumni With SCS Ties Earn Fulbrights

Heidi Opdyke

Two alumni with ties to the School of Computer Science were among the eight from Carnegie Mellon University recently awarded grants through the Fulbright U.S. Student Program, sponsored by the U.S. Department of State's Bureau of Educational and Cultural Affairs. Sara Adkins, who graduated in 2018 with a bachelor's degree in computer science and arts, earned a study/research award for graduate studies in sound and music computing at Queen Mary University of London in the United Kingdom. Shannon Lu, a 2020 graduate with bachelor's degrees in information systems and statistics and machine learning and a minor in human-computer interaction, chose the Taiwan English Teaching Assistantship for her Fulbright program. Adkins said CMU helped her understand what it means to be an interdisciplinary researcher and define her own niche area. "I did an integrated double major in computer science and music technology because I knew I liked both of those fields but didn't really know how I wanted to combine them," Adkins said. "Taking computer music classes in the School of Computer Science opened my eyes to the ways that computer science can be applied to music. This helped me decide that I wanted to focus my career on being a technologist who works with musicians and helps them make new electronic sounds." Her research project for the Fulbright will expand on her senior capstone project for CMU's BXA Intercollege Degree Programs. "The idea is for an algorithm to compose music in real time as a musician is performing. I explored the ways a human musician could interact with an algorithm to create a new art form," she said. Lu said that one of the reasons she chose the Taiwan program was because she wanted an immersive experience to improve her language skills and gain a cross-cultural perspective on technology. "At the time I was applying, it was around when Taiwan's initial COVID response was showing to be highly effective," Lu said. "I thought it was a unique opportunity to learn about how different technologies have the potential to affect day-to-day life in a meaningful way." Lu started cultivating her interest in community engagement at CMU through roles in Residential Education, involvement with multicultural organizations on campus and volunteer work with Outreach360 — a student organization that does an alternative spring break trip to the Dominican Republic. "Among other experiences, these taught me to be a more compassionate and empathetic person and to think about ideas from different points of view," Lu said. "I really valued the interpersonal connections I made, which is why I wanted to challenge myself in a new role and environment." The Fulbright Program aims to improve cultural diplomacy and allow scholars and their international hosts to gain an appreciation of different viewpoints and beliefs through engagement in the community. Read more about the university's eight Fulbright scholars on the CMU News website.

CMU Team Develops Machine Learning Platform That Mines Nature for New Drugs

Aaron Aupperlee

Researchers from Carnegie Mellon University's Computational Biology Department in the School of Computer Science have developed a new process that could reinvigorate the search for natural product drugs to treat cancers, viral infections and other ailments.

The machine learning algorithms developed by the Metabolomics and Metagenomics Lab match the signals of a microbe's metabolites with its genomic signals and identify which likely correspond to a natural product. Knowing that, researchers are better equipped to isolate the natural product to begin developing it for a possible drug.

"Natural products are still one of the most successful paths for drug discovery," said Bahar Behsaz, a project scientist in the lab and lead author of a paper about the process. "And we think we're able to take it further with an algorithm like ours. Our computational model is orders of magnitude faster and more sensitive."

In a single study, the team was able to scan the metabolomics and genomic data for about 200 strains of microbes. The algorithm not only identified the hundreds of natural product drugs the researchers expected to find, but it also discovered four novel natural products that appear promising for future drug development. The team's work was published recently in Nature Communications.

The paper, "Integrating Genomics and Metabolomics for Scalable Non-Ribosomal Peptide Discovery," outlines the team's development of NRPminer, an artificial intelligence tool to aid in discovering non-ribosomal peptides (NRPs). NRPs are an important type of natural product and are used to make many antibiotics, anticancer drugs and other clinically used medications. They are, however, difficult to detect and even more difficult to identify as potentially useful.

"What is unique about our approach is that our technology is very sensitive. It can detect molecules with nanograms of abundance," said Hosein Mohimani, an assistant professor and head of the lab. "We can discover things that are hidden under the grass."

Most of the antibiotic, antifungal and many antitumor medications discovered and widely used have come from natural products.

Penicillin is among the most used and well-known drugs derived from natural products. It was, in part, discovered by luck, as are many of the drugs made from natural products. But replicating that luck is difficult in the laboratory and at scale. Trying to uncover natural products is also time and labor intensive, often taking years and millions of dollars. Major pharmaceutical companies have mostly abandoned the search for new natural products in the past decades.

By applying machine learning algorithms to the study of genomics, however, researchers have created new opportunities to identify and isolate natural products that could be beneficial.

"Our hope is that we can push this forward and discover other natural drug candidates and then develop those into a phase that would be attractive to pharmaceutical companies," Mohimani said. "Bahar Behsaz and I are expanding our discovery methods to different classes of natural products at a scale suitable for commercialization."

The team is already investigating the four new natural products discovered during their study. The products are being analyzed by a team led by Helga Bode, head of the Institute for Molecular Bioscience at Goethe University in Germany, and two have been found to have potential antimalarial properties.

This study was conducted in collaboration with researchers from the University of California San Diego; Saint Petersburg University; the Max-Planck Institute; Goethe University; the University of Wisconsin, Madison; and the Jackson Laboratory.
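
NRPminer's algorithms are considerably more sophisticated, but the core matching idea the article describes, pairing genome-predicted products with observed metabolite signals, can be sketched as a simple mass-matching loop. The cluster names, masses and tolerance below are hypothetical.

```python
def match_predictions_to_spectra(predicted, observed_masses, ppm_tolerance=10.0):
    """Pair genome-predicted product masses with observed metabolite masses.
    'predicted' maps a gene-cluster name to a predicted monoisotopic mass.
    A hit hints that the cluster may produce a detectable molecule; all values
    and the tolerance are hypothetical, and NRPminer's real scoring is far richer."""
    hits = []
    for cluster, pred_mass in predicted.items():
        for obs_mass in observed_masses:
            ppm_error = abs(obs_mass - pred_mass) / pred_mass * 1e6
            if ppm_error <= ppm_tolerance:
                hits.append((cluster, obs_mass, round(ppm_error, 2)))
    return hits

predicted = {"nrps_cluster_7": 1022.57, "nrps_cluster_12": 866.45}   # made-up masses
observed = [1022.575, 745.33, 866.455]
print(match_predictions_to_spectra(predicted, observed))
```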

Amazon Names Five Graduate Research Fellows as Part of New SCS Collaboration

Aaron Aupperlee

Five Carnegie Mellon University students with ties to the School of Computer Science were selected for the inaugural Amazon Graduate Research Fellows Program. Amazon and CMU established the program to further the company's commitment to supporting promising researchers across academia. In recent years, the company has collaborated with several major universities to help amplify the work being done by master's and Ph.D. students. The five fellows are Nil-Jana Akpinar, Natalia Lombardi de Oliveira, Divyansh Kaushik, Emre Yolcu and Minji Yoon. The program supports graduate students engaged in scientific research in automated reasoning, computer vision, robotics, language technology, machine learning, operations research and data science. Fellows will also be invited to interview for a science internship at Amazon. "Each fellow was selected based on their academic excellence and potential to achieve big things in their chosen fields," said Alex Smola, Amazon Web Services vice president and distinguished scientist. "We reviewed their research proposals to make sure they're doing really great work. They are the real stars here. We're supplying some funding, but they are performing the actual research." Yoon and Yolcu are pursuing doctoral degrees in the Computer Science Department. Yoon is working on automating and democratizing graph mining. Yolcu has made contributions to the complexity of proof systems that reason about symmetries, with publications appearing in SAT and NeurIPS. Kaushik is a Ph.D. candidate in the Language Technologies Institute. He is working on developing natural language processing (NLP) systems that can perform reliably in real-world deployment settings. Akpinar and de Oliveira are pursuing doctoral degrees through a joint program in the Machine Learning Department and the Department of Statistics and Data Science. Akpinar's work focuses on bias auditing in algorithmic systems and shows how differential victim crime reporting rates can lead to biased outcomes of predictive policing algorithms. De Oliveira studies estimating generalization, known as optimism in classical statistics. Her work examines the difference between the test and training performance of a predictive algorithm. Read more about the Amazon Graduate Research Fellows Program and the recipients in this blog post on the Amazon Science site.