News 2020

August 2020

School of Computer Science Launches Educational Equity Office with $3 Million Grant

Virginia Alvino Young

There's no question that the world needs more computer scientists. But how do universities reach students early enough to spark interest in the field? How do they encourage a broad swath of students who may be underrepresented in computer science or whose lack of resources puts pursuit of the subject out of their reach? And, more urgently, how do they do this amid a global pandemic? A three-year, $3 million grant from the Hopper-Dean Foundation will allow Carnegie Mellon University to tackle those big problems through the creation of the Carnegie Mellon Computer Science Pathways program. Dedicated to diversity, inclusion and equity, the office will support students who are underrepresented, underresourced or both. "This work has been happening here for years, but now we're institutionally committing to it thanks to the generosity of Hopper-Dean," said Ashley Williams Patton, director of the new office, which focuses on giving high school students access to computer science. Patton said that in the era of COVID-19, that means transitioning away from some traditional programming and creating an emergency response plan. "We're working on the digital divide and infrastructure issues to provide more equitable access to technology," Patton said. For instance, CS Pathways recently helped provide WiFi to students learning remotely in Coraopolis, Pennsylvania, through a partnership with Meta Mesh Wireless Communities.

Amateur Drone Videos Could Aid in Natural Disaster Damage Assessment

Byron Spice

It wasn't long after Hurricane Laura hit the Gulf Coast Thursday that people began flying drones to record the damage and posting videos on social media. Those videos are a precious resource, say researchers at Carnegie Mellon University, who are working on ways to use them for rapid damage assessment. By using artificial intelligence, the researchers are developing a system that can automatically identify buildings and make an initial determination of whether they are damaged and how serious that damage might be. "Current damage assessments are mostly based on individuals detecting and documenting damage to a building," said Junwei Liang, a Ph.D. student in CMU's Language Technologies Institute (LTI). "That can be slow, expensive and labor-intensive work." Satellite imagery doesn't provide enough detail and shows damage from only a single viewpoint — vertical. Drones, however, can gather close-up information from a number of angles and viewpoints. It's possible, of course, for first responders to fly drones for damage assessment, but drones are now widely available among residents and routinely flown after natural disasters. "The number of drone videos available on social media soon after a disaster means they can be a valuable resource for doing timely damage assessments," Liang said. Xiaoyu Zhu, a master's student in the LTI's AI and Innovation program, said the initial system can overlay masks on parts of the buildings in the video that appear damaged and determine if the damage is slight or serious, or if the building has been destroyed. The team will present their findings at the Winter Conference on Applications of Computer Vision (WACV 2021), which will be held virtually next year. The researchers, led by Alexander Hauptmann, an LTI research professor, downloaded drone videos of hurricane and tornado damage in Florida, Missouri, Illinois, Texas, Alabama and North Carolina. They then annotated the videos to identify building damage and assess the severity of that damage. The resulting dataset — the first that used drone videos to assess building damage from natural disasters — was used to train the AI system, called MSNet, to recognize building damage. The dataset is available for use by other research groups via GitHub. The videos don't include GPS coordinates — yet — but the researchers are working on a geolocation scheme that would enable users to quickly identify where the damaged buildings are, Liang said. This would require training the system using images from Google Street View. MSNet could then match the location cues learned from Street View to features in the video. The National Institute of Standards and Technology sponsored this research.
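For readers curious how such a pipeline might be wired together, the sketch below shows the general idea in simplified form: building regions cropped from drone-video frames are passed to a small classifier that assigns one of several damage-severity labels. This is only an illustrative sketch, not the researchers' MSNet; the label set, model architecture and placeholder "crops" are assumptions made for the example.

    # Minimal sketch (not MSNet): classify cropped building regions by damage severity.
    # The severity labels, network and random input crops are illustrative assumptions.
    import torch
    import torch.nn as nn

    SEVERITY = ["undamaged", "slight", "serious", "destroyed"]  # assumed label set

    class SeverityClassifier(nn.Module):
        """Tiny stand-in for a per-building damage classifier."""
        def __init__(self, num_classes=len(SEVERITY)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, x):                       # x: (N, 3, H, W) building crops
            return self.head(self.features(x).flatten(1))

    model = SeverityClassifier().eval()

    # Stand-in for building crops extracted from one video frame by a detector;
    # a real pipeline would use the masks or boxes produced by a segmentation model.
    crops = torch.rand(5, 3, 128, 128)

    with torch.no_grad():
        labels = model(crops).argmax(dim=1)
    print([SEVERITY[i] for i in labels.tolist()])

A full pipeline would first run a detection or segmentation model over each frame to produce the building masks described above, then apply a classifier of this kind to each masked region.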

Choset Joins International Group Focused on AI for Social Good

Byron Spice

Howie Choset, the Kavcic-Moura Professor of Computer Science, has joined the Global Partnership on Artificial Intelligence (GPAI), an international group founded this year by the United States and 14 other nations to shape a global agenda on how best to use AI to benefit society. Choset was invited by the White House's Office of Science and Technology Policy to join the group as one of a handful of U.S. experts. The GPAI was launched by the technology ministers of the Group of Seven nations, "to shape the evolution of AI in a way that respects fundamental rights and upholds our shared values," Michael Kratsios, the U.S. chief technology officer, explained in a May editorial in the Wall Street Journal. Choset belongs to the GPAI's Working Group on Responsible AI and participates in its AI and Pandemic Response subgroup. Choset said a major meeting on AI and COVID-19 is being planned for this December in Montreal. "The central idea of the GPAI is AI for social good," Choset said. "It's about ensuring that AI is used for societal benefit rather than commercial applications." In addition to the U.S., the founding member nations include Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom and the European Union. The GPAI includes working groups on data governance, the future of work, and innovation and commercialization, as well as responsible AI.

100 Maps From CMU's EarthTime Chart Humanity's Greatest Challenges

Byron Spice

EarthTime, the innovative data visualization technology developed by Carnegie Mellon University's CREATE Lab, takes center stage in a new book addressing some of the greatest challenges facing mankind. "Terra Incognita: 100 Maps To Survive the Next 100 Years" is being published by Century, an imprint of Penguin Random House UK. Written by Ian Goldin, a professor of global development at Oxford University, and Robert Muggah, a Canadian political scientist who specializes in security, cities and new technology, the book includes satellite maps and data visualizations by the CREATE Lab team of Paul Dille, Gabriel O'Donnell and Ryan Hoffman. The book visualizes and analyzes the impact of human activity on the planet and society, examining such issues and megatrends as pandemics, global climate change, inequality, violence, migration, health, education and accelerating technologies. The publisher says it plans to support the book with a social media campaign featuring the book's visualizations as short online videos. "We hope that this book gives readers both context and optimism about the global challenges we face," said Hoffman, CREATE Lab project manager. "It helps shift the discussion away from the sensationalist narratives we see on social media and in the news. By leveraging powerful maps and the latest scientific evidence, this book can help shape solution-oriented discourse." In publicity materials released for the book, Steven Pinker, a cognitive psychologist and popular science author, described "Terra Incognita" as "a riveting account of humanity's most pressing problems and innovative solutions." Arianna Huffington, co-founder of the Huffington Post, described it as an unflinching account of our challenges that "will leave you optimistic about the future." EarthTime evolved from CREATE Lab projects for creating and exploring panoramic imagery, eventually incorporating data sets from such sources as the United Nations, NASA, the U.S. Geological Survey, the London School of Hygiene & Tropical Medicine, and even the Home Mortgage Disclosure Act. It became a powerful tool for visualizing data over time and space, and is now an annual fixture at the World Economic Forum's Davos conference. It also is used in education and, locally, for advocacy regarding housing and inequality. Dille, a senior software developer, said the team created dozens of new maps for the book to make the data as current and relevant as possible. Some of the newly visualized data was released only a few months ago. "It is a historical irony that one of the most advanced visualization systems we have developed, EarthTime, lends itself powerfully to the creation of graphic imagery for a traditional hardcover book," said Illah Nourbakhsh, a professor of robotics and director of the CREATE Lab. "The combination of visuals and narratives that constitute 'Terra Incognita' provides a never-before-seen depth of engagement, suitable for all, on how our planet is changing dramatically right under our feet."

Sounds of Action: Using Ears, Not Just Eyes, Improves Robot Perception

Byron Spice

People rarely use just one sense to understand the world, but robots usually only rely on vision and, increasingly, touch. Carnegie Mellon University researchers find that robot perception could improve markedly by adding another sense: hearing.

In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU's Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.

"A lot of preliminary work in other fields indicated that sound could be useful, but it wasn't clear how useful it would be in robotics," said Lerrel Pinto, who recently earned his Ph.D. in robotics at CMU and will join the faculty of New York University this fall. He and his colleagues found the performance rate was quite high, with robots that used sound successfully classifying objects 76 percent of the time.

The results were so encouraging, he added, that it might prove useful to equip future robots with instrumented canes, enabling them to tap on objects they want to identify.

The researchers presented their findings last month during the virtual Robotics: Science and Systems conference. Other team members included Abhinav Gupta, associate professor of robotics, and Dhiraj Gandhi, a former master's student who is now a research engineer at Facebook Artificial Intelligence Research's Pittsburgh lab.

To perform their study, the researchers created a large dataset, simultaneously recording video and audio of 60 common objects — such as toy blocks, hand tools, shoes, apples and tennis balls — as they slid or rolled around a tray and crashed into its sides. They have since released this dataset, cataloging 15,000 interactions, for use by other researchers.

The team captured these interactions using an experimental apparatus they called Tilt-Bot — a square tray attached to the arm of a Sawyer robot. It was an efficient way to build a large dataset; they could place an object in the tray and let Sawyer spend a few hours moving the tray in random directions with varying levels of tilt as cameras and microphones recorded each action. They also collected some data beyond the tray, using Sawyer to push objects on a surface.

Though the size of this dataset is unprecedented, other researchers have also studied how intelligent agents can glean information from sound. For instance, Oliver Kroemer, assistant professor of robotics, led research into using sound to estimate the amount of granular materials, such as rice or pasta, by shaking a container, or estimating the flow of those materials from a scoop.

Pinto said the usefulness of sound for robots was therefore not surprising, though he and the others were surprised at just how useful it proved to be. They found, for instance, that a robot could use what it learned about the sound of one set of objects to make predictions about the physical properties of previously unseen objects.

"I think what was really exciting was that when it failed, it would fail on things you expect it to fail on," he said. For instance, a robot couldn't use sound to tell the difference between a red block and a green block. "But if it was a different object, such as a block versus a cup, it could figure that out."

The Defense Advanced Research Projects Agency and the Office of Naval Research supported this research.
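The sketch below illustrates, in greatly simplified form, how sound can be turned into a signal a classifier can use: an audio clip of an object interaction is converted to a log-magnitude spectrogram and fed to a small network that guesses which object made the sound. It is a toy example, not the researchers' model; the object labels, synthetic audio and network architecture are assumptions made for illustration.

    # Toy sketch of sound-based object classification (not the study's actual model).
    # Labels, network and synthetic audio are placeholder assumptions.
    import torch
    import torch.nn as nn

    OBJECTS = ["toy block", "wrench", "screwdriver", "tennis ball"]  # assumed labels
    SAMPLE_RATE = 16_000

    def log_spectrogram(waveform, n_fft=512, hop=128):
        """Turn a 1-D audio clip into a log-magnitude spectrogram image."""
        spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop,
                          window=torch.hann_window(n_fft), return_complex=True)
        return torch.log1p(spec.abs()).unsqueeze(0)   # (1, freq, time)

    class SoundClassifier(nn.Module):
        def __init__(self, num_classes=len(OBJECTS)):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
            )
        def forward(self, x):
            return self.net(x)

    # Placeholder for one second of audio recorded as an object slides in the tray.
    clip = torch.randn(SAMPLE_RATE)

    model = SoundClassifier().eval()
    with torch.no_grad():
        scores = model(log_spectrogram(clip).unsqueeze(0))   # add batch dimension
    print("predicted object:", OBJECTS[scores.argmax().item()])

In practice a classifier of this kind would be trained on recordings like those in the Tilt-Bot dataset rather than applied untrained as it is here.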

SCS Researchers Top Leaderboard in DARPA AutoML Evaluations

Byron Spice

Researchers led by Saswati Ray, a senior research analyst in the School of Computer Science's Auton Lab, have once again received top scores among teams participating in the Defense Advanced Research Projects Agency's program for building automated machine learning (AutoML) systems. The Data-Driven Discovery of Models (D3M) program seeks to automate the process of building predictive models for complex systems, with the goal of speeding scientific discovery by enabling subject matter experts to build models with little or no help from data scientists. More than 10 teams, most from academic centers, participate. Four or five times each year, DARPA evaluates each team's algorithm for building this AutoML pipeline by applying it to several previously unseen problems. Over the last two years, Ray's algorithms have consistently outscored all others in these tests, even though her code is shared with the other teams after each evaluation. "She remains the reigning Queen of AutoML," said Artur Dubrawski, research professor of computer science and director of the Auton Lab. "We've gotten used to Saswati doing this, but to continue being number one in such a tight contest for so long is like winning seven or eight Stanley Cups or Super Bowls in a row." In the latest evaluation, a sub-team including the Auton Lab's Jarod Wang, Cristian Challu and Kin Gutierrez also topped the leaderboard in a component category — building collections of "primitives" for performing ML-related tasks such as data conditioning/preprocessing or classification. Their new time series forecasting algorithm pushed their collection to the top spot, Dubrawski said.
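At its core, an AutoML system of this kind searches over combinations of primitives and keeps the pipeline that performs best on held-out data. The toy sketch below shows that idea using scikit-learn; it is not the Auton Lab's D3M code, and the primitives, dataset and scoring choices are stand-ins.

    # Toy AutoML-style search: compose preprocessing and modeling "primitives" into
    # candidate pipelines and keep the one with the best cross-validated score.
    # Not the D3M system; the primitives and dataset below are stand-ins.
    from itertools import product

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler, StandardScaler

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    preprocessors = {"standard": StandardScaler(), "minmax": MinMaxScaler()}
    models = {"logreg": LogisticRegression(max_iter=1000),
              "forest": RandomForestClassifier(n_estimators=100, random_state=0)}

    best_score, best_name = -1.0, None
    for (p_name, prep), (m_name, model) in product(preprocessors.items(), models.items()):
        pipe = Pipeline([("prep", prep), ("model", model)])
        score = cross_val_score(pipe, X, y, cv=5).mean()   # held-out evaluation
        if score > best_score:
            best_score, best_name = score, f"{p_name} + {m_name}"

    print(f"best pipeline: {best_name} (accuracy {best_score:.3f})")

Systems evaluated in the D3M program search far larger spaces of primitives under tighter budgets, but the basic select-by-validation-score loop is the same kind of pattern.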

Carnegie Mellon, Pitt Researchers Collaborate To Create Portable Ventilator

Byron Spice (CMU), Allison Hydzik (Pitt)

A low-cost ventilator developed in the wake of the COVID-19 pandemic performed well in initial tests at the University of Pittsburgh School of Medicine, delivering air reliably to a simulated lung at pressures and volumes desired by clinicians.

The device, called the RoboVent, is being developed jointly by researchers at Carnegie Mellon University and Pitt.

"Our system is being built to cost less than $1,000, is easy to use and retains most of the functionality of a conventional intensive care unit ventilator," said Howie Choset, a professor in CMU's Robotics Institute. At the projected price, the RoboVent would be affordable for health systems in underserved communities and in low- and middle-income countries globally.

Unlike other low-cost ventilators, the RoboVent can also be used for different noninvasive ventilation modes that have become increasingly common in treating COVID-19 patients.

"The results of the test were pretty successful," said Dr. Jason Rose, a pulmonary and critical care physician and assistant professor of medicine and bioengineering at Pitt. "The initial study identified a few issues that will require further optimization," he added. More validation under different simulated medical conditions will be necessary before he and his fellow researchers will be ready to approach the U.S. Food and Drug Administration for emergency use authorization to use the RoboVent with patients.

"It was a good proof of concept," he added. "The RoboVent is already better than some emergency ventilators approved this spring when the COVID-19 pandemic peaked in states such as New York." Many emergency ventilators, he noted, lack some functionality and can be difficult to use.

Rose, together with Choset and Keith Cook, a CMU professor of biomedical engineering, launched the RoboVent project in March. U.S. hospitals were then bracing for an onslaught of COVID-19 patients that threatened to outstrip the supply of ventilators. Increased production of ventilators, changes in hospitalization rates and evolving patient-treatment strategies have eased the supply problem for now. But future surges in COVID-19 — combined with the onset of influenza season — could once again overwhelm health providers with demand.

Choset said the portable RoboVent includes a number of features that make it particularly attractive for coping with the COVID-19 pandemic and for developing countries looking to bolster hospital services for critically ill patients. The RoboVent includes robotic and sensor technology that can detect force as it drives an air pump, and includes air-management controls to create a closed-loop system.

The system can be adjusted remotely, so medical personnel need not enter a patient's room routinely, thus saving personal protective equipment and reducing risk to frontline healthcare workers.

Conventional ventilation requires inserting a tube into the patient's throat, but Rose said physicians increasingly treat COVID-19 patients with noninvasive alternatives — either high-flow oxygen through a nasal cannula or bilevel positive airway pressure (BiPAP) through a mask. The RoboVent can perform all those forms of ventilation, he added.

"This could really change the ventilator supply situation globally," Rose said.

The RoboVent has a modular design and can be built with parts that are readily available. In case of disruptions, alternative parts are widely accessible or can be easily made, Choset said.

"We have a manufacturing partner lined up who can possibly make 10,000 RoboVents to handle the next surge, but the machines will have uses beyond COVID-19, especially in low-income areas," he added.
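To give a sense of what a closed-loop system means here, the toy simulation below shows a feedback controller driving a pump until a simulated lung reaches a target pressure. It is purely illustrative and is not the RoboVent's control software; the lung model, gains and targets are invented for the example.

    # Toy closed-loop pressure controller, illustrating feedback control in general.
    # Not the RoboVent's software; the lung model, gains and targets are assumptions.
    TARGET_PRESSURE = 20.0   # cm H2O, assumed inspiratory target
    DT = 0.01                # control-loop period, seconds
    KP, KI = 2.0, 5.0        # proportional / integral gains (assumed)

    def simulate_breath(duration=1.0, compliance=0.02):
        """Simulate one inspiration with a simple PI controller driving pump flow."""
        pressure, volume, integral = 0.0, 0.0, 0.0
        for _ in range(int(duration / DT)):
            error = TARGET_PRESSURE - pressure      # sensor reading vs. target
            integral += error * DT
            flow = KP * error + KI * integral       # commanded pump flow (L/s)
            flow = max(flow, 0.0)                   # the pump cannot pull air back
            volume += flow * DT                     # delivered volume (L)
            pressure = volume / compliance          # single-compartment lung model
        return pressure, volume

    final_pressure, delivered_volume = simulate_breath()
    print(f"end pressure {final_pressure:.1f} cm H2O, volume {delivered_volume:.2f} L")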

SCS Students Receive Apple AI/ML Fellowships

Byron Spice

Apple has announced that two Ph.D. students in the School of Computer Science — Graham Gobieski and Xinyi Wang — have received fellowships in artificial intelligence (AI) and machine learning (ML). They're two of a dozen students who earned fellowships through Apple Scholars, a program that supports students in computer science and engineering. The scholars were selected based on their innovative research, demonstrated thought leadership, and willingness to take risks and push the envelope in AI/ML. Gobieski, a student in the Computer Science Department, works primarily on developing software and hardware to enable machine learning and sensing applications on ultra-low-power devices. He has developed a system to deploy neural networks to resource-constrained devices and is building an energy-efficient microcontroller. Wang, a student in the Language Technologies Institute, develops methods that allow artificial neural networks to intelligently and efficiently use data for machine translation. Building on these methods, she plans to design natural language processing models that not only support major languages but also serve people who speak minority languages, or those who speak language variations such as dialects and personal language styles. Each scholar will receive support for their research and academic travel for two years, internship opportunities, and a two-year mentorship with an Apple researcher.