News 2019

June 2019

Noam Brown Named MIT Technology Review 2019 Innovator Under 35

Byron Spice

Noam Brown, a Ph.D. student in the Computer Science Department who helped develop an artificial intelligence that bested professional poker players, has been named to MIT Technology Review's prestigious annual list of Innovators Under 35 in the Visionary category.

Brown worked with his advisor, Computer Science Professor Tuomas Sandholm, to create the Libratus AI. It was the first computer program to beat top professional poker players at Heads-Up, No-Limit Texas Hold'em. During the 20-day "Brains vs. Artificial Intelligence" competition in January 2017, Libratus played 120,000 hands against four poker pros, beating each player individually and collectively amassing more than $1.8 million in chips.

Unlike other games that computers have mastered, such as chess and Go, poker is an imperfect information game — one where players can't know exactly what cards their opponents have. That adds a layer of complexity to the game, necessitating bluffing and other strategies. Technology Review notes that some of Libratus' unorthodox strategies, such as dramatically upping the ante of small pots, have begun to change how pros play poker.

More significantly, many real-world situations resemble imperfect information games. Brown and Sandholm maintain that AIs similar to Libratus could provide automated solutions for real-world strategic interactions, including business negotiations, cybersecurity and traffic management.

Last year, Brown and Sandholm received the Marvin Minsky Medal from the International Joint Conference on Artificial Intelligence (IJCAI) in recognition of this outstanding achievement in AI. They also earned a best paper award at the 2017 Neural Information Processing Systems conference, the Allen Newell Award for Research Excellence and multiple supercomputing awards for their efforts. Brown, who will defend his Ph.D. thesis in August, is now a research scientist at Facebook AI Research.

"MIT Technology Review's annual Innovators Under 35 list is a chance for us to honor the outstanding people behind the breakthrough technologies of the year that have the potential to disrupt our lives," said Gideon Lichfield, the magazine's editor-in-chief. "These profiles offer a glimpse into what the face of technology looks like today as well as in the future."

Information about this year's honorees is available on the MIT Technology Review website and in the July/August print magazine, which hits newsstands worldwide on Tuesday, July 2.
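The published technique underlying poker AIs like Libratus, counterfactual regret minimization, is built from a simple procedure called regret matching: play each action in proportion to how much you regret not having played it in the past. As an illustrative sketch only (this is not Libratus, which layers many refinements on top), here is a minimal regret-matching self-play loop in Python that converges to the equilibrium strategy of rock-paper-scissors, a toy game with no hidden information:

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
WINS = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

def payoff(a, b):
    """Payoff for playing action a against b: +1 win, 0 tie, -1 loss."""
    if a == b:
        return 0
    return 1 if (a, b) in WINS else -1

def regret_matching(regrets):
    """Mix over actions in proportion to positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(regrets)] * len(regrets)  # uniform fallback
    return [p / total for p in positive]

def train(iterations=200_000):
    regrets = [0.0] * 3
    strategy_sum = [0.0] * 3
    for _ in range(iterations):
        strategy = regret_matching(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
        me = random.choices(range(3), weights=strategy)[0]
        opp = random.choices(range(3), weights=strategy)[0]  # self-play
        actual = payoff(ACTIONS[me], ACTIONS[opp])
        # Regret of each action: how much better it would have done.
        for a in range(3):
            regrets[a] += payoff(ACTIONS[a], ACTIONS[opp]) - actual
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy

print(train())  # converges toward [1/3, 1/3, 1/3], the Nash equilibrium
```

The average strategy, not the final one, is what converges toward equilibrium; that distinction carries over to the full CFR algorithms used for poker.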

Carnegie Mellon University and Argo AI Form Center for Autonomous Vehicle Research

Byron Spice

Carnegie Mellon University and Argo AI today announced a five-year, $15 million sponsored research partnership under which the self-driving technology company will fund research into advanced perception and next-generation decision-making algorithms for autonomous vehicles.

Argo AI and Carnegie Mellon will establish the Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research, which will pursue advanced research projects to help overcome hurdles to enabling self-driving vehicles to operate in a wide variety of real-world conditions, such as winter weather or construction zones.

"We are thrilled to deepen our partnership with Argo AI to shape the future of self-driving technologies," CMU President Farnam Jahanian said. "This investment allows our researchers to continue to lead at the nexus of technology and society, and to solve society's most pressing problems. Together, Argo AI and CMU will accelerate critical research in autonomous vehicles while building on the momentum of CMU's culture of innovation."

Carnegie Mellon has been developing autonomous driving technology for more than 30 years, and the university's expertise and graduates have attracted a number of self-driving car companies to Pittsburgh. Argo AI was founded in 2016 by a team of CMU alumni and experts from across the industry.

"Argo AI, Pittsburgh and the entire autonomous vehicle industry have benefited from Carnegie Mellon's leadership. It's an honor to support development of the next generation of leaders and help unlock the full potential of autonomous vehicle technology," said Bryan Salesky, CEO and co-founder of Argo AI. "CMU and now Argo AI are two big reasons why Pittsburgh will remain the center of the universe for self-driving technology."

In addition to Argo, CMU performs related research supported by General Motors, Uber and other transportation companies.

"Carnegie Mellon has always been at the leading edge of fundamental research on self-driving cars, and this new agreement with Argo AI will help us continue to expand the frontiers of these important technologies," said J. Michael McQuade, CMU's vice president of research. "With Argo's support, our faculty and particularly our students will be better prepared to tackle the next wave of technical challenges facing autonomous vehicles."

Deva Ramanan, an associate professor in the Robotics Institute who also serves as machine learning lead at Argo AI, will be the center's principal investigator. The center's research will involve faculty members and students from across CMU. The center will give students access to the fleet-scale data sets, vehicles and large-scale infrastructure that are crucial for advancing self-driving technologies and that otherwise would be difficult to obtain.

The center's research will address a number of technical topics, including smart sensor fusion, 3D scene understanding, urban scene simulation, map-based perception, imitation and reinforcement learning, behavioral prediction and robust validation of software. Research findings will be reported in open scientific literature for use by the entire field.

"This partnership between Carnegie Mellon and Argo AI, two of the major players in autonomous driving technology, is welcome news for all of Pittsburgh," said Pittsburgh Mayor William Peduto. "Self-driving cars represent a growing industry and we want to continue to develop and attract the technical talent that will drive it forward."
"I am delighted that CMU continues to collaborate with companies like Argo locally to great impact, embracing the newest technologies and continuing to add to the increased vibrancy of the region," said County Executive Rich Fitzgerald. "Their partnerships around AI, robotics, information technology, engineering and the arts are a real benefit to this community, and make Pittsburgh one of the leading regions in the country in innovation and technology."

Martial Hebert, director of the Robotics Institute, said that through the partnership, Argo is setting the standard for how to support university efforts in a time when the competition for technical talent is fierce.

"Argo is enabling the university to do what it does best by providing our students and faculty with access to data, infrastructure and real-world problems on a large scale," Hebert said. "In the process, we will train graduates who will be the top talent for Argo and the rest of the industry."

Read Deva Ramanan's blog post for Argo AI.

Online Atlas of Aquatic Insects Aids Water-Quality Monitoring

Byron Spice

A new online field guide to aquatic insects in the eastern United States, macroinvertebrates.org, promises to be an important tool for monitoring water quality and to support learning how to correctly identify freshwater insects inhabiting rivers, lakes and streams.

Carnegie Mellon University, working with Carnegie Museum of Natural History (CMNH), the Stroud Water Research Center, the University of Pittsburgh, Clemson University and a set of volunteer biomonitoring organizations, led development of the new visual atlas and digital field guide. It features highly detailed images of 150 common aquatic bugs, such as mayflies, dragonflies and beetles, along with a few mussels, clams and snails of interest. In addition to helping citizen scientists monitor water quality, the atlas is an open educational resource available for trainers, teachers and students, including those at the college level.

Marti Louw, director of the Learning Media Design Center in CMU's Human-Computer Interaction Institute, led the three-year effort, sponsored by the National Science Foundation. She and other members of the research and development team will begin rolling out the new tool to regional environmental educators and watershed organizations at a workshop this Saturday.

"One key goal is to make the task of accurately identifying aquatic insects easier for citizen scientists, which in turn will allow more people to engage and participate in water quality monitoring and stewardship of freshwater resources," Louw said. The number and types of insects living in waterways and bodies of water, and how that diversity changes over time, are vital indicators of watershed health, she noted.

Water chemistry analysis can provide specific information about water quality at any given moment, said John Wenzel, an entomologist and director of CMNH's Powdermill Nature Reserve in the Laurel Highlands. But studying insect populations is a better way to assess the health of a stream because the presence or absence of certain insects reflects water conditions throughout the year.

The new "Atlas of Common Freshwater Macroinvertebrates of Eastern North America" includes not only explorable high-resolution images of the insects, but also detailed multimedia annotations. These help the user know what and where to look for anatomical features that will enable them to recognize orders, families and even genera. It's not an exhaustive catalogue of aquatic insects, but includes the most common types found east of the Mississippi River.

Multiple views of each specimen collection were created at CMNH using a robotic camera rig. For each view, 2,000-3,000 individual photographs taken at different focal lengths and positions are digitally stitched together into a single image, enabling users to both look at the insect as a whole and seamlessly zoom in to examine tiny features. "With this modern technology, we can turn your computer into a microscope that you can drive," Wenzel said.

Though the field guide can be accessed online, the developers also made sure that the site would "fail gracefully" in areas where internet access is limited or nonexistent. Chris Bartley, principal research programmer in the Robotics Institute's CREATE Lab, led the software development for the atlas and supporting image and content management tools.
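The per-view merge described above combines focus stacking (merging photos focused at different depths) with positional stitching. As a rough illustration of the focus-stacking half only, and not the atlas' actual pipeline, here is a minimal sketch in Python using OpenCV; it assumes the photos are already aligned, and the file names are hypothetical:

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Merge aligned photos focused at different depths into one sharp image.

    For each pixel, keep the value from whichever photo is sharpest there,
    using the magnitude of the Laplacian as a per-pixel sharpness measure.
    """
    images = [cv2.imread(p) for p in paths]
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        # Smooth the sharpness map so neighboring pixels pick consistently.
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))
    best = np.argmax(np.stack(sharpness), axis=0)  # sharpest photo per pixel
    result = np.zeros_like(images[0])
    for i, img in enumerate(images):
        result[best == i] = img[best == i]
    return result

# merged = focus_stack(["view_001.jpg", "view_002.jpg", "view_003.jpg"])
```

A production rig would also register the images and blend seams, but the core idea is this per-pixel sharpness vote across the stack.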
"One of the lines we've had to walk is whether this is a tool for science professionals or for volunteers and students," Louw said, acknowledging that compromises were necessary to keep the atlas both easy to use and true to the science. "We want to honor the beauty, precision and detail of entomology to coordinate collective observation over time and place, but not let the science be off-putting for learners and first-time users." Wenzel has a somewhat different view. "Going in, I thought the entomology would be the hard part," he said. "But it turns out the critical elements are the custom software that enables you to jump back and forth between photos and facts, and the design elements that enhance learning. The individual facts and photos are nice, but by themselves they don't teach you anything."

Researchers See Around Corners To Detect Object Shapes

Byron Spice

Computer vision researchers have demonstrated they can use special light sources and sensors to see around corners or through gauzy filters, enabling them to reconstruct the shapes of unseen objects.

The researchers from Carnegie Mellon University, the University of Toronto and University College London said this technique enables them to reconstruct images in great detail, including the relief of George Washington's profile on a U.S. quarter.

Ioannis Gkioulekas, an assistant professor in Carnegie Mellon's Robotics Institute, said this is the first time researchers have been able to compute millimeter- and micrometer-scale shapes of curved objects, providing an important new component to a larger suite of non-line-of-sight (NLOS) imaging techniques now being developed by computer vision researchers.

"It is exciting to see the quality of reconstructions of hidden objects get closer to the scans we're used to seeing for objects that are in the line of sight," said Srinivasa Narasimhan, a professor in the Robotics Institute. "Thus far, we can achieve this level of detail for only relatively small areas, but this capability will complement other NLOS techniques."

This work was supported by the Defense Advanced Research Projects Agency's REVEAL program, which is developing NLOS capabilities. The research will be presented today at the 2019 Conference on Computer Vision and Pattern Recognition (CVPR 2019) in Long Beach, California, where it has received a Best Paper award.

"This paper makes significant advances in non-line-of-sight reconstruction — in essence, the ability to see around corners," the award citation says. "It is both a beautiful paper theoretically as well as inspiring. It continues to push the boundaries of what is possible in computer vision."

Most of what people see — and what cameras detect — comes from light that reflects off an object and bounces directly to the eye or the lens. But light also reflects off the objects in other directions, bouncing off walls and objects. A faint bit of this scattered light ultimately might reach the eye or the lens, but is washed out by more direct, powerful light sources. NLOS techniques try to extract information from scattered light — naturally occurring or otherwise — and produce images of scenes, objects or parts of objects not otherwise visible.

"Other NLOS researchers have already demonstrated NLOS imaging systems that can understand room-size scenes, or even extract information using only naturally occurring light," Gkioulekas said. "We're doing something that's complementary to those approaches — enabling NLOS systems to capture fine detail over a small area."

In this case, the researchers used an ultrafast laser to bounce light off a wall to illuminate a hidden object. By knowing when the laser fired pulses of light, the researchers could calculate the time the light took to reflect off the object, bounce off the wall on its return trip and reach a sensor.

"This time-of-flight technique is similar to that of the lidars often used by self-driving cars to build a 3D map of the car's surroundings," said Shumian Xin, a Ph.D. student in robotics.

Previous attempts to use these time-of-flight calculations to reconstruct an image of the object have depended on the brightness of the reflections off it.
But in this study, Gkioulekas said the researchers developed a new method based purely on the geometry of the object, which in turn enabled them to create an algorithm for measuring its curvature.

The researchers used an imaging system that is effectively a lidar capable of sensing single particles of light to test the technique on objects such as a plastic jug, a glass bowl, a plastic bowl and a ball bearing. They also combined this technique with an imaging method called optical coherence tomography to reconstruct the images of U.S. quarters.

In addition to seeing around corners, the technique proved effective in seeing through diffusing filters, such as thick paper.

The technique thus far has been demonstrated only at short distances — a meter at most. But the researchers speculate that their technique, based on geometric measurements of objects, might be combined with other, complementary approaches to improve NLOS imaging. It might also be employed in other applications, such as seismic imaging and acoustic and ultrasound imaging.

In addition to Narasimhan, Gkioulekas and Xin, the research team included Aswin Sankaranarayanan, assistant professor in CMU's Department of Electrical and Computer Engineering; Sotiris Nousias, a Ph.D. student in medical physics and bioengineering at University College London; and Kiriakos N. Kutulakos, a professor of computer science at the University of Toronto.

The researchers are part of a larger collaborative team, which includes researchers from Stanford University, the University of Wisconsin-Madison, the University of Zaragoza, Politecnico di Milano and the French-German Research Institute of Saint-Louis, that is developing a suite of complementary techniques for NLOS imaging.

In addition to DARPA, the National Science Foundation, the Office of Naval Research and the Natural Sciences and Engineering Research Council of Canada supported this research.
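The time-of-flight arithmetic Xin describes is, at its simplest, a matter of converting a pulse's round-trip time into a path length and subtracting the known legs between the laser, the wall and the sensor. Here is a minimal sketch in Python of just that bookkeeping; the paper's actual geometric reconstruction is far more involved, and the numbers below are hypothetical:

```python
C = 299_792_458.0  # speed of light in m/s

def hidden_leg_length(round_trip_s, laser_to_wall_m, wall_to_sensor_m):
    """Length of the unknown wall-to-object-to-wall leg of the light path.

    The full path is laser -> wall -> hidden object -> wall -> sensor.
    Subtracting the two known legs leaves only the hidden round trip.
    """
    total_path = C * round_trip_s
    return total_path - laser_to_wall_m - wall_to_sensor_m

# A pulse that returns after 10 nanoseconds, with 1 m known legs:
leg = hidden_leg_length(10e-9, 1.0, 1.0)
print(f"wall-to-object distance ~ {leg / 2:.3f} m")  # ~0.499 m
```

Each such measurement constrains the hidden object to lie on an ellipsoid of constant path length; combining many of them is what lets the geometry-based method recover shape.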

NASA Selects Carnegie Mellon To Develop Lunar Pit Exploration Technology

Byron Spice

NASA has approved a $2 million research initiative for Carnegie Mellon University roboticists to develop technologies necessary for robots to explore pits on the moon — the lunar equivalent of sinkholes — which might provide access to shelter and resources that could sustain future lunar missions.

William "Red" Whittaker, a professor in CMU's Robotics Institute, has proposed using one or more smart, speedy robots to explore and develop models of pits that have been discovered in orbital imagery, but never studied from the moon's surface.

"From orbit, you can't get the viewpoints or proximity to see the details that matter," Whittaker said. "That's why we need robots. Is there a way in? Are there overhangs? Could a robot rappel in? Might there be a fissure, cavern or cave opening?"

Unlike craters, which are created when asteroids or meteorites strike the moon, pits form when the surface collapses into a hollow underground void. Whittaker said pits could expose caverns that might be used by future explorers and could provide access to minerals, ice and other resources.

This latest two-year funding is from the NASA Innovative Advanced Concepts (NIAC) program, which funds visionary "high risk/high payoff" ideas. It will enable Whittaker and his colleagues, including partners from the NASA Ames Research Center and Astrobotic, to further develop the technologies and methods necessary for the mission — maturing the technologies to the point where they could be implemented. This so-called Phase III funding is the first ever awarded by the NIAC program.

"We are pursuing new technologies across our development portfolio that could help make deep space exploration more Earth-independent by utilizing resources on the moon and beyond," said Jim Reuter, associate administrator of NASA's Space Technology Mission Directorate. "These NIAC Phase III selections are a component of that forward-looking research, and we hope new insights will help us achieve more firsts in space." In addition to Whittaker's proposal, funding also was awarded to an asteroid mining concept by TransAstra Corp.

Additional funding will be necessary to complete preliminary and final design. A robotic mission that will deploy the resulting technology can't occur until perhaps 2023. Such a capability would be far beyond that recently announced by Carnegie Mellon for sending a toaster-size robot developed by Whittaker to the moon in 2021.

A pit mission envisioned by Whittaker, called Skylight, would require one or more robots to complete their observations within a week, before they are enveloped by the deep cold of a lunar night that would permanently disable them. He expects the robots would be delivered to the moon by a lander such as the Peregrine lander developed by Pittsburgh's Astrobotic.

Completing a Skylight mission quickly will require speedy robots that can travel miles and gather thousands of images. Because of the communication limitations of the rovers, they will need to return periodically to the vicinity of the lander to download images, then resume exploration. "Beyond possessing the autonomous means to explore, the rovers need to know when and how to come home," Whittaker said.

The robots also will require a capability Whittaker calls "exploration autonomy," which will allow them to make their own judgments about where they need to go to gather the information needed and about how close they dare get to the rim of the pit.
There simply isn't time for "step, stop, mother-may-I" decision making by operators on Earth, he said.

Another capability to be developed with the new funding is a modeling engine for computing on board the lander. Whittaker said the computer will need to extract information from the thousands of images collected by the rovers to build a high-fidelity, high-resolution, scientifically valid computer model of the pit, which could then be transmitted back to Earth.

NIAC is funded by NASA's Space Technology Mission Directorate, which is responsible for developing the cross-cutting, pioneering new technologies and capabilities needed by the agency to achieve its current and future missions.

NASA is charged with returning astronauts to the moon within five years. The space agency is pursuing a two-phase approach: landing astronauts on the moon by 2024 and then establishing a sustained human presence on and around the moon by 2028.

Maxion Wins DSN Test of Time Award

Byron Spice

Roy Maxion, research professor in the Computer Science and Machine Learning departments, will receive the 2019 Test of Time Award at the IEEE/International Federation for Information Processing Conference on Dependable Systems and Networks (DSN 2019), held June 24–27 in Portland, Oregon.

The award from DSN — whose primary concern is the reliability of computer systems — recognizes a 2009 research paper that used machine learning to analyze people's typing rhythms in a process known as keystroke dynamics. Keystroke dynamics can identify users based on their typing styles, and can also be used in the medical arena to study neurological disorders that affect the human motor system.

The paper, "Comparing Anomaly-Detection Algorithms for Keystroke Dynamics," was co-written with Maxion's Ph.D. student Kevin Killourhy and was the first in behavioral/keystroke biometrics to develop a reproducible evaluation method. It also measured the performance of a range of statistical and machine-learning algorithms against a common reference data set so that results could be compared soundly, revealing that very simple classification techniques could achieve top performance.

The Test of Time Award, which DSN is presenting for the first time this year, recognizes outstanding papers published in DSN at least 10 years ago that have had a sustained and important impact on the theory and/or practice of dependable systems and networks computing research.
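To give a sense of how simple those top-performing techniques can be, here is a minimal, illustrative sketch in Python of a Manhattan-distance anomaly detector, one of the simplest of the kind the paper evaluated. It is not the paper's evaluation code, and the timing vectors below are hypothetical; each entry might be a key-hold time or inter-key latency, in seconds, for a fixed phrase:

```python
import numpy as np

class ManhattanDetector:
    """Minimal keystroke-dynamics anomaly detector.

    Each sample is a fixed-length vector of typing timings for a fixed
    string. A test sample is scored by its Manhattan (L1) distance to the
    mean of the genuine user's training samples; larger scores are more
    anomalous.
    """

    def fit(self, samples):
        self.mean_ = np.mean(np.asarray(samples, dtype=float), axis=0)
        return self

    def score(self, sample):
        return float(np.sum(np.abs(np.asarray(sample, dtype=float) - self.mean_)))

# Hypothetical timing vectors for one user typing a fixed phrase:
genuine = [[0.11, 0.23, 0.09, 0.31],
           [0.10, 0.25, 0.08, 0.30],
           [0.12, 0.22, 0.10, 0.29]]
detector = ManhattanDetector().fit(genuine)
print(detector.score([0.11, 0.24, 0.09, 0.30]))  # small score: looks genuine
print(detector.score([0.30, 0.60, 0.25, 0.70]))  # large score: likely impostor
```

In practice a threshold on the score separates genuine users from impostors, and the threshold is tuned on held-out data to balance false accepts against false rejects.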

Carnegie Mellon Robot, Art Project To Land on Moon in 2021

Byron Spice (SCS) and Pam Wigley (CFA)

Carnegie Mellon University is going to the moon, sending a robotic rover and an intricately designed arts package that will land in July 2021.

The four-wheeled robot is being developed by a CMU team led by William "Red" Whittaker, professor in the Robotics Institute. Equipped with video cameras, it will be one of the first American rovers to explore the moon's surface. Although NASA landed the first humans on the moon almost 50 years ago, the U.S. space agency has never launched a robotic lunar rover.

The arts package, called MoonArk, is the creation of Lowry Burgess, space artist and professor emeritus in the CMU School of Art. The eight-ounce MoonArk has four elaborate chambers that contain hundreds of images, poems, music, nano-objects, mechanisms and earthly samples intertwined through complex narratives that blur the boundaries between worlds seen and unseen.

"Carnegie Mellon is one of the world's leaders in robotics. It's natural that our university would expand its technological footprint to another world," said J. Michael McQuade, CMU's vice president of research. "We are excited to expand our knowledge of the moon and develop lunar technology that will assist NASA in its goal of landing astronauts on the lunar surface by 2024."

Both payloads will be delivered to the moon by a Peregrine lander, built and operated by Astrobotic Inc., a CMU spinoff company in Pittsburgh. NASA last week awarded a $79.5 million contract to Astrobotic to deliver 14 scientific payloads to the lunar surface, making the July 2021 mission possible. CMU independently negotiated with Astrobotic to hitch a ride on the lander's first mission.

"CMU robots have been on land, on the sea, in the air, underwater and underground," said Whittaker, Fredkin University Research Professor and director of the Field Robotics Center. "The next frontier is the high frontier."

For more than 30 years at the Robotics Institute, Whittaker has led the creation of a series of robots that developed technologies intended for planetary rovers — robots with names such as Ambler, Nomad, Scarab and Andy. And CMU software has helped NASA's Mars rovers navigate on their own. "We're more than techies — we're scholars of the moon," Whittaker said.

The CMU robot headed to the moon is modest in size and form; Whittaker calls it "a shoebox with wheels." It weighs only a little more than four pounds, but it carries large ambitions. Whittaker sees it as the first of a new family of robots that will make planetary robotics affordable for universities and other private entities. The Soviet Union put large rovers on the moon 50 years ago, and China has a robot on the far side of the moon now, but these were massive programs affordable only by huge nations.

The concept of CMU's rover is similar to that of CubeSats. These small, inexpensive satellites revolutionized missions to Earth's orbit two decades ago, enabling even small research groups to launch experiments. Miniaturization is a big factor in affordability, Whittaker said. Whereas the Soviet robots each weighed as much as a buffalo and China's rover is the weight of a panda bear, CMU's rover weighs half as much as a house cat.

The Astrobotic landing will be on the near side of the moon in the vicinity of Lacus Mortis, or Lake of Death, which features a large pit the size of Pittsburgh's Heinz Field that is of considerable scientific interest. The rover will serve largely as a mobile video platform, providing the first ground-level imagery of the site.

Admoni, Nourbakhsh Prime for Discussion on AI

Michael Henninger

As the house lights rise in the O'Reilly Theater, actress Jill Tanner begins her performance as the titular character in the show "Marjorie Prime." She's dressed casually, and speaking to a much younger man in a full suit — a holographic representation of her late husband. He's a youthful, AI version of the man she fell in love with, designed to help her with the frustration of her decaying mind.

Between two performances of the play on Sunday, June 2, SCS faculty members Illah Nourbakhsh and Henny Admoni joined Director Marya Sea Kaminski at the Pittsburgh Public Theater for a panel discussion titled "When Robots Become Our Companions: Facts, Fictions, and Uncomfortable Truths." The panel is part of a series of outreach programming made possible by the 2016 K&L Gates Endowment for Ethics and Computational Technologies.

Nourbakhsh, a professor of robotics and director of CMU's CREATE Lab, hopes to continue these community outreach sessions, spurring local discourse over the ethical use of technology in culture and society.

"I think 'Marjorie Prime' is an interesting play on this whole issue of authenticity, the human, non-human and identity," Nourbakhsh said. "This is relevant to society because of diseases like Alzheimer's and dementia. But then you add to the mix robotics, and it's really fascinating. When and how is it okay for a system to act like a human, or to use human social cues? And is that manipulation, or care? I think the play does a good job opening up those issues."

Since 2017, Nourbakhsh has co-taught a Grand Challenge Seminar called "Artificial Intelligence and Humanity" with Jennifer Keating, assistant dean for educational initiatives in the Dietrich College of Humanities and Social Sciences.

"We teach students in their first semester about how AI is changing society, identity, agency, power, free will and surveillance, and those are exactly the issues in the play as well," Nourbakhsh said. "The students love it. They're concerned about the ethical ramifications of technology."

Admoni, an assistant professor at the Robotics Institute, works with robots that help people become more independent in their daily lives, like a robot that can help someone with impaired motor skills pick up a glass of water or take a bite of food. The ethical questions raised by "Marjorie Prime" align with her work.

"I have always been a fan of science fiction," Admoni said. "I love R2-D2. That is not a unique opinion among roboticists, because it's so straightforward. It rolls around. It exists in our environment. But it's not fancy. It's completely utilitarian. It has exactly the tool you need at exactly the time you need it and nothing extra."

The idea of Primes, holographic AIs that can recall a loved one's memories, is a faraway notion, but the questions raised are relevant today.

"Art gives us a vision of all the ways that robots and technology can destroy society, and it hasn't come to fruition," Admoni said. "But I think it's important that art push technology in that way. Art gives us a vision of what we do and don't want to do."