Byron Spice | Tuesday, March 1, 2022
Rust never sleeps, and cracking concrete doesn't get a day off either.
The Jan. 28 collapse of Pittsburgh's Fern Hollow Bridge was a dramatic reminder of that fact. The exact cause of the collapse won't be known until the National Transportation Safety Board completes a months-long study, but Carnegie Mellon University researchers have developed autonomous drone technology that someday might prevent similar catastrophes and lesser mishaps caused by deterioration.
Working with Shimizu Corp., a Tokyo-based construction and civil engineering company, the Robotics Institute built a prototype drone designed for monitoring bridges and other infrastructure. As part of that effort, researchers recently unveiled a new method that enables automated systems to more accurately detect and monitor cracks in reinforced concrete.
Sebastian Scherer, associate research professor of robotics and leader of the CMU team working with Shimizu, said the crack-detection method was one of several technologies that the university developed for the project, which concluded in February 2022. The researchers built a working prototype of a bridge-monitoring drone that employs the crack-detection system and plan to use it at the Frick Park site of the Fern Hollow Bridge to make a detailed model for the post-collapse analysis.
"The automated technology we developed for the Shimizu project is designed to prevent this type of collapse via comprehensive mapping, crack detection and structural analysis that would be too much work if it were done by hand," Scherer said. "Today, typically you only do spot checks on critical parts, since an exhaustive survey and analysis would be too slow. Automated defect-detection technology would enable inspectors to check bridges more frequently and perhaps identify problems before failures occur."
Kris Kitani, associate research professor of robotics, led the research team, whose system improves existing crack-detection algorithms by 10%.
Their system relies on a computer vision framework known as a convolutional neural network (CNN), a class of artificial neural networks commonly used to analyze visual imagery. This framework can readily identify animals and objects such as vehicles and production parts. But Jinhyung "David" Park, a senior in the Computer Science Department majoring in artificial intelligence and a member of Kitani's research team, noted that these systems have much more difficulty detecting cracks.
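As a rough illustration of what such a framework looks like, the sketch below builds a small fully convolutional network that maps an image to a per-pixel crack probability map. This is a minimal sketch, not the team's actual model: the layer sizes, the `CrackSegmenter` name and the choice of PyTorch are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CrackSegmenter(nn.Module):
    """Minimal fully convolutional network that maps an RGB image to a
    per-pixel crack probability map. Illustrative only; the architecture
    is an assumption, not the CMU/Shimizu model."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution scores each pixel as crack vs. background.
        self.classifier = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) -> probability map: (batch, 1, H, W)
        return torch.sigmoid(self.classifier(self.features(image)))

# Example: score a single 256x256 image; each output value is in [0, 1].
model = CrackSegmenter()
probs = model(torch.rand(1, 3, 256, 256))
```

Because the network is fully convolutional, the same weights produce a score for every pixel, which is what makes thin, few-pixel-wide cracks so unforgiving of coarse predictions.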
"Cracks, unlike these other objects, are very thin — often only two or three pixels wide in images," Park said. They also have extremely irregular shapes and can be obscured by marks or shadows on surfaces.
Even when existing systems detect a crack, they cannot always determine how wide it is. They often overestimate its size, making it difficult to judge whether the crack is a serious defect or whether it is expanding.
The researchers addressed the problem by using reinforcement learning, a form of artificial intelligence. In this game-like approach, the computer uses trial and error to develop tactics for solving a problem while maximizing its performance based on rewards and penalties.
This approach is commonly used to improve the performance of robotic arms, Park said. In those cases, a robotic system uses reinforcement learning to analyze the space surrounding a robotic arm to determine which motions would best enable the arm to accomplish a goal.
As adapted by the CMU researchers for crack detection, reinforcement learning allows a computer system to analyze each pixel within an image. Rather than decide how to move a robotic arm, the system calculates the probability that each pixel is part of a crack.
"That might sound time-consuming because we're considering every pixel," Park said. "But our reinforcement learning agent is convolutional, so it does these predictions asynchronously for every single pixel at once."
This technique enables the system to not only detect cracks at high resolution, but also to calculate the probability that separate cracks might be part of one larger crack.
"If you have a crack that's kind of shaped like this" Park said, extending the thumb and pinky of his right hand, "and another one here" — extending his left thumb and pinky near his right hand — "but they're disconnected in the middle, maybe the network can recognize, hey, these cracks are actually parts of the same crack."
The team presented its crack-detection work in September at the IEEE International Conference on Image Processing (ICIP 2021) in Anchorage, Alaska, where it won the Best Industry Impact Award. In addition to Park, who was the lead author, Kitani's team included Yi-Chun Chen, a Robotics Institute master's student in computer vision; and Yu-Jhe Li, a Ph.D. student in the Robotics Institute and the Electrical and Computer Engineering Department.
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu