Cornell, U.S. Navy Raise Bar for Autonomous Underwater Imaging

Sonar is typically the preferred imaging method, but acoustic waves can be hard to decipher, often requiring different angles and views before an object can be identified. Image courtesy of Cornell.

Researchers at Cornell University (Ithaca, New York, USA) and the U.S. Navy (Washington, DC, USA) are now using new algorithms to outperform state-of-the-art methods for autonomous underwater sonar imaging.

According to the researchers, testing of this process has revealed potential to significantly improve the speed and accuracy of identifying objects such as explosive mines, sunken ships, airplane black boxes, pipelines, and corrosion on ship hulls. The team’s complete research was recently published in the IEEE Journal of Oceanic Engineering.

Challenges of Sea Reconnaissance

Sea reconnaissance is filled with challenges such as murky waters, unpredictable conditions, and vast areas of subaquatic terrain, the researchers explain. Sonar is the preferred imaging method in most cases, but acoustic waves can be difficult to decipher, often requiring different angles and views of an object before it can be identified.

“If you have a lot of targets and they are distributed over a large region, it takes a long time to classify them all,” says Silvia Ferrari, research leader and a mechanical and aerospace engineering professor at Cornell. “Sometimes an autonomous underwater vehicle won’t be able to finish the mission because it has limited battery life.”

Research Collaboration

To improve the capability of these vehicles, Ferrari’s research group teamed up with the Naval Surface Warfare Center in Panama City, Florida, USA, and the Naval Undersea Warfare Center in Newport, Rhode Island, USA. The combined team created and tested a new imaging approach called informative multi-view planning, which integrates information about where objects might be located with sonar processing algorithms.

These algorithms decide the optimal views, as well as the most efficient path to obtain those views. The planning algorithms take into account the sonar sensor’s field-of-view geometry along with each target’s position and orientation, and they can make on-the-fly adjustments based on current sea conditions.
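The article does not publish the team’s algorithm, but the idea of choosing views that cover each target from multiple aspects, given the sonar’s field-of-view geometry, can be illustrated with a simple greedy sketch. Everything below (the cone-shaped field-of-view model, the `in_fov` and `plan_views` helpers, and the scoring rule) is a hypothetical illustration, not the researchers’ implementation:

```python
import math

# Illustrative sketch of greedy informative multi-view planning.
# The FOV model (max range + angular cone) and the "views needed per
# target" stand-in for aspect coverage are assumptions for this example.

def in_fov(view, target, max_range=30.0, half_angle=math.radians(45)):
    """True if the target lies inside the sonar's range/cone field of view.

    view: (x, y, heading in radians); target: (x, y).
    """
    vx, vy, heading = view
    tx, ty = target
    dx, dy = tx - vx, ty - vy
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_range:
        return False
    bearing = math.atan2(dy, dx)
    # Smallest signed angle between the sonar heading and the target bearing.
    diff = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
    return diff <= half_angle

def plan_views(candidates, targets, views_needed=2):
    """Greedily pick candidate views until every target has been imaged
    from `views_needed` distinct viewpoints (a proxy for multiple aspects).
    Each step picks the view covering the most still-unsatisfied targets.
    """
    remaining = {i: views_needed for i in range(len(targets))}
    pool = list(candidates)
    plan = []
    while any(remaining.values()) and pool:
        def gain(view):
            return sum(1 for i, t in enumerate(targets)
                       if remaining[i] and in_fov(view, t))
        best = max(pool, key=gain)
        if gain(best) == 0:
            break  # no remaining view helps; stop rather than loop forever
        for i, t in enumerate(targets):
            if remaining[i] and in_fov(best, t):
                remaining[i] -= 1
        plan.append(best)
        pool.remove(best)
    return plan
```

A full planner would also order the chosen views into an efficient vehicle path (a travelling-salesman-style step) and replan as sea conditions change; this sketch covers only the view-selection half.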

Initial Testing Results

In computer-simulated tests, the research team’s algorithms competed against state-of-the-art imaging methods to complete multi-target classification tasks. According to the team, the new algorithms completed the tasks in just half the time and improved target-identification accuracy by 93%.

In a second test, in which the targets were more randomly scattered, the new algorithms performed the imaging task more than 11% faster and with 33% greater accuracy.

“Until these algorithms, we were never able to account for the orientation and some of the more complicated automatic target variables that influence the quality of the images,” Ferrari says. “Now we can accomplish the same imaging tasks with higher accuracy, and in less time.”

As a final test, the algorithms were programmed into a REMUS-100 autonomous underwater vehicle tasked with identifying 40 targets scattered within an area of St. Andrew Bay off the coast of Florida. In this first undersea trial, the new algorithms matched the speed of the state-of-the-art algorithms while delivering equal or superior classification performance, according to the researchers.

“Demonstrating the developed algorithms using an actual vehicle in sea trials is a very exciting achievement,” says Jane Jaejeong Shin, a post-doctoral researcher who is now an assistant professor of mechanical and aerospace engineering at the University of Florida (Gainesville, Florida, USA). “This result shows the potential of these algorithms to be extended and applied more generally in similar underwater survey missions.”

The research was funded with a grant from the U.S. Office of Naval Research Code 32.

Sources: Cornell University; U.S. Office of Naval Research