Feature Article
Applying Multibeam Imaging Sonar As an AUV's Obstacle Avoidance Sensor
By Hongli Xu • Lei Gao
Obstacle avoidance is an essential autonomous capability for an AUV operating in a complex environment: it affects not only mission success but also vehicle safety. In addition, automatic detection, identification and avoidance of unknown obstacles are a mark of intelligence, and the topic has attracted considerable research interest in recent decades. Obstacle avoidance means that an AUV can autonomously sense unknown obstacles and adjust its trajectory in real time to avoid collision.
The obstacle avoidance sensor plays a central role in detecting objects ahead and is fundamental to collision avoidance; in effect, it serves as the AUV's eyes. The AUV depends on the sensor's output to determine whether obstacles block its forward motion, so the amount of information the sensor collects per unit time directly shapes the AUV's obstacle avoidance architecture and method. Originally, echosounders were widely used as obstacle avoidance sensors, but an echosounder can only measure the distance to an obstacle along a single fixed direction; covering the whole forward view would require mounting many echosounders on the AUV's nose. Recently, advances in sonar technology have enabled reliable, high-resolution multibeam imaging sonars that acquire a real-time image of the perceived environment.
Accordingly, imaging sonar has increasingly been applied to AUVs. As AUVs are more often required to operate over rugged seafloor, there is a growing need for a sensing device such as imaging sonar that can acquire high-resolution images in real time and provide adequate information for avoiding obstacles.
The sonar in our project is a P450-130S produced by Teledyne Blueview (Bothell, Washington), a 2D multibeam imaging sonar. An AUV integrated with the sonar underwent trials at Qiandao Lake in Zhejiang, China, in August 2013. Each ping image from this sonar shows the objects within a 130° horizontal and 15° vertical field of view. The sonar's working principle gives it some special characteristics. First, a shadowed region may appear behind a distant or large object, much as a lamp casting light leaves shadows behind the objects it illuminates. Second, its viewing distance becomes very short when the sonar is close to the seafloor or an obstacle. Finally, it cannot determine an object's vertical position because it is a 2D sonar without vertical resolution.
In practice, two scenarios mattered for real-time avoidance decision making. The first was whether the sonar could accurately "see" objects appearing in front of the AUV, which directly determines whether the AUV can avoid them. For example, two cages were hung from the surface beside an underwater right-angle dam. When the distance to the cages exceeded 150 meters, they could not be seen in the sonar images. At roughly 100 meters, blurred traces of the cages loomed but were not clear. Only at about 50 meters did the square outlines of the cages become visible, and even the dam edges appeared wider and clearer than before.
The other scenario occurred when no obstacles were ahead but false objects appeared in the sonar images because of environmental interference; for instance, a boat's wake or a cloud of random air bubbles could be detected by the sonar. This scenario was more frequent and more detrimental to real-time obstacle avoidance than the first, because false objects can trigger mistaken avoidance maneuvers and departures from the desired trajectory.
Detection and Feature Extraction
The real-time obstacle avoidance method based on imaging sonar consists of two steps. The first is sonar image processing, including obstacle detection and feature extraction, which converts the original image, through a series of image processing methods, into an obstacle grid map the AUV can interpret. The second is real-time avoidance decision making, which determines the avoidance behavior from the current obstacle map, the mission requirements and the vehicle kinematics.
The Blueview multibeam imaging sonar forms 768 beams in each ping. Its data update rate is closely tied to the detection range: the farther the range setting, the longer the interval between pings. Typically the interval is about 600 milliseconds at a 100-meter range. Blueview provides a software development kit to handle the interaction between the control computer and the sonar. The computer acquires a gray-scale image in real time, in which a bright point implies a stronger reflection and therefore a potential obstacle. Converting the bright areas into obstacle areas generates the obstacle map.
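The step of converting bright image areas into obstacle cells can be sketched as a simple intensity test against a dynamic threshold. This is an illustrative sketch, not the authors' implementation: the scaling `factor` and the toy 4×4 ping are assumptions for demonstration.

```python
import numpy as np

def intensity_to_obstacle_grid(image, factor=2.0):
    """Mark cells whose echo intensity exceeds a dynamic threshold.

    The threshold scales with the image's average intensity, so the
    same rule adapts to gain and range changes between pings.
    (`factor` is an illustrative tuning parameter.)
    """
    threshold = factor * image.mean()
    return (image > threshold).astype(np.uint8)  # 1 = obstacle, 0 = free

# A toy 4x4 "ping" with one bright return standing out from background noise.
ping = np.array([[10, 12, 11, 10],
                 [11, 10, 200, 12],
                 [10, 11, 12, 10],
                 [12, 10, 11, 10]], dtype=float)
grid = intensity_to_obstacle_grid(ping)  # only the bright cell is flagged
```

Tying the threshold to the per-ping mean, rather than using a fixed constant, reflects the article's point that overall image intensity varies with the environment and sonar settings.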
The image processing comprises four steps: filtering, enhancement, segmentation and binarization. Filtering suppresses noise in the original sonar image. The noise generally takes minimum or maximum values and grows stronger as the object's intensity is enhanced, so the filtering threshold must be determined dynamically from the image's average intensity. In the second step, enhancement further increases the signal-to-noise ratio of the filtered image and lays the groundwork for segmentation. For the third step, we present a fuzzy k-means clustering algorithm, in which n data objects are divided into k subclusters that are internally compact and externally dispersed. The cluster centers update according to the orientation principle, while the mean square deviation serves as the similarity measure. Finally, binarization translates the gray image into a binary image, in which an obstacle grid cell is denoted by 1 and a free cell by 0.
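The segmentation and binarization steps can be sketched with a generic fuzzy k-means (fuzzy c-means) clustering over pixel intensities. This is a minimal sketch under standard fuzzy c-means updates, not the authors' exact algorithm (their "orientation principle" center update is not detailed here); the fuzzifier `m`, iteration count, linspace initialization and toy image are all illustrative assumptions.

```python
import numpy as np

def fuzzy_kmeans(values, k=2, m=2.0, iters=50):
    """Fuzzy k-means over 1-D pixel intensities.

    Every value gets a membership in each cluster (rows of `u` sum to 1);
    centers are membership-weighted means, and memberships follow the
    standard inverse-distance update with fuzzifier m.
    """
    x = np.asarray(values, dtype=float).ravel()
    centers = np.linspace(x.min(), x.max(), k)           # spread initial centers
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # point-to-center distances
        u = 1.0 / d ** (2.0 / (m - 1.0))                  # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)                 # normalize per point
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)   # weighted center update
    return centers, u

# Segment a toy gray "image" into dark (free) and bright (obstacle) clusters,
# then binarize: 1 = obstacle cell, 0 = free cell.
img = np.array([5, 8, 6, 7, 240, 250, 245, 9], dtype=float)
centers, u = fuzzy_kmeans(img, k=2)
labels = u.argmax(axis=1)                     # hard assignment for binarization
binary = (labels == centers.argmax()).astype(int)
```

Taking the `argmax` of the memberships at the end collapses the fuzzy result into the binary obstacle/free map the article describes, with the brighter cluster mapped to 1.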
Hongli Xu received a Ph.D. in pattern recognition and intelligent systems from the University of Chinese Academy of Sciences in 2009, and then began her career at the Shenyang Institute of Automation CAS. Her research areas focus on autonomous control, path planning and cooperative control of AUVs. Lei Gao received a master's degree in engineering from Harbin Engineering University in 2007. He joined the Shenyang Institute of Automation CAS in 2012 and works as a research and development engineer focusing on sonar image processing.