
Feature Article

Cam-Trawl: A Combination Trawl And Stereo-Camera System
NMFS Designs Device for High-Resolution, Nonlethal Sampling of Marine Organisms to Complement Survey Trawls

AUTHORS:
Kresimir Williams
Fisheries Biologist
Richard Towler
Information Technology Specialist
Christopher Wilson
Midwater Assessment and Conservation Engineering Program Manager
National Marine Fisheries Service
Alaska Fisheries Science Center, Seattle, Washington

The conservation and management of fish stocks requires, among other things, animal abundance estimates. Fisheries scientists often use bottom and midwater trawls to estimate fish abundance when conducting either conventional trawl surveys or acoustic-trawl surveys. For acoustic-trawl surveys, trawl catches provide information to convert acoustic data into estimates of abundance. Both acoustic and conventional trawl surveys provide catch information on target species composition and size structure. The catch, however, represents a relatively long space-integrated and time-integrated sample of the environment due to the time required to deploy and recover trawling nets.

High-resolution information on species and size composition along the trawl path can be invaluable for both types of surveys. For acoustic-trawl surveys, the high-resolution information can improve interpretation of the acoustic data. For either trawl survey type, many smaller or fragile animals travel through the trawl and are not retained in the catch, so information about these often critically important components of the ecosystem requires sampling with other specialized equipment. Additionally, fish captured by trawls often do not survive, and thus trawl survey methods are inappropriate in some areas where fish stocks are severely depleted by overfishing or habitat loss.

To address these needs, researchers at NOAA’s National Marine Fisheries Service’s (NMFS) Alaska Fisheries Science Center (AFSC) developed the Cam-trawl. The Cam-trawl is a self-contained stereo-camera system fitted to the aft end of a trawl in place of the cod-end (i.e., capture bag). The absence of the cod-end allows animals to return unharmed to the environment after being imaged, and the image data provide much of the information that is typically collected from animals that are retained by traditional trawl methods.


System Overview
The stereo-camera system consists of two high-resolution machine vision cameras, a series of light-emitting diode (LED) strobes, a computer, a microcontroller, sensors and a battery power supply. The cameras and battery pack are housed in separate four-inch-diameter titanium pressure housings, and the computer, microcontroller and sensors are placed in a single six-inch-diameter aluminum housing. The components are mounted on an aluminum frame that attaches to the trawl, and they are interconnected using SubConn Inc. (North Pembroke, Massachusetts) wet-pluggable marine connectors, including new combined power and gigabit-Ethernet-rated connectors for the cameras.

The Cam-trawl system uses the forward portion of the midwater trawl to concentrate fish and guide them past the stereo-camera system.

To facilitate the image analysis process, the trawl mesh panel sections were removed from the net in the region where the system’s camera frame was attached to the trawl, providing a uniform background for isolating targets. Flotation was used to help maintain the camera frame in a vertical position (i.e., cameras oriented horizontally) during towing. A drogue was attached to the trawl aft of the camera system that, in the absence of the cod-end, provides drag to stabilize the camera frame.

Cameras. The system uses two JAI Inc. (San Jose, California) RM-4200GE high-resolution, high-sensitivity cameras capable of capturing four-megapixel images at up to 15 frames per second. Machine-vision camera systems are more complex than camera systems utilizing consumer video or digital still cameras, but they provide far greater control over the image acquisition process. The cameras are paired with Samyang Optics Co. Ltd. (Changwon, Korea) eight-millimeter f/3.5 stereographic-projection lenses that, when combined with a domed viewport and a +5 diopter adapter, provide an 80° field of view with little distortion. The camera housings are fixed on a 22-by-85-centimeter frame with a 50-centimeter baseline distance (distance between optical axes at the image plane) and angled toward each other by 5°. This stereo-camera arrangement is then calibrated and the cameras are not removed from the frame, preserving the fixed camera geometry for stereo-triangulation and computation of target size and range.
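The fixed, calibrated geometry is what makes range and size computation possible. As a rough illustration only (not the AFSC processing code), with rectified images and a simple pinhole model, range to a target follows from its pixel disparity; the baseline matches the system's 50-centimeter spacing, while the focal length in pixels is a hypothetical calibration value:

```python
def target_range_m(x_left_px, x_right_px, focal_px, baseline_m=0.5):
    """Range to a target from its horizontal pixel coordinates in a
    rectified stereo pair. baseline_m = 0.5 matches the Cam-trawl's
    50-centimeter baseline; focal_px is an illustrative calibration
    value. Real processing must also correct lens distortion and
    account for the cameras' 5-degree toe-in."""
    disparity_px = x_left_px - x_right_px
    if disparity_px <= 0:
        raise ValueError("target must appear further left in the left image")
    # Standard stereo relation: range = focal length * baseline / disparity.
    return focal_px * baseline_m / disparity_px
```

Once the range to two body points (e.g., snout and tail) is known, the distance between their triangulated 3D positions gives the animal's length.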

LED Strobes. Six ODS75 LED-based strobes manufactured by Smart Vision Lights (Muskegon, Michigan) provide light. A wide-input-range direct current (DC)-to-DC converter was added to the strobe assembly to allow the strobes to operate using a range of battery configurations from nine to 36 volts DC. The ODS75 strobes were not designed for underwater use, so each was placed in a 12-by-12-by-five-centimeter aluminum housing and encapsulated in epoxy. The DC-to-DC converter and back of the ODS75 strobe circuit board were encapsulated in thermally conductive epoxy to provide a path for heat dissipation, and the rest of the strobe was potted in clear epoxy. Power and trigger signal were provided via a four-pin connector.

Supporting Hardware. Unlike tape-based video cameras or digital cameras that store images internally in nonvolatile random access memory, machine vision cameras require external hardware to store images. The cameras are connected via gigabit Ethernet to a computer, which has software to control the camera’s operation and to store the image data to a solid-state hard disk drive. Heading, pitch, roll and depth information for the system are provided by an OceanServer Technologies Inc. (Fall River, Massachusetts) OS5000-USD solid-state tilt-compensated compass. Depth is monitored continuously by the microcontroller, and when it reaches the configurable turn-on depth, the image acquisition process starts by powering up the system, triggering the cameras and strobes, and logging the sensor data. Image acquisition is stopped and the system is shut down at either a prespecified turn-off depth or if the battery voltage falls below a specified threshold.
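The turn-on/turn-off behavior can be pictured as a small check run by the microcontroller on every depth reading. This is a minimal sketch only; the threshold values below are hypothetical stand-ins for the system's configurable settings:

```python
def should_acquire(depth_m, battery_v, acquiring,
                   turn_on_depth_m=50.0, turn_off_depth_m=30.0,
                   min_battery_v=10.0):
    """Return True if the system should be capturing images.
    All threshold values here are illustrative; the real system
    reads its turn-on depth, turn-off depth and battery cutoff
    from configuration."""
    if battery_v < min_battery_v:
        return False                       # battery too low: shut down
    if not acquiring:
        return depth_m >= turn_on_depth_m  # start once deep enough
    return depth_m > turn_off_depth_m      # stop when hauled back up
```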

To support lowered camera operations independent of the modified trawl, the system outputs composite video for viewing images in real time through a conducting sea cable. This output can also provide users on-deck diagnostics in case of a system malfunction.

Image Acquisition Software. JAI Inc. provides a full-featured software development kit, which simplifies writing customized software for their cameras. The core acquisition and control routines are written in C++ to maximize performance, while general system operation routines, sensor-data logging and the graphical user interface (GUI) are written in Python.

The computer runs a customized Linux operating system, which allows precise control over what software and services are launched. When deployed autonomously, a limited set of software and services run, providing the image acquisition software maximum computing resources. If the system is connected by a conducting cable to the surface, the acquisition software presents a GUI that displays real-time images and system parameters. Finally, if an operator starts the system on deck, the full desktop operating system is started, allowing the operator to copy data over the network or to initiate a remote desktop connection to alter the system configuration or perform other maintenance.

The Cam-trawl system being retrieved after deployment from NOAA Ship Bell Shimada. (Photo courtesy of Tracy Shaw from NMFS NWFSC)


System Performance
The Cam-trawl was tested over a series of experimental deployments using NOAA ships Oscar Dyson and Bell Shimada in July and August 2010. These deployments demonstrated the potential value of this sampling method while highlighting many of the tradeoffs and decisions that had to be made to optimize Cam-trawl performance, such as the placement of LED strobes, exposure duration and appropriate frame rates.

To replace traditional trawl catch processing, the image set had to capture every animal passing through the trawl with sufficient resolution for identification. Thus, targets must be tracked to reduce the probability of double-counting. Test deployments sought to find the optimal balance between good tracking conditions and sufficient target resolution for classification by changing the image resolution and frame rate and moving the camera position within the trawl.
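Frame-to-frame tracking of this kind can be sketched as a greedy nearest-neighbor association, where detections with no nearby match in the previous frame are counted as newly arrived animals. This is a simplification of what the automated software must do, with an arbitrary distance limit:

```python
import math

def link_tracks(prev_targets, curr_targets, max_jump_px=80.0):
    """Greedily match detections (x, y pixel centroids) between two
    consecutive frames. Unmatched current detections are counted as
    new animals, which is how tracking reduces double-counting.
    max_jump_px is a hypothetical limit on per-frame movement."""
    unmatched = list(range(len(curr_targets)))
    links = {}
    for i, (px, py) in enumerate(prev_targets):
        best, best_d = None, max_jump_px
        for j in unmatched:
            cx, cy = curr_targets[j]
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best        # same animal seen again
            unmatched.remove(best)
    return links, len(unmatched)   # links, count of new animals
```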

The cameras are capable of operating over a range of image resolution settings and frame acquisition rates. At the highest-resolution setting (2,048 by 2,048 pixels), the entire system operates at six frames per second, which is below the cameras' potential due to network and disk input-output limitations. At this rate, 21,600 image pairs can be collected per hour, which occupy approximately 12 gigabytes of disk space when stored in JPEG format. At the lower-resolution setting (1,024 by 1,024 pixels), the frame rate can be increased to 10 frames per second, with the storage requirements reduced to 4.5 gigabytes per hour. The latter setting improved conditions for tracking targets but provided less detail for identifying animals.
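These throughput figures follow from simple arithmetic. The check below, assuming the reported figures of roughly 12 gigabytes per hour at six frames per second, also shows the average JPEG pair size they imply:

```python
def pairs_per_hour(frames_per_s):
    """Stereo image pairs collected in one hour at a given frame rate."""
    return frames_per_s * 3600

def implied_mb_per_pair(frames_per_s, gb_per_hour):
    """Average on-disk size of one JPEG image pair implied by the
    reported hourly disk usage (1 GB = 1,024 MB here)."""
    return gb_per_hour * 1024.0 / pairs_per_hour(frames_per_s)
```

At six frames per second this reproduces the 21,600 pairs per hour stated above, with each stored pair averaging a bit over half a megabyte.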

The system withstood the rigors of deployment and retrieval in moderate weather conditions without evidence of impact or vibration damage. Attitude sensors were used to ensure the cameras were oriented horizontally and the camera frame was stable when deployed.

Cam-trawl images shown with their corresponding locations on a 38-kilohertz echogram. Marker size indicates the relative abundance of organisms in the images.

Conclusions
Underwater camera systems have often been used to conduct surveys of marine resources on autonomous underwater vehicles, remotely operated vehicles and towed systems. These approaches are often limited by low densities of fish in the environment and by animal avoidance reactions to the camera lighting. By integrating a camera system behind the forward portion of a trawl, the Cam-trawl, unlike these other devices, can concentrate marine organisms and present this captive group to the cameras.

Cam-trawl image data from the field demonstrate how the system can more precisely place marine organisms in their spatial context. Image sequences allow adjacent fish schools to be independently analyzed for fish length distribution and demonstrate that there is significant small-scale variation in fish size, which would be difficult to resolve using traditional trawl methods. Images also captured animals too small or too fragile to be adequately represented in standard trawl catch samples, such as krill, gelatinous organisms and small fish. When the Cam-trawl is used on acoustic-trawl surveys, animals in images can be associated with acoustic layers, and the complementary information from these two data types readily provides quantitative information on the animals’ spatial distribution.

The Cam-trawl stereo arrangement is superior to other measurement approaches, such as parallel lasers, because every animal in the shared view can be measured. Parallel lasers, which provide a means of estimating animal length by projecting two parallel beams of light at a known distance apart, are limited to measuring at most one animal per frame. With Cam-trawl, more animals are measured per image, which results in more abundant and accurate information on the size structure of the sampled aggregations.

The Cam-trawl can collect data for longer periods than is possible with traditional trawling, as no animals are retained by the gear. A trawl needs to be large enough to reduce avoidance of the gear, but this large size means more fish are retained. Occasionally, only a small portion of a dense fish aggregation can be sampled with a trawl to avoid too large a catch. Cam-trawl allows more extensive sampling of these high-density aggregations, as fish are not retained.

Image-based sampling generates vast amounts of data, which present challenges to data analysis. These challenges can be reduced by using automated image-processing software routines. A collaborative project has been established with computer vision experts at the University of Washington to develop algorithms for automated tracking, matching targets in stereo image pairs, target measurements and classification. This software development, expected to be available within a year or two, will greatly ease one of the most onerous steps in image-based sampling.

To improve Cam-trawl data and power management and increase the efficiency of the system, new software developments are planned for real-time processing of images to retain only those containing targets of interest. An adaptive sampling system will also be implemented to adjust the frame rate relative to the target density.
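A density-driven policy of the kind described might, for example, scale the frame rate linearly between a floor and the hardware maximum. All parameters here are hypothetical, since the planned adaptive system's actual policy is not specified:

```python
def adaptive_frame_rate(targets_in_frame, min_fps=1.0, max_fps=10.0,
                        full_rate_at=20):
    """Pick a frame rate from observed target density so that empty
    water is sampled slowly and dense aggregations quickly. The
    floor, ceiling and saturation count are illustrative values."""
    density = min(targets_in_frame / full_rate_at, 1.0)  # clamp to [0, 1]
    return min_fps + density * (max_fps - min_fps)
```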

The Cam-trawl represents a new class of sampler to study the marine environment. It will not completely remove the need for physical sampling in many situations, specifically where species identifications are particularly ambiguous, where biological specimens are required (e.g., for age determination or diet analysis) or where water clarity is poor.

With ongoing development, the Cam-trawl is poised to become a standard marine surveying tool for AFSC surveys in the near future, providing a more holistic view of the marine environment and improving the management of our marine resources.


Acknowledgments
This project was made possible through funds from NMFS’s Advanced Sampling Technology Working Group. The assistance from colleagues Scott McEntire and Craig Rose is gratefully acknowledged. The authors are also indebted to the officers and crews of NOAA ships Oscar Dyson and Bell Shimada for their support in the field. Dr. Dezhang Chu and Larry Hufnagle of Northwest Fisheries Science Center (NWFSC) generously provided Shimada vessel time for field tests and the acoustics data.



Kresimir Williams is a fisheries biologist in the Midwater Assessment and Conservation Engineering Program at the Alaska Fisheries Science Center in Seattle, Washington. He has worked in the field of fisheries acoustics for eight years, where he has developed underwater imaging systems to study fish behavior and other advanced sampling tools.

Richard Towler is an information technology specialist in the Midwater Assessment and Conservation Engineering Program at the Alaska Fisheries Science Center in Seattle, Washington. He has developed software and electronics products for fisheries acoustics and other scientific applications for more than 13 years.

Dr. Christopher Wilson leads the Midwater Assessment and Conservation Engineering Program at the Alaska Fisheries Science Center in Seattle, Washington, and has worked in the field of fisheries acoustics for nearly 20 years. He has been particularly involved in the design and execution of large-scale acoustic-trawl surveys for stock assessments.





Sea Technology is read worldwide in more than 110 countries by management, engineers, scientists and technical personnel working in industry, government and educational research institutions. Readers are involved with oceanographic research, fisheries management, offshore oil and gas exploration and production, undersea defense including antisubmarine warfare, ocean mining and commercial diving.