Aquaculture Europe 2023

September 18 - 21, 2023


OFF-THE-SHELF AI VISION TECHNOLOGY SUPPORTING SHRIMP BIOMASS ESTIMATION ACROSS THE ATLANTIC


A. Tré-Hardy, J. Prado, B. Guterres*, L. Ávila, A. Cabral, L. Cordova, D. Guimarães, A. Bezerra, V. Oliveira, S. Botelho, P. Drews-Jr, N. Duarte, M. Pias, L. Poersch


Federal University of Rio Grande



The strict control of animal biomass and physico-chemical parameters within cultivation tanks plays a fundamental role in shrimp farming. Accurate biomass estimation is crucial for efficient feeding planning and for reducing environmental impacts. Off-the-shelf AI solutions may be an important ally for biomass estimation in aquaculture applications: they can avoid time-consuming animal weighing, free up professionals for other tasks, and contribute to more sustainable and efficient production. This work presents a case study on the practical deployment and validation of an off-the-shelf AI vision solution for shrimp biomass estimation within the Atlantic Area. The main challenges and advantages are discussed from a user and technological perspective.

Materials and Methods                        

Description of the data acquisition setup

The video footage is acquired inside the cultivation tanks with a high-definition wide-angle camera attached to two lighting units, all protected by a waterproofed acrylic cage (Figure 1). The device is fully immersed in the water and the camera is positioned at a distance of 10 cm from a checkered background. The system has been designed to mitigate the impact of high turbidity on image quality as follows: (a) a short distance between the camera and the background, and (b) two lighting units to improve illumination.

Dataset Description

Sixteen video recordings were acquired at the IMTA lab, totalling 37 hours of video footage. We extracted 9032 images of shrimps, out of which 6180 images were annotated with bounding boxes and pixel coordinates for a set of anatomical landmarks, such as the shrimps' eyes and the root of their tails.
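An annotation of this kind could be represented, for instance, by the following record structure. This is only an illustrative sketch: the field and landmark names are hypothetical, not the lab's actual annotation schema.

```python
from dataclasses import dataclass, field

@dataclass
class ShrimpAnnotation:
    # Axis-aligned bounding box in pixel coordinates: (x_min, y_min, x_max, y_max).
    bbox: tuple
    # Anatomical landmarks (e.g. eyes, root of the tail) mapped to (x, y) pixels.
    landmarks: dict = field(default_factory=dict)

# Example annotation for one shrimp in one frame (values are made up).
ann = ShrimpAnnotation(
    bbox=(120, 80, 340, 210),
    landmarks={
        "eye_left": (150, 110),
        "eye_right": (158, 118),
        "tail_root": (320, 190),
    },
)
```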

Description of the data processing architecture

Our solution’s software architecture can be split into three modules. The first module consumes images and produces pixel coordinates of anatomical landmarks. It relies on Mask R-CNN, a Region-based Convolutional Neural Network described by He et al. (2017). The second module consumes the pixel coordinates of the detected landmarks and assesses the average length of the shrimps by combining the distances between several pairs of landmarks. The third module consumes the estimated average size of the shrimps to assess their average weight; it relies on a length/weight polynomial model fitted on ground-truth data. Called sequentially, these three modules give an estimate of the shrimps’ average length and weight from a stream of images. The choice of this architecture was driven by the fact that the shrimps are rarely entirely visible within the video footage: working with body segments allows us to extract size information even when the animals are partially masked. A user feedback session was conducted after the first set of technology deployments in a relevant environment.
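The second and third modules can be sketched as follows. This is a minimal illustration of the segment-based approach, not the deployed implementation: the landmark pair names, the pixel-to-centimetre scale, and the polynomial coefficients are all hypothetical placeholders.

```python
import math

# Hypothetical pairs of landmarks whose distances are summed into a length
# estimate; working per segment tolerates partially visible animals.
SEGMENT_PAIRS = [("eye_center", "carapace_end"), ("carapace_end", "tail_root")]

def segment_length(landmarks, pair):
    """Euclidean pixel distance between two detected landmarks."""
    (x1, y1), (x2, y2) = landmarks[pair[0]], landmarks[pair[1]]
    return math.hypot(x2 - x1, y2 - y1)

def estimate_length_cm(landmarks, px_per_cm):
    """Module 2: combine body-segment distances into a length estimate.

    The fixed camera-to-background distance makes a single pixel-per-cm
    scale factor plausible for this sketch.
    """
    total_px = sum(segment_length(landmarks, pair) for pair in SEGMENT_PAIRS)
    return total_px / px_per_cm

def estimate_weight_g(length_cm, coeffs=(0.05, -0.2, 0.4)):
    """Module 3: length/weight polynomial model (illustrative coefficients,
    highest degree first); the real model is fitted on ground-truth data."""
    return sum(c * length_cm ** i for i, c in enumerate(reversed(coeffs)))
```

For example, landmarks spanning 120 px at 12 px/cm yield a 10 cm length estimate, which the polynomial then maps to a weight in grams.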

Results and Discussion

The solution described above was applied to estimate the animals’ weight and size in a cultivation tank. The models yielded relative errors of 8% for the length and 17% for the weight. Experiments are ongoing to investigate how model performance is affected by environmental conditions and animal size. The user feedback meeting aimed to discuss the technology’s usefulness and weaknesses so that application needs are best considered during technology development. The main weakness is the lack of control over the number of animals passing in front of the camera, which prevents the exploitation of two-thirds of the resulting video footage. We discussed two possible approaches to circumvent this issue: (1) attaching the camera to a feeding plate usually used by the IMTA lab within the cultivation tank; (2) using a smaller, custom-made tank offering optimal recording conditions (no turbidity, controlled lighting, 100% of the tank fitting in the camera’s field of view). The second approach requires sampling shrimps from the cultivation tanks and temporarily moving them to the “data acquisition tank” to carry out the biomass estimation. The IMTA lab highlighted the usefulness of the AI vision technology for shrimp biomass estimation, even considering a more controlled data acquisition setup.
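The reported figures are relative errors of the model estimates against ground-truth measurements, which for a single quantity could be computed as below (the example values are illustrative, not the study's measurements).

```python
def relative_error(estimate, ground_truth):
    """Relative error of an estimate against a ground-truth measurement."""
    return abs(estimate - ground_truth) / ground_truth

# An estimated mean length of 9.2 cm against a measured 10.0 cm
# corresponds to a relative error of 0.08, i.e. 8%.
err = relative_error(9.2, 10.0)
```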


Our work highlights that computer vision and deep learning are useful tools for biomass estimation, even when reliable data collection from cultivation tanks is made challenging by high levels of water turbidity.


This work is part of the ASTRAL (All Atlantic Ocean Sustainable, Profitable and Resilient Aquaculture) project. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement Nº 863034.


HE, Kaiming, GKIOXARI, Georgia, DOLLÁR, Piotr, et al. Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision. 2017. p. 2961-2969.