Advanced image processing combined with the development of artificial intelligence (AI) algorithms has impacted all industrial sectors, including aquaculture (Zhao et al., 2021). The typically harsh surrounding conditions (salinity, humidity and the ubiquity of water, among others) make it difficult, but not impossible, to progressively integrate information and communication technologies (ICT) and AI in this sector. In hatcheries, the biomass of the plant is a key parameter. In many use cases, the first challenge faced by automation is the automatic identification of specimens (Fernandes et al., 2020). This is carried out by a surveillance system together with a previously constructed database from which information can be inferred, thus providing an augmented-reality-based industrial model. In this work, an autonomous, low-complexity system for real-time turbot (Scophthalmus maximus) segmentation and weight estimation is presented. To train the segmentation and weight estimation models, a database of more than 6000 turbot images has been created.
Materials and Methods
Specifically, the developed system has been assembled and verified by installing it above a weight-based belt sorting machine used in a turbot breeding plant. Operators transfer the fish one by one from a hopper to a conveyor belt, where they are weighed and distributed into weight-balanced bins. Throughput on that belt reaches up to one fish per second.
The system is equipped with a camera that records high-resolution video of the fish on the conveyor belt (Fig. 1, left). A second camera records the weight reading displayed by the belt scale; this recording is automated by applying optical character recognition (OCR) to the images. This has allowed the semiautomatic construction of a database that associates each fish image with its real weight. Once the image acquisition and processing system was available, a model was developed to detect and segment the fish within the images (Fig. 2). For this purpose, YOLOv5 (Ultralytics) was trained for detection and segmentation tasks with a database of various fish species (Roboflow). Then, the hue, saturation and value (HSV) parameters were extracted to threshold the images. The last step applies a morphological opening/closing to the pixels. With this post-processing, an accurate binary image of the shape of the fish can be extracted. Both the area and the length of the fish are then obtained, measured as the total number of white pixels and the number of pixels between head and tail, respectively. To convert these pixel measurements into physical units, the extrinsic parameters of the camera were taken into account.
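The HSV thresholding and opening/closing post-processing described above can be sketched as follows. This is a minimal NumPy-only illustration, not the production code: the HSV bounds and the structuring-element radius are hypothetical placeholders, and a real deployment would typically use an optimized library such as OpenCV.

```python
import numpy as np

def threshold_hsv(hsv, low, high):
    """Keep pixels whose (H, S, V) values fall inside [low, high] per
    channel; this mimics the thresholding applied to each YOLO crop."""
    low, high = np.asarray(low), np.asarray(high)
    return np.all((hsv >= low) & (hsv <= high), axis=-1)

def erode(mask, r=1):
    """Binary erosion with a (2r+1)x(2r+1) square structuring element."""
    p = np.pad(mask, r, constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(mask, r=1):
    """Binary dilation with the same square structuring element."""
    p = np.pad(mask, r, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def open_then_close(mask, r=1):
    """Opening (erode, then dilate) removes isolated noise pixels;
    closing (dilate, then erode) fills small holes in the fish mask."""
    opened = dilate(erode(mask, r), r)
    return erode(dilate(opened, r), r)

def area_and_length_px(mask):
    """Area = count of white pixels; length = head-to-tail pixel span,
    approximated here by the longest bounding-box side."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0, 0
    length = max(ys.max() - ys.min(), xs.max() - xs.min()) + 1
    return int(mask.sum()), int(length)
```

The pixel area and length returned here would then be scaled to physical units using the camera's extrinsic calibration, as noted above.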
Finally, six machine learning models have been compared in order to obtain the best relationship between weight and area or length. For this purpose, a total of 6095 images of fish weighing between 250 g and 2 kg were used; Fig. 3 presents a graph where each dot corresponds to one fish, its real weight, and its calculated length. Each of the six models has been validated using the K-fold technique (Gufosowa, 2019).
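The model comparison can be illustrated with a minimal K-fold cross-validation of a polynomial length-to-weight fit. This sketch uses NumPy only and synthetic data; it is a stand-in for the six-model comparison on the 6095-image dataset, not the actual evaluation code.

```python
import numpy as np

def kfold_rmse(x, y, degree=2, k=5, seed=0):
    """K-fold cross-validated RMSE of a polynomial fit y ~ poly(x).

    x : 1-D array of predictors (e.g. fish length in pixels)
    y : 1-D array of targets (e.g. real weight from the belt scale)
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    rmses = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)   # fit on k-1 folds
        pred = np.polyval(coeffs, x[test])                # predict held-out fold
        rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(rmses))
```

Running this for each candidate model degree (or swapping in any other regressor for the `polyfit`/`polyval` pair) yields the per-model RMSE values that are compared in Table 1.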
In fish detection and segmentation tasks, YOLOv5 trained with the aforementioned database was able to detect fish with a measured precision of 0.90 and a recall of 0.92. Table 1 shows the root mean square error (RMSE) obtained when inferring the weight from either length or area with six different algorithms trained on the collected images. As can be seen, the 2nd- and 3rd-degree polynomial regressions and KNN outperform the rest of the algorithms in both cases, length and area. Considering the requirement of processing images at a high rate (less than 0.1 s per frame), 2nd-degree polynomial regression is chosen as the best option. Thus, the average processing time of the whole system when inferring weight from length is 0.065 s on an NVIDIA Orin platform; using area-based models, this time is up to five times higher. Finally, it is worth mentioning that although this work focused on turbot as a use case, the technique could easily be transferred to any other flatfish species.
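Checking a candidate model against the per-frame time budget amounts to a micro-benchmark of the inference step. The quadratic coefficients below are hypothetical placeholders, not the values fitted to the turbot data.

```python
import time
import numpy as np

# Hypothetical 2nd-degree polynomial coefficients (highest degree first);
# the actual coefficients fitted to the turbot dataset are not reproduced here.
COEFFS = np.array([2e-3, 0.5, 10.0])

def weight_from_length(length_px):
    """Evaluate the 2nd-degree polynomial weight model on one length value."""
    return float(np.polyval(COEFFS, length_px))

def mean_inference_time(n=10_000):
    """Average wall-clock time per weight inference, over n repeated calls."""
    t0 = time.perf_counter()
    for _ in range(n):
        weight_from_length(412.0)
    return (time.perf_counter() - t0) / n
```

Comparing `mean_inference_time()` against the frame budget gives a quick feasibility check; in the full system, the detection/segmentation stage dominates the per-frame cost.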
S. Zhao, S. Zhang, J. Liu et al., «Application of machine learning in intelligent fish aquaculture: A review», Aquaculture, vol. 540, p. 736724, 2021, ISSN: 0044-8486. doi: https://doi.org/10.1016/j.aquaculture.2021.736724
A. F. Fernandes, E. M. Turra, É. R. de Alvarenga et al., «Deep Learning image segmentation for extraction of fish body measurements and prediction of body weight and carcass traits in Nile tilapia», Computers and Electronics in Agriculture, vol. 170, p. 105274, 2020, ISSN: 0168-1699.
Ultralytics, YOLOv5 Documentation, URL: https://docs.ultralytics.com/
Roboflow, asdqw, asdq Dataset, URL: https://universe.roboflow.com/asdqw/asdq
Gufosowa, K-fold cross validation EN.svg, 2019, URL:
This work has been funded by Ministerio de Agricultura, Pesca y Alimentación, Plan de Recuperación, Transformación y Resiliencia, NextGenerationEU, Real Decreto 685/2021. Project: Aplicación de tecnologías de visión e inteligencia artificial a la mejora del proceso productivo (Acuicultura 4.0).