Detection of underwater treasures using attention mechanism and improved YOLOv5
Lin Sen1, Liu Meiyi2, Tao Zhiyong2
(1. School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159, China; 2. School of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China)
Abstract: Underwater treasures such as sea urchins, sea cucumbers, and scallops are of great significance and value to the fishery industry, and harvesting them with underwater robots has recently become a development trend. To survey the quantity and distribution of underwater treasures and supply underwater robots with more reliable data, this study proposes an underwater treasure detection method based on an attention mechanism and an improved YOLOv5. First, K-means clustering is used to match new anchor coordinates, and an additional detection scale is added to improve detection accuracy. Second, an attention module is fused into the Darknet-53 feature extraction network to capture important features. Third, exploiting the lightweight design of the Ghost module, a Ghost-BottleNeck composed of Ghost modules is introduced to replace the BottleNeck module in YOLOv5, substantially reducing the parameters and computation of the network model. Finally, IOU_nms is replaced with DIOU_nms to improve the suppression of overlapping prediction boxes. The improved network is validated on a dataset built from real underwater environments, containing 781 images randomly split into training and test sets at a ratio of 9:1. The results show that the proposed algorithm achieves a mean average precision of 95.67%, 5.49 percentage points higher than YOLOv5. The experiments perform well, and the results can provide a more accurate and faster method for detecting and catching underwater treasures.
Keywords: machine vision; image recognition; underwater treasures; lightweight; YOLOv5; attention mechanism; multi-scale
在農(nóng)業(yè)養(yǎng)殖生產(chǎn)作業(yè)中,水下珍品(海參、海膽、扇貝等)一直深受漁民的喜愛。早期漁民們對其捕撈方式主要為撒網(wǎng)捕撈和人工抓取[1-2]兩種形式。撒網(wǎng)捕撈雖然可以有效減少漁民成本,但是長期如此會嚴重損害海底的生態(tài)環(huán)境。人工抓取雖然解決了海底環(huán)境大范圍被破壞的問題,但也給漁民帶來更高的捕撈成本,同時增加了人身安全隱患。近年來,中國海洋科技水平不斷提高,漁業(yè)、水產(chǎn)養(yǎng)殖等海洋經(jīng)濟也愈發(fā)依賴水下目標探測技術(shù)的發(fā)展。目前,有部分研究者把基于卷積神經(jīng)網(wǎng)絡(luò)的目標檢測框架應(yīng)用到漁業(yè)生產(chǎn)中,獲得了一定效果[3-5]。
Traditional object detection recognizes targets from features such as color, texture, and edges. Hsiao et al. [6] proposed a maximum-probability partial-ranking method based on sparse representation classification (SRC), called SRC-MP, for real-world fish recognition: eigenfaces and fish faces were extracted from a fish database, the scheme was optimized with two parameters, the feature-space dimension and the partial-ranking value, and the recognition rate reached 81.8%. Fabic et al. [7] detected fish in underwater video sequences using blob counting and shape analysis, applying preprocessing to blacken coral and remove the coral background, and extracting fish contours with Canny edge detection. Wu et al. [8] proposed a recognition method based on Krawtchouk moments, gray-level co-occurrence matrices, and a multi-kernel least-squares support vector machine optimized by a bee colony algorithm, which can quickly and accurately identify freshwater fish species, achieving at least 83.33% precision on five species. Cui et al. [9] studied sea cucumber image recognition with an improved Sobel operator: histogram equalization preprocesses the image, the improved Sobel operator segments the enhanced image, and repeated dilation, erosion, and small-object removal yield a binary image containing only sea cucumber targets. Ma et al. [10] proposed an improved K-means clustering algorithm that accurately identifies farmed grouper, with segmentation accuracy up to 98% when the input image is clear and interference is small. These methods recognize their targets well, but they usually detect only a single target type and require hand-designed operators for feature extraction, a considerable workload.
In recent years, more and more deep learning methods have been applied to underwater object detection. Li et al. [11] proposed a stereo-vision method for measuring dynamic fish dimensions, using binocular stereo vision to obtain 3D information and a Mask R-CNN (Mask Region-based Convolutional Neural Network) for fish detection and fine segmentation, with mean relative errors of about 4.7% and 9.2%, respectively. Dong et al. [12] proposed a method for automatic detection and size measurement of underwater sea cucumbers: on the rectified left image, a pre-trained YOLOv3 sea cucumber detection model automatically detects sea cucumbers and locates regions of interest, with a mean error of 1.65% over the 0.5-1.5 m range. Zhao et al. [13] used an optimized Retinex algorithm to improve image contrast and enhance detail, then recognized river crabs with the YOLOv3 convolutional neural network, achieving 96.65% accuracy. Guo et al. [14] proposed a real-time underwater sea cucumber recognition algorithm based on a deep residual network, using color-transform data augmentation, with a high recognition accuracy of up to 97.25%. Li et al. [15] improved the Faster R-CNN structure and proposed a lightweight R-CNN for underwater fish detection, with accuracy up to 89.95%. Xu et al. [16] proposed a recognition model based on the YOLOv3 algorithm, optimizing the network through downsampling reorganization, multi-level fusion, optimized candidate-box clustering, and a redefined loss function, achieving 75.1% accuracy for underwater target recognition. Wang et al. [17] proposed a convolutional neural network structure for underwater target detection and recognition whose accuracy, 91.7%, exceeds both traditional CNNs and traditional methods based on higher-order statistical features. Mandal et al. [18] combined Faster R-CNN with three classification networks (ZFNet, CNN-M, and VGG-16) to detect 50 species of fish and crustaceans, with a mean average precision of 82.4%. Chuang et al. [19] proposed an underwater fish recognition framework based on fully unsupervised feature learning and an error-resilient classifier, recognizing fish well in different environments with an average accuracy of 92.1%. Luo et al. [20] used artificial neural networks to remove image noise and accurately identify fish schools, with 89.6% accuracy.
The above CNN-based methods achieve high accuracy when detecting a single species, but their performance on multi-species detection is unsatisfactory, with low mean average precision. To address this problem and enable precise harvesting of underwater treasures, this paper proposes an underwater treasure detection method based on an attention mechanism and an improved YOLOv5, called CG-YOLOv5. Its main advantages are: 1) the Convolutional Block Attention Module (CBAM) is fused into the Darknet-53 feature extraction network to improve feature extraction performance; 2) K-means is used to match new anchor coordinates, and the three detection scales of YOLOv5 are extended to four, improving detection accuracy for underwater targets; 3) leveraging the lightweight Ghost module, a Ghost-BottleNeck composed of Ghost modules replaces the BottleNeck module in YOLOv5, substantially reducing network parameters and computation. The algorithm improves mean average precision for multi-species detection, as verified by extensive detection experiments on underwater treasure images, and provides a reference for subsequent modernized treasure harvesting.
YOLOv5 is fast and highly flexible. Its network structure mainly comprises the Darknet-53 backbone and the path aggregation network (PANet) [21], as shown in Fig. 1.
The backbone adopts the CSP1_X structure, which consists of two branches: the first connects X Bottleneck modules in series, and the second is a convolutional layer. The two branches are then concatenated, increasing network depth and greatly strengthening feature extraction.
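As a concrete illustration, the following is a minimal PyTorch sketch of a CSP1_X block under this description; the module names, channel widths, and SiLU activation follow common YOLOv5 implementations and are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ConvBNAct(nn.Module):
    """Conv -> BatchNorm -> activation, the basic unit used throughout YOLOv5."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Residual bottleneck: 1x1 conv, 3x3 conv, skip connection."""
    def __init__(self, c):
        super().__init__()
        self.cv1 = ConvBNAct(c, c, 1)
        self.cv2 = ConvBNAct(c, c, 3)
    def forward(self, x):
        return x + self.cv2(self.cv1(x))

class CSP1_X(nn.Module):
    """Two branches, X stacked Bottlenecks vs. a plain conv, concatenated."""
    def __init__(self, c_in, c_out, x=1):
        super().__init__()
        c_half = c_out // 2
        self.branch1 = nn.Sequential(
            ConvBNAct(c_in, c_half, 1),
            *[Bottleneck(c_half) for _ in range(x)])  # X serial Bottlenecks
        self.branch2 = ConvBNAct(c_in, c_half, 1)      # convolutional branch
        self.fuse = ConvBNAct(c_out, c_out, 1)         # fuse after concatenation
    def forward(self, x):
        return self.fuse(torch.cat((self.branch1(x), self.branch2(x)), dim=1))
```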
The PANet structure is a cyclic pyramid built from convolution, upsampling, and CSP2_X operations. It fuses different feature layers of an image for mask prediction, and the final prediction boxes are obtained through non-maximum suppression (NMS) [22].
To effectively extract the contour features of detection targets and capture what the target contains, a channel attention module is introduced, computed as follows.
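A sketch of this computation, assuming the module follows the standard CBAM channel-attention formulation of Woo et al.:

$$M_c(F)=\sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F))+\mathrm{MLP}(\mathrm{MaxPool}(F))\big)$$

where $F$ is the input feature map, AvgPool and MaxPool are global average and max pooling over the spatial dimensions, MLP is a shared two-layer perceptron, and $\sigma$ is the sigmoid function.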
To precisely locate detection targets and improve detection accuracy, a spatial attention module is introduced to focus on key features, computed as follows.
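Correspondingly, under the same CBAM assumption, the spatial attention can be sketched as:

$$M_s(F)=\sigma\big(f^{7\times7}([\mathrm{AvgPool}(F);\mathrm{MaxPool}(F)])\big)$$

where $f^{7\times7}$ denotes a 7×7 convolution and $[\,\cdot\,;\,\cdot\,]$ denotes channel-wise concatenation of the two pooled maps.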
Exploiting the lightweight design of the Ghost module, this paper proposes a Ghost-BottleNeck module, shown in Fig. 2. Similar to the basic residual block in ResNet [23], it consists of two stacked Ghost modules. The left half acts as the expansion layer of the Ghost-BottleNeck, increasing the number of channels and thus the feature dimension; the right half reduces the feature dimension to match the input, and a skip connection adds the inputs and outputs of the two halves. The clear difference between them is that the left half applies a ReLU activation to prevent vanishing gradients during back propagation, while the right half omits ReLU because the activation would make the input distributions of successive layers differ, forcing the network to keep adapting to different input distributions and slowing training.
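The following is a minimal PyTorch sketch of the Ghost module and the Ghost-BottleNeck described above, assuming Han et al.'s GhostNet design with a 1:1 split between ordinary and cheap (depthwise) features; the kernel sizes and expansion ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Generate half the output channels with an ordinary conv and the other
    half with a cheap depthwise conv, then concatenate (GhostNet design)."""
    def __init__(self, c_in, c_out, relu=True):
        super().__init__()
        c_primary = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, 1, bias=False),
            nn.BatchNorm2d(c_primary),
            nn.ReLU(inplace=True) if relu else nn.Identity())
        # cheap linear operation: 3x3 depthwise conv on the primary features
        self.cheap = nn.Sequential(
            nn.Conv2d(c_primary, c_primary, 3, padding=1,
                      groups=c_primary, bias=False),
            nn.BatchNorm2d(c_primary),
            nn.ReLU(inplace=True) if relu else nn.Identity())
    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class GhostBottleneck(nn.Module):
    """Two stacked Ghost modules with a skip connection: the first expands the
    feature dimension (with ReLU), the second restores it (without ReLU)."""
    def __init__(self, c, expansion=2):
        super().__init__()
        self.expand = GhostModule(c, c * expansion, relu=True)   # expansion layer
        self.reduce = GhostModule(c * expansion, c, relu=False)  # match input dim
    def forward(self, x):
        return x + self.reduce(self.expand(x))
```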
Although YOLOv5 can be applied to underwater object detection, much target-region information is easily lost in complex underwater environments, which hinders the detection of underwater treasures. To improve detection accuracy, the CG-YOLOv5 underwater object detection algorithm is proposed; its parameters are listed in Table 1 and the model is shown in Fig. 3. The main improvements to the model are: fusing the CBAM attention mechanism and Ghost-BottleNeck with DarkNet-53 to form a new feature extraction network, CGDarkNet-53; and using K-means to match new anchor coordinates, extending the YOLOv5 detection scales to four to improve underwater detection accuracy. CGDarkNet-53 serves as the backbone of CG-YOLOv5. Compared with YOLOv5, CG-YOLOv5 has only one type of CSP_X structure, which integrates gradient changes completely into the feature map, strengthening feature fusion and thereby preserving accuracy. CG-YOLOv5 also adds a new detection scale to improve detection accuracy, namely Yolo head1, output by network layer 15.
To suppress useless features in the network, CGDarkNet-53 introduces CBAM, deepening the network and enhancing feature extraction. CBAM filters and weights feature vectors by combining channel attention and spatial attention: channel attention focuses on describing what the detection target is, while spatial attention focuses on describing where it is. Their combination highlights important feature information and weakens general feature information, enabling more precise localization and recognition of underwater treasures. Ghost-BottleNeck uses simpler linear operations, preserving accuracy while remaining lightweight; combined with CBAM it forms the CGCSP_X unit, as shown in Fig. 3. A Leaky ReLU function is added on the right side of Ghost-BottleNeck so that negative inputs no longer have zero gradient, solving the problem of some neurons not learning and allowing image features to be learned more fully.
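A minimal PyTorch sketch of the CBAM module that the CGCSP_X unit embeds, combining the channel and spatial attention defined earlier; the reduction ratio and 7×7 kernel follow the common CBAM defaults and are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied in series."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # shared MLP for channel attention, implemented with 1x1 convs
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        # 7x7 conv for spatial attention over [avg; max] channel maps
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # channel attention M_c: sigmoid(MLP(AvgPool) + MLP(MaxPool))
        attn_c = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                               self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * attn_c
        # spatial attention M_s: sigmoid(conv7x7([mean; max] over channels))
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```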
The YOLOv5 algorithm uses K-means to cluster the bounding boxes in the dataset to obtain suitable anchors, and the choice of anchor boxes directly affects detection performance. To further improve detection accuracy, the labels of the underwater dataset are re-clustered to obtain new anchors. CG-YOLOv5 extends the three detection scales of YOLOv5 to four. In multi-scale detection, the first detection scale, Yolo head1, is obtained after layer 15. Upsampling Yolo head1 and fusing the result with layer 5 gives the second scale, Yolo head2. Convolving Yolo head2 and fusing it with Yolo head1 gives the third scale, Yolo head3. Finally, convolving Yolo head3 and fusing it with layer 11 gives the fourth scale, Yolo head4. CG-YOLOv5 therefore detects targets at different scales better. A sketch of the anchor re-clustering is given below.
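The sketch assumes the common YOLO practice of clustering label widths and heights with a 1 − IoU distance; the helper names and the choice of 12 anchors (3 per detection scale, 4 scales) are assumptions.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, treating all boxes as centered at the origin."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=12, iters=100, seed=0):
    """K-means on label (w, h) with 1 - IoU distance; 12 anchors cover
    four detection scales at 3 anchors per scale."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest anchor
        for i in range(k):
            if np.any(assign == i):
                anchors[i] = np.median(boxes[assign == i], axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area

# usage: boxes = float array of (w, h) from the relabeled underwater dataset
```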
Table 1 Network parameters
The generalized IoU metric [24] referenced here is

$$\mathrm{GIoU}=\mathrm{IoU}-\frac{\lvert C\setminus(A\cup B)\rvert}{\lvert C\rvert}$$

where IoU denotes the overlap ratio of the predicted and ground-truth boxes, $A$ denotes the predicted box, $B$ denotes the ground-truth box, and $C$ denotes the minimum enclosing box of $A$ and $B$.
In the post-processing stage of object detection, NMS is usually required to filter the many candidate boxes. Exploiting the positional information of bounding-box centers, YOLOv4 adopts DIOU_nms on the basis of CIOU_Loss [25]; for overlapping targets, DIOU_nms outperforms traditional NMS. This paper adopts weighted NMS and, under the same parameters, replaces IOU_nms with DIOU_nms. For partially occluded and overlapping targets, besides the IoU of the overlap region of the predicted boxes, the distance between the center points of two predicted boxes is also considered, effectively improving detection accuracy.
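A sketch of the DIoU-NMS criterion described above: a candidate box is suppressed only when its IoU with a kept box, penalized by the normalized squared distance between their centers, exceeds the threshold. The function names and the 0.45 threshold are illustrative assumptions.

```python
import torch

def diou(kept, others):
    """DIoU between one kept box and candidate boxes, format (x1, y1, x2, y2)."""
    x1 = torch.maximum(kept[0], others[:, 0]); y1 = torch.maximum(kept[1], others[:, 1])
    x2 = torch.minimum(kept[2], others[:, 2]); y2 = torch.minimum(kept[3], others[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_k = (kept[2] - kept[0]) * (kept[3] - kept[1])
    area_o = (others[:, 2] - others[:, 0]) * (others[:, 3] - others[:, 1])
    iou = inter / (area_k + area_o - inter)
    # squared distance between box centers
    rho2 = ((kept[0] + kept[2]) - (others[:, 0] + others[:, 2])) ** 2 / 4 + \
           ((kept[1] + kept[3]) - (others[:, 1] + others[:, 3])) ** 2 / 4
    # squared diagonal of the minimum enclosing box
    cx1 = torch.minimum(kept[0], others[:, 0]); cy1 = torch.minimum(kept[1], others[:, 1])
    cx2 = torch.maximum(kept[2], others[:, 2]); cy2 = torch.maximum(kept[3], others[:, 3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    return iou - rho2 / c2  # center-distance penalty spares distant boxes

def diou_nms(boxes, scores, thresh=0.45):
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(i.item())
        if order.numel() == 1:
            break
        rest = order[1:]
        order = rest[diou(boxes[i], boxes[rest]) <= thresh]  # suppress high-DIoU boxes
    return keep
```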
The experiments were run on a deep learning framework built on Ubuntu 18.04, Python 3.7.7, and PyTorch 1.6.0; the relevant hardware configuration and model parameters are listed in Table 2. CG-YOLOv5 scales input images adaptively: images of 640×640 pixels are taken as input, yielding proportionally scaled feature maps as detection scales. Repeated trials showed that a learning rate of 0.01 reaches local convergence quickly and that a batch size of 16 gives faster training.
Table 2 Experimental hardware configuration and model parameters
The experiments use the Zhanjiang underwater robot competition dataset (http://uodac.pcl.ac.cn/), which contains three classes: sea cucumber, sea urchin, and scallop. The images were captured frame by frame from video recorded by underwater robots in real seabed environments, at a resolution of 1 920×1 080 pixels. Images with poor shooting angles or without underwater treasures were removed manually, leaving 781 images, randomly split into training and test sets at a ratio of 9:1. After sampling, the label information, class proportions, and size distributions were re-checked to ensure that the training and validation sets have similar distributions, achieving the purpose of the split. To meet the experimental requirements, the dataset was first converted into VOC2007 format and then annotated with the LabelImg tool, with the classes manually set to Sea cucumber, Sea urchin, and Scallop. In addition, to demonstrate robustness, i.e., the ability to detect and capture treasures in complex underwater environments, the images were kept original, without dehazing or any other preprocessing.
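A sketch of the 9:1 random split and the post-split class-distribution check described above; the directory layout and helper names are assumptions.

```python
import random
import xml.etree.ElementTree as ET
from collections import Counter
from pathlib import Path

def split_dataset(ann_dir, ratio=0.9, seed=42):
    """Randomly split VOC-style annotation files 9:1 into train/test lists."""
    files = sorted(Path(ann_dir).glob("*.xml"))
    random.Random(seed).shuffle(files)
    cut = int(len(files) * ratio)
    return files[:cut], files[cut:]

def class_counts(files):
    """Count labeled objects per class, to verify similar split distributions."""
    counts = Counter()
    for f in files:
        for obj in ET.parse(f).iter("object"):
            counts[obj.findtext("name")] += 1
    return counts

# assumed layout: annotations under VOC2007/Annotations
train, test = split_dataset("VOC2007/Annotations")
print(class_counts(train), class_counts(test))  # Sea cucumber / Sea urchin / Scallop
```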
To verify the effectiveness of the proposed model, it is evaluated both qualitatively and quantitatively. For qualitative evaluation, model performance is assessed by contrasting the detection images of CG-YOLOv5 and the compared methods, namely the localization accuracy of the target boxes and the presence of missed or false detections. For quantitative evaluation, the main metrics are precision (P), recall (R), average precision (AP), and mean average precision (mAP), defined as follows.
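The standard definitions of these metrics, stated here under the usual true-positive (TP), false-positive (FP), and false-negative (FN) notation:

$$P=\frac{TP}{TP+FP},\qquad R=\frac{TP}{TP+FN}$$

$$AP=\int_{0}^{1}P(R)\,\mathrm{d}R,\qquad mAP=\frac{1}{N}\sum_{i=1}^{N}AP_i$$

where $N$ is the number of classes (here $N=3$).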
3.4.1 Qualitative results and analysis
To illustrate the performance of CG-YOLOv5 more intuitively, images were randomly sampled in the experiments, and the proposed algorithm was compared with SSD [26], Faster R-CNN [27], YOLOv4 [28], YOLOv5, PP-YOLO [29], PP-YOLOv2 [30], and YOLOX [31]. All eight detection algorithms were trained and tested on the same experimental platform; the results are compared in Fig. 4.
Figure 4 shows that CG-YOLOv5 effectively reduces missed detections and improves accuracy. In Image3, SSD performs poorly on small targets: the sea urchin at the top of the image is not detected. Faster R-CNN outperforms SSD on small targets in Image2, but handles sea cucumbers poorly: in Image3 the sea cucumber at the lower right is missed, and in Image1 a sea cucumber is detected correctly but only a small part of it is boxed. YOLOv4 and PP-YOLO detect sea urchins poorly, missing the urchins at the top of Image3 and Image4. YOLOv5 detects sea cucumbers better than Faster R-CNN and small targets better than SSD, but still produces false and missed detections, such as the falsely detected scallop at the upper right of Image4. PP-YOLOv2 fails to detect sea urchins and scallops accurately, with misses at the upper left of Image3 and Image4 and at the upper right of Image1 and Image2. YOLOX misses the sea urchin at the top of Image1 and the sea cucumber on the left of Image3. In contrast, CG-YOLOv5 successfully detects the sea cucumber at the lower middle of Image2 and the small sea cucumber and sea urchin at the upper left of Image3. It not only achieves higher detection accuracy but also adapts to complex underwater environments, improving the detection rate for small and occluded targets with strong robustness.
3.4.2 Ablation experiments
To better verify the effectiveness of the proposed algorithm, ablation experiments were conducted on eight network variants, again tested on the Zhanjiang underwater robot competition dataset. The objective comparison results are given in Table 3, with the best values in bold.
Network complexity can be measured by the number of parameters and floating-point operations (FLOPs), which together describe the computation required for data to pass through the network; smaller values mean lower complexity. As Table 3 shows, detection networks using the Ghost module have fewer parameters and FLOPs than otherwise identical networks without it, confirming that the Ghost module is lightweight and effectively improves network performance.
Table 3 Ablation experiments
Table 3 shows that introducing CBAM raises mAP by 4.62 percentage points over YOLOv5 while increasing detection time by 0.006 s; although detection is slower, the accuracy gain shows that the attention mechanism suppresses useless features and effectively improves CNN performance. Extending the three detection scales of YOLOv5 to four improves detection accuracy, raising mAP by 3.72 percentage points. Replacing BottleNeck with Ghost-BottleNeck reduces network parameters and computation, raising mAP by 2.03 percentage points. The results show that each proposed measure improves performance: compared with YOLOv5, CG-YOLOv5 raises mAP by 5.49 percentage points, and the average precision for sea urchin, scallop, and sea cucumber by 7.48, 6.90, and 2.09 percentage points, respectively. Although detection efficiency drops slightly, detection accuracy improves considerably.
3.4.3 Quantitative results and analysis
The P-R curves of the CG-YOLOv5 detection results are shown in Fig. 5, with recall on the horizontal axis and precision on the vertical axis. The AP of each class is obtained as the area under its P-R curve: 96.93% for sea cucumber, 94.35% for scallop, and 95.73% for sea urchin. The main reason for incomplete recognition is interference from the complex underwater environment, so future work will consider adding image preprocessing to improve image clarity and the recognition rate.
Table 4 compares the performance metrics of the proposed algorithm with SSD, Faster R-CNN, YOLOv4, YOLOv5, PP-YOLO, PP-YOLOv2, and YOLOX, with the best values in bold. The data show that CG-YOLOv5 achieves the highest detection accuracy. In terms of time, its average detection time over 100 images is higher than YOLOv5 and SSD but lower than the other five algorithms, trading a small amount of time for higher accuracy. Considering accuracy and speed together, the proposed algorithm is therefore better suited to the underwater robot's task of detecting treasures.
The above analysis shows that, compared with the other seven algorithms, CG-YOLOv5 has a clear performance advantage. The improved model makes fuller use of low-level feature information, raising the detection rate for small targets; meanwhile, its attention mechanism reduces the interference of useless features, improving the detection of occluded targets and considerably boosting model performance.
Table 4 Performance comparison of different detection algorithms
This paper proposes an underwater treasure detection method based on an attention mechanism and an improved YOLOv5, enabling fishermen to identify and harvest underwater treasures with underwater robots. First, the CBAM attention mechanism is used to improve the YOLOv5 feature extraction network. Then, K-means matches new anchor coordinates and extends the three detection scales of YOLOv5 to four, improving the model's detection accuracy for underwater targets. Finally, leveraging the advantages of the Ghost module, Ghost-BottleNeck replaces the BottleNeck module in YOLOv5, reducing the computational cost of the YOLOv5 convolutional neural network.
Experimental results show that the proposed CG-YOLOv5 algorithm achieves a mean average precision of 95.67% with better accuracy, and thus performs well when applied to underwater treasure detection. Compared with the SSD, Faster R-CNN, YOLOv4, YOLOv5, PP-YOLO, PP-YOLOv2, and YOLOX algorithms, the proposed algorithm has an average detection time of 0.023 s, a relatively fast speed. Considering speed and accuracy together, it has high application value and provides a reference for subsequent automated treasure harvesting.
[1] Choe S, Ohshima Y. On the morphological and ecological differences between two commercial forms, "Green" and "Red", of the Japanese common sea cucumber, Stichopus japonicus Selenka[J]. Bull Jpn Soc Sci Fish, 1961, 27: 97-106.
[2] Mitsunaga N, Matsumura S. Growth and survival of hatchery produced juveniles of sea cucumber Apostichopus japonicus in different size[J]. Bull Nagasaki Prefect Inst Fish, 2004, 30: 7-13.
[3] Zheng Yili, Zhang Lu. Image recognition method of plant leaves based on transfer learning by convolutional neural network[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(S): 354-359. (in Chinese with English abstract)
[4] Xue Jinlin, Yan Jia, Fan Bowen. Convolutional neural network classification and recognition method for multi-class farmland obstacles[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(S1): 42-48. (in Chinese with English abstract)
[5] Jiang Honghua, Wang Pengfei, Zhang Zhao, et al. Quick recognition method of weeds in corn field based on convolutional network and hash code[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(11): 30-38. (in Chinese with English abstract)
[6] Hsiao Y H, Chen C C, Lin S I, et al. Real-world underwater fish recognition and identification, using sparse representation[J]. Ecological Informatics, 2014, 23: 13-21.
[7] Fabic J N, Turla I E, Capacillo J A, et al. Fish population estimation and species classification from underwater video sequences using blob counting and shape analysis[C]// 2013 IEEE International Underwater Technology Symposium (UT). IEEE, 2013: 1-6.
[8] Wu Yiquan, Yin Jun, Dai Yimian, et al. Identification method of freshwater fish species using multi-kernel support vector machine with bee colony optimization[J]. Transactions of the Chinese Society for Agricultural Engineering (Transactions of the CSAE), 2014, 30(16): 312-319. (in Chinese with English abstract)
[9] Cui Shang, Duan Zhiwei, Li Guoping, et al. Research on sea cucumber image recognition based on an improved Sobel operator[J]. Computer Knowledge and Technology: Academic Exchange, 2018, 14(22): 145-146. (in Chinese with English abstract)
[10] Ma Guoqiang, Tian Yunchen, Li Xiaolan. Application of K-means clustering algorithm in color image segmentation of grouper on sea water background[J]. Computer Applications and Software, 2016, 33(5): 192-195. (in Chinese with English abstract)
[11] Li Yanjun, Huang Kangwei, Xiang Ji. Measurement of dynamic fish dimension based on stereoscopic vision[J]. Transactions of the Chinese Society for Agricultural Engineering (Transactions of the CSAE), 2020, 36(21): 220-226. (in Chinese with English abstract)
[12] Dong Peng, Zhou Feng, Zhao Congcong, et al. Automatic measurement method of underwater sea cucumber size based on binocular vision[J]. Computer Engineering and Applications, 2021, 57(8): 271-278. (in Chinese with English abstract)
[13] Zhao Dean, Liu Xiaoyang, Sun Yueping, et al. Underwater crab recognition method based on machine vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2019, 50(3): 151-158. (in Chinese with English abstract)
[14] Guo Xiangyun, Hu Min, Wang Wensheng, et al. Real-time identification algorithm of sea cucumber in unstructured environment based on deep learning[J]. Journal of Beijing University of Information Technology: Natural Science Edition, 2019(3): 27-31. (in Chinese with English abstract)
[15] Li X, Tang Y, Gao T. Deep but lightweight neural networks for fish detection[C]//OCEANS 2017-Aberdeen. IEEE, 2017: 1-5.
[16] Xu Jianhua, Dou Yigeng, Zheng Yashan. An underwater target recognition and tracking method based on YOLO-V3 algorithm[J]. Journal of Chinese Inertial Technology, 2020, 28(1): 129-133. (in Chinese with English abstract)
[17] Wang Xiaoyu, Li Fan, Cao Lin, et al. Improved convolutional neural network to realize end-to-end automatic recognition of underwater targets[J]. Signal Processing, 2020, 36(6): 958-965. (in Chinese with English abstract)
[18] Mandal R, Connolly R M, Schlacher T A, et al. Assessing fish abundance from underwater video using deep neural networks[C]// 2018 International Joint Conference on Neural Networks (IJCNN). Rio de Janeiro: IEEE, 2018: 1-6.
[19] Chuang M C, Hwang J N, Williams K. A feature learning and object recognition framework for underwater fish images[J]. IEEE Transactions on Image Processing, 2016, 25(4): 1862-1872.
[20] Luo S, Li X, Wang D, et al. Automatic fish recognition and counting in video footage of fishery operations[C]// 2015 International Conference on Computational Intelligence and Communication Networks (CICN). Jabalpur: IEEE, 2015: 296-299.
[21] Liu S, Qi L, Qin H, et al. Path aggregation network for instance segmentation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 8759-8768.
[22] Neubeck A, Van Gool L. Efficient non-maximum suppression[C]// 18th International Conference on Pattern Recognition (ICPR'06). Hong Kong: IEEE, 2006, 3: 850-855.
[23] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[24] Rezatofighi H, Tsoi N, Gwak J Y, et al. Generalized intersection over union: A metric and a loss for bounding box regression[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 658-666.
[25] Zheng Z, Wang P, Liu W, et al. Distance-IoU loss: Faster and better learning for bounding box regression[C]// Proceedings of the AAAI Conference on Artificial Intelligence. New York: AAAI, 2020: 12993-13000.
[26] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C]// European Conference on Computer Vision. Amsterdam: Springer, 2016: 21-37.
[27] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. Advances in Neural Information Processing Systems, 2015, 28: 91-99.
[28] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection[EB/OL]. arXiv preprint arXiv: 2004.10934, 2020.
[29] Long X, Deng K, Wang G, et al. PP-YOLO: An effective and efficient implementation of object detector[EB/OL]. arXiv preprint arXiv: 2007.12099, 2020.
[30] Huang X, Wang X, Lv W, et al. PP-YOLOv2: A practical object detector[EB/OL]. arXiv preprint arXiv: 2104.10419, 2021.
[31] Ge Z, Liu S, Wang F, et al. YOLOX: Exceeding YOLO series in 2021[EB/OL]. arXiv preprint arXiv: 2107.08430, 2021.
Detection of underwater treasures using attention mechanism and improved YOLOv5
Lin Sen1, Liu Meiyi2, Tao Zhiyong2
(1. School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159, China; 2. School of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China)
Abstract: Underwater treasures, such as sea urchins, sea cucumbers, and scallops, have always been prized in fishery production, due mainly to their high added value. However, the two conventional approaches, net fishing and manual catching, cannot meet the requirements of rapid detection in actual large-scale cultivation in modern agriculture, as they are time-consuming and labor-intensive and, in the early days, severely destructive to the submarine environment. Alternatively, deep learning has been widely characterized by high accuracy and fast speed in recent years. Therefore, the object detection framework using convolutional neural networks has promising application potential in fishery production, and it is highly necessary to improve detection performance in complex underwater environments. In this study, a YOLOv5-based detection of underwater treasures was proposed using the attention mechanism, referred to as CG-YOLOv5, in order to provide more reliable data for underwater robots. The main advantages were as follows: 1) The CBAM was introduced into DarkNet-53 to deepen the network for better feature extraction performance and to suppress worthless features in the network. Specifically, the CBAM combined channel and spatial attention to filter and weight the feature vectors. The channel attention focused mainly on what the detection target was, whereas the spatial attention determined where the detection target was. As such, the prominent feature information was represented via the two combined mechanisms, while the general features were weakened. 2) The lightweight Ghost-BottleNeck module was introduced to replace the Bottleneck in YOLOv5. The simpler linear operations in Ghost-BottleNeck maintained high accuracy at light weight. 3) New anchor points were obtained by clustering the labels of the underwater dataset, and a fourth detection scale was added to the original three for higher detection accuracy. The CG-YOLOv5 network mainly included the CGDarknet-53 backbone, the Focus structure, the Spatial Pyramid Pooling (SPP) structure, and the Path Aggregation Network (PANet). The Focus structure performed downsampling, changing the 640×640×3 input to 320×320×32. Only one CSP structure was involved in CG-YOLOv5 to integrate gradient changes completely into the feature map for enhanced feature fusion. The SPP structure applied max pooling to the feature layer at four scales, with pooling kernel sizes of 1×1, 5×5, 9×9, and 13×13; as such, the SPP effectively enlarged the receptive field while isolating significant contextual features. Furthermore, the path aggregation network was used to fuse different feature layers of an image. A dataset built from a real underwater environment was selected to verify the model. There were 781 underwater images, 90% of which were used for training and the rest for testing. The experimental results demonstrated that the model fully meets the requirements of detecting and recognizing treasures in complex underwater environments, compared with current deep learning methods. The mean average precision was 95.67%. Compared with YOLOv5, the average precision of sea urchin, scallop, and sea cucumber increased by 7.48, 6.90, and 2.09 percentage points, respectively, and the mAP increased by 5.49 percentage points.
Compared with other classical algorithms, the method achieves better accuracy and lower complexity. The findings can provide a more accurate and faster way to detect and capture aquatic products.
computer vision; image recognition; underwater treasures; lightweight; YOLOv5; attention mechanism; multi-scale
Lin Sen, Liu Meiyi, Tao Zhiyong. Detection of underwater treasures using attention mechanism and improved YOLOv5[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(18): 307-314. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2021.18.035 http://www.tcsae.org
Received: 2021-03-02
Revised: 2021-09-09
Funding: National Key Research and Development Program of China (2018YFB1403303)
Author: Lin Sen, Ph.D., associate professor. Research interests: image processing and machine vision, pattern recognition and artificial intelligence. Email: lin_sen6@126.com
doi: 10.11975/j.issn.1002-6819.2021.18.035
CLC number: TP391
Document code: A
Article ID: 1002-6819(2021)-18-0307-08