Trajectory tracking for group housed pigs based on locations of head/tail
Gao Yun1,2, Yu Hou'an1, Lei Minggang2,3, Li Xuan1,2, Guo Xu1, Diao Yaping1
(1. College of Engineering, Huazhong Agricultural University, Wuhan 430070, China; 2. Cooperative Innovation Center for Sustainable Pig Production, Wuhan 430070, China; 3. College of Animal Science and Technology & College of Veterinary Medicine, Huazhong Agricultural University, Wuhan 430070, China)
Abstract: The head/tail position of a pig directly reflects daily activities such as feeding, drinking, fighting and chasing. Effectively segmenting touching pigs in top-view video of a group-housed pen, locating each pig's head/tail, and tracking movement accurately from head/tail coordinates is difficult. In this study an improved watershed segmentation algorithm was used to segment touching pigs in the video frames; head/tail contours were extracted from the segmented bodies, the head and tail of each pig were identified with an analogous Hough clustering algorithm and a roundness algorithm respectively, the identification errors were corrected with a motion-trend algorithm, and movement trajectories were generated with the head/tail positions as coordinates. Comparison with manual marking showed that the head/tail recognition accuracies of the analogous Hough clustering and roundness algorithms were 71.79% and 79.67%, respectively; after motion-trend correction, the trajectories generated from the head coordinates agreed well with those generated from manual marking; comparison of head/tail trajectories with centroid trajectories showed that head/tail trajectories capture more information on the activity and movement of individual pigs and the group. The study offers a new idea and method for automatically recording and analysing the behaviour of individual pigs and pig groups.
Keywords: algorithms; image recognition; image segmentation; pig herd; individual pig; identification of head/tail; improved watershed segmentation; trajectory tracking
0 Introduction
In modern pig farms with high stocking densities, studying the behaviour of individual pigs and of the group provides an important basis for assessing pig welfare and improving pork quality [1-3]. The behaviour of group-housed pigs has long been observed and recorded manually [4-6], which is time-consuming and labour-intensive and makes accurate long-term records difficult. Machine vision, an important means of automated tracking and monitoring, offers easy camera installation and intuitive, accurate images, and has considerable room for development.
Machine vision technology has driven research on automatic monitoring and tracking of pig movement. Kashiha et al. [7-8] monitored the movement of group-housed pigs by marking the pig bodies and by image-histogram matching and ellipse approximation, taking the centre of the fitted ellipse as the pig's position. Lind et al. [9] studied the trajectories of a single pig under different doses of apomorphine, taking the weighted centroid of the pig body as its position. On commercial farms, some important individual or group activities can be judged from the orientation of the pigs' heads/tails, such as feeding, drinking and excretion of a single pig, or fighting and chasing between pigs. When studying mounting behaviour in 2016, Nasirahmadi et al. [10] located the pigs with fitted ellipses and used the movement directions of two pigs to distinguish different forms of mounting. Determining the head or tail position helps to judge pig posture and behaviour, so tracking pig trajectories by head/tail is of real significance for the automatic tracking of single-pig or herd movement and the automatic recognition of behaviour.
Because of the high stocking density and the pigs' tendency to huddle together, segmenting and identifying individual pigs from touching groups in top-view video is difficult. Ma et al. [11-20] used adaptive partitioning and multilevel threshold segmentation to remove the background and identify individuals in the feeding, drinking and other zones of the pen, obtaining segmented pig images whose segmentation and identification quality was affected by body adhesion. To locate the head/tail, enough head/tail contour information must be preserved while the touching bodies are segmented, which makes segmentation harder. Studies on segmenting touching objects have mostly dealt with small adhering particles [21-24] and traffic signs [25]; applying those methods directly to large touching pig bodies easily causes severe loss of head/tail contour information. A segmentation method for touching group-housed pigs aimed at distinguishing the head and tail has not yet been reported.
In this study machine vision was used, with the goal of locating the head/tail coordinates of each pig, to segment touching pigs accurately from top-view video frames of a group-housed pen while keeping relatively complete head/tail contours; the head and tail were then distinguished, their coordinates determined, and the movement trajectory of each pig generated from the head/tail coordinates. The work provides a reference for more accurate automatic tracking and recording of individual and group pig behaviour, and for the further automatic analysis, understanding and recognition of pig activities.
1 Materials and Methods
1.1 Video acquisition
The top-view video of group-housed pigs was recorded in one pen of a commercial nursery house at Hubei Jinlin Original Breeding Swine Co., Ltd. The pen held ten nursery pigs (Large White × Landrace cross) of about 30 kg. Recording took place on the morning of January 12, 2016 under natural lighting. A high-definition camera (model CL03, Woshida, Shenzhen; resolution 1 280×720 pixels) was fixed at the centre of the ceiling above the pen, 3.2 m above the floor, shooting vertically downwards. The pen measured 4.7 m × 2.6 m; the imaged area was slightly smaller than the pen area. The pen had a partially slatted floor, the slats being green PVC and the solid part concrete, and was semi-mechanically ventilated with fans, a wet curtain and glass windows.
1.2 Segmentation of touching pig bodies
After the video was split into frames, an image such as Fig.1a was obtained. The colour range of the pig bodies was analysed manually; the overall mean RGB (red, green, blue) vector of the ten pigs in the frames was [213.774 0, 203.512 3, 204.712 3]. Threshold segmentation based on the Euclidean distance in RGB space was used to remove the background; the best result was obtained with a distance threshold of 100, giving the binary image in Fig.1b. The mean RGB vector is closely related to pig breed and colour, so when pigs of another breed or colour are filmed the mean RGB vector must be recalculated and a corresponding threshold set.
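As an illustration, a minimal MATLAB sketch of this background-removal step is given below (the paper reports using MATLAB but publishes no code); the mean RGB vector and the threshold of 100 follow the text, while the frame file name and the nominal pig-body area used for the small-area filter of the next step are assumptions.

    % Background removal by Euclidean distance in RGB space (illustrative sketch).
    I  = imread('frame_0001.png');                % hypothetical frame file name
    mu = [213.7740, 203.5123, 204.7123];          % mean RGB vector of the pig bodies
    T  = 100;                                     % distance threshold reported in the text
    Id = double(I);
    dist = sqrt((Id(:,:,1) - mu(1)).^2 + ...
                (Id(:,:,2) - mu(2)).^2 + ...
                (Id(:,:,3) - mu(3)).^2);          % per-pixel distance to the mean vector
    BW = dist < T;                                % 1 = pig body, 0 = background (Fig.1b)
    % Small-area filter used in the next step: drop components below 20% of a
    % nominal pig-body area (the 8 000 px value is an assumed example).
    BW = bwareaopen(BW, round(0.2 * 8000));       % cf. Fig.1c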
Fig.1 a. Original image frame (with manual marks); b. Binary image with background removed; c. Binary image after small-area elimination
Small regions were then removed from Fig.1b. Taking 20% of a normal pig-body area as the criterion, connected components smaller than this were deleted, leaving only the large pig-body regions (Fig.1c). Many bodies in Fig.1c still touch, however; to separate the touching bodies effectively while preserving the body outlines as far as possible, so that each pig becomes a single connected component, the image was preprocessed morphologically.
Morphological opening was applied to Fig.1c with structuring elements [26-27] of different shapes; Fig.2 shows the results of opening with 30×30 circular, square, diamond and 45° oblique-line elements. The circular element (Fig.2a) performed best: it separated the pig bodies and preserved the approximate outline of each. The diamond element (Fig.2c) also separated the touching bodies, but compared with Fig.1 it introduced sharp corners into the body outlines, which is unfavourable for the later head-contour recognition. The square and line elements (Fig.2b and 2d) did not separate the touching bodies completely.
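A short MATLAB sketch of this comparison, assuming the binary image `BW` from the previous sketch; the 30×30 element size follows the text, and the disk and diamond radii of 15 px are approximations of that size.

    % Morphological opening with four structuring-element shapes (cf. Fig.2).
    seDisk    = strel('disk', 15);        % ~30 px diameter disk
    seSquare  = strel('square', 30);
    seDiamond = strel('diamond', 15);     % ~30 px across
    seLine    = strel('line', 30, 45);    % 30 px line at 45 degrees
    openDisk    = imopen(BW, seDisk);     % best: separates bodies, keeps rough outlines
    openSquare  = imopen(BW, seSquare);
    openDiamond = imopen(BW, seDiamond);  % separates bodies but adds sharp corners
    openLine    = imopen(BW, seLine);
    figure;
    subplot(2,2,1), imshow(openDisk),    title('circular');
    subplot(2,2,2), imshow(openSquare),  title('square');
    subplot(2,2,3), imshow(openDiamond), title('diamond');
    subplot(2,2,4), imshow(openLine),    title('45-degree line');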
Conventional watershed segmentation is analogous to finding "ridge" lines as segmentation boundaries [28]. Fig.2a was complemented to obtain a binary image in which the pig bodies are 0 and the background is 1. Within a pig connected region this binary image has no grey-level gradient (all zeros), so no "valleys" can be found. The distance transform [29] was therefore used to compute the minimum region of each pig component: for every pixel in a component, the shortest distance to the background (pixels of value 1) was computed and used in place of 0 as the pixel's grey value, forming the minimum region of that body. When the minimum regions of all components had been found, the distance-transform grey image was obtained. Fig.3 shows the complement of this grey image, in which pixels far from the boundary have large grey values and pixels near it have small values.
Fig.2 a. Circular structure; b. Square structure; c. Diamond structure; d. 45° oblique line structure
Taking the minimum region of each pig component as a "valley", the watershed ridge lines were sought, giving the segmentation ridges shown in Fig.3b and Fig.4a. Fig.1c was then segmented with the ridge lines of Fig.4a: Fig.4a was complemented (so that ridge pixels become 0) and multiplied element-wise with Fig.1c, which sets the pixels of Fig.1c corresponding to the ridge lines to 0 and leaves the rest unchanged. As Fig.4b shows, a single ridge line separates each pair of touching bodies, every pig forms its own connected component, and the body outlines are well preserved.
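The improved watershed step can be sketched in MATLAB as follows, continuing from the opened image `openDisk` and the pre-opening binary image `BW` above; in practice some marker suppression (e.g. imhmin) may be needed to avoid over-segmentation, a detail the paper does not discuss.

    % Distance transform supplies the "valleys"; watershed ridge lines (L == 0)
    % are then used to cut the touching bodies in the pre-opening image.
    D = bwdist(~openDisk);          % distance of each body pixel to the background
    L = watershed(-D);              % minima at body centres; zeros mark ridge lines
    ridges = (L == 0);
    BWsep  = BW & ~ridges;          % ridge pixels set to 0 in Fig.1c, as in Fig.4b
    labels = bwlabel(BWsep);        % each pig is now a single labelled component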
Fig.4 a. Ridge lines for segmentation; b. Results of segmentation
1.3 Head/tail positioning
1.3.1 Extraction of head/tail edge contours
Take pig #1 in Fig.4b as an example. Its connected component was extracted (pixels inside the component set to 1, background 0) and all pixels were traversed to find those whose 8-neighbourhood touches the background, giving the set of body edge pixels and the contour curve in Fig.5a. The minimum bounding rectangle of the contour was then computed (Fig.5b); the rectangle meets the two ends of the body at the two round dots, i.e. at the intersection points of the rectangle with the farthest ends of the body.
The two circular points at the middle of the body are the intersections of the rectangle's short mid-axis with the body contour. Starting from these two points and moving left and right along the contour by one sixth of the total contour length gives four cut points (the triangular points in Fig.5b). The contour segments lying between the cut points and passing through the rectangle-body farthest intersection points were then taken, yielding the head/tail contours shown in Fig.5c and 5d. The same processing was applied to each pig component in Fig.4b until the head/tail contours of every pig were obtained.
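A sketch of this head/tail contour extraction for one labelled pig is shown below. It approximates the minimum-bounding-rectangle construction by using the fitted-ellipse orientation from regionprops as the body axis and keeping a contour window of one sixth of the total length around each end point; the variable names and the choice of pig label are illustrative.

    k   = 1;                                  % label of the pig to process
    BWk = (labels == k);
    B   = bwboundaries(BWk);                  % boundary pixel list as [row col]
    b   = B{1};
    P   = size(b, 1);                         % contour length in pixels
    props = regionprops(BWk, 'Orientation', 'Centroid');
    theta = -props.Orientation * pi/180;      % image rows increase downwards
    axisv = [cos(theta), sin(theta)];         % unit vector along the body axis
    % Project boundary points onto the body axis; the extremes are the two ends.
    proj = (b(:,2) - props.Centroid(1)) * axisv(1) + ...
           (b(:,1) - props.Centroid(2)) * axisv(2);
    [~, iEnd1] = max(proj);
    [~, iEnd2] = min(proj);
    % Keep one sixth of the contour around each end, matching the 1/6-perimeter cuts.
    half = round(P / 12);
    idx1 = mod((iEnd1-half : iEnd1+half) - 1, P) + 1;
    idx2 = mod((iEnd2-half : iEnd2+half) - 1, P) + 1;
    endCurve1 = b(idx1, :);                   % candidate head or tail contour
    endCurve2 = b(idx2, :);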
1.3.2 Head/tail recognition algorithms
1) Analogous Hough clustering recognition
The head/tail contour coordinate sets are mapped into the circle parameter space and then cluster analysis is used to judge head and tail; this is called analogous Hough clustering [30-31]. Since any three non-collinear points determine a unique circle, the head/tail contour curves (coordinate sets) are sampled continuously in five steps: 1) treat the contour pixels as a two-dimensional sequence, choose one end point of the contour and, starting from it, take a sampling window of three consecutive pixels; 2) test whether the three points in the window are collinear; 3) if they are collinear, advance the window by one pixel and go to step 2); 4) if they are not collinear, save the three points as one sampling group, advance the window by one pixel and go to step 2); 5) repeat until all contour points have been sampled, and keep all sampling groups.
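The sampling loop of steps 1)-5) can be sketched as follows, assuming `endCurve1` from the previous sketch is one head/tail contour segment; the collinearity test uses the signed triangle area, which is zero exactly when the three pixels lie on one line.

    curve   = endCurve1;                      % [row col] pixels of one contour segment
    n       = size(curve, 1);
    triples = {};                             % saved non-collinear sampling groups
    for i = 1:n-2
        p1 = curve(i,   [2 1]);               % convert to [x y]
        p2 = curve(i+1, [2 1]);
        p3 = curve(i+2, [2 1]);
        % Twice the signed triangle area; zero means the three pixels are collinear.
        area2 = (p2(1)-p1(1))*(p3(2)-p1(2)) - (p3(1)-p1(1))*(p2(2)-p1(2));
        if area2 ~= 0
            triples{end+1} = [p1; p2; p3];    %#ok<SAGROW>
        end
    end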
Three pixel points (x1, y1), (x2, y2), (x3, y3) determine one circle, as given by Eq. (1). Fig.6a shows the circle determined by three points in the plane coordinate system. Each sampling group is mapped into the parameter space by its circle centre and radius, as in Fig.6b, where (a, b) denotes the centre coordinates and r the radius.
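Equation (1) is not reproduced legibly in this copy. A standard formulation of the circle through three non-collinear points, consistent with the description above, is

\[(x_i - a)^2 + (y_i - b)^2 = r^2, \quad i = 1, 2, 3,\]

whose pairwise differences give the linear system

\[2(x_2 - x_1)a + 2(y_2 - y_1)b = (x_2^2 + y_2^2) - (x_1^2 + y_1^2),\]
\[2(x_3 - x_2)a + 2(y_3 - y_2)b = (x_3^2 + y_3^2) - (x_2^2 + y_2^2)\]

for the centre (a, b), with \(r = \sqrt{(x_1 - a)^2 + (y_1 - b)^2}\).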
Note: (a, b) is the centre coordinate of the circle, and r is the radius.
Fig.6 Transformation from image space to parameter space
Projecting the spatial points of Fig.6b onto the horizontal plane, i.e. taking the circle-centre points, gives the parameter cluster points of the contour curves in Fig.7. Fig.7a is the clustering of the head contour curve and Fig.7b that of the tail contour curve; the tail-contour cluster points are clearly more concentrated than the head-contour points. The aggregation degree of each set of cluster points was computed as in Eq. (2); it characterises how concentrated a point set is, where D_i is the Euclidean distance from a cluster point to the cluster centre of the set and n is the number of circle-centre points. The more concentrated the cluster points, the smaller the aggregation degree; when the curve is a perfect circle, all cluster points coincide with the cluster centre and the aggregation degree is 0.
Because in the complete top-view outline of a single pig the tail contour is closer to a circular arc, the aggregation degree of the tail-contour cluster points is smaller than that of the head contour; the contour with the smaller aggregation degree is therefore taken as the tail contour and the other as the head contour.
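A MATLAB sketch of the parameter-space mapping and the head/tail decision follows, assuming the sampled `triples` from the sketch above. Since the exact form of Eq. (2) is not legible in this copy, the mean distance of the circle centres to their cluster centre is used here as one reasonable aggregation measure; like the measure described in the text, it equals 0 for a perfect circle.

    % Map each sampling group to its circle centre (a, b) by solving the linear
    % system obtained from (x-a)^2 + (y-b)^2 = r^2 for the three points.
    m       = numel(triples);
    centres = zeros(m, 2);
    for i = 1:m
        p = triples{i};                           % 3x2 matrix of [x y] points
        A = 2 * [p(2,:) - p(1,:); p(3,:) - p(2,:)];
        c = [sum(p(2,:).^2) - sum(p(1,:).^2); ...
             sum(p(3,:).^2) - sum(p(2,:).^2)];
        centres(i,:) = (A \ c)';                  % circle centre (a, b)
    end
    clusterCentre = mean(centres, 1);
    J = mean(sqrt(sum((centres - clusterCentre).^2, 2)));   % aggregation measure
    % Computing J for both contour segments, the segment with the smaller J is
    % taken as the tail and the other as the head.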
2) Roundness recognition
For any closed curve, the roundness e is computed by Eq. (3) [32], in the standard form e = 4πS/L², where the area S is the number of pixels enclosed by the curve and the perimeter L is the number of edge pixels of the contour. The closer the curve is to a circle, the closer e is to 1; for a perfect circle e = 1.
The head/tail contours of Fig.5c and 5d were each reflected about the straight line joining their two end points to obtain a new curve, and the new curve was joined to the original contour to form a closed contour, as shown in Fig.8, in which the dashed straight line in the middle is the line joining the contour end points. Closing the contour in this way introduces little extra error into the roundness computation. The roundness of each closed contour was computed; since the tail contour is normally closer to a circle, the closed contour whose roundness is closer to 1 was taken as the tail and the other as the head.
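A sketch of the roundness computation on such a mirrored closed contour, assuming an open head/tail contour `curve` given as an N×2 [row col] pixel list (e.g. `endCurve1` above); the area and perimeter are approximated with polygon formulas rather than pixel counts, and the arc is assumed to stay on one side of its chord.

    xy = curve(:, [2 1]);                         % open contour as [x y]
    p1 = xy(1, :);  p2 = xy(end, :);
    d  = (p2 - p1) / norm(p2 - p1);               % unit vector along the chord
    rel  = xy - p1;
    para = rel * d' * d;                          % component of each point along the chord
    mirr = p1 + (2*para - rel);                   % reflect each point across the chord
    closedXY = [xy; flipud(mirr(2:end-1, :))];    % original arc + mirrored arc
    S  = polyarea(closedXY(:,1), closedXY(:,2));  % enclosed area
    Lp = sum(sqrt(sum(diff([closedXY; closedXY(1,:)]).^2, 2)));  % closed perimeter
    e  = 4*pi*S / Lp^2;                           % roundness; larger e means more circular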
Fig.8 Closed curves of head/tail: a. Head closed curve; b. Tail closed curve
1.4 Video marking and trajectory recognition
1.4.1 Video marking
The video was split into frames, and image frames were taken from it at intervals of Δt. Each frame was processed with the improved watershed segmentation algorithm until every pig formed a single connected component, as in Fig.4b. The pigs in the first frame were marked manually and numbered as in Fig.9a. From the second frame onwards, the pig corresponding to the same pig in the previous frame was found and marked. Taking pig #3 as an example, the marking proceeds in four steps: 1) compute the centroid of pig #3 in the first frame (the centroid is computed by the algorithm after manual marking); 2) compute the centroids of all pig bodies in the second frame (the centroid of each connected component); 3) with the centroid of pig #3 in the first frame as centre and the body length as radius, draw a circle in the second frame and find all pig centroids inside it; 4) if only one centroid lies inside the circle, mark it directly as #3; if several centroids lie inside, mark the centroid closest to the circle centre as #3. In this way every pig in the second frame can be marked, and the third, fourth and subsequent frames are marked in the same way until all frames are done.
The choice of the framing interval Δt is closely related to the pigs' moving speed. If Δt is too large and a pig moves a long way between two frames, the centroid closest to the circle centre may no longer belong to the same pig, causing a marking error. Unlike small, fast animals such as mice, pigs move relatively slowly; a suitable framing interval keeps the movement of a pig between two frames small enough while still giving a measurable displacement that shows the movement path. In this study Δt = 1 s was chosen.
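A minimal sketch of this frame-to-frame marking step is given below; `prevC` (centroids of the numbered pigs in the previous frame, one row per pig number) and `currC` (centroids of the connected components in the current frame) are assumed inputs, and the body length in pixels is an assumed value.

    bodyLen = 150;                                % assumed body length in pixels
    N = size(prevC, 1);
    assign = zeros(N, 1);                         % current component matched to each pig
    for id = 1:N
        d = sqrt(sum((currC - prevC(id,:)).^2, 2));   % distances to current centroids
        [dmin, j] = min(d);
        if dmin <= bodyLen                        % accept only matches within one body length
            assign(id) = j;
        end
    end
    prevC(assign > 0, :) = currC(assign(assign > 0), :);  % carry the labels forward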
In Fig.9, Fig.9a shows the marking of the ten pigs in the first frame; Fig.9b shows the centroid positions of the ten pigs in the next frame (blue dots); Fig.9c shows the centroids of the ten pigs in the first frame (red dots) overlaid on the next frame; and Fig.9d shows the ten pigs marked in the next frame.
Fig.9 a. Markers in the first frame; b. Marking pig #3 in the next frame; c. Searching all pigs in the next frame; d. Marking in the next frame
1.4.2 Motion-trend correction
Because the head/tail recognition algorithms make errors, correction is needed to ensure an accurate head trajectory. Motion-trend correction works as follows: after the head/tail coordinates in a frame have been recognised, the deviation is computed against the head positions of the same pig in the two previous frames. If the tail coordinate is closer to the head coordinates of the two previous frames, a correction is applied and the current head and tail labels are swapped; otherwise the recognition is considered correct. Applying this correction over the whole trajectory yields an accurate head trajectory curve.
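A sketch of this correction for one pig is shown below; `headTraj` and `tailTraj` are assumed K×2 arrays of the recognised head/tail coordinates per frame, and averaging the two previous head positions is an assumption, since the text does not specify how the deviations from the two frames are combined.

    for k = 3:size(headTraj, 1)
        refHead = mean(headTraj(k-2:k-1, :), 1);      % recent head position (assumed average)
        dHead = norm(headTraj(k,:) - refHead);
        dTail = norm(tailTraj(k,:) - refHead);
        if dTail < dHead                              % labels are probably swapped
            tmp           = headTraj(k,:);
            headTraj(k,:) = tailTraj(k,:);
            tailTraj(k,:) = tmp;
        end
    end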
2 Results and Analysis
The recorded 15 min video of pig activity was processed with MATLAB on an ASUS desktop computer (Intel Core i7-4790 CPU, 3.60 GHz, 8 GB RAM).
2.1 Video framing and marking
A total of 750 image frames were extracted from the video at equal time intervals (Δt = 1 s); after background removal and individual segmentation, the video-marking procedure labelled each pig's number in every frame. Manual comparison with the video then verified the correctness of the marking; the results showed that every pig in every frame was marked correctly.
2.2 Head/tail recognition
To display the pigs' movement paths clearly, 63 frames at equal time intervals were taken from the 750 frames for head/tail recognition. The analogous Hough clustering algorithm and the roundness algorithm were each used for head recognition; in Fig.10 the red dots mark the detected head positions. In Fig.10 two pigs extend beyond the camera's field of view, causing both algorithms to misidentify them; apart from these two pigs, the roundness recognition was entirely correct.
Over the 63 frames, the average accuracy of the analogous Hough clustering recognition was 71.79% (range 33%-100%) and that of the roundness recognition 79.67% (range 63%-100%). The frame in which the clustering accuracy fell to 33.3% was caused by fast pig movement producing severe smearing of the pig bodies, which badly distorted the edge contours after image processing; in the remaining frames the accuracy was above 56%. The smearing problem could be solved in future work by using a high-speed camera. Excluding pigs that were not fully imaged (outside the camera's field of view), the average accuracies of the analogous Hough clustering and roundness algorithms were 75.00% and 85.70%, respectively.
Fig.10 a. Clustering recognition based on analogous Hough; b. Roundness recognition
During the segmentation of touching bodies the pig edge contours are distorted by erosion and dilation. The analogous Hough clustering algorithm works on the pixels of the segmented head/tail contour curves and is therefore sensitive to contour distortion, which strongly affects its head/tail recognition accuracy. The roundness algorithm is based on all pixels of the target region and to some extent averages out the adverse effect of edge distortion; its recognition accuracy is therefore higher than that of the analogous Hough clustering. The two algorithms also differ in computation time: the analogous Hough clustering took 3.063 6 s per frame on average and the roundness recognition 7.105 9 s per frame, so the clustering algorithm is faster.
In a related study, Nasirahmadi et al. [10] judged head/tail orientation from the pigs' motion trends when detecting mounting behaviour, and used the distance between the heads and tails, or between the head and the side, of two pigs to decide whether their interaction was mounting. That algorithm monitors two approaching pigs and infers their heads/tails from motion trends; it does not deal with other pigs in the group that show no obvious motion trend. No published work has used a head/tail recognition algorithm to judge the head and tail of individual pigs in a group.
2.3 Trajectory generation
The centroid of the region enclosed by a head/tail contour and the straight line joining its end points was taken as the head/tail coordinate. After head/tail recognition with the roundness algorithm, the pig's trajectory was generated from the head coordinates; Fig.11a shows the trajectory of pig #2. The red upward-triangle polyline is the head trajectory of pig #2 after roundness recognition, the blue downward-triangle polyline its tail trajectory, and the black filled-circle polyline the head trajectory generated by manually marking the head position of pig #2 in the 63 frames. The numbers follow the frame order: 1 is the coordinate in the first frame and 63 that in the last frame.
In Fig.11a the head and tail trajectories differ considerably, and the red upward-triangle polyline coincides well with the black filled-circle polyline. The head/tail recognition errors of the roundness algorithm are also evident in Fig.11a: for example, the red upward-triangle point 60 lies far from the manually marked black point 60, while the blue downward-triangle point 60 is very close to it, indicating that in frame 60 the algorithm mistook the head of pig #2 for its tail.
Fig.11 a. Trajectory tracking based on head/tail locations
After motion-trend correction, the automatically recognised head trajectory is shown in Fig.11b. The red upward-triangle polyline is the head trajectory recognised by the algorithm and the blue filled circles the manually marked head trajectory; the two agree well. Because the automatically recognised head coordinate is the centroid of the head contour whereas manual marking relies on subjective judgement, some positions deviate, e.g. points 4 and 58 in Fig.11b. The head trajectory of pig #2 shows that, during the experimental period, pig #2 stayed in the slatted-floor area in the left half of the pen.
2.4 Comparison of head/tail and centroid trajectories
Combining the head and tail trajectories allows the course and tendency of a pig's movement to be judged more precisely and increases the possibility of machine understanding of individual and group movement and behaviour. Fig.12 compares the head/tail trajectories with the corresponding centroid trajectories. Fig.12a shows, for six consecutive frames (the 37th to the 42nd of the 63 frames), the head/tail trajectories of pig #2 (blue dashed arrows) and pig #3 (red solid arrows), where the arrow head is the pig's head coordinate and the arrow tail its tail coordinate. Fig.12b shows the centroid trajectories of the two pigs over the same six frames, with blue filled circles for pig #2 and red upward triangles for pig #3. From Fig.12b one can only see that pig #2 tends to move from the centre towards the pen edge while pig #3 shifts slightly to the left in place; from Fig.12a, however, it is clear that pig #2 turned about 120° around pig #3 and that pig #3 turned about 20° in response. The interaction between the two pigs is shown very clearly in Fig.12a.
Fig.12 Comparison between trajectories based on head/tail locations and centroids: a. Head/tail trajectories; b. Centroid trajectories
Note: 1-6 denote the 37th to the 42nd of the 63 frames. In Fig.12a, the arrow heads mark the pigs' head coordinates and the arrow tails mark their tail coordinates.
3 Conclusions
In this study machine vision was applied, with the aim of locating the head/tail coordinates of each pig, to realise automatic tracking of pig movement trajectories.
1) The improved watershed segmentation algorithm effectively separated touching pig bodies while preserving the head/tail contour features.
2) The analogous Hough clustering algorithm and the roundness algorithm identified the pigs' heads/tails with average accuracies of 71.79% and 79.67%, respectively; after motion-trend correction, the automatically recognised head trajectories agreed well with the manually marked ones. The method can distinguish head from tail correctly and automatically track the head/tail trajectories.
3) Compared with centroid trajectories, head/tail trajectories reflect more, and more precise, information on the activity and movement of individual pigs and the group.
The method provides a reference for automatically recording and studying the behaviour and habits of individual pigs and pig groups. At present it is applied to off-line video; for real-time video processing, the computation time still needs to be optimised.
References
[1] Shi Zhengxiang, Li Baoming, Zhang Xiaoying, et al. Behaviour of weaning piglets under intensive farm environment[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2004, 20(2): 220-225. (in Chinese with English abstract)
[2] Gao Yajun, Li Baoming, Li Mingli, et al. Impacts of room temperature on sow behaviour and creep box usage for pre-weaning piglet[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2011, 27(12): 191-194. (in Chinese with English abstract)
[3] Yu Houan, Gao Yun, Li Xuan, et al. Research review of animal behavior monitoring technologies: Commercial pigs as realistic example[J]. Chinese Journal of Animal Science, 2015, 51(20): 66-70. (in Chinese with English abstract)
[4] Scott K, Chennells D J. The welfare of finishing pigs in two contrasting housing systems: Fully-slatted versus straw-bedded accommodation[J]. Livestock Science, 2006, 103(1/2): 104-115.
[5] Jeffery E, William G V A, David H B, et al. Behaviour of pigs with viral and bacterial pneumonia[J]. Applied Animal Behaviour Science, 2007, 105(1): 42-50.
[6] Monica R P E, Joseph P G, Anna K J, et al. A flooring comparison: the impact of rubber mats on the health, behavior, and welfare of group-housed sows at breeding[J]. Applied Animal Behaviour Science, 2010, 123(2): 7-15.
[7] Kashiha M, Bahr C, Ott S, et al. Automatic identification of marked pigs in a pen using image pattern recognition[J]. Computers and Electronics in Agriculture, 2013, 93(2): 111-120.
[8] Kashiha M, Bahr C, Ott S, et al. Automatic monitoring of pig locomotion using image analysis[J]. Livestock Science, 2014, 159(1): 141-148.
[9] Lind N M, Vinther M, Hemmingsen R P, et al. Validation of digital video tracking system for recording pig locomotor behavior[J]. Journal of Neuroscience Methods, 2005, 143(2): 123-132.
[10] Nasirahmadi A, Hensel O, Edwards S A, et al. Automatic detection of mounting behaviours among pigs using image analysis[J]. Computers and Electronics in Agriculture, 2016, 124: 295-302.
[11] Ma Li, Ji Bin, Liu Hongshen, et al. Differentiating profile based on single pig contour[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2013, 29(10): 168-174. (in Chinese with English abstract)
[12] Wang Yong. Research on the Recognition of Pig Behavior Posture Based on Contour Invariant Moments Features[D]. Zhenjiang: Jiangsu University, 2011. (in Chinese with English abstract)
[13] Zhou Jinjin. The Gesture Recognition of Pigs Based on Wavelet Moment and Probabilistic Neural Network[D]. Zhenjiang: Jiangsu University, 2015. (in Chinese with English abstract)
[14] Guo Yizheng, Zhu Weixing, Ma Changhua, et al. Top-view recognition of individual group-housed pig based on Isomap and SVM[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(3): 182-187. (in Chinese with English abstract)
[15] Chen Jiali. Individual Identification Method for Group-housed Pigs Based on Optimal Feature Extraction[D]. Zhenjiang: Jiangsu University, 2015. (in Chinese with English abstract)
[16] Zhu Weixing, Pu Xuefeng, Li Xincheng, et al. Automatic identification system of pigs with suspected case based on behavior monitoring[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2010, 26(1): 188-192. (in Chinese with English abstract)
[17] Pu Xuefeng, Zhu Weixing, Lu Chenfang. Sick pig behavior monitor system based on symmetrical pixel block recognition[J]. Computer Engineering, 2009, 35(21): 250-252. (in Chinese with English abstract)
[18] Zhu Weixing, Ji Bin, Qin Feng. Detection of foreground-frame of pig using edge model based on pseudosphere-operator[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2012, 28(12): 189-194. (in Chinese with English abstract)
[19] Guo Yizheng, Zhu Weixing, Jiao Pengpeng, et al. Foreground detection of group-housed pigs based on the combination of mixture of Gaussians using prediction mechanism and threshold segmentation[J]. Biosystems Engineering, 2014, 125(3): 98-104.
[20] Guo Yizheng, Zhu Weixing, Jiao Pengpeng, et al. Multi-object extraction from top-view group-housed pig images based on adaptive partitioning and multilevel thresholding segmentation[J]. Biosystems Engineering, 2015, 135: 54-60.
[21] Zhou Kaijun, Yang Chunhua, Gui Weihua, et al. Clustering-driven watershed adaptive segmentation of bubble image[J]. Journal of Central South University of Technology, 2010, 17(5): 1049-1057.
[22] Malpica N, Solorzano C O, Vaquero J J, et al. Applying watershed algorithms to the segmentation of clustered nuclei[J]. Cytometry, 1997, 28(4): 289-297.
[23] Salman N H, Lin C Q. Image segmentation and edge detection based on watershed techniques[J]. International Journal of Computers and Applications, 2003, 25(4): 69-74.
[24] Sri N A, Varma G P S, Govardhan A. An improved iterative watershed and morphological transformation techniques for segmentation of microarray images[J]. Computer Aided Soft Computing Techniques for Imaging and Biomedical Applications, 2010, CASCT(2): 77-87.
[25] Zhou Guangbo. Traffic Sign Detection Based on Color and Shape Features[D]. Dalian: Dalian University of Technology, 2013. (in Chinese with English abstract)
[26] Soille P. Morphological image analysis: principles and applications[J]. Computer Physics Communications, 2003, 49(5): 94-103.
[27] Vincent L. Morphological grayscale reconstruction in image analysis: applications and efficient algorithms[J]. IEEE Trans. Image Processing, 1993, 2(2): 176-201.
[28] Ding Weijie, Fan Yingle, Pang Quan. Improved research for overlapping segmentation based on watershed algorithm[J]. Computer Engineering and Applications, 2007, 43(10): 70-72. (in Chinese with English abstract)
[29] Vincent L, Soille P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations[J]. IEEE Trans. Pattern Anal. Machine Intell, 1991, 13(6): 583-598.
[30] Xu Xiaoli. Research of Image Segmentation Algorithm Based on Clustering Analysis[D]. Harbin: Harbin Engineering University, 2012. (in Chinese with English abstract)
[31] Zhang Xinye. Study on Image Segmentation Based on Clustering Analysis[D]. Dalian: Dalian Maritime University, 2012. (in Chinese with English abstract)
[32] Zhu Dianyao, Bian Hongyu. Classification between rectangular and ellipsoid/circular areas[J]. Laser & Infrared, 2009, 39(11): 1228-1232. (in Chinese with English abstract)
Trajectory tracking for group housed pigs based on locations of head/tail
Gao Yun1,2, Yu Hou'an1, Lei Minggang2,3, Li Xuan1,2, Guo Xu1, Diao Yaping1
(1. College of Engineering, Huazhong Agricultural University, Wuhan 430070, China; 2. Cooperative Innovation Center for Sustainable Pig Production, Wuhan 430070, China; 3. College of Animal Science and Technology & College of Veterinary Medicine, Huazhong Agricultural University, Wuhan 430070, China)
Abstract: Observing animals' individual and social behaviors is the most effective way to assess animal welfare and health. Automated trajectory tracking based on head/tail locations is expected to be extremely helpful for the realization of pig behavior recognition, especially for group-housed pigs in commercial pig facilities. The methods of trajectory tracking for group-housed pigs based on head/tail location were described in this paper. The video of group-housed nursery pigs was taken in a commercial pig breeding farm of Hubei Jinlin Original Breeding Swine Co., Ltd. on January 12th, 2016. A high resolution camera (Woshida CL03) was used to record a 15 min video. Afterwards, image frames were extracted from the original video at a one-second time interval. Image frames were processed in a computer (configured with an Intel Core i7-4790 CPU (central processing unit), 3.6 GHz, 8 GB memory) with the MATLAB software platform. The image processing for each image frame included 4 steps: background removal, pig division, head/tail identification and trajectory tracking modification. The background removal was based on the RGB (red, green, blue) color space, from which a vector of RGB mean values of the pig's body was calculated. If the Euclidean distance between the RGB values of one pixel and the RGB mean value vector was less than a small threshold of 100, the pixel was involved in a pig body area and set as 1. Otherwise, it was outside any pig body area and set as 0. When all pixels of the image frame were scanned and calculated by this method, a binary image was acquired. The white area referred to the pig body area, while the black area referred to the background. After that, morphological erosion and expansion were utilized before the watershed segmentation algorithm to improve the dividing effect for the pigs with adhesion. Pig division was implemented on the binary images with the improved watershed segmentation algorithm. To discriminate each pig in each image frame, a video tracking and marking method needed to be implemented in the video. After being manually marked with an identity number in the first frame, each pig had a unique number and was labelled automatically throughout the video. Since image frames were extracted from the video with a very short time interval (1 s), the distance between the 2 centroids of the identical pig in 2 continuous image frames would be sufficiently small. Therefore, the video tracking was to find the pig with the closest distance in the next image frame and mark it with the same identity number as the current pig until all the pigs were marked. After each pig was marked throughout the video, using the head/tail location as the coordinates of the pig, the trajectory of each pig in the herd could be tracked by the trajectory calculation. Extracting the outline of each pig in the frames, the head and the tail outlines were divided from the whole outline, after a sixth of the whole outline length was moved along the outline in 2 opposite directions from the 2 intersection points of the outline and the short axis of the minimum bounding rectangle. After the head/tail outline curve was gained from each pig outline, 2 recognition algorithms, the analogous Hough clustering recognition algorithm and the roundness recognition algorithm, were employed to identify the head/tail of each pig. Thus the location of the pig's head/tail could be spotted by locating the centroid of the head/tail curve.
Then the trajectory tracking of the pigs was calculated based on the location of the head/tail, and corrected by the motion trends of the pigs. Experiments showed that the background was successfully removed from each image frame using the Euclidean distance of RGB values between the pixels and the mean value vector. The improved watershed segmentation algorithm was verified as an effective tool to divide the pigs with adhesion. The identity number of each pig was tracked from the first frame to the end. The average recognition rate of the analogous Hough clustering algorithm was 71.79% for the identification of pigs' heads/tails, while that of the roundness algorithm was 79.67%, which was less sensitive to the distortion of the head outline curve. Excluding the pigs outside the camera range, the recognition rates rose to 75.00% and 85.70%, respectively. The roundness algorithm shows an obvious advantage in comparison. The modified trajectory of each pig showed high agreement with the manually labelled trajectory. More understanding of pigs' behaviors can be acquired from the trajectory of head/tail locations. This trajectory tracking method provides a good reference for further research on behavior recognition.
Keywords: algorithms; image recognition; image segmentation; pig herd; individual pig; identification of head/tail; improved watershed segmentation; trajectory tracking
doi: 10.11975/j.issn.1002-6819.2017.02.030
CLC number: TP391
Document code: A
Article ID: 1002-6819(2017)-02-0220-07
Received: 2016-06-15    Revised: 2016-11-20
Foundation items: National Key Research and Development Program of China during the 13th Five-Year Plan (2016YFD0500506); Natural Science Foundation of Hubei Province (2014CFB317); China Agriculture Research System (CARS-36)
Biography: Gao Yun, Ph.D., associate professor, master's supervisor, engaged in research on intelligent detection and control in agriculture. College of Engineering, Huazhong Agricultural University, Wuhan 430070, China. Email: angelclouder@mail.hzau.edu.cn. Member of ASABE (1049530); member of the Chinese Society of Agricultural Engineering (E041700006M).
Gao Yun, Yu Hou’an, Lei Minggang, Li Xuan, Guo Xu, Diao Yaping. Trajectory tracking for group housed pigs based on locations of head/tail[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(2): 220-226. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2017.02.030 http://www.tcsae.org