      Determination of rice panicle numbers during heading by multi-angle imaging

The Crop Journal, 2015, Issue 3


Lingfeng Duan a,b, Chenglong Huang a,b, Guoxing Chen d, Lizhong Xiong c, Qian Liu b, Wanneng Yang a,c,*

a College of Engineering, Huazhong Agricultural University, Wuhan 430070, China

b Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China

c National Key Laboratory of Crop Genetic Improvement and National Center of Plant Gene Research, Huazhong Agricultural University, Wuhan 430070, China

d MOA Key Laboratory of Crop Ecophysiology and Farming System in the Middle Reaches of the Yangtze River, Huazhong Agricultural University, Wuhan 430070, China

      A R T I C L E I N F O

      Article history:

      Received 12 October 2014

Received in revised form 1 March 2015

      Accepted 10 March 2015

      Available online 11 April 2015

Keywords:

Plant phenotyping

      Rice panicle number

      Multi-angle imaging

      Image analysis

A B S T R A C T

Plant phenomics has the potential to accelerate progress in understanding gene functions and environmental responses. Progress has been made in automating high-throughput plant phenotyping. However, few studies have investigated automated rice panicle counting. This paper describes a novel method for automatically and nonintrusively determining rice panicle numbers during the full heading stage by analyzing color images of rice plants taken from multiple angles. Pot-grown rice plants were transferred via an industrial conveyor to an imaging chamber. Color images from different angles were automatically acquired as a turntable rotated the plant. The images were then analyzed and the panicle number of each plant was determined. The image analysis pipeline consisted of extracting the i2 plane from the original color image, segmenting the image, discriminating the panicles from the rest of the plant using an artificial neural network, and calculating the panicle number in the current image. The panicle number of the plant was taken as the maximum of the panicle numbers extracted from all 12 multi-angle images. A total of 105 rice plants during the full heading stage were examined to test the performance of the method. The mean absolute error of the manual and automatic count was 0.5, with 95.3% of the plants yielding absolute errors within ±1. The method will be useful for evaluating rice panicles and will serve as an important supplementary method for high-throughput rice phenotyping.

© 2015 Crop Science Society of China and Institute of Crop Science, CAAS. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license

      (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

According to the recent Declaration of the World Summit on Food Security, 70% more food is needed by 2050 to meet the demands of the increasing population (www.fao.org/wsfs/world-summit/en/). Global climate change and demand for biofuel feedstocks have exacerbated this problem, resulting in growing pressure on crop breeding. Rapid screening for crops with high yield and increased tolerance to abiotic and biotic stresses could be an important tool to help meet these demands [1].

The genome sequencing of Arabidopsis and other crop varieties has resulted in the accumulation of terabytes of sequence information that need to be linked with function [2]. However, identifying links between genotype and phenotype is hampered by inefficient, destructive, and often subjective manual phenotyping [3,4]. High-throughput phenotyping has become the new bottleneck in plant biology and crop breeding [5].

Plant phenomics promises to accelerate progress in understanding gene function and environmental responses [6]. There has been progress in automating plant phenotyping, including automated measurements of plant parts [7–9] and whole adult plants [10–12]. Efforts have also been made to develop automated growth and observation facilities, such as at the High Resolution Plant Phenomics Centre in Australia, the Jülich Plant Phenotyping Centre in Germany, the Leibniz Institute of Plant Genetics and Crop Plant Research in Germany, and the French National Institute for Agricultural Research.

Rice is the staple food for a large proportion of the world's population [13] and is an important model system for plant science research [14]. Pressure on rice supplies has increased significantly over the past decade. The rice panicle is closely associated with yield, given that it directly regulates the grain number [15]. Substantial effort has been expended on quantitative trait locus (QTL) analyses of rice panicle traits [16,17]. However, few contributions have been made toward automating rice panicle counts. Liu et al. [18] applied hyperspectral reflectance and principal component analysis to discriminate fungal infection levels in rice panicles. Liu et al. [19] used hyperspectral reflectance data to discriminate the health conditions of rice panicles. Ikeda et al. [15] developed image analysis software to extract panicle traits, including the lengths and numbers of the various branches and grain numbers. However, in all of these studies, the rice panicles were cut from the rice plants, precluding dynamic screening of rice panicles. To our knowledge, no publication has reported a noninvasive, in vivo determination of rice panicle numbers.

The panicle number is a key indicator of rice yield, and counting panicles at an early stage would provide useful information for estimating rice yield. Panicle identification is the first step in panicle assessments such as panicle counting, panicle length calculation, maturity assessment, and biomass prediction. However, because the color of the panicle at early stages (for example, the heading stage) is similar to that of the rest of the plant (green), identifying green panicles is highly challenging. This paper presents a novel method for nonintrusive detection of panicle numbers of rice plants during the full heading stage by analyzing color images of rice plants taken from multiple angles. The specific goals were to: (1) differentiate rice panicles from other organs and (2) calculate rice panicle numbers.

2. Materials and methods

2.1. Automatic image acquisition platform

Because the panicles and leaves of rice plants usually overlap, visible light imaging from a single angle cannot detect all of the panicles. For this reason, multi-angle imaging was adopted in this study. Previously, our group developed a high-throughput rice phenotyping facility (HRPF) to measure 15 rice phenotypic traits, excluding panicle number [10]. The HRPF used an industrial conveyor to transfer pot-grown rice plants to an imaging area for image acquisition. A turntable was used to rotate the rice plants. A barcode scanner read the barcode of each pot for indexing. Plants were illuminated by fluorescent tubes from both the side and top. Images were taken at 30° intervals by a charge-coupled device (CCD) camera (Stingray F-504C, Allied Vision Technologies, Germany) as the turntable rotated. For each rice plant, 12 images (2452 × 2056 pixels) were taken from different angles. Lighting conditions were constant throughout the process. Image acquisition was performed with the NI-IMAQ Virtual Instruments (VI) Library for LabVIEW (National Instruments Corporation, USA). More details about the HRPF system can be found in Yang et al. [10].

2.2. Automatic image analysis pipeline

An image analysis pipeline was developed to analyze the images from each angle, one image at a time. The image analysis software was implemented with NI Vision for LabVIEW 8.6 (National Instruments). The image analysis pipeline consisted of extracting the i2 plane from the original color image, segmenting the image, discriminating the panicles from the rest of the plant using an artificial neural network (ANN), and then calculating the panicle number in the current image (Fig. 1).

In the first step, the original images were preprocessed using an IMAQ (Image Acquisition System, National Instruments) low-pass filter to remove noise. After image filtering, the RGB image was converted into the i1i2i3 color space, a commonly used color space based on the Karhunen–Loève transformation [20]. Philipp and Rath [21] compared discriminant analysis, canonical transformation, and the i1i2i3, HSI, HSV, and Lab color spaces for separating plants from background, and concluded that i1i2i3 performed best. The relationship between the i1i2i3 and RGB color spaces is shown in Eq. (1).
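Eq. (1) did not survive extraction; the transform it refers to is the standard Ohta i1i2i3 decomposition [20]:

$$i_1 = \frac{R + G + B}{3}, \qquad i_2 = \frac{R - B}{2}, \qquad i_3 = \frac{2G - R - B}{4} \tag{1}$$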

Fig. 1 – Image analysis pipeline for analysis of images from a single angle. (a) An RGB image of a rice plant. (b) The i2 component of (a). (c) A labeled image with candidate panicles. (d) The labeled image with detected panicles.

After tests on all of the images acquired from the rice samples used in this study, we found that the i2 plane was effective in segmenting panicles from the rest of the plant. We selected the hysteresis thresholding method [22] for segmenting the panicles because it removed the noise without breaking the contours. The lower threshold was obtained using the Otsu algorithm [23], and the upper threshold was set as twice the lower threshold. The Otsu algorithm assumes that the image follows a bimodal (foreground and background pixels) histogram. It determines the optimum threshold separating foreground from background so that their intra-class variance is minimal. Because the panicles are normally positioned at the top of the plant, a "remove boundary particles" operation was executed to remove the regions at the bottom of the image. Specifically, the bounding-rectangle bottom of each particle was calculated by the LabVIEW IMAQ particle analysis function. (A bounding rectangle is defined as the smallest rectangle, with sides parallel to the x-axis and y-axis, that completely encloses the particle; the bounding-rectangle bottom is the y-coordinate of the lowest particle point.) Then the distance from the bounding-rectangle bottom to the top surface of the pot was computed. If this distance was less than 200 pixels, the corresponding particle was removed.
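For concreteness, a minimal Python sketch of this segmentation step (the original pipeline was built in LabVIEW; the scikit-image calls are my choices, and `pot_top_y` is an assumed input giving the image row of the pot's top surface):

```python
import numpy as np
from skimage import filters, measure

def segment_candidate_panicles(rgb, pot_top_y, min_gap=200):
    """Hysteresis thresholding on the i2 plane, then removal of
    particles near the pot, as described in Section 2.2 (a sketch,
    not the authors' original code)."""
    rgb = rgb.astype(np.float64)
    i2 = (rgb[..., 0] - rgb[..., 2]) / 2.0    # i2 = (R - B) / 2
    low = filters.threshold_otsu(i2)          # lower threshold: Otsu
    high = 2.0 * low                          # upper threshold: twice the lower
    mask = filters.apply_hysteresis_threshold(i2, low, high)
    labels = measure.label(mask)
    for region in measure.regionprops(labels):
        bottom = region.bbox[2]               # lowest row of the bounding box
        if pot_top_y - bottom < min_gap:      # too close to the pot: discard
            labels[labels == region.label] = 0
    return labels > 0
```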

Pixels belonging to the same panicle may appear in slightly different colors (Fig. 2-a). Consequently, some regions in the segmented image may include only parts of panicles (Fig. 2-b). Local region growing was applied to iteratively add neighboring pixels that met a criterion of color homogeneity (Fig. 2-c). Because the RGB color space is sensitive to illumination, we transformed the RGB color space into the normalized rgb space using Eq. (2). Local region growing was initiated from each original candidate panicle region and iterated until the region exceeded the predefined local window (11 × 11 in this study). A neighboring pixel was added to the region if Eq. (3) was satisfied.
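Eqs. (2) and (3) were lost in extraction. Eq. (2) is the standard rgb normalization; the per-channel form of Eq. (3) below is inferred from the description that follows:

$$r = \frac{R}{R + G + B}, \qquad g = \frac{G}{R + G + B}, \qquad b = \frac{B}{R + G + B} \tag{2}$$

$$\left|r - \bar{r}\right| \le T \ \text{ and } \ \left|g - \bar{g}\right| \le T \ \text{ and } \ \left|b - \bar{b}\right| \le T \tag{3}$$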

where $(r, g, b)$ are the values of the r, g, b components of a given neighboring pixel, $(\bar{r}, \bar{g}, \bar{b})$ are the averages of the r, g, b components in a given candidate panicle region, and T is the threshold (0.1 in this study). The grayscale range of the R, G, B components is [0, 255] and the range of the r, g, b components is [0, 1].
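A compact Python sketch of the region-growing step under these definitions (the 4-connectivity and the interpretation of the 11 × 11 window as a bound on growth around the seed region are my assumptions):

```python
import numpy as np
from collections import deque

def grow_region(rgb_norm, seed_mask, window=11, T=0.1):
    """Local region growing in normalized rgb space (a sketch).
    `rgb_norm` is HxWx3 with channels r, g, b in [0, 1];
    `seed_mask` marks one candidate panicle region."""
    h, w = seed_mask.shape
    mean = rgb_norm[seed_mask].mean(axis=0)   # (r̄, ḡ, b̄) of the seed region
    ys, xs = np.nonzero(seed_mask)
    # growth is confined to a local window around the seed region
    y0, y1 = max(ys.min() - window, 0), min(ys.max() + window + 1, h)
    x0, x1 = max(xs.min() - window, 0), min(xs.max() + window + 1, w)
    region = seed_mask.copy()
    frontier = deque(zip(ys, xs))
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if y0 <= ny < y1 and x0 <= nx < x1 and not region[ny, nx]:
                # Eq. (3): accept a neighbor whose r, g, b all lie
                # within T of the region's mean color
                if np.all(np.abs(rgb_norm[ny, nx] - mean) <= T):
                    region[ny, nx] = True
                    frontier.append((ny, nx))
    return region
```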

Fig. 2 – Examples of local region growing and region merging. (a) An original RGB image of a rice plant. (b) The binary image after hysteresis thresholding. (c) The result of local region growing. After region growing, the objects in the boxes were extracted more precisely. (d) The result of region merging. The two regions in the orange box were merged into one region after region merging. For better visualization, the images in (b) and (c) are labeled.

After the region-growing process, some panicles might remain oversegmented into several regions. This situation was usually caused by color inhomogeneity among the pixels of a panicle, owing to overlap by other organs, differences in maturity among spikelets, and nonuniform lighting conditions. To correct this situation, a region-merging step was performed to join adjacent regions (Fig. 2-d). Region merging was an iterative process that started from the two adjacent regions having the smallest sum of areas. In each iteration, the following tasks were performed: (1) the normalized area of each region (the region area divided by the average area of all of the regions in the image) was recalculated, (2) the sum of the normalized areas (SNA) of every two adjacent regions was calculated, and (3) the two adjacent regions that had the smallest SNA value were determined. If the smallest SNA value was smaller than a predefined threshold (3 in this study), the two regions were merged. The iteration stopped when the smallest SNA value was greater than the threshold.
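A sketch of this merging loop in Python (the region-adjacency representation is my own; the paper does not specify how adjacency was computed):

```python
import numpy as np

def merge_regions(areas, adjacency, sna_threshold=3.0):
    """Iterative merging of adjacent regions by smallest sum of
    normalized areas (SNA), following Section 2.2. `areas` maps
    region id -> pixel area; `adjacency` is a set of frozensets,
    each holding two adjacent region ids."""
    parent = {r: r for r in areas}

    def find(r):                       # union-find representative
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    while len(areas) > 1 and adjacency:
        mean_area = np.mean(list(areas.values()))
        # steps (1)-(3): find the adjacent pair with the smallest SNA
        pair = min(adjacency, key=lambda p: sum(areas[r] for r in p))
        sna = sum(areas[r] for r in pair) / mean_area
        if sna >= sna_threshold:       # stop once the smallest SNA is too big
            break
        a, b = tuple(pair)
        areas[a] += areas.pop(b)       # merge b into a
        parent[b] = a
        # rewire adjacency onto the surviving representatives
        adjacency = {frozenset(find(r) for r in p) for p in adjacency}
        adjacency = {p for p in adjacency if len(p) == 2}
    return {r: find(r) for r in parent}
```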

After region merging, small particles with areas below a predefined area threshold were removed. Owing to natural variation in heading time, panicle area varies greatly among panicles, both within the same plant and among different plants. We accordingly selected an adaptive area threshold based on the average of the region areas in the current image, instead of a fixed area threshold. After preliminary tests, the threshold was defined as 0.2 multiplied by the average of the region areas in the current image. Organs other than panicles might remain in the segmented image (Fig. 1-c). In the next step, automatic feature extraction was performed for each candidate panicle region in the segmented image. These features were then passed to a back-propagation ANN to identify the panicle regions.

2.3. Feature extraction and feature subset selection

This work used several color spaces (RGB, HSL, HSV, normalized rgb, and i1i2i3) to describe the color features of the extracted candidate panicle regions. For each region, the original extracted color features were the averages and standard deviations of the R, G, B, H, S, L, V, r, g, b, and i2 components in the region. Some regions that were not panicles had colors similar to those of panicles; accordingly, five morphological features and one location-related feature were calculated to distinguish the panicles from other organs more precisely. The morphological features included area, information fractal dimension (IFD), elongation factor, orientation, ratio of width, and Waddell disk diameter (the diameter of a disk with the same area as the particle). The y-coordinate of the center of mass was extracted as the location-related feature.

To select the effective features, we performed feature subset selection using a stepwise selection method (SAS stepdisc procedure; method = stepwise, SLE = 0.15, SLS = 0.15, where SLE and SLS specify the significance levels for adding and retaining variables, respectively). This process extracted 12 effective features from the original 28 features of the candidate panicles:

(1) the standard deviation of the R component (Rs.d.), computed over all the pixels in the region; (2) the average of the H component (Ha), computed over all the pixels in the region; (3) the standard deviation of the S component (Ss.d.); (4) the average of the r component (ra); (5) the standard deviation of the g component (gs.d.); (6) the average of the b component (ba); (7) the standard deviation of the b component (bs.d.); (8) the standard deviation of the L component (Ls.d.); (9) the standard deviation of the i2 component (i2s.d.); (10) the elongation factor (EF), calculated as the max Feret diameter (F) divided by the short side of the equivalent rectangle (RFb), where the equivalent rectangle is the rectangle with the same perimeter and area as the region and the max Feret diameter is the distance between the two perimeter points that are farthest apart; the elongation factor indicates the extent of elongation of the region (see the NI Vision Concepts Manual, National Instruments, for details); (11) the area (A), computed as the number of pixels in the region; and (12) the IFD.

The fractal dimension is widely used as a description of shape complexity [24]. The most commonly used method for calculating the fractal dimension is the box-counting method, because it is simple to compute. In this study, we adopted the IFD because it provides a more precise estimate of the fractal dimension than the box-counting method and is still easy to compute [25]. The IFD is calculated by plotting the information Iε (defined in Eq. (4)) against the natural logarithm of the box size ε; the IFD is the slope of the resulting regression line.
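Eq. (4) was lost in extraction; from the definitions in the following sentence it is the standard Shannon information sum:

$$I_\varepsilon = -\sum_{i=1}^{N_\varepsilon} p_i \ln p_i \tag{4}$$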

where $N_\varepsilon$ is the number of boxes, $\varepsilon$ is the box size, and $p_i$ is the probability of the foreground pixels falling into the ith box.
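A minimal Python sketch of the IFD computation as described (the set of box sizes is my assumption; NI Vision's exact implementation may differ):

```python
import numpy as np

def information_fractal_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """IFD of a binary region: regress I_eps (Eq. (4)) on ln(eps) and
    return the magnitude of the slope."""
    mask = mask.astype(np.int64)
    total = mask.sum()
    h, w = mask.shape
    log_eps, infos = [], []
    for eps in box_sizes:
        # count foreground pixels falling into each eps x eps box
        counts = np.add.reduceat(
            np.add.reduceat(mask, np.arange(0, h, eps), axis=0),
            np.arange(0, w, eps), axis=1).ravel()
        p = counts[counts > 0] / total            # p_i over occupied boxes
        infos.append(-(p * np.log(p)).sum())      # I_eps = -sum p_i ln p_i
        log_eps.append(np.log(eps))
    slope, _ = np.polyfit(log_eps, infos, 1)      # I_eps vs ln(eps)
    return -slope                                 # dimension is the -slope
```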

2.4. Panicle region identification with ANN

ANNs are widely used for pattern recognition tasks. In this study, a three-layer back-propagation ANN with 12 input neurons, h hidden neurons, and 2 output neurons was used to discriminate panicle regions from other organs.

A total of 907 samples, comprising 650 panicle regions and 247 regions of other organs, were collected to construct the neural network. The 907 samples were split randomly into three subsets: a training set (428 samples: 315 panicle regions and 113 regions of other organs), a validation set (95 samples: 65 panicle regions and 26 regions of other organs), and a test set (384 samples: 276 panicle regions and 108 regions of other organs). The training set was used to update the network parameters. The validation set was used to prevent the network from overfitting the data: the error on the validation set was monitored during training. Generally, the validation error decreased at the beginning of training but increased when the network began to overfit. The network parameters were taken at the minimum of the validation error. The test set was used to evaluate the performance of the network. After monitoring performance with h varying from 5 to 25, h was set to 13 based on the best performance. For each h value, the network was trained more than 100 times. In the end, the network that generated the highest accuracy (93.68% for the validation set and 94.27% for the test set, Table 1) was chosen for panicle identification. This learning phase of panicle identification was accomplished using the neural network toolbox in MATLAB R2009a (MathWorks, USA).
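For readers without MATLAB, an equivalent setup can be sketched in Python with scikit-learn (a sketch under the paper's stated architecture; the file names and the use of MLPClassifier's built-in early stopping in place of a hand-held validation set are my assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature files: rows are candidate regions, columns are
# the 12 selected features; labels are 1 for panicle, 0 for other organ.
X_train = np.load("X_train.npy")
y_train = np.load("y_train.npy")
X_test = np.load("X_test.npy")

clf = MLPClassifier(
    hidden_layer_sizes=(13,),   # h = 13 hidden neurons
    early_stopping=True,        # monitor a validation split and stop
    validation_fraction=0.18,   # roughly the paper's 95-of-523 share
    max_iter=2000,
    random_state=0,
)
clf.fit(X_train, y_train)
is_panicle = clf.predict(X_test)   # 1 = panicle region, 0 = other organ
```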

Table 1 – Classification accuracy (%) for panicle regions, other organ regions, and overall regions using an artificial neural network.

After the learning phase, the network was reimplemented in LabVIEW. The network was then used to identify panicle regions among the candidate panicle regions during the experiments.

2.5. Panicle number determination

After panicle identification with the ANN, the panicle number in the current image was determined as the number of regions remaining in the image. Some panicles may be hidden by other organs at one angle; consequently, the panicle number determined from the image at that angle will be smaller than the actual panicle number. However, there are images at certain angles in which all of the panicles (or at least the maximum number of panicles) can be seen, and the panicle number determined from such an image is the best estimate of the actual panicle number. Accordingly, the panicle number of the plant was defined as the maximum of the panicle numbers extracted from all 12 multi-angle images.
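Putting the pieces together, the plant-level count is simply the maximum over the 12 per-angle counts (a sketch; `classify_region`, standing for the trained ANN applied to a region's 12 features, is hypothetical):

```python
from skimage import measure

def plant_panicle_number(per_angle_masks, classify_region):
    """Count ANN-accepted regions in each angle's mask and return
    the maximum count over all angles, as defined in Section 2.5."""
    counts = []
    for mask in per_angle_masks:          # 12 binary masks, one per angle
        labels = measure.label(mask)
        regions = measure.regionprops(labels)
        counts.append(sum(1 for r in regions if classify_region(r)))
    return max(counts)
```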

To test the performance of the method developed in this study, 105 greenhouse rice plants of a popular rice variety (Zhonghua 11) at the full heading stage were imaged and analyzed. The mean absolute error (MAE) was calculated using Eq. (5).
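Eq. (5), lost in extraction, is the usual mean absolute error over the n plants:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| PN_{i,\mathrm{automatic}} - PN_{i,\mathrm{manual}} \right| \tag{5}$$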

where $PN_{i,\mathrm{automatic}}$ represents the panicle number counted automatically using the method described, $PN_{i,\mathrm{manual}}$ represents the panicle number determined manually, and n represents the number of rice plant samples.

3. Results and discussion

3.1. Panicle identification

Identifying panicles among the extracted candidate panicle regions is problematic, owing to natural variation in panicle shape, size, and color both within the same plant and among different plants (Fig. 3-a, c, e, g). Fig. 3 shows examples of panicle identification using the ANN. The green regions indicate regions that were identified as panicles, and the red regions indicate other organs. Although the ANN performed well in most cases (Fig. 3-a, b, c, d), its performance decreased in the following situations: (1) only part of another organ was extracted, and some of these extracted regions were similar in appearance to a panicle and consequently were misclassified as panicles (Fig. 3-e, f); and (2) some partly exserted panicles appeared very similar to other organs and thus were mistakenly treated as other organs (Fig. 3-g, h). To improve the accuracy of panicle identification, additional imaging technologies may be incorporated in the future.

For comparison, we also investigated discriminant analysis using the feature set selected for the ANN to classify panicles and other organs. Table 2 compares the classification accuracy of the ANN, linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA). Note that the test set was generated by combining the validation set and test set used for the construction of the ANN; the same training set was used as described for the construction of the ANN.

As shown in Table 2, the ANN generally outperformed LDA and QDA in discriminating panicles, whereas LDA and QDA performed better than the ANN in identifying other organs. In view of the overall classification accuracy, the ANN was the optimal classifier and was used in this study.

3.2. Panicle number determination

The MAE of the manual and automatic counts was 0.5. Fig. 4-a presents a scatter plot of the automatic count against the manual count. Fig. 4-b illustrates the distribution of the difference between the two counts (defined as $PN_{i,\mathrm{automatic}} - PN_{i,\mathrm{manual}}$). Among the 105 tested rice plants, 2.8%, 21.0%, 54.3%, 20.0%, and 1.9% of the plants generated differences of −2, −1, 0, 1, and 2, respectively, between the automated and manual counts. The variance of the difference was 0.6.

The discrepancy between the manual and automatic counts was caused chiefly by errors in image segmentation and panicle identification. Inaccurate image segmentation may lead to incorrect discrimination of the panicle from other organs and consequently introduce bias into the panicle number determination. Additionally, oversegmentation (segmentation of one panicle region into two or more regions) caused overestimation of panicle numbers, whereas failure to segment a panicle region from the rest of the plant may cause underestimation. Identification of panicles as other organs may result in underestimation of the panicle number, and identification of other organs as panicles may result in overestimation.

Segmenting panicles from the rest of the plant during the full heading stage was problematic, owing to the color similarity between panicles and other organs. Panicle exsertion differs not only between plants but also within tillers of the same plant. Moreover, the growth conditions of individual spikelets vary with the positions of the spikelets within the same panicle [26]. For these two reasons, color varies among panicles within the same plant and even among pixels belonging to the same panicle. In some cases, the panicles were overlapped by other organs. Color variation and organ overlap may result in false segmentation. In addition to i2 plane segmentation, we investigated segmentation based on other transformations of the RGB color space (including HSI, HSL, and discriminant analysis [21]) and the gray-level co-occurrence matrix (GLCM) [27]. Discriminant analysis and GLCM segmentation generated results similar to those of i2 plane segmentation. However, discriminant analysis is a supervised method and thus has the disadvantage of poorer generalization than i2 plane segmentation. The time required to compute the GLCM was large, especially for large images. In view of these limitations, we selected i2 component segmentation for segmenting panicles.

Fig. 3 – Examples of panicle identification with the ANN. (a), (c), (e), and (g) show the original RGB images of a rice plant. (b), (d), (f), and (h) illustrate the result of organ classification, in which green regions and red regions represent panicles and other organs, respectively. (a)/(b) and (c)/(d) show instances in which the panicles are correctly classified. (e)/(f) displays an instance in which a region of other organs is misclassified as a panicle region, whereas (g)/(h) illustrates an instance in which a panicle region is misidentified as a region of other organs.

Fig. 4 – Comparison of automatic image-based measurements with manual measurements. (a) Scatter plot of panicle numbers evaluated automatically against manual measurements. Regression line: y = 0.8511x + 0.5175, coefficient of determination (R²) = 0.8644. (b) Distribution of the differences between the two measurements.

In contrast to previous studies, in which rice panicles were cut from the rice plants and then imaged for further analysis, in this study we coupled image processing and pattern recognition, developing a new image analysis pipeline for precise discrimination of panicles from the rest of the plant in vivo. The candidate panicle regions were extracted by integrating hysteresis thresholding on the i2 component, region growing, and region merging. An ANN combining color, morphological, and location-related features was then adopted to identify panicle regions. Based on panicle region identification, the panicle number was determined in vivo in a non-intrusive manner.

Generally, the presented method produced satisfactory results for rice varieties in which the visual overlap between leaves and panicles was not large. For rice varieties having more than 20 panicles inserted deeply into the canopy, counting panicles from multi-angle RGB images of the plant is not practicable even for humans: none of the images obtained from different angles can display all of the panicles without overlap. Thus, the panicle number measured by our method (the maximum panicle number among the 12 images) was always smaller than the actual value, and for these varieties our method is not applicable. X-ray computed tomography (CT) is capable of detecting obscured objects and thus may have potential for counting panicles in these varieties, although its computational cost is relatively large. Compared with counting panicles, predicting panicle biomass or yield from RGB images taken from multiple angles may be more meaningful and feasible for these rice varieties.

Table 2 – Comparison of classification accuracy for panicle regions, other organ regions, and overall regions of the test set using an artificial neural network (ANN), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA).

4. Conclusion

This paper described the use of multi-angle imaging and image analysis to facilitate rice panicle counting. Using purpose-built image analysis software, the method described was able to determine the panicle number of pot-grown rice plants in vivo in a nonintrusive manner. We succeeded in largely overcoming the hurdle of spectral similarity among panicles, leaves, and stems by exploiting shape and textural differences among organs. A total of 105 rice plants during the full heading stage were examined to test the performance of the method. The mean absolute error of the manual and automatic counts was 0.5, with 95.3% of the plants yielding absolute errors within ±1. Future work will include monitoring the growth conditions of panicles, including panicle length, area, and degree of maturity. The photonics-based technology described here will be useful for predicting rice yield and screening candidate rice plants.

      Acknowledgments

This work was supported by grants from the National High Technology Research and Development Program of China (2013AA102403), the National Natural Science Foundation of China (30921091, 31200274), the Program for New Century Excellent Talents in University (NCET-10-0386), and the Fundamental Research Funds for the Central Universities (2013PY034, 2014BQ010).

References

[1] Ü. Kolukisaoglu, K. Thurow, Future and frontiers of automated screening in plant sciences, Plant Sci. 178 (2010) 476–484.

[2] H. Holtorf, M.C. Guitton, R. Reski, Plant functional genomics, Naturwissenschaften 89 (2002) 235–249.

[3] R.T. Furbank, Plant phenomics: from gene to form and function, Funct. Plant Biol. 36 (2009) 5–6.

[4] W. Yang, L. Duan, G. Chen, L. Xiong, Q. Liu, Plant phenomics and high-throughput phenotyping: accelerating rice functional genomics using multidisciplinary technologies, Curr. Opin. Plant Biol. 16 (2013) 180–187.

[5] L. Cabrera-Bosquet, J. Crossa, J. von Zitzewitz, M.D. Serret, J. Luis Araus, High-throughput phenotyping and genomic selection: the frontiers of crop breeding converge, J. Integr. Plant Biol. 54 (2012) 312–320.

[6] R.T. Furbank, M. Tester, Phenomics: technologies to relieve the phenotyping bottleneck, Trends Plant Sci. 16 (2011) 635–644.

[7] M. Bylesjö, V. Segura, R.Y. Soolanayakanahally, A.M. Rae, J. Trygg, P. Gustafsson, S. Jansson, N.R. Street, LAMINA: a tool for rapid quantification of leaf size and shape parameters, BMC Plant Biol. 8 (2008) 82.

[8] A. French, S. Ubeda-Tomás, T.J. Holman, M.J. Bennett, T. Pridmore, High-throughput quantification of root growth using a novel image-analysis tool, Plant Physiol. 150 (2009) 1784–1795.

[9] M.R. Golzarian, R.A. Frick, K. Rajendran, B. Berger, S. Roy, M. Tester, D.S. Lun, Accurate inference of shoot biomass from high-throughput images of cereal plants, Plant Methods 7 (2011) 2.

[10] W. Yang, Z. Guo, C. Huang, L. Duan, G. Chen, N. Jiang, W. Fang, H. Feng, W. Xie, X. Lian, et al., Combining high-throughput phenotyping and genome-wide association studies to reveal natural genetic variation in rice, Nat. Commun. 5 (2014) 5087.

[11] C. Reuzeau, J. Pen, V. Frankard, J. de Wolf, R. Peerbolte, W. Broekaert, W. van Camp, TraitMill: a discovery engine for identifying yield-enhancement genes in cereals, Mol. Plant Breed. 3 (2005) 753–759.

[12] C. Granier, L. Aguirrezabal, K. Chenu, S.J. Cookson, M. Dauzat, P. Hamard, et al., PHENOPSIS, an automated platform for reproducible phenotyping of plant responses to soil water deficit in Arabidopsis thaliana permitted the identification of an accession with low sensitivity to soil water deficit, New Phytol. 169 (2006) 623–635.

[13] Q. Zhang, Strategies for developing green super rice, Proc. Natl. Acad. Sci. U.S.A. 104 (2007) 16402–16409.

[14] Y. Xing, Q. Zhang, Genetic and molecular bases of rice yield, Annu. Rev. Plant Biol. 61 (2010) 421–442.

[15] M. Ikeda, Y. Hirose, T. Takashi, Y. Shibata, T. Yamamura, T. Komura, K. Doi, M. Ashikari, M. Matsuoka, H. Kitano, Analysis of rice panicle traits and detection of QTLs using an image analyzing method, Breed. Sci. 60 (2010) 55–64.

[16] T. Liu, D. Mao, S. Zhang, C. Xu, Y. Xing, Fine mapping SPP1, a QTL controlling the number of spikelets per panicle, to a BAC clone in rice (Oryza sativa), Theor. Appl. Genet. 118 (2009) 1509–1517.

[17] Y. Zhang, L. Luo, T. Liu, C. Xu, Y. Xing, Four rice QTL controlling number of spikelets per panicle expressed the characteristics of single Mendelian gene in near isogenic backgrounds, Theor. Appl. Genet. 118 (2009) 1035–1044.

[18] Z.Y. Liu, H.F. Wu, J.F. Huang, Application of neural networks to discriminate fungal infection levels in rice panicles using hyperspectral reflectance and principal components analysis, Comput. Electron. Agric. 72 (2010) 99–106.

[19] Z.Y. Liu, J.J. Shi, L.W. Zhang, J.F. Huang, Discrimination of rice panicles by hyperspectral reflectance data based on principal component analysis and support vector classification, J. Zhejiang Univ. Sci. B 11 (2010) 71–78.

[20] Y.I. Ohta, T. Kanade, T. Sakai, Colour information for region segmentation, Comput. Graphics Image Process. 13 (1980) 222–241.

[21] I. Philipp, T. Rath, Improving plant discrimination in image processing by use of different colour space transformations, Comput. Electron. Agric. 35 (2002) 1–15.

[22] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (1986) 679–698.

[23] N. Otsu, A threshold selection method from gray level histograms, IEEE Trans. Syst. Man Cybern. 9 (1979) 62–66.

[24] S. Buczkowski, S. Kyriacos, F. Nekka, L. Cartilier, The modified box-counting method: analysis of some characteristic parameters, Pattern Recogn. 31 (1998) 411–418.

[25] J.D. Farmer, E. Ott, J.A. Yorke, The dimension of chaotic attractors, Phys. D 7 (1983) 153–180.

[26] S. Yoshida, Fundamentals of Rice Crop Science, International Rice Research Institute, Philippines, 1981.

[27] R.M. Haralick, Statistical and structural approaches to texture, Proc. IEEE 67 (1979) 786–804.

* Corresponding author. Tel.: +86 27 87282120; fax: +86 27 87287092.

E-mail address: ywn@mail.hzau.edu.cn (W. Yang).

Peer review under responsibility of Crop Science Society of China and Institute of Crop Science, CAAS.

http://dx.doi.org/10.1016/j.cj.2015.03.002

2214-5141/© 2015 Crop Science Society of China and Institute of Crop Science, CAAS. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
