
    An effective graph and depth layer based RGB-D image foreground object extraction method

Computational Visual Media, 2017, Issue 4


Zhiguang Xiao1, Hui Chen1, Changhe Tu2, and Reinhard Klette3

© The Author(s) 2017. This article is published with open access at Springerlink.com

Abstract We consider the extraction of accurate silhouettes of foreground objects in combined color image and depth map data. This is of relevance for applications such as altering the contents of a scene, changing the depths of contents for display purposes in 3DTV, object detection, or scene understanding. To identify foreground objects and their silhouettes in a scene, it is necessary to segment the image in order to distinguish foreground regions from the rest of the image, the background. In general, image data properties such as noise, color similarity, or lightness make it difficult to obtain satisfactory segmentation results. Depth provides an additional category of properties.

    1 Proposed method

Our approach includes four steps: see Fig. 1. Firstly, graphs are built independently from color and depth information such that each node represents a pixel, and an edge e_{p,q} ∈ E connects nodes p and q. We transform the color space into CIELUV space to measure differences between pixels, and use the following region merging predicate: two regions are merged if and only if they are clustered in both the color and depth graphs, providing more accurate over-segmentation results. Secondly, depth maps are partitioned into layers using a multi-threshold method. In this step, objects belonging to different depth ranges are segmented into different layers. Thirdly, seed points are specified manually to locate the foreground objects, and to decide which depth layer they belong to. Finally, we merge the over-segmented scene according to cues obtained in the previous three steps, to extract foreground objects with their accurate silhouettes from both color and depth scenes.

    1.1 Improved graph-based algorithm with depth

Although there have been related studies over the past 20 years, image segmentation is still a challenging task. To obtain foreground objects, the first step of our approach is to obtain an over-segmented scene. We improve upon the graph-based approach of Ref. [1] in the following two ways:

Selection of color space. The first improvement concerns the color space used. RGB color space is often used because of its compatibility with additive color reproduction systems. In our approach, dissimilarity between pixels is measured by edge weights, which are calculated using Euclidean distance in CIELUV color space. Color differences measured by Euclidean distances in RGB color space are not proportional to human visual perception; CIELUV color space is considered to be perceptually more uniform than other color spaces.
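As an illustration of this weighting, the following minimal Python sketch computes ω(e) for the 4-connected grid edges of an image. The use of scikit-image's rgb2luv for the color conversion and the function name are our own assumptions, not part of the original implementation:

```python
import numpy as np
from skimage import color  # rgb2luv implements the CIE 1976 L*u*v* conversion

def cieluv_edge_weights(rgb_image):
    """Edge weights omega(e) for a 4-connected pixel grid, measured as
    Euclidean distance in CIELUV space (perceptually more uniform than RGB).

    rgb_image: HxWx3 float array in [0, 1].
    Returns weight arrays for horizontal and vertical grid edges."""
    luv = color.rgb2luv(rgb_image)
    w_h = np.linalg.norm(luv[:, 1:] - luv[:, :-1], axis=-1)  # right neighbours
    w_v = np.linalg.norm(luv[1:, :] - luv[:-1, :], axis=-1)  # down neighbours
    return w_h, w_v
```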

Fusion of color and depth. The second important aspect is that we combine color and depth information to provide more accurate over-segmentation results. In Ref. [1], the merging predicate is defined for regions R1 and R2 as
$$\mathrm{merge}(R_1,R_2)=\begin{cases}\text{true} & \text{if } d_b(R_1,R_2)\le d_i(R_1,R_2)\\ \text{false} & \text{otherwise}\end{cases}\tag{1}$$
where the minimum internal difference $d_i$ is defined by
$$d_i(R_1,R_2)=\min\big(w(R_1)+\tau(R_1),\; w(R_2)+\tau(R_2)\big)\tag{2}$$

Fig. 1 (a) Input color scene, (b) input depth map, (c) over-segmentation result, (d) selection of seed points (red line), (e) selected depth layer, and (f, g) extracted foreground object in color and depth.

Here, $d_b$ and $w$ are the between-region difference and the within-region maximum weight, respectively, and $\tau(R)$ is a threshold function based on the area of region $R$ (in Ref. [1], $\tau(R)=k/|R|$ for a scale parameter $k$); $d_b$ and $w$ are defined as follows:
$$d_b(R_1,R_2)=\min_{e_{p,q}\in E,\;p\in R_1,\;q\in R_2}\omega(e_{p,q})\tag{3}$$
$$w(R)=\max_{e\in E_R}\omega(e)\tag{4}$$
where the edge weight $\omega(e)$ is a measure of the dissimilarity between the two pixels connected by edge $e$, and an edge $e\in E_R$ connects two pixels in region $R$.

Exclusive use of color information is very likely to lead to under-segmentation, which needs to be avoided. Depth information, on the other hand, can provide additional cues for recovering more accurate object silhouettes. Thus, we build a separate graph based on depth information. During the segmentation process, two regions are clustered if and only if they are allowed to cluster in both the color image graph and the depth map graph.
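To make the joint predicate concrete, here is a minimal union-find sketch under several stated assumptions: edges are processed in order of their combined color-plus-depth weight, w(R) is tracked as the largest edge weight merged into a region, τ(R) = k/|R| follows Ref. [1], and the names and parameters k_c, k_d are hypothetical:

```python
import numpy as np

class UnionFind:
    def __init__(self, n):
        self.parent = np.arange(n)
        self.size = np.ones(n, dtype=int)
        self.max_w = np.zeros(n)          # within-region maximum weight w(R)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        self.parent[a] = b
        self.size[b] += self.size[a]
        self.max_w[b] = max(self.max_w[a], self.max_w[b], w)

def fh_predicate(uf, a, b, w, k):
    """Merge test of Eqs. (1)-(4): the between-region edge weight w must not
    exceed min(w(R) + k/|R|) over the two candidate regions."""
    ra, rb = uf.find(a), uf.find(b)
    return (ra != rb and
            w <= min(uf.max_w[ra] + k / uf.size[ra],
                     uf.max_w[rb] + k / uf.size[rb]))

def joint_segmentation(color_w, depth_w, edges, k_c, k_d):
    """Two regions merge only if the predicate passes in BOTH the color
    graph and the depth graph, as described above.
    edges: list of (p, q) pixel-index pairs; color_w/depth_w: per-edge weights."""
    n = 1 + max(max(p, q) for p, q in edges)
    uf_c, uf_d = UnionFind(n), UnionFind(n)
    order = np.argsort(np.asarray(color_w) + np.asarray(depth_w))
    for i in order:
        p, q = edges[i]
        if (fh_predicate(uf_c, p, q, color_w[i], k_c) and
                fh_predicate(uf_d, p, q, depth_w[i], k_d)):
            uf_c.union(p, q, color_w[i])
            uf_d.union(p, q, depth_w[i])
    return uf_c  # components of uf_c are the over-segmented regions
```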

    1.2 Seed point specification

Seed points are used to locate the foreground objects in both the color image and the depth map. Our approach allows a user to specify an individual object to be extracted as a foreground object by roughly drawing a stroke on the object. We sample points on the trajectory of the stroke as seed points.

Typically, our approach can extract the specified object by indicating seed points in this way only once, but in some cases repeated interaction might be needed to obtain a satisfactory result. Therefore, we define two kinds of seed points, those inside and outside an intended object, which we call positive and negative seed points (PSPs and NSPs), respectively.

Regions containing positive seed points are called positive seed regions. When we unify an over-segmented color image, we remove regions which contain negative seed points (negative seed regions) to break the continuity of regions that are connected under the constraints defined by the depth layers; we maintain positive seed regions as well as regions which are connected to them. Therefore, for each extraction task, a user may draw one or more strokes inside the foreground object for extraction, and pixels from the stroke are used as PSPs. Next, our approach provides an extraction result. If the result contains regions which should not be merged, the user may draw an NSP stroke in the joined regions to separate them, like using a knife to cut off redundant parts.
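A small sketch of this seed-driven unification over a region-adjacency graph; the adjacency-dict representation and all names are hypothetical:

```python
def merge_foreground(adjacent, psp_regions, nsp_regions):
    """Flood-fill from positive seed regions through the region-adjacency
    graph, never entering negative seed regions, so that NSPs break the
    continuity of regions as described above.
    adjacent: dict mapping a region label to its neighbouring labels."""
    keep, stack = set(), list(psp_regions)
    while stack:
        r = stack.pop()
        if r in keep or r in nsp_regions:
            continue
        keep.add(r)
        stack.extend(adjacent.get(r, ()))
    return keep  # regions retained as the extracted foreground object
```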

    1.3 Depth layering

Given a depth map of a 3D scene, the purpose of depth layering is to segment the continuous depth map into several depth layers. We assume in this paper that depth values for a single indoor object are only distributed within a small range. This assumption may not be true in general but appears to be acceptable in our application.

We partition the depth map into depth layers in the form of binary images. A depth layer contains pixels in a range of depth values, and we consider these pixels as the foreground region (white) of the chosen depth layer. One depth layer is used to extract one foreground object. Therefore, the specified foreground object for extraction should lie inside the foreground region of the selected depth layer, as our approach merges an over-segmented scene based on the selected depth layer. If the depth values of an object do not lie in a small range, the depth interval of this object exceeds the range covered by a single depth layer, and the integral object is divided across more than one depth layer. In such a case, our approach is unable to select a proper depth layer to extract the integral object.

Inpainting for depth map. Before depth layering, we do some preprocessing of the depth map, called inpainting, to reduce artefacts caused by the capturing procedure.

Time-of-flight cameras or structured lighting methods (as used by Kinect) cannot obtain depth information in over- or under-exposed areas, and are often inaccurate at the silhouettes of objects. This results in an incomplete depth map that has inaccurate depth for some pixels, and no depth at all for others. Estimating the depth for these missing regions is an ill-posed problem since there is very little information to use. Recovering the true depth would only be possible with very detailed prior knowledge of the scene (or by using an improved depth sensor, such as replacing one stereo matcher by another one).

We use an intensive bilateral filter, proposed in Ref. [2], to repair the depth map. The depth of each pixel with missing depth information is determined by searching the k×k neighboring pixels for ones with similar color in the color image and with a non-zero depth value. The search range varies until a fixed number of satisfactory neighboring pixels is reached. After completing all depth values of such pixels, a median filter is applied to the depth map to smooth outliers.
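A rough sketch of this hole-filling step follows. Ref. [2] is not reproduced here, so the growing-window search limits, the color tolerance, and the choice of the median of the satisfactory neighbours' depths are our assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def inpaint_depth(depth, rgb, k=5, needed=8, color_tol=10.0, max_k=21):
    """For each pixel with missing depth (value 0), search a growing kxk
    window for neighbours with similar colour and non-zero depth, and assign
    a value derived from them (median here, by assumption). Finally apply a
    median filter to smooth outliers, as described above."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y, x in zip(*np.where(depth == 0)):
        r = k // 2
        while r <= max_k // 2:
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            win_d = depth[y0:y1, x0:x1].astype(float)
            win_c = rgb[y0:y1, x0:x1].astype(float)
            diff = np.linalg.norm(win_c - rgb[y, x].astype(float), axis=-1)
            cand = win_d[(win_d > 0) & (diff < color_tol)]
            if cand.size >= needed:      # enough satisfactory neighbours
                out[y, x] = np.median(cand)
                break
            r += 2                       # enlarge the search range
    return median_filter(out, size=3)    # smooth remaining outliers
```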

Segmentation of depth map. After inpainting the depth map, we next segment it into different layers such that each layer contains pixels in a range of depth values. The problem here is to decide how many layers should be used. If too few layers are used, many over-segmented regions produced in the previous step are likely to fall in the same layer. Conversely, if too many layers are used, a single over-segmented region is likely to spread across more than one layer. Either case makes it difficult to decide whether a region belongs to a foreground object or not.

Our goal in this step is that regions which overlap the seed points specified by the user should be contained in the same layer. This agrees with the assumption that the user will usually specify most of the regions that belong to a foreground object. With this constraint, we segment the depth map into layers using an extended multi-threshold algorithm as proposed in Ref. [3]. Equation (5) outlines how to segment a depth map into a given number n of layers:
$$L_m=\{(i,j)\mid T_m\le D(i,j)<T_{m+1}\},\quad 0\le m\le n-1\tag{5}$$
where $D(i,j)$ is the depth value at pixel $(i,j)$, $T_m$, for $0\le m\le n-1$, is the $m$th threshold computed by an extended Otsu multi-threshold method, and $T_n$ denotes the maximum depth value.
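A compact sketch of this layering, assuming scikit-image's threshold_multiotsu stands in for the extended Otsu thresholds (the paper extends Ref. [3] itself):

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def depth_layers(depth, n):
    """Split an inpainted depth map into n layers (binary masks) using
    multi-level Otsu thresholds, one plausible reading of Eq. (5)."""
    valid = depth[depth > 0]                       # ignore any residual holes
    t = threshold_multiotsu(valid, classes=n)      # n-1 internal thresholds
    bounds = np.concatenate(([depth.min()], t, [depth.max() + 1]))
    return [(bounds[m] <= depth) & (depth < bounds[m + 1]) for m in range(n)]
```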

We propose a method to find a proper depth layer automatically for a foreground object in a given depth map. First, we initialise n to be the maximum number of layers sufficient for any 3D scene considered. Thus, for any given depth map, the proper number of layers should be in the range 2–n. Then we split the depth map repeatedly into 2 to n layers, in an ordered way, and obtain a series of segmented layers, each represented by one binary image. Thus, for any given depth map, an optimal layer for a specified object should be in this set of segmented layers.

We define the pixels with value 1 in one binary image as being the foreground pixels; they comprise the foreground region of this layer. A layer is defined to be a valid layer if and only if all the positive seed points are in the foreground region of its corresponding binary image.

We sort all valid layers according to the total number of foreground pixels in the binary image. Our experimental results indicate that choosing the middle valid layer from this sequence is a good choice, yielding the proper depth layer for the specified foreground object.
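The selection rule can be sketched as follows; candidate_layers is assumed to pool the binary layers produced by every split from 2 to n layers, and psps holds the positive seed point coordinates:

```python
def select_depth_layer(candidate_layers, psps):
    """Pick the depth layer for the object marked by positive seed points:
    keep layers whose foreground contains ALL PSPs (valid layers), sort them
    by foreground area, and return the middle one, as described above."""
    valid = [m for m in candidate_layers
             if all(m[y, x] for y, x in psps)]
    valid.sort(key=lambda m: int(m.sum()))
    return valid[len(valid) // 2] if valid else None
```

For example, the candidates could be gathered as `[m for j in range(2, n + 1) for m in depth_layers(depth, j)]` using the layering sketch above.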

    1.4 Merging over-segmented color regions

In practice, one object often contains a variety of colors while it connects to background regions at the same depth. Therefore, we propose to group the regions on the basis of regional continuity, which is established under the constraint of depth layers. Our regional continuity function is defined as follows:
$$C(k)=\begin{cases}1 & \text{if } A_d(k)/A_c(k)>T_A\\ 0 & \text{otherwise}\end{cases}\tag{6}$$
where $A_d(k)$ is the area of overlap of the foreground of the selected depth layer with region $k$, $A_c(k)$ is the total area of region $k$, and $T_A$ is an adjustable coefficient.

Based on this criterion, the region merging step starts with region labeling, to distinguish and count the area of each region. Firstly, each region is (approximately) relabeled for initialization. Secondly, for each pixel p, we find those pixels among its 8-adjacent neighbors which belong to the same region as p. We then update p by assigning the minimum label among those of the detected 8-adjacent neighbors and p itself. We repeat this procedure until no update occurs. After that, each region has a unique label, and the area of each region, as well as the area of the region overlapping the foreground region of the selected depth layer, can be determined by counting. Next, regional continuity is constructed on the basis of Eq. (6). We modify the regional continuity to remove mis-connected regions: negative seed regions, and regions that are connected to positive seed regions via negative seed regions, should be disconnected from positive seed regions. Finally, semantically meaningful object results are obtained by merging positive seed regions and the regions connected to them.
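The label-propagation step described above can be sketched as follows, assuming region_id is the per-pixel output of the over-segmentation; the vectorised np.roll formulation is our own:

```python
import numpy as np

def relabel_regions(region_id):
    """Every pixel repeatedly takes the minimum label among itself and its
    8-neighbours that share the same over-segmentation region id, until no
    update occurs; each connected region then carries a unique label."""
    h, w = region_id.shape
    label = np.arange(h * w).reshape(h, w)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    changed = True
    while changed:
        changed = False
        for dy, dx in shifts:
            src = np.roll(np.roll(label, dy, 0), dx, 1)
            same = np.roll(np.roll(region_id, dy, 0), dx, 1) == region_id
            # mask out wrap-around rows/columns introduced by np.roll
            if dy: same[0 if dy > 0 else -1, :] = False
            if dx: same[:, 0 if dx > 0 else -1] = False
            upd = same & (src < label)
            if upd.any():
                label[upd] = src[upd]
                changed = True
    return label
```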

    2 Experimental evaluation

    2.1 Qualitative analysis

Our approach was evaluated mainly on a large-scale hierarchical multi-view RGB-D object dataset collected using a Kinect device. A recently published dataset, the RGB-D Scenes Dataset v2 (RGB-D v2), includes 14 scenes which cover common indoor environments. Depth maps for this dataset were recovered by the intensive bilateral filter mentioned in Section 1.3 before the depth-layering step. The MSR 3D Video Dataset (MSR 3D), and more complex RGB-D images used by Zeng et al. [4], were also employed to test our approach.

Objects and their silhouettes extracted by our approach are shown in Fig. 2. Although Kinect devices provide depth maps with large holes and significant noise, a well-restored depth map and the segmented results demonstrate the robustness of our algorithm to noise in the depth images. From our results we conclude that our approach is able (for the test data used) to extract foreground objects from the different background scenes.

    2.2 Quantitative analysis

Metrics including precision, recall, and F-measure (see Eqs. (7)–(9)) were also computed and interpreted to analyze our results quantitatively:
$$\text{precision}=\frac{T_p}{T_p+F_p}\tag{7}$$
$$\text{recall}=\frac{T_p}{T_p+F_n}\tag{8}$$
$$F_\beta=\frac{(1+\beta^2)\cdot\text{precision}\cdot\text{recall}}{\beta^2\cdot\text{precision}+\text{recall}}\tag{9}$$

Fig. 2 Extraction results for scenes from different datasets. (A, B) Extracted silhouettes in color and depth images. (C, D) Extracted foreground objects in color and depth images.

Fig. 3 Quantitative analysis of different methods. MW: magic wand; GC: grab-cut [6]; BL: graph-based on color information only (i.e., baseline method); FM: the fast mode of Ref. [5] with depth layers; MS: mean-shift color-depth with depth layers; Our: our approach. The horizontal axis represents different datasets.

where $T_p$ is the number of correctly detected foreground object pixels, $F_p$ is the number of non-foreground pixels detected as foreground object pixels, and $F_n$ is the number of undetected foreground object pixels.


We extract ground truth manually to evaluate the results, and set β = 1 to calculate the F-measure, as we consider the recall rate to be as important as precision. Performance measures were computed for different datasets to evaluate the approach's effectiveness on those different datasets. See Fig. 3 for quantitative analysis results for our approach (yellow).
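For reference, a minimal implementation of Eqs. (7)–(9) over binary masks (the function and argument names are hypothetical):

```python
import numpy as np

def prf(pred, gt, beta=1.0):
    """Precision, recall, and F-measure of a binary foreground mask `pred`
    against a manually extracted ground-truth mask `gt` (both bool arrays)."""
    tp = np.sum(pred & gt)        # correctly detected foreground pixels
    fp = np.sum(pred & ~gt)       # background detected as foreground
    fn = np.sum(~pred & gt)       # undetected foreground pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f
```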

    2.3 Comparison with other methods

For a comparative evaluation of our approach, we also tested five other methods designed for extracting objects from scenes in the datasets used above. They are the magic wand in Photoshop, grab-cut, the original graph-based algorithm of Ref. [1] with depth layers, a multistage, hierarchical graph-based approach [5] with depth layers, and an improved mean-shift algorithm with depth layers. See Fig. 4 for comparative results.

Qualitative results. Compared to the magic wand, shown in Fig. 4(A), our approach (Fig. 4(F)) is able to reduce the amount of user interaction considerably, with only a single initialisation needed to complete an extraction task.

Grab-cut [6], in Fig. 4(B), is excellent in terms of simplicity of user input, but for colorful scenes the extraction process is difficult and more interactions are needed to indicate the foreground and the background. Moreover, the results lack discrimination in the presence of color similarity.

The above methods only use color information to extract foreground objects. For a further illustration of the performance of our approach, extraction results provided by methods in which color and depth are both applied are also compared with our approach. First, we take the original graph-based algorithm [1] with depth layers as a baseline method in our experiments: see Fig. 4(C). The graph-based algorithm generates over-segmented results. Then, regions are merged based on depth layer constraints and seed points. Comparing the results shows the effectiveness of our improved graph-based method.

We also compare with results obtained using the algorithm published in Ref. [5], which combines depth, color, and temporal information, and uses a multistage, hierarchical graph-based approach for segmenting 3D RGB-D point clouds. Because the scenes in our applications are static, we are able to use the fast mode (i.e., removing temporal information) of the algorithm of Ref. [5] to provide over-segmented results. The 3D point cloud data, as generated from the color scene and depth map, are used as input for this method. The foreground objects are extracted based on the previous result, seed points, and the depth layers. See Fig. 4(D) for results of this method following Ref. [5].

An improved mean-shift algorithm with depth layers, shown in Fig. 4(E), is another candidate used for testing. Depth information is first added to amend the mean-shift vector to over-segment the color scene. The over-segmented results are merged based on the seed points and depth layers.

Quantitative results. Figure 3 presents the precision rate, recall rate, and F-measure for the above methods on three different datasets. One of the merging constraints of our approach is based on the depth layer, and as the edges of objects in the depth map are not very accurate (usually lying slightly outside the objects compared to the ground truth), our approach may merge some pixels into the extraction results that do not belong to the ground truth. Some methods achieve higher precision because their extraction results are not integral and are almost entirely contained within the ground truth. Thus, the precision rate of our approach is lower than that of some other methods. However, our approach offers more integral extraction results, which makes its recall rate higher than that of the others. The F-measure with β = 1 demonstrates that our approach performs better overall.

Fig. 4 Foreground objects and silhouettes extracted by different methods in both color and depth. (A) Magic wand, (B) grab-cut [6], (C) graph-based on color information only (i.e., baseline method), (D) the fast mode of Ref. [5] with depth layers, (E) mean-shift color-depth with depth layers, and (F) our approach. (a) Interactions, (b, c) extraction results in color and depth.

Amount of interaction. Figure 4 shows the interaction needed by each method for each scene. For the magic wand, the red spots show the seed points specified by the users. The sizes and locations of the red spots must be chosen according to the different foregrounds.

In grab-cut, a rectangular box is drawn around the foreground object. Red lines are seed points in the foreground, and blue lines are seed points in the background. When applying the grab-cut method to colorful scenes, for example the scenes used by Zeng et al., more iterations and seed points are needed. We do not show all of the iterations of the grab-cut method on a scene used by Zeng et al. in Fig. 4; it is difficult to follow them visually. Seed points for the other four methods are specified by roughly drawing a stroke on the foreground. Red lines represent the seed points for the foreground, and blue lines represent the background.

There is no limitation on seed points in our method; we usually draw a stroke around the center of the specified foreground object, but this is not necessary. If the automatically selected depth layer is appropriate for extracting the foreground objects, then no further seed points are needed. If not, then more positive seed points are required to specify other positions to be extracted as parts of foreground objects. Positions can be located according to the previously selected depth layer; therefore a user can coarsely add positive seed points around the located positions to obtain a proper depth layer. The user is able to obtain the expected results by applying positive and negative seed points flexibly.

The extraction results of our approach remain fairly robust: the integrity of the objects is mostly retained while silhouettes are better preserved. In general, our approach outperforms the other approaches in the quality of its results, with a reduced need for interaction.

    Acknowledgements

The authors thank the editors and reviewers for their insightful comments. This work is supported by Key Project No. 61332015 of the National Natural Science Foundation of China, and Project Nos. ZR2013FM302 and ZR2017MF057 of the Natural Science Foundation of Shandong.

References

[1] Felzenszwalb, P. F.; Huttenlocher, D. P. Efficient graph-based image segmentation. International Journal of Computer Vision Vol. 59, No. 2, 167–181, 2004.

[2] Li, Y.; Feng, J.; Zhang, H.; Li, C. New algorithm of depth hole filling based on intensive bilateral filter. Industrial Control Computer Vol. 26, No. 11, 105–106, 2013.

[3] Otsu, N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics Vol. SMC-9, No. 1, 62–66, 1979.

[4] Zeng, Q.; Chen, W.; Wang, H.; Tu, C.; Cohen-Or, D.; Lischinski, D.; Chen, B. Hallucinating stereoscopy from a single image. Computer Graphics Forum Vol. 34, No. 2, 1–12, 2015.

[5] Hickson, S.; Birchfield, S.; Essa, I.; Christensen, H. Efficient hierarchical graph-based segmentation of RGBD videos. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 344–351, 2014.

[6] Rother, C.; Kolmogorov, V.; Blake, A. "GrabCut": Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics Vol. 23, No. 3, 309–314, 2004.

1 School of Information Science and Engineering, Shandong University, Jinan 250100, China. E-mail: Z. Xiao, xiaozhg@live.com; H. Chen, huichen@sdu.edu.cn.

2 School of Computer Science and Technology, Shandong University, Jinan 250100, China. E-mail: chtu@sdu.edu.cn.

3 School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1142, New Zealand. E-mail: rklette@aut.ac.nz.

Manuscript received: 2017-02-17; accepted: 2017-07-12

Zhiguang Xiao is a postgraduate at the School of Information Science and Engineering, Shandong University. He received his B.E. degree in electronics and information engineering from the College of Electronics and Information Engineering, Sichuan University. His research interests are in graph algorithms, computer stereo vision, and image segmentation.

Hui Chen is a professor at the School of Information Science and Engineering, Shandong University. She received her Ph.D. degree in computer science from the University of Hong Kong, and her bachelor's and master's degrees in electronics engineering from Shandong University. Her research interests include computer vision, 3D morphing, and virtual reality.

Changhe Tu is currently a professor and the associate dean at the School of Computer Science and Technology, Shandong University. He obtained his bachelor's, master's, and Ph.D. degrees, all from Shandong University. His research interests include geometric modelling and processing, computational geometry, and data-driven visual computing. He has published papers at SIGGRAPH and Eurographics, and in ACM TOG, IEEE TVCG, and CAGD.

Reinhard Klette is a Fellow of the Royal Society of New Zealand and a professor at Auckland University of Technology. He was on the editorial board of the International Journal of Computer Vision (2007–2014), the founding Editor-in-Chief of the Journal of Control Engineering and Technology (2011–2013), and an Associate Editor of IEEE PAMI (2001–2008). He has (co-)authored more than 300 publications in peer-reviewed journals or conferences, and books on computer vision, image processing, geometric algorithms, and panoramic imaging. He has presented more than 20 keynotes at international conferences. Springer London published his book entitled Concise Computer Vision in 2014.

Open Access: The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
