
      Correcting Image Distortion for Adaptive Cruise Control

2013-11-26 10:48:04

Ying Chen, Gongjun Yan, Danda B. Rawat, Awny Alnusair, and Bhed B. Bista

1. Introduction

In recent years, adaptive cruise control has emerged in intelligent vehicles. Many adaptive cruise control systems use radar to detect other vehicles, pedestrians, or obstacles, but little research has focused on lower-priced, camera-based adaptive cruise control systems. Compared with radar-based systems, a camera-based system offers several additional capabilities, such as lane departure warning, intelligent heading control, and traffic sign recognition.

However, lower-priced camera-based adaptive cruise control systems also face significant challenges. One of them is that the image quality of a low-cost camera is normally not sufficient for high-quality adaptive cruise control, which can be safety-critical for the driver. The images from such cameras are often distorted or blurred. To improve the quality of the adaptive cruise control system, image distortion must be corrected by an economical and efficient method.

This paper presents a new method for correcting camera image distortion using optical flow techniques, which are normally applied in motion estimation and video compression research. Because optical flow techniques have bounded processing delay, they fit vehicular networks, which are delay-limited environments [1]. To our knowledge, we are the first to apply optical flow techniques to image distortion correction. Two classic optical flow methods are introduced, and our experiments show that the Lucas-Kanade method controls errors better than the Horn-Schunck method for our sinusoidal test signals. A simple pair of test patterns is used to verify the optical flow method in three ways:

· A pair of synthetic test images, covering linear distortions (translation, rotation, and zoom) and a nonlinear distortion (barrel distortion).

      · A pair of test images with independent Gaussian noise.

      · A pair of photographs that capture the ideal test images.

2. Related Work

For traffic monitoring applications, Trajkovic [2] presented an interactive approach to calibrate a pan-tilt-zoom camera, assuming that the camera height is known. Bas and Crisman [3] used the measured height and tilt of the camera together with the road edges in an image. Lai and Yung [4] presented an algorithm that extracts complete multi-lane information by exploiting the prominent orientation and length features of lane markings and curb structures to discriminate them from minor features. Fung et al. [5] proposed a method based on the geometric properties of road lane markings. Using the known length and width of road lane markings, He and Yung [6] addressed the problem of ill-conditioned vanishing points. In an automatic approach, Schoepflin and Dailey [7] dynamically calibrated pan-tilt-zoom (PTZ) cameras, using lane activity maps to find the centers of lanes and estimating the vanishing point of lines perpendicular to the road from dynamic images by detecting the bottom edges of vehicles. Song and Tai [8] estimated the vanishing point by assuming the camera height and lane width are known in advance, using edge detection to find the lane markings in the static background image.

For lower-priced cameras, the most commonly encountered geometric distortions are radially symmetric. Currently, two kinds of software correction methods exist: warping the image with a reverse distortion, which approximates the inverse distortion with a low-order version of Brown's distortion model [9], and an alternative method that iteratively computes the undistorted pixel position [10]. Both methods are complicated and computationally expensive. Furthermore, the correction process of both methods varies with each lens and its adjustable focus and zoom settings. In this paper, a new distortion correction method is proposed that is straightforward and independent of camera parameters. If the distortion value over the whole image is known, the position change of each pixel can be estimated and the pixel moved to its ideal position. By subtracting the shift of each pixel from its distorted position, the desired position of the pixel is estimated. This desired position is then converted to x, y coordinates representing the desired position of the pixel in the image. Applying this process to every pixel yields a set of non-uniformly spaced points. Since these points carry the appropriate pixel values, the corrected image can be interpolated from them.
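The pixel-repositioning step just described can be sketched in a few lines. This is an illustrative sketch (not the paper's code), assuming NumPy and SciPy are available and that the per-pixel shifts dx and dy have already been estimated:

```python
import numpy as np
from scipy.interpolate import griddata

def correct_image(distorted, dx, dy):
    """Move every pixel to its desired position by subtracting its
    estimated shift, then interpolate the resulting non-uniformly
    spaced points back onto a regular pixel grid."""
    h, w = distorted.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Desired (undistorted) position of each pixel.
    x_new = xs - dx
    y_new = ys - dy
    points = np.column_stack([x_new.ravel(), y_new.ravel()])
    values = distorted.ravel()
    # Scattered-data interpolation back onto the regular grid.
    return griddata(points, values, (xs, ys), method='linear')
```

Pixels whose desired positions fall outside the convex hull of the shifted points come back as NaN and would need boundary handling in practice.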

The most commonly encountered geometric distortions are radially symmetric and arise from the symmetry of photographic lenses. Radial distortion is usually described as either barrel or pincushion distortion, depicted in Fig. 1. Most camera lenses introduce some distortion, which, before the emergence of digital cameras, could only be corrected optically with other lenses. For digital cameras, however, lens distortion can be corrected with in-camera image processing software [11].

Since optical flow methods can calculate the motion between two frames taken at times t and t + δt at every voxel position, we use them in this paper to calculate the displacement between the distorted image and the ideal image. Sequences of ordered images are used to estimate motion as either instantaneous image velocities or discrete image displacements [12], so the desired (ideal) image and the distorted image are treated as two sequential images of the same scene taken at different times. A texture mapping method is then used to warp the new image with the result of the optical flow.

Fig. 1. Different types of lens distortion: (a) no distortion, (b) barrel, and (c) pincushion.

3. Image Correction with Optical Flow Technology

The image correction procedure using the optical flow method is illustrated in Fig. 2.

In the flowchart, the first input image, InputImage1, is the current frame in the optical flow method, and the second input image, InputImage2, is the previous frame. The result of the optical flow method is the position change from the previous frame to the current frame. InputImage2 and the position changes are then passed to the warping function, whose output is the new image. The error is found by comparing the new image with InputImage1; if the error equals zero, the new image is identical to InputImage1.
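The flowchart can be summarized in a few lines of Python. This is a schematic sketch, where optical_flow and warp are placeholders for the methods described in the following subsections:

```python
import numpy as np

def correction_step(input_image1, input_image2, optical_flow, warp):
    """One pass of the Fig. 2 pipeline: estimate the per-pixel position
    change from InputImage2 (previous frame) to InputImage1 (current
    frame), warp InputImage2 with it, and measure the residual error."""
    vx, vy = optical_flow(input_image1, input_image2)
    new_image = warp(input_image2, vx, vy)
    # Zero error means the new image is identical to InputImage1.
    error = np.mean(np.abs(new_image - input_image1))
    return new_image, error
```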

      3.1 Optical Flow Techniques

The optical flow method captures the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and the scene [13], [14]. Optical flow methods based on a local Taylor series approximation of the image signal estimate the motion between two frames taken at times t and t + δt, using the optical flow constraint equation

∂I/∂x · Vx + ∂I/∂y · Vy + ∂I/∂t = 0,    (1)

where Vx and Vy are the x and y components of the velocity (optical flow); I(x, y, t) denotes the image brightness at the point (x, y) at time t; and ∂I/∂x, ∂I/∂y, and ∂I/∂t are the derivatives of the image in the x, y, and t directions, respectively. Equation (1) cannot be solved on its own since it contains two unknowns. To obtain a second constraint, another set of optical flow equations is needed.
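The derivatives appearing in (1) can be estimated with simple finite differences. The sketch below is our illustration (not the paper's code) using NumPy and assuming a unit time step:

```python
import numpy as np

def image_derivatives(frame1, frame2):
    """Finite-difference estimates of the spatial and temporal
    derivatives used in the optical flow constraint.  frame1 is the
    frame at time t, frame2 the frame at time t + δt."""
    Ix = np.gradient(frame1, axis=1)   # dI/dx
    Iy = np.gradient(frame1, axis=0)   # dI/dy
    It = frame2 - frame1               # dI/dt (unit time step)
    return Ix, Iy, It
```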

Fig. 2. Diagram of image correction with optical flow.

Because the feasibility of optical flow methods for image distortion correction has not been confirmed, two different classical optical flow methods are employed to implement the correction in this paper. Vx and Vy are solved for under different assumptions, and the precision of each method is checked with different image inputs in Section 4.

      3.2 Lucas-Kanade Method

The Lucas-Kanade method is a widely used differential method for optical flow estimation that introduces additional conditions for estimating the actual flow. Assuming the flow (Vx, Vy) is constant in a local neighborhood, a small window of size n×n (n > 1) is defined around the central pixel (x, y). The pixels in the window are numbered 1, 2, …, m, where m = n². With these conditions, (1) can be written in matrix form as

A v = b,    (2)

where the i-th row of A is [∂I/∂x(xi, yi)  ∂I/∂y(xi, yi)], v = [Vx  Vy]ᵀ, the entries of b are −∂I/∂t(xi, yi), and xi and yi are the coordinates of the pixels in the window, i = 1, 2, …, m.

To solve this over-determined system, the least squares method is applied:

v = (AᵀA)⁻¹ Aᵀ b.    (3)

This least squares solution gives equal weight to all m pixels in the window.

In practice, a weighted window is usually used to give more prominence to the central pixel ρ of the window. The weighted solution is

v = (AᵀWA)⁻¹ AᵀW b,  with W = diag(w1, …, wm),    (4)

where wi is usually a Gaussian function of the distance between pixel i and ρ.
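For a single window, the weighted least squares solution described above can be sketched as follows (an illustrative NumPy sketch, not the paper's code; uniform weights reduce it to the plain least squares solution):

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It, weights=None):
    """Solve the over-determined per-window system by (weighted) least
    squares.  Ix, Iy, It hold the derivative values at the m = n*n
    pixels of the window."""
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    if weights is None:
        W = np.eye(A.shape[0])        # equal weight for all pixels
    else:
        W = np.diag(weights.ravel())  # e.g. Gaussian weights
    # v = (A^T W A)^(-1) A^T W b via the normal equations
    v = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return v  # (Vx, Vy)
```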

      3.3 Horn-Schunck Method

The Horn-Schunck algorithm introduces an equation that relates the change in image brightness at a point to the motion of the brightness pattern [15]. It assumes the flow is smooth over the whole image and, for a two-dimensional image, minimizes the energy

E = ∬ [ (Ix·u + Iy·v + It)² + α² (|∇u|² + |∇v|²) ] dx dy,    (5)

where Ix, Iy, and It are the derivatives of the image intensity values in the x, y, and time dimensions, respectively; u and v are the components of the optical flow; and α is the weight coefficient of the smoothness term.

This energy is minimized by solving the associated Euler-Lagrange equations, so (5) can be simplified to

∂L/∂u − ∂/∂x (∂L/∂ux) − ∂/∂y (∂L/∂uy) = 0,
∂L/∂v − ∂/∂x (∂L/∂vx) − ∂/∂y (∂L/∂vy) = 0,    (6)

where L is the integrand of the energy expression:

L = (Ix·u + Iy·v + It)² + α² (|∇u|² + |∇v|²).    (7)

Evaluating these derivatives yields

Ix (Ix·u + Iy·v + It) − α² Δu = 0,
Iy (Ix·u + Iy·v + It) − α² Δv = 0,

where the Laplace operator Δ equals ∂²/∂x² + ∂²/∂y². In practice Δu(x, y) is approximated by ū(x, y) − u(x, y), where ū(x, y) is a weighted average of u calculated in a neighborhood around the pixel at location (x, y), and similarly for v. With these notations, the above equation system may be written as

(Ix² + α²) u + Ix·Iy · v = α² ū − Ix·It,
Ix·Iy · u + (Iy² + α²) v = α² v̄ − Iy·It.    (8)

Since the solution of (8) at each point depends on the neighboring values of the flow field, it must be recomputed once the neighbors have been updated. Following the Gauss-Seidel method [16], the iterative scheme

u^(k+1) = ū^(k) − Ix (Ix·ū^(k) + Iy·v̄^(k) + It) / (α² + Ix² + Iy²),
v^(k+1) = v̄^(k) − Iy (Ix·ū^(k) + Iy·v̄^(k) + It) / (α² + Ix² + Iy²)

is derived.
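The iterative solution of (8) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; it assumes SciPy for the neighborhood averaging, and the 3×3 kernel is one common choice for computing the local means ū and v̄:

```python
import numpy as np
from scipy.ndimage import convolve

# Common 3x3 averaging kernel for the local mean (weights sum to 1).
AVG_KERNEL = np.array([[1/12, 1/6, 1/12],
                       [1/6,  0.0, 1/6],
                       [1/12, 1/6, 1/12]])

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=100):
    """Iterate the Horn-Schunck update until the flow (u, v) settles."""
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        u_bar = convolve(u, AVG_KERNEL, mode='nearest')
        v_bar = convolve(v, AVG_KERNEL, mode='nearest')
        t = (Ix * u_bar + Iy * v_bar + It) / denom
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v
```

With a small α the data term dominates, so for constant derivative fields the scheme converges quickly to the flow that satisfies the brightness constraint.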

4. Test Images

To compare the images reconstructed by the Lucas-Kanade and Horn-Schunck methods, a pair of synthetic test images is generated as input to the two methods. Only grayscale digital images are used, to eliminate the effect of color distortion. A sine wave signal is employed to generate these grayscale images, because a sine wave has high gradients almost everywhere in the image, and its low spatial frequency makes it robust to blur and limited lens resolution. In addition, photographed images are used as input to compare the two optical flow methods on real image distortion.

In the camera image case, we use an LCD screen [17] to display the original image and capture the screen as the source photograph. Compared with traditional checkerboard patterns, the test pattern shown on the screen is stable, geometrically correct, and requires less manual intervention.

Ten pairs of camera images are generated for testing. The object scene used in image capturing is also a sine wave image. For example, following the original image in Section 3.1, the original object scene (Fig. 3 (a)) of the source photograph (Fig. 3 (b)) is generated from (9) with parameters A = B = 2π/10, C = D = 0.

Similarly, following the shift image, the shift object scene (Fig. 4 (a)) of the destination photograph (Fig. 4 (b)) is generated with parameters A = B = 2π/10, C = 0.3×2π/10, D = 0.

Since the resolution of a photograph is 640×480 pixels, a translation of 0.3 pixels along the y-axis in the object scene results in a shift of 0.96 pixels along the y-axis in the photographs.
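The exact form of the test pattern equation (9) is not reproduced in this excerpt. As a purely hypothetical illustration of a two-dimensional sine pattern with frequency parameters A, B and phase parameters C, D, one might write:

```python
import numpy as np

def sine_pattern(width, height, A, B, C, D):
    """Hypothetical 2-D sine test pattern (the paper's exact form of
    (9) is not shown here).  A, B are spatial frequencies and C, D
    are phase offsets; a phase shift of A*s translates the pattern
    by s pixels."""
    y, x = np.mgrid[0:height, 0:width].astype(float)
    # Values scaled into [0, 1] so the image can be displayed directly.
    return 0.5 + 0.5 * np.sin(A * x + C) * np.sin(B * y + D)
```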

Fig. 3. Original camera image: (a) original object scene and (b) source photograph.

Fig. 4. Shift camera image: (a) shift object scene and (b) destination photograph.

Table 1. Y-dimension error of photographs.

Fig. 5. Y-dimension error curves of photographs in the translation case.

5. Experiments

All experiments are conducted with our Python implementation of the algorithm, running on a PC workstation equipped with an AMD Athlon 64 X2 dual-core processor and 3 GB of RAM. The main time consumer in our experiments is the new-image generation step: for a photograph with a resolution of 640×480 pixels, warping the new image takes around 15 seconds, while the optical flow and error calculation take less than 1 second.

Five pairs of photographs were tested in the experiments. The first input image to the Lucas-Kanade and Horn-Schunck methods is the source photograph introduced in Section 4; the second input image is the destination photograph.

In Table 1, dy (in pixels) is the translation between the two input photographs. Column 3 gives the average error obtained with the Horn-Schunck method, and Column 4 gives the average error with the Lucas-Kanade method. In general, the errors of the Lucas-Kanade method are smaller than those of the Horn-Schunck method.

Fig. 5 shows the error curves for the Lucas-Kanade and Horn-Schunck methods. The errors resulting from the Lucas-Kanade method are significantly smaller than those from the Horn-Schunck method.

6. Conclusions

In this paper, an image distortion correction algorithm has been presented to improve the quality of adaptive cruise control using lower-priced cameras. The Lucas-Kanade and Horn-Schunck methods were compared, and the image distortion correction procedure using optical flow was tested with both synthetic test images and camera images. The experimental results show that image distortion correction using the Lucas-Kanade method achieves errors of about one hundredth of a pixel.

Our work verifies that using optical flow for image distortion correction is feasible, since the errors are very small, nearly one hundredth of a pixel for the test images. Although the errors for photographs are slightly larger than those for synthetic test images, our analysis shows that the increase is due to noise introduced by the camera itself.

References

[1] J. C. F. Li and S. Dey, "Outage minimisation in wireless relay networks with delay constraints and causal channel feedback," Eur. Trans. Telecomm., vol. 21, no. 3, pp. 251-265, 2010.

[2] M. Trajkovic, "Interactive calibration of a PTZ camera for surveillance applications," in Proc. of Asian Conf. on Computer Vision, Melbourne, 2002, pp. 1-8.

[3] E. K. Bas and J. D. Crisman, "An easy to install camera calibration for traffic monitoring," in Proc. of IEEE Conf. on Intelligent Transportation Systems, Boston, 1997, pp. 362-366.

[4] A. H. S. Lai and N. H. C. Yung, "Lane detection by orientation and length discrimination," IEEE Trans. on Systems, Man, and Cybernetics, Part B, vol. 30, no. 4, pp. 539-548, 2000.

[5] G. S. K. Fung, N. H. C. Yung, and G. K. H. Pang, "Camera calibration from road lane markings," Optical Engineering, vol. 42, no. 10, pp. 2967-2977, 2003.

[6] X. C. He and N. H. C. Yung, "New method for overcoming ill-conditioning in vanishing-point-based camera calibration," Optical Engineering, vol. 46, no. 3, pp. 1-12, 2007.

[7] T. N. Schoepflin and D. J. Dailey, "Dynamic camera calibration of road-side traffic management cameras for vehicle speed estimation," IEEE Trans. on Intelligent Transportation Systems, vol. 4, no. 2, pp. 90-98, 2003.

[8] K.-T. Song and J.-C. Tai, "Dynamic calibration of a pan-tilt-zoom camera for traffic monitoring," IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, no. 5, pp. 1091-1103, 2006.

[9] D. C. Brown, "Decentering distortion of lenses," Photogrammetric Engineering, vol. 32, no. 3, pp. 444-462, 1966.

[10] J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," in Proc. of 1997 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, San Juan, 1997, pp. 1106-1112.

[11] E. M. Mikhail, J. S. Bethel, and J. C. McGlone, Introduction to Modern Photogrammetry, New York: Wiley, 2001.

[12] S. S. Beauchemin and J. L. Barron, The Computation of Optical Flow, New York: ACM, 1995.

[13] A. Burton and J. Radford, Thinking in Perspective: Critical Essays in the Study of Thought Processes, London: Methuen, 1978.

[14] D. H. Warren and E. R. Strelow, Electronic Spatial Sensing for the Blind: Contributions from Perception, Berlin: Springer, 1985.

[15] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1-3, pp. 185-203, 1981.

[16] R. W. Hamming, Numerical Methods for Scientists and Engineers, New York: McGraw-Hill, 1962.

[17] Y. Francken, C. Hermans, and P. Bekaert, "Screen-camera calibration using gray codes," presented at the Sixth Canadian Conference on Computer and Robot Vision, Kelowna, 2009.
