PatViewer Patent Search
China · Invention · Under Examination

Image Correction Method and Device

Application (Patent) No.: CN201910123423.9    Country/Province Code: Shanghai 31
Applicant (Patentee): Shanghai Heqian Electronic Technology Co., Ltd.

Abstract:
The present invention discloses an image correction method and device for correcting captured distorted images, comprising the following steps: customizing a single feature target chart and capturing a distorted image of the single feature target chart with a camera; performing distortion correction on the distorted image of the single feature target chart with a distortion parameter table, then successively performing perspective transformation and region-wise fitting to obtain the corresponding classical extrinsic matrix and region-wise fitting functions; capturing a distorted image of the actual scene with the camera and successively performing distortion correction with the optical distortion parameter table, perspective transformation with the classical extrinsic matrix, region-wise fitting correction with the region-wise fitting functions, and stitching and fusion, generating the final corrected image of the actual scene. The technical solution of the present invention achieves high-precision distortion correction of images captured by individual cameras and improves how precisely the image matches the actual positions of objects in the camera's field of view.

Principal claim:
1. An image correction method for correcting captured distorted images, characterized by comprising the following steps: Step S1: placing a single feature target chart and capturing it with a camera to generate a distorted image of the single feature target chart; Step S2: performing distortion correction on the distorted image of the single feature target chart using an optical distortion parameter table to generate a virtual object plane image of the single feature target chart; Step S3: performing extrinsic calibration on the virtual object plane image of the single feature target chart to obtain a classical extrinsic matrix, then applying a perspective transformation with the classical extrinsic matrix to generate a perspective-transformed image of the single feature target chart; Step S4: dividing the perspective-transformed image of the single feature target chart into different regions, pairing the feature points on the perspective-transformed image one-to-one, region by region, with the feature points on the single feature target chart, and fitting each region to obtain region-wise fitting functions; Step S5: capturing an actual scene image with the camera to generate a distorted image of the actual scene, then successively performing distortion correction with the optical distortion parameter table, perspective transformation with the classical extrinsic matrix, and region-wise fitting correction with the region-wise fitting functions according to the divided regions, followed by stitching and fusion, to generate the final corrected image of the actual scene.


Similar Patents
Correction method for wrinkled, distorted QR two-dimensional codes
Image processing method of a panoramic vehicle safety system based on multi-camera self-calibration
Image stitching processing method and device
Fast camera distortion correction method based on collinear feature points
Vehicle multi-view panorama generation method for assisted driving
Fisheye image distortion correction method based on ellipse segmentation
Vehicle multi-view panorama generation method for assisted driving
Gaze-positioning panorama control method
Modeling method and device for distortion parameters, correction method, terminal, and storage medium
Correction method and device for barrel-distortion images
Image correction method and device
Structured-light-based projection correction method and system
Fisheye image correction method after fisheye lens calibration
Dual-camera video fusion distortion correction and viewpoint fine-adjustment method and system
Method and device for determining the optical center of a fisheye camera
Panoramic parking assist system
Calibration method and device for radial distortion of a fisheye lens
Vehicle-mounted surround-view system, stereo calibration method therefor, and computer-readable storage medium
Image correction method, device, equipment, and system, camera equipment, and display equipment
Image correction method and device
Description

Image Correction Method and Device

Technical Field

The present invention belongs to the field of camera video image processing, and in particular relates to an image correction method and device.

Background Art

With the rapid development of electronics, driver assistance for automobiles has advanced quickly. The most widespread visual driver-assistance feature today is the vehicle-mounted 360-degree surround view, which conveniently solves the driver's inability to observe blind spots within a few meters around the vehicle body. The technique installs wide-angle cameras at the front, both sides, and rear of the vehicle to obtain a complete field of view around the body; an MCU or DSP, using the camera parameters calibrated beforehand and the wide-angle lenses' optical distortion parameters, corrects and fuses the images from the individual cameras to generate the 360-degree surround-view image, which is transmitted to a display for real-time viewing.

Current vehicle-mounted 360-degree surround-view systems largely solve the driver's inability to see, with the naked eye, the blind zone within a few meters of the vehicle body, and handle basic maneuvers such as reversing, turning, passing through narrow roads, and spotting low obstacles very well. However, because of manufacturing-precision differences among the individual wide-angle cameras mounted around the vehicle, each camera retains a different degree of residual image distortion after distortion correction. This increases the complexity of the 360-degree image-fusion stage, the misalignment of line segments in images fused with simple methods, and the subjectively perceived mismatch of the fused image.

Automotive driver assistance is still in a developmental stage; both 360-degree surround view and front/rear dash cameras attempt to accomplish specific functions. The purpose of 360-degree surround view, for instance, is to support the driver's subjective judgment of the sight blind zones and the actual positions of objects in the near range around the vehicle. If the stitched surround-view image carries distortion errors caused by differences among individual cameras, the driver's subjective judgment of the actual spatial positions of objects in the surrounding field of view is severely affected, and subjective illusions may even arise.

To reduce this subjective illusion of stitching misalignment, the industry currently often applies image fusion: in regions where images from multiple cameras overlap, the image information of each camera is retained with certain weights, and transparency processing and the like soften the misalignment perceived by the driver's eye. Even so, the overlap regions remain blurry and somewhat misaligned. In addition, the industry commonly performs time-consuming intrinsic calibration of each individual camera, solving the intrinsic equations of a chosen correction model by mathematical optimization to reduce that camera's distortion-correction error. This method requires the individual camera to capture many target charts whose positions must vary so that the captured charts cover as much of the field of view as possible; it places high demands on the calibration environment, equipment, and labor, and carries inherent algorithmic error: repeating the intrinsic calibration of one camera typically yields different intrinsic parameters each time, so image distortion correction with those parameters still carries error, while the camera's production time and cost increase.

Summary of the Invention

The present invention provides an image distortion correction method that achieves high-precision distortion correction of images captured by an individual camera and improves how precisely the image matches the actual positions of objects in the camera's field of view.

To this end, an image correction method according to the present invention, for correcting captured distorted images, comprises the following steps:

Step S1: placing a single feature target chart and capturing it with a camera to generate a distorted image of the single feature target chart;

Step S2: performing distortion correction on the distorted image of the single feature target chart using an optical distortion parameter table to generate a virtual object plane image of the single feature target chart;

Step S3: performing extrinsic calibration on the virtual object plane image of the single feature target chart to obtain a classical extrinsic matrix, then applying a perspective transformation with the classical extrinsic matrix to generate a perspective-transformed image of the single feature target chart;

Step S4: dividing the perspective-transformed image of the single feature target chart into different regions, pairing the feature points on the perspective-transformed image one-to-one, region by region, with the feature points on the single feature target chart, and fitting each region to obtain the region-wise fitting functions;

Step S5: capturing an actual scene image with the camera to generate a distorted image of the actual scene, then successively performing distortion correction with the optical distortion parameter table, perspective transformation with the classical extrinsic matrix, and region-wise fitting correction with the region-wise fitting functions according to the divided regions, followed by stitching and fusion, to generate the final corrected image of the actual scene.

Optionally, the single feature target chart is rectangular, the feature points arranged on it are first feature points, and the first feature points are spaced at equal intervals horizontally and vertically.

Optionally, the virtual object plane image is an image generated on a virtual object plane. The optical distortion parameter table contains field angles and corresponding image heights; a field angle is the angle between the line from a point on the virtual object plane to the camera's optical center and the camera's principal optical axis, and the corresponding image height is the distance from the corresponding point on the image plane to the center of the image plane, the image plane being the distorted image captured by the camera;

in step S2, the distorted image of the captured single feature target chart is corrected through the correspondence between field angle and corresponding image height.

Optionally, step S2 comprises the following steps:

Step S21: fitting the field-angle and corresponding image-height data in the optical distortion parameter table to obtain a fitting formula of field angle versus image height;

Step S22: computing the coordinates of the center of the virtual object plane and the center of the image plane;

Step S23: combining the fitting formula of field angle versus image height with the coordinates of the center of the virtual object plane and the center of the image plane to compute the correspondence between the coordinates of points on the virtual object plane and the coordinates of the corresponding points on the image plane;

Step S24: interpolating the pixels adjacent to the corresponding point on the image plane to generate the pixel of the corresponding point on the virtual object plane.

Optionally, on the distorted image, the virtual object plane image, and the perspective-transformed image of the single feature target chart there exist second, third, and fourth feature points corresponding to the first feature points on the single feature target chart, and the first feature points of the single feature target chart are mapped by the scaling ratio onto the perspective-transform plane to generate fifth feature points.

Step S3 comprises the following steps:

Step S31: extracting the third feature points from the virtual object plane image of the single feature target chart and mapping them one-to-one to the uniformly distributed first feature points of the single feature target chart, obtaining the coordinates of the mapped third feature points;

Step S32: mapping the coordinates of the first feature points of the single feature target chart onto the perspective-transform plane by the scaling ratio, obtaining the coordinates of the fifth feature points on the perspective-transform plane;

Step S33: solving for the classical extrinsic matrix, which transforms the coordinates of the fifth feature points on the perspective-transform plane into the coordinates of the mapped third feature points;

Step S34: applying a perspective transformation to the virtual object plane image of the single feature target chart according to the classical extrinsic matrix, obtaining the perspective-transformed image of the single feature target chart.

Optionally, step S31 comprises the following steps:

Step S311: numbering the third feature points in the virtual object plane image of the single feature target chart and recording their coordinates;

Step S312: taking the third feature point with the smallest horizontal coordinate as the gradient-analysis vertex;

Step S313: connecting each remaining third feature point, by number, to the gradient-analysis vertex, computing the angle of each connecting line within the upper half-quadrant of the horizontal direction, and selecting the third feature points with the largest angles, the number selected being one fewer than the number of rows of first feature points distributed on the single feature target chart;

Step S314: re-sorting the selected third feature points in ascending order of their vertical coordinates and recording their coordinates;

Step S315: removing the re-sorted third feature points from the virtual object plane image of the single feature target chart; if the third feature points in the virtual object plane image have not all been removed, returning to step S311; once all have been removed, the re-sorted third feature points and their coordinates are exactly the third feature points, with coordinates, mapped one-to-one to the uniformly distributed first feature points of the original single feature target chart.

Optionally, step S4 comprises the following steps:

Step S41: matching the fourth feature points on the perspective-transformed image of the single feature target chart one-to-one with the fifth feature points of the single feature target chart on the perspective-transform plane;

Step S42: dividing the perspective-transform plane into regions according to the maximum distance between matched fourth and fifth feature points, and fitting the coordinates of the fourth feature points against the coordinates of the fifth feature points region by region, obtaining the region-wise fitting functions.

Optionally, step S41 comprises the following steps:

Step S411: selecting the fifth feature points in numerical order and computing the distances from the selected fifth feature point to all fourth feature points;

Step S412: comparing the smallest computed distance with a preset threshold; when the smallest distance is below the preset threshold, recording the smallest distance together with the number and coordinates of the corresponding fourth feature point, this smallest distance being the distance between the matched fifth and fourth feature points; otherwise, recording the smallest distance and the coordinates of the fourth feature point as 0;

Step S413: if not all fifth feature points have been selected, selecting the next fifth feature point and returning to step S411; otherwise, stopping the one-to-one matching of fifth and fourth feature points.

Optionally, step S5 comprises the following steps:

Step S51: performing distortion correction on the distorted image of the actual scene captured by the camera using the optical distortion parameters, obtaining a virtual object plane image of the actual scene;

Step S52: applying a perspective transformation to the virtual object plane image of the actual scene with the classical extrinsic matrix, obtaining a perspective-transformed image of the actual scene;

Step S53: applying the region-wise fitting functions to the perspective-transformed image in each divided region, obtaining the fitting-corrected image of each region;

Step S54: stitching the fitting-corrected images of all regions to obtain the final corrected image.

To the same end, an image correction device according to the present invention, for correcting captured distorted images, is characterized in that it comprises a camera, a distortion correction module, a perspective transform module, and a fitting module;

the camera captures original images and generates distorted images;

the distortion correction module performs distortion correction on the distorted images using the optical distortion parameter table and generates virtual object plane images;

the perspective transform module performs extrinsic calibration on a virtual object plane image and applies a perspective transformation to it with the classical extrinsic matrix obtained from that calibration, generating a perspective-transformed image;

the fitting module pairs the points of the original image and the perspective-transformed image one-to-one by region and fits them, computes the region-wise fitting functions of the different regions, applies those functions to fit-correct the perspective-transformed image of each region, and stitches and fuses the results to generate the final corrected image.

Optionally, the optical distortion parameter table contains field angles and corresponding image heights; a field angle is the angle between the line from a point on the virtual object plane to the camera's optical center and the camera's principal optical axis, the corresponding image height is the distance from the corresponding point on the image plane to the center of the image plane, the image plane is the distorted image captured by the camera, and the virtual object plane is the plane on which the virtual object plane image is generated.

Optionally, the extrinsic calibration builds a coordinate-transform matrix equation from the coordinates of points in the virtual object plane image and the coordinates of the corresponding points of the original image scaled onto the perspective-transform plane, and solves for the transformation matrix, which is the classical extrinsic matrix.

Optionally, the fitting module uses, region by region, the coordinates of the corresponding points of the original image scaled onto the perspective-transform plane to fit the coordinates of the corresponding points on the perspective-transformed image, obtaining the region-wise fitting functions.

With the technical solution of the present invention, and in view of the deficiencies of the prior art, individual-camera calibration is performed from a single feature target chart: the automated camera-calibration environment simulates the viewing conditions of the actual installation environment, and a feature target chart whose feature points uniformly cover the normalized effective correction area marks the effective correction area of the actual installation field of view with uniform virtual feature coordinates. The distortion correction and extrinsic correction performed after calibration in the automated environment are identical to the distortion correction and classical-extrinsic-matrix correction performed in the actual installation environment. Camera assembly tolerances make the lens (LENS) optical center miss the CMOS sensor center, or tilt the focal plane against the lens/CMOS plane, causing distortion-correction errors that accumulate through the subsequent perspective transformation; region-wise fitting maps the feature points extracted after perspective transformation onto the true physical positions of the feature points, repairing this individual assembly-precision error. When a camera with such assembly deviations is calibrated in its actual installation environment, reusing the fitted equations corrects the individual camera's assembly-precision error there as well.

Brief Description of the Drawings

Fig. 1 is a flow diagram of the image correction method;

Fig. 2 is a schematic diagram of the normalized single feature target chart;

Fig. 3 is a schematic diagram of the feature-point distribution on the single feature target chart;

Fig. 4 is a schematic diagram of distorted-image correction;

Fig. 5 is a schematic diagram of obtaining the region-wise fitting functions by region-wise fitting;

Fig. 6 is a schematic diagram of fitting correction with the region-wise fitting functions;

Fig. 7 is a schematic diagram of the single feature target chart;

Fig. 8 is a schematic diagram of the fitted curve of the fitting function;

Fig. 9 is a schematic diagram of the distorted image of the single feature target chart;

Fig. 10 is a schematic diagram of the virtual object plane image of the single feature target chart;

Fig. 11 is a schematic diagram of the virtual-object-plane-image mapping of the single feature target chart;

Fig. 12 is a schematic diagram of the perspective-transform relationship between the virtual object plane and the perspective-transform plane;

Fig. 13 is a schematic diagram of the perspective-transformed image of the single feature target chart;

Fig. 14 is a schematic diagram of the perspective-transformed image of the calibration chart captured on site;

Fig. 15 is a schematic diagram of a fitting-corrected image;

Fig. 16 is a schematic diagram of a fitting-corrected image;

Fig. 17 is a schematic diagram of the final corrected image.

Detailed Description of the Embodiments

The technical solution of the present invention is further described below with reference to the drawings and an embodiment.

Fig. 1 is a flow diagram of the image correction method. The specific implementation steps of the image correction method of the present invention are as follows:

Step 1: normalize the effective-correction-area feature target chart and its placement according to the calibration scene.

The actual spatial coordinates of the effective correction area of the field of view corresponding to the camera's installation position in the real 360-degree surround-view application are scaled down, by some ratio, to within the spatial extent in which the calibration scene allows a target chart to be placed; the size and placement of the normalized effective-correction-area target chart are customized according to that ratio.

Fig. 2 is a schematic diagram of the normalized single feature target chart. The effective correction area of the field of view corresponding to the camera's installation position is a rectangle, namely the rectangular ground region of the actual scene that the 360-degree surround-view application must display; it has an actual physical size of length L0 by width W0, with the camera mounted at height H0 above the ground. The cuboid space in which the calibration scene allows a target chart to be placed has length L1, width W1, and height H1. The customized target chart has aspect ratio Ra equal to that of the effective correction area, with length L2 and width W2 satisfying L2/W2 = Ra, and the chart's fixed placement position in the calibration scene lies at distance D2 from the camera, satisfying the scaling relation between chart size, placement distance, and the actual field of view.

Fig. 3 is a schematic diagram of the feature-point distribution on the single feature target chart. The customized single feature target chart consists of a pattern of feature points uniformly distributed in the horizontal and vertical directions, with M feature points horizontally, N feature points vertically, and a spacing of K between adjacent feature points on the chart. The i-th horizontal, j-th vertical feature point has chart coordinates satisfying:

  x(i, j) = i·K,  y(i, j) = j·K

The feature points are numbered in order from top to bottom and from left to right, and feature-point coordinates map one-to-one to feature-point numbers.
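A minimal Python sketch of this grid, assuming the coordinate formula above and the row-major numbering used in the embodiment below (chart point 1 at (0.06, 0.06) for K = 0.06 m, M = 15, N = 4):

```python
# Generate the uniformly spaced feature-point grid of the single feature
# target chart: M points per row, N rows, spacing K (chart units, e.g. m).
def chart_feature_points(M=15, N=4, K=0.06):
    points = {}  # number -> (x, y), numbered row by row from the top left
    num = 1
    for j in range(1, N + 1):        # vertical index (row)
        for i in range(1, M + 1):    # horizontal index (column)
            points[num] = (i * K, j * K)
            num += 1
    return points

pts = chart_feature_points()
print(pts[1])   # (0.06, 0.06), matching the embodiment's point 1
```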

Step 2: using the optical distortion parameter table provided by the LENS supplier, which contains field angles and their corresponding image heights, correct the distorted image captured by the camera through the mapping between the field angle corresponding to each virtual-object-plane coordinate and the image height on the image plane.

Fig. 4 is a schematic diagram of distorted-image correction. Let the virtual object plane U and the image plane I be parallel. Rr is the distance from a point P on the virtual object plane to the virtual-object-plane center O; Dis is the actual height of the camera's installation position above the ground; POA, the camera LENS's principal optical axis, i.e. the line through O and o, is perpendicular to the virtual object plane and the image plane; Fcs is the distance from the LENS to the image plane; Pr is the distance from p, the image-plane point corresponding to the virtual-object-plane point P, to the image-plane center o; θ is the angle between the line from P to the LENS optical center and POA; Φ is the angle between the line from P to the virtual-object-plane center and the virtual object plane's horizontal direction, which equals the angle between the line from p to the image-plane center and the image plane's horizontal direction. On the virtual object plane U, P has coordinates (X, Y) and O has coordinates (X_O, Y_O); on the image plane I, p has coordinates (x, y) and o has coordinates (x_o, y_o).

Specifically, the image plane is equivalent to the distorted image captured by the camera; the physical image-capture area corresponding to the lens CMOS resolution has length Lc and width Wc, in millimeters, and the distorted chart image captured by the camera has a pixel resolution of L pixels in length and W pixels in width.

By industry convention, in the optical distortion parameter table provided by the supplier, θ is given in degrees and Pr in millimeters.

Specifically, the virtual object plane is equivalent to the plane perpendicular to the principal optical axis at distance Dis from the camera, captured at the camera's maximum field of view; it has horizontal length Lu and vertical length Wu, in scene scale units.

The main steps of correcting the distortion of the single feature target chart captured by the camera, using the optical distortion parameter table provided by the LENS supplier, are as follows (a code sketch follows the list):

Step 2.1: fit the field-angle θ versus image-height Pr data of the optical distortion parameter table provided by the LENS supplier, obtaining a fitting formula Pr = f(θ) of field angle versus image height.

Step 2.2: compute the coordinates of o, the image-plane center: (x_o, y_o) = (Lc/2, Wc/2).

Step 2.3: compute the coordinates of O, the virtual-object-plane center: (X_O, Y_O) = (Lu/2, Wu/2).

Step 2.4: compute θ from θ = arctan(Rr/Dis), where Rr = √((X − X_O)² + (Y − Y_O)²); substitute θ into the fitting formula to obtain Pr; compute Φ from the direction of the line between P and O; then obtain the image-plane point p corresponding to the virtual-object-plane point P as (x, y) = (x_o + Pr·cos Φ, y_o − Pr·sin Φ).

Step 2.5: compute the distorted-image pixel coordinates corresponding to the image-plane point p: (u, v) = (x·L/Lc, y·W/Wc).

Step 2.6: the pixel value of virtual-object-plane point P is the interpolation of the pixels at the nearest integer coordinates around pixel coordinate (u, v) of the distorted image captured by the camera.
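The per-pixel mapping of steps 2.1 to 2.6 can be sketched in Python/NumPy as follows. This is only an illustration under the formulas as given above: pr_of_theta stands in for the fitted formula of step 2.1, and the radial direction on the image plane is taken as the unit vector from O toward P in downward-positive-y image coordinates, a radially symmetric assumption of this sketch.

```python
import numpy as np
import cv2  # used only for bilinear resampling via cv2.remap

def undistort_to_virtual_plane(dist_img, pr_of_theta, out_size=(1000, 300),
                               Lu=1000.0, Wu=300.0, Dis=100.0,
                               Lc=5.41, Wc=3.043):
    """Resample a distorted capture onto the virtual object plane.

    pr_of_theta : fitted image-height formula Pr = f(theta_deg), step 2.1
    Lu, Wu, Dis : virtual-plane size and camera distance (scene units)
    Lc, Wc      : physical CMOS capture area in mm; out_size = (cols, rows)
    """
    H, W = dist_img.shape[:2]
    cols, rows = out_size
    # Grid of virtual-plane points P; plane/image centers per steps 2.2, 2.3
    X = np.tile(np.linspace(0.0, Lu, cols), (rows, 1))
    Y = np.tile(np.linspace(0.0, Wu, rows)[:, None], (1, cols))
    dX, dY = X - Lu / 2.0, Y - Wu / 2.0
    Rr = np.hypot(dX, dY)
    theta = np.degrees(np.arctan(Rr / Dis))        # field angle, step 2.4
    Pr = pr_of_theta(theta)                        # image height (mm)
    ux = np.where(Rr > 0, dX / Rr, 0.0)            # unit direction O -> P,
    uy = np.where(Rr > 0, dY / Rr, 0.0)            # y measured downward
    x = Lc / 2.0 + Pr * ux                         # image-plane point p (mm)
    y = Wc / 2.0 + Pr * uy
    map_x = (x * W / Lc).astype(np.float32)        # pixel coords, step 2.5
    map_y = (y * H / Wc).astype(np.float32)
    return cv2.remap(dist_img, map_x, map_y, cv2.INTER_LINEAR)  # step 2.6
```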

Step 3: perform extrinsic calibration and perspective transformation with the distortion-corrected single feature target chart. The main steps are as follows:

Step 3.1: extract, with a feature-extraction algorithm, the M×N feature-point coordinates of the single feature target chart image on the virtual object plane corrected in step 2. Use the nearest-gradient sorting method to map the extracted M×N feature points one-to-one to the M×N uniformly distributed feature points of the single feature target chart.

The steps of the nearest-gradient sorting method are as follows (a code sketch follows this list):

(1) assign initial numbers to the extracted M×N feature points and record their coordinates;

(2) take the extracted feature point with the smallest horizontal coordinate as the gradient-analysis vertex T and record its number;

(3) connect each of the remaining M×N − 1 feature points, by number, to T; compute the angle of each connecting line within the upper half-quadrant of the horizontal direction; and select the numbers of the N − 1 feature points with the largest angles;

(4) sort T and the selected feature points in ascending order of vertical coordinate and record their coordinates in that order;

(5) remove the N re-sorted feature points from the feature-point sequence, assign initial numbers to the remaining feature points and record their coordinates, and repeat from step (2) until all feature points have been re-sorted; the N newly sorted feature-point coordinates of each pass are appended in order after those already recorded.
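A minimal Python sketch of this sorting, assuming the detected corners form the trapezoidal layout described in the embodiment and that the vertex T itself takes part in the vertical re-sort, as the embodiment's "16, 31, 46, 1" example indicates:

```python
import numpy as np

def nearest_gradient_sort(points, n_rows):
    """Order detected corners column by column (left to right), each
    column sorted top to bottom, per the nearest-gradient method.

    points : list of (x, y) detected corners, trapezoidal layout assumed
    n_rows : N, the number of feature-point rows on the chart
    Returns a list of columns, each a list of (x, y) sorted by y.
    """
    remaining = list(points)
    columns = []
    while remaining:
        # (2) vertex T: remaining point with the smallest horizontal coord
        T = min(remaining, key=lambda p: p[0])
        others = [p for p in remaining if p is not T]
        # (3) angle of each line T->p in the upper half-quadrant:
        # |dy| forced positive, dx keeps its sign (T is leftmost, dx >= 0)
        def angle(p):
            dx, dy = p[0] - T[0], p[1] - T[1]
            return np.arctan2(abs(dy), dx)
        others.sort(key=angle, reverse=True)
        column = [T] + others[:n_rows - 1]   # T plus the N-1 largest angles
        # (4) re-sort the column by vertical coordinate, top to bottom
        column.sort(key=lambda p: p[1])
        columns.append(column)
        # (5) remove the sorted column and repeat on the remainder
        for p in column:
            remaining.remove(p)
    return columns
```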

Step 3.2: map the chart feature-point coordinates obtained in step 1 to perspective-transform-plane coordinates with the scaling ratio S1, satisfying (X', Y') = (S1·x, S1·y). Using the feature-point numbers, solve the homography between the chart's feature points on the perspective-transform plane and the feature points extracted from the distortion-corrected virtual-object-plane chart image, obtaining the classical extrinsic matrix and completing the extrinsic calibration; then apply a perspective transformation with the extrinsic matrix to the distortion-corrected virtual-object-plane chart image, obtaining the distortion-corrected, perspective-transformed chart image PERS on the perspective-transform plane, whose size equals that of the actual chart mapped onto the perspective-transform plane by the scaling ratio.

Step 4: fit the actual physical feature-point positions of the single feature target chart, region by region, against the corresponding region feature points of the perspective-transformed single feature target chart, obtaining the region-wise fitting functions.

Fig. 5 is a schematic diagram of obtaining the region-wise fitting functions by region-wise fitting. The main steps are as follows:

Step 4.1: extract, with a feature-extraction algorithm, the feature points on the single feature target chart PERS perspective-transformed in step 3, and pair them one-to-one with the chart feature points obtained by mapping the customized actual-chart feature points onto the perspective-transform plane by the scaling ratio; the pairing method is the nearest-gradient sorting method or the minimum-distance-threshold method.

The steps of the minimum-distance-threshold method are as follows (a code sketch follows this list):

(1) difference the horizontal coordinate of the chart feature point with the current number against the horizontal coordinates of all extracted feature points, and likewise the vertical coordinates, and from these differences compute the distances from the chart point to all extracted points;

(2) find the minimum of these distances and the number at which it occurs; if this minimum is smaller than the lens-specific preset threshold thre_manu, record the coordinates of the extracted point with that number as the match and record the minimum distance; if the minimum is greater than or equal to thre_manu, set the recorded coordinates to 0 and the recorded distance to 0. Here thre_manu is smaller than the spacing of adjacent chart feature points on the perspective-transform plane;

(3) increment the number by 1 and repeat from step (1) until all numbers have been traversed.
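A minimal sketch of this matching, assuming the chart points are given in numbered order and using NumPy for the distance computation:

```python
import numpy as np

def min_distance_match(chart_pts, detected_pts, thre_manu=0.3):
    """Match chart feature points (mapped onto the perspective-transform
    plane, in numbered order) to detected points by nearest distance.

    Returns, per chart point: (min_distance, matched_xy); points with no
    detection within thre_manu are recorded as (0.0, (0.0, 0.0)) and are
    later excluded from the fitting.
    """
    det = np.asarray(detected_pts, dtype=float)
    matches = []
    for cx, cy in chart_pts:                       # step (1): differences
        d = np.hypot(det[:, 0] - cx, det[:, 1] - cy)
        k = int(np.argmin(d))                      # step (2): minimum
        if d[k] < thre_manu:
            matches.append((float(d[k]), (det[k, 0], det[k, 1])))
        else:                                      # no corner detected here
            matches.append((0.0, (0.0, 0.0)))
    return matches                                 # step (3): all traversed
```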

Step 4.2: choose the number of rectangular subdivisions of the perspective-transform plane according to how large the maximum matched distance is. Within each divided region g, fit the horizontal and vertical coordinates of the matched extracted points, separately, against the horizontal and vertical coordinates of the corresponding chart points mapped onto the perspective-transform plane; point pairs recorded with coordinates (0, 0) do not take part in the fitting. This yields the fitting functions of region g:

  x_PERS = f_gx(X', Y'),  y_PERS = f_gy(X', Y')

which map an ideal chart coordinate (X', Y') on the perspective-transform plane to the coordinate actually observed on PERS.
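A minimal least-squares sketch of one region's fit; the quadratic surface used here is an illustrative choice, since the patent's exact fitting-function form is not reproduced in this text:

```python
import numpy as np

def fit_region(chart_xy, detected_xy):
    """Least-squares fit mapping ideal chart coordinates on the
    perspective-transform plane to the coordinates observed in PERS,
    for one divided region; the overdetermined system is solved with
    numpy's lstsq. Returns (coeff_x, coeff_y)."""
    C = np.asarray(chart_xy, float)      # ideal positions (X', Y')
    D = np.asarray(detected_xy, float)   # observed positions in PERS
    keep = ~np.all(D == 0.0, axis=1)     # unmatched points take no part
    C, D = C[keep], D[keep]
    X, Y = C[:, 0], C[:, 1]
    A = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])
    coeff_x, *_ = np.linalg.lstsq(A, D[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, D[:, 1], rcond=None)
    return coeff_x, coeff_y

def apply_fit(coeff, X, Y):
    """Evaluate the fitted function at chart coordinates (X, Y)."""
    A = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])
    return A @ coeff
```

Fitting by least squares over all matched points in a region keeps the solution well determined even when the handful of undetected corners excluded above are missing.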

Step 5: correct the image captured by the camera in the actual installation scene for distortion with the optical distortion parameter table provided by the LENS supplier, then precisely correct the perspective-transformed image region by region with the region-wise fitting functions.

Fig. 6 is a schematic diagram of fitting correction with the region-wise fitting functions. The main steps are as follows:

Step 5.1: following step 2, correct the image captured in the actual camera-installation calibration scene for distortion with the optical distortion parameters provided by the LENS supplier, obtaining the virtual object plane image P.

Step 5.2: following step 3, perform extrinsic calibration between the physical feature-point positions of the actual target chart of the installation calibration scene and the corresponding feature-point positions extracted from the distortion-corrected image, then apply a perspective transformation to the distortion-corrected image with the extrinsic matrix, obtaining the perspective-transformed image PERS.

Step 5.3: correct the perspective-transformed image with the fitting functions of the respective divided regions.

Let the fitting-corrected image of region g be FIT_g and let the perspective-transformed image be PERS. Select the fitting functions f_gx, f_gy by region number; then, by x_PERS = f_gx(S2·u, S2·v) and y_PERS = f_gy(S2·u, S2·v), each coordinate (u, v) of the corrected image is mapped to the corresponding PERS coordinate, where S2 is the scale factor, i.e., the ratio of the PERS size to the corrected-image pixel size under the premise that PERS and the corrected image have the same aspect ratio. The pixel value of FIT_g at (u, v) is the interpolation of the pixels nearest the computed PERS coordinate.
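A sketch of this inverse-mapping resample for one region, reusing the quadratic design matrix of the fitting sketch above; px_per_unit (PERS pixels per plane unit) is an assumption of this illustration:

```python
import numpy as np
import cv2

def fit_correct_region(PERS, coeff_x, coeff_y, S2=0.01, out_wh=(480, 300),
                       px_per_unit=100.0):
    """Fitting correction of one divided region (sketch of step 5.3).

    Each output pixel (u, v) is taken to its ideal transform-plane
    coordinate (S2*u, S2*v), pushed through the region's fitted function
    to find its source coordinate in PERS, and sampled bilinearly.
    px_per_unit assumes a 960 x 300 pixel PERS for a 9.6 x 3 plane.
    """
    cols, rows = out_wh
    u = np.tile(np.arange(cols, dtype=float), (rows, 1))
    v = np.tile(np.arange(rows, dtype=float)[:, None], (1, cols))
    X, Y = (S2 * u).ravel(), (S2 * v).ravel()   # ideal plane coordinates
    A = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])
    src_x = (A @ coeff_x).reshape(rows, cols) * px_per_unit
    src_y = (A @ coeff_y).reshape(rows, cols) * px_per_unit
    return cv2.remap(PERS, src_x.astype(np.float32),
                     src_y.astype(np.float32), cv2.INTER_LINEAR)
```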

Step 5.4: divide the fitting-corrected images into regions according to the region coordinate distribution divided in step 4.2 and the scale factor S2.

Let the rectangle vertices of region g on the perspective-transform plane be given; the corresponding rectangle vertices of region g in the fitting-corrected image are those plane vertices divided by S2.

Step 5.5: the image of region g of the final stitched-and-fused corrected image FIN is the image of the region-g fitting-corrected image FIT_g within region g.

In addition, in another technical solution, the intrinsic parameters of a single camera can be calibrated by placing the feature target chart at multiple different positions, and the camera's distortion then corrected according to the intrinsic correction equations. This approach places higher demands on the complexity of automated calibration, for example a manipulator to translate and rotate the chart and multiple preset chart positions for capturing multiple images, as well as a larger automated-calibration station to accommodate the chart placements at the different positions.

Embodiment

Step 1: normalize the effective-correction-area target chart and its placement according to the calibration scene.

A certain individual camera has an assembly-precision error. In its mass-produced 360-degree surround-view application it is installed 1 meter above the ground and is required to provide, below its installation position, a ground-region field of view 9.6 meters across horizontally and 3 meters vertically.

Fig. 7 is a schematic diagram of the single feature target chart. Specifically, the feature target chart can be made 0.96 m horizontally by 0.3 m vertically, with the camera 0.1 m from the chart in the automated calibration scene; this size fits within the space in which the automated calibration machine can place a chart, and the scaling ratio is 10. Specifically, the chart can be customized as a black-and-white checkerboard of 16 squares horizontally by 5 vertically, each square 6 cm wide; its feature points are the corner points, 15 horizontally by 4 vertically, numbered from top to bottom and from left to right.

Step 2: correct the distortion of the single feature target chart captured by the camera, using the optical distortion parameter table provided by the LENS supplier.

The optical distortion parameter table provided by the LENS supplier consists of discrete angles θ and their corresponding image heights Pr. Per step 2.1, the field-angle θ versus image-height Pr data of the table are fitted to obtain the fitting formula of field angle versus image height; specifically, a polynomial in θ can be used for the fit.

Substituting the optical distortion parameter table provided by the camera's LENS supplier and solving the overdetermined system of equations yields the fitting parameters. Fig. 8 is a schematic diagram of the fitted curve of the fitting function, in which the points marked * are the data provided by the optical distortion parameter table, the abscissa is θ in degrees, and the ordinate is Pr in millimeters.
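A minimal NumPy sketch of this fit; the table values and the degree-4 polynomial below are illustrative placeholders, not the supplier's actual data or the patent's fitting formula:

```python
import numpy as np

# Placeholder (theta, Pr) samples standing in for the supplier's table;
# the real table and fitting formula are not reproduced in this text.
theta_deg = np.array([0.0, 10, 20, 30, 40, 50, 60, 70, 80])
pr_mm     = np.array([0.0, 0.33, 0.66, 0.98, 1.28, 1.56, 1.81, 2.00, 2.10])

# Solve the overdetermined system by least squares (degree-4 polynomial
# chosen for illustration), giving the fitting parameters of step 2.1.
coeffs = np.polyfit(theta_deg, pr_mm, deg=4)
pr_of_theta = np.poly1d(coeffs)          # Pr = f(theta), in millimeters

print(pr_of_theta(79.15))                # image height near the embodiment's P
```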

The virtual object plane is 1000 cm long horizontally and 300 cm long vertically and, as described above, lies 100 cm from the camera. The effective capture area of the CMOS at 720p pixel resolution is 5.41 mm wide horizontally and 3.043 mm high vertically.

Per step 2.2, the image-plane center o is computed as (2.705, 1.5215).

Per step 2.3, the virtual-object-plane center O is computed as (500, 150).

Per step 2.4, substituting into the formulas: for the virtual-object-plane point P with coordinates (0, 0), whose distance to O is Rr = √(500² + 150²) ≈ 522.0 cm, the corresponding field angle is θ = arctan(522.0/100) ≈ 79.15 degrees, and the angle with the virtual object plane's horizontal direction is Φ ≈ 159.5 degrees. Substituting θ into the field-angle/image-height fitting formula gives an image height of about 2.087. The image-plane point p corresponding to P is then approximately (0.7502, 0.7906).

Per step 2.5, the image-plane pixel coordinates corresponding to p are computed as approximately (177.5, 187.06), i.e., (0.7502 × 1280/5.41, 0.7906 × 720/3.043) at the 1280 × 720 resolution of 720p.

Per step 2.6, the pixel value of virtual-object-plane point P is interpolated from the pixels nearest p. Specifically, bilinear interpolation can be used: the pixel values gray1, gray2, gray3, gray4 at (177, 187), (177, 188), (178, 187), (178, 188) are combined with weights according to how close p lies to each of the four points.

Substituting every point of the virtual object plane by the above method yields the distortion-corrected image.
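A minimal sketch of the bilinear interpolation just described, with the image indexed as img[row, col]:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation at fractional pixel coordinate (x, y),
    e.g. (177.5, 187.06) above; img is indexed img[row, col] = img[y, x]."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    gray1 = img[y0,     x0    ]          # pixel (177, 187)
    gray2 = img[y0 + 1, x0    ]          # pixel (177, 188)
    gray3 = img[y0,     x0 + 1]          # pixel (178, 187)
    gray4 = img[y0 + 1, x0 + 1]          # pixel (178, 188)
    return ((1 - fx) * (1 - fy) * gray1 + (1 - fx) * fy * gray2
            + fx * (1 - fy) * gray3 + fx * fy * gray4)
```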

Fig. 9 is a schematic diagram of the captured distorted image of the single feature target chart; Fig. 10 is a schematic diagram of the virtual object plane image of the single feature target chart after distortion correction per step 2.

Step 3: perform extrinsic calibration and perspective transformation with the distortion-corrected single feature target chart.

Fig. 11 is a schematic diagram of the virtual-object-plane-image mapping of the single feature target chart. Per step 3.1, a feature-extraction algorithm extracts the M×N feature-point coordinates of the single feature target chart on the virtual object plane corrected in step 2. For a checkerboard-style feature chart the feature points are corner points, so a corner-detection algorithm such as the common SUSAN or Harris detectors can be chosen to obtain the feature-point coordinates. The nearest-gradient sorting method then maps the extracted M×N feature points one-to-one to the M×N uniformly distributed feature points of the single feature target chart. Specifically, the extracted feature points can be numbered in the order in which the extraction algorithm traverses the image: the first detected point is numbered 1 and the number increments by 1 for each further detection until the whole virtual object plane image has been traversed. The customized chart of this embodiment has 15 × 4 feature points, so M is 15 and N is 4. As to the nearest-gradient sorting method: because the detected feature positions form a trapezoidal distribution, the first step selects the feature point with the smallest horizontal coordinate, which is necessarily a vertex on the left side of the trapezoid, and records its number, say 1. After obtaining that point's coordinates, lines are drawn to each of the remaining feature points, and the angle of each line with the horizontal 0-degree direction is computed (dividing the image by quadrants, upper right is the first quadrant): the coordinates of each remaining point are differenced, by number, against the point with the smallest horizontal coordinate, the sign of the vertical difference is taken as negative, the sign of the horizontal difference is the computed sign, and the angle is computed with the atan function; each line's angle value carries the same number as its feature point. The N − 1 feature points with the largest line angles are selected, say numbers 16, 31, 46; the feature points numbered 1, 16, 31, 46 are sorted by vertical coordinate from small to large, say into 16, 31, 46, 1, and the coordinates of points 16, 31, 46, 1 are recorded in that order. Points 16, 31, 46, 1 are then removed from the feature-point sequence, which removes the leftmost 4 feature points of the trapezoidal distribution; the remaining points still form a trapezoid, and the sorting is repeated as above, recording the coordinates in turn, until the M×N extracted feature points have been mapped one-to-one to the M×N uniformly distributed feature points of the single feature target chart.

Fig. 12 is a schematic diagram of the perspective-transform relationship between the virtual object plane and the perspective-transform plane. Per step 3.2, the chart feature-point coordinates obtained in step 1 are mapped to perspective-transform-plane coordinates by the scaling ratio S1. Taking the chart feature point numbered 1, whose coordinates as described in step 1 of this embodiment are (0.06, 0.06), the corresponding perspective-transform-plane coordinates are (0.6, 0.6); S1 is 10, and the perspective-transform plane measures 9.6 horizontally by 3 vertically. Using the feature-point numbers, the homography between the chart's feature points on the perspective-transform plane and the feature points extracted from the distortion-corrected virtual object plane is solved, giving the classical extrinsic matrix and completing the extrinsic calibration. Specifically, the distortion-corrected virtual object plane and the perspective-transform plane are related by a perspective transformation and satisfy the homography relation:

  s·[x_v, y_v, 1]^T = H·[X', Y', 1]^T    (Formula 1)

  x_v = (h11·X' + h12·Y' + h13) / (h31·X' + h32·Y' + h33)
  y_v = (h21·X' + h22·Y' + h23) / (h31·X' + h32·Y' + h33)    (Formula 2)

where (X', Y') are perspective-transform-plane coordinates, (x_v, y_v) the corresponding virtual-object-plane coordinates, and the matrix H = [h_ij] is the classical extrinsic matrix. Solving for the classical extrinsic matrix means substituting the horizontal and vertical coordinate values of the matched feature-point pairs into the homography relation and solving the resulting equations; each matched pair contributes the two linear equations

  x_v·(h31·X' + h32·Y' + h33) = h11·X' + h12·Y' + h13
  y_v·(h31·X' + h32·Y' + h33) = h21·X' + h22·Y' + h23    (Formula 3)

Solving Formula 3 yields the classical extrinsic matrix; substituting it into Formula 2 gives the virtual-object-plane coordinates (x_v, y_v) corresponding to any perspective-transform-plane coordinates (X', Y'). The pixel value of the perspective-transformed image PERS at (X', Y') is the interpolation of the pixels at the nearest pixel coordinates of the corresponding virtual-object-plane coordinates; specifically, bilinear interpolation can be used. Fig. 13 is a schematic diagram of the perspective-transformed image of the single feature target chart, i.e., the perspective-transformed chart image PERS.
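A sketch of this calibration and warp using OpenCV; the four point pairs are illustrative stand-ins for the full numbered correspondence set, and the 0.01 plane-units-per-pixel output scale is an assumption mirroring the embodiment's 960 × 300 PERS:

```python
import numpy as np
import cv2

# Stand-in for the distortion-corrected virtual-plane image (1 px per cm).
virtual_img = np.zeros((300, 1000, 3), np.uint8)

# Matched points: chart coordinates on the perspective-transform plane and
# the corresponding corners extracted from the virtual-plane image (pixels).
chart_plane_pts = np.float32([[0.6, 0.6], [9.0, 0.6], [0.6, 2.4], [9.0, 2.4]])
virtual_pts     = np.float32([[101.3, 32.0], [898.7, 30.5],
                              [96.2, 268.1], [903.9, 270.2]])   # illustrative

# Formula 3: solve the homography -> classical extrinsic matrix H
H_mat, _ = cv2.findHomography(chart_plane_pts, virtual_pts)

# Formula 2 applied per output pixel: pixel (u, v) -> plane point
# (0.01*u, 0.01*v) -> H -> virtual-plane source coordinate, sampled
# bilinearly; this yields the 960 x 300 transformed chart image PERS.
S = np.diag([0.01, 0.01, 1.0])            # pixel -> plane-unit scaling
PERS = cv2.warpPerspective(virtual_img, H_mat @ S, (960, 300),
                           flags=cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR)
```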

Step 4: fit the actual physical feature-point positions of the single feature target chart, region by region, against the corresponding region feature points of the perspective-transformed single feature target chart, obtaining the region-wise fitting functions.

Per step 4.1, a feature-extraction algorithm extracts the feature points of the perspective-transformed single feature target chart PERS; a corner-detection algorithm can be used. The one-to-one pairing uses the nearest-gradient sorting method or the minimum-distance-threshold method. For the minimum-distance-threshold method specifically: the coordinates of chart feature point number 1, mapped onto the perspective-transform plane, are differenced against the coordinates of all extracted PERS feature points, giving horizontal and vertical difference values; the distances are computed from these by the Pythagorean theorem; the smallest distance is found, and if it is below the set threshold thre_manu, here set to 0.3, the coordinates of the extracted point at that number are recorded as the match of point number 1; if the threshold condition is not met, the recorded coordinates are set to (0, 0). Proceeding in this way through all chart feature points completes the one-to-one matching of the PERS feature points with the chart feature points. Numbers that fail the threshold condition usually correspond to corners the chosen corner-detection algorithm failed to detect, so the feature-point information there is ignored.

Per step 4.2, one can stipulate: if the maximum matched distance lies between 0 and 0.1, the rectangular perspective-transform plane is divided at its horizontal midpoint into left and right halves; if the maximum lies between 0.1 and 0.2, it is divided about its center point into four quadrants (upper-left, upper-right, lower-left, lower-right). Suppose the maximum here is 0.06; the perspective-transform plane is then divided into two rectangular regions, the left numbered 1 and the right numbered 2, region 1 containing the feature points numbered 1 to 30 and region 2 the feature points numbered 31 to 60. Because of camera assembly precision, after assembly the LENS principal optical axis is not exactly perpendicular to the CMOS plane, nor does the LENS optical center coincide exactly with the center of the CMOS active capture area; a uniform distortion-correction procedure therefore necessarily leaves distortion error due to assembly precision, so the feature points extracted from the chart image after distortion correction and perspective transformation deviate from the chart feature points' theoretical physical distribution mapped onto the perspective-transform plane. Fitting greatly reduces this error; and since fitting-function forms vary and need not match the error model exactly, subdividing the fitting area more finely according to the error magnitude can greatly improve the fitting precision within each region. The feature points extracted from the perspective-transformed chart image are fitted against the chart's actual physical feature-point positions mapped onto the perspective-transform plane; specifically, a fitting function can be constructed and the overdetermined system solved to obtain the fitting parameters, i.e., the region's fitting function.

Substituting the point pairs numbered 1 to 30 of rectangular region 1 into the fitting function and solving the overdetermined system gives region 1's fitting parameters and fitting function; likewise, substituting the point pairs numbered 31 to 60 of rectangular region 2 gives region 2's fitting parameters and fitting function.

Step 5: correct the image captured by the camera in the actual installation scene for distortion with the optical distortion parameter table provided by the LENS supplier, then precisely correct the perspective-transformed image region by region with the region-wise fitting functions.

Per step 5.1, a calibration target-chart image is captured on site in the camera's actual installation calibration scene, and, following the implementation of step 2, the distortion-corrected virtual object plane image P of the site-captured chart is obtained.

Per step 5.2, extrinsic calibration is performed on the distortion-corrected virtual object plane image P following the implementation of step 3, yielding the classical extrinsic matrix, with which P is perspective-transformed to obtain the perspective-transformed image PERS. Fig. 14 is a schematic diagram of the perspective-transformed image of the site-captured calibration chart. To exhibit the error caused by the individual camera's assembly precision, the calibration chart used here in the actual installation scene is a large checkerboard cloth covering the effective area of the 360-degree surround-view application; the marked points are the positions on PERS of the checkerboard's actually uniformly distributed feature points, and the horizontal, vertical, and straight-line distances between the feature points detected on PERS and the uniformly distributed checkerboard feature points are marked.

Per step 5.3, the fitting functions are applied to fit-correct the perspective-transformed image PERS. PERS measures 9.6 m horizontally by 3 m vertically; setting the fitting-corrected image to 960 pixels horizontally by 300 pixels vertically, the scale factor is 9.6/960 = 0.01.

Fig. 15 is a schematic diagram of the region-1 fitting-corrected image. Each coordinate of the fitting-corrected image is substituted into region 1's fitting function to compute the corresponding coordinate on the perspective-transformed image PERS; the pixel value at that coordinate of the fitting-corrected image is the interpolation of the pixels nearest the computed PERS coordinate; specifically, bilinear interpolation can be used.

Fig. 16 is a schematic diagram of the region-2 fitting-corrected image. Likewise, each coordinate of the fitting-corrected image is substituted into region 2's fitting function to compute the corresponding PERS coordinate, and the pixel value is the interpolation of the pixels nearest that PERS coordinate, for example bilinear.

Step 5.4: following the region coordinate distribution divided in step 4.2 and the scale factor, the fitting-corrected images are divided into regions. Specifically, with the division described above into left and right rectangular regions at the horizontal midpoint, the left region numbered 1 has plane vertices EXT(0,0), EXT(0,3), EXT(4.8,0), EXT(4.8,3), and the corresponding rectangle vertices in the region-1 fitting-corrected image are (0,0), (0,300), (480,0), (480,300); the right region numbered 2 has plane vertices EXT(4.8,0), EXT(4.8,3), EXT(9.6,0), EXT(9.6,3), and the corresponding rectangle vertices in the region-2 fitting-corrected image are (480,0), (480,300), (960,0), (960,300).

Step 5.5: Fig. 17 is a schematic diagram of the generated final corrected image. The image of the final corrected image FIN within rectangular region 1 is the region-1 part of the region-1 fitting-corrected image, and its image within rectangular region 2 is the region-2 part of the region-2 fitting-corrected image.

To demonstrate the effect of the present invention, feature points were extracted on the final corrected image FIN, and the horizontal, vertical, and straight-line distances between the feature points detected on FIN and the uniformly distributed feature points of the large checkerboard covering the effective area of the 360-degree surround-view application in the actual calibration scene were marked. The differences between the final corrected image's feature points and the theoretical uniformly distributed feature points are at the sub-pixel level; compared with the perspective-transformed image of the site-captured calibration chart shown in Fig. 14, the distortion error caused by the individual camera's assembly precision is greatly improved.

The image correction method provided by the present invention achieves high-precision distortion correction of images captured by individual cameras, reduces the complexity of the 360-degree surround-view image-fusion stage, greatly improves how precisely the surround-view image matches the actual positions of objects in the cameras' fields of view, reduces the number of captures and the time cost of individual-camera intrinsic calibration, and saves the equipment and labor costs of fitting automated calibration stations with complex manipulators and other chart-moving devices.

【EN】

A kind of method for correcting image and device

Technical field

The invention belongs to the field of video image processing of camera, especially a kind of method for correcting image and device.

Background technique

With the high speed development of electronic technology, so that the progress in automobile assistant driving field is maked rapid progress, it is most universal at present

Visualization auxiliary drive and look around for vehicle-mounted 360 degree, it is several to vehicle body ambient enviroment that this technology very easily solves driver

The problem of dead angle in rice range can not be observed.The technology after Chinese herbaceous peony, vehicle two sides, vehicle by the way of installing wide-angle camera

The complete field of view of vehicle body surrounding is obtained, the camera parameter demarcated early period and wide-angle camera are cooperated by MCU or DSP

Optical path distortion parameter carries out image rectification and the image co-registration of each camera to realize 360 degree of the generation for looking around image, and

It is transferred to display screen and carries out real-time display.

Current vehicle-mounted 360 degree of viewing systems, which largely solve driver, can not observe vehicle body by naked eyes

The blind area of several meters of surrounding drives as moved backward to basic, turns, by limited road, lesser barrier of height of observation etc. is obtained

Fabulous solution, but low precision of the different individual wide-angle cameras since technique itself is made that is mounted in around vehicle body

It is different, cause after each wide-angle camera distortion correction there are different degrees of pattern distortion error, which increase in 360 panoramic views

The figure of complexity as the link of fusion and the image middle conductor degree of misalignment using simple image fusion method and subjective observation

As the mismatched degree of fusion.

For automobile assistant driving also in developing stage, either 360 degree are looked around still pre-post automobile data recorder at present

It all is attempting to complete certain specific function, such as 360 purports looked around are to solve driver for vehicle periphery closely

The subjective judgement of sight blind area and object actual distribution position in range, and if looking around stitching image and there is individual camera

Integrally there are distortion errors for splicing fusion caused by otherness, and it is practical for around visual field object empty to will greatly affect driver

Between position subjective judgement impression, or even cause some subjective illusion.

Image fusion technology is often used, so that multiple in the industry at present in order to reduce the subjective illusion that this splicing misplaces

The image overlapping region of camera retains the image information of individual camera according to certain weight, is subtracted by transparency process etc.

The splicing of light driver's human eye judgement, which misplaces, to be experienced.But in this way multiple camera image overlapping regions still have fuzzy sense and

A degree of dislocation sense;Furthermore the method for internal reference calibration is generallyd use in the industry to carry out spending the time longer to individual camera

Internal reference calibration, by way of mathematical optimization skill solution internal reference equation come to individual camera carry out for a certain internal reference correct

The internal reference parametric solution of theoretical equation, to reduce the distortion correction error of individual camera, this method is needed with individual camera shooting

Head acquires several target figures, and the position of several target figures needs to be varied so that the acquisition image covering of each target figure is as much as possible

Field of view, this method for camera calibration environment, equipment, manually have higher requirement, and there is arithmetic accuracy in itself

Error, often a camera repeatedly carries out that internal reference internal reference parameter obtained by calibrating time time is all different, this makes the camera

Carrying out image distortion correction with internal reference parameter again after internal reference calibration, there is also certain errors, while also increasing being made for camera

Time cost.

Summary of the invention

The present invention provides a kind of image distortion correction methods, to realize the high-precision to individual camera collection image

Distortion correction improves the accurate matching degree of object distributing position in image and practical camera visual field.

According to above-mentioned purpose, implement a kind of method for correcting image of the invention, for carrying out to collected fault image

Correction, includes the following steps:

Step S1 is put single width feature target figure, is acquired using camera, and the distortion figure of the single width feature target figure is generated

Picture;

Step S2 carries out distortion correction using fault image of the optical distortion parameter list to the single width feature target figure, generates institute

State the virtual object flat image of single width feature target figure;

Step S3 carries out outer ginseng calibration to the virtual object flat image of the single width feature target figure and obtains after joining matrix outside classics,

The perspective transform image that perspective transform generates the single width feature target figure is carried out using matrix is joined outside classics;

The perspective transform image of the single width feature target figure is divided into different zones, to the single width feature target figure by step S4

Perspective transform image on characteristic point matched one by one with the characteristic point subregion on single width feature target figure, subregion fitting

After obtain subregion fitting function;

Step S5 acquires actual field image using camera, generates the fault image of actual field image, successively uses optics

Distortion parameter table progress distortion correction carries out perspective transform using ginseng matrix outside classics, uses and divide according to the different zones of division

Region fitting function splices fusion after carrying out subregion fitting correction again, generates the correction of a final proof figure of the actual field image

Picture.

Optionally, the single width feature target figure is rectangle, and the characteristic point being arranged thereon is fisrt feature point, fisrt feature point

It is both horizontally and vertically spaced identical.

Optionally, the virtual object flat image is the image generated on virtual object plane, the optical distortion parameter

Include field angle and corresponding image height in table, the field angle for the point on virtual object plane to camera optical center line with take the photograph

As the angle of head primary optical axis, the corresponding image height is as point corresponding in plane to the distance as planar central point, the picture

Plane is the fault image that camera acquisition generates;

In the step S2, by the corresponding relationship of field angle and corresponding image height, to the distortion figure of the single width feature target figure of acquisition

As carrying out distortion correction.

Optionally, the step S2 includes the following steps:

Step S21 is fitted field angle in optical distortion parameter list with the data of corresponding image height, obtain field angle with it is corresponding

The fitting formula of image height;

Step S22 calculates the central point of the virtual object plane and the coordinate of the central point as plane;

Step S23, in conjunction with field angle and the fitting formula of corresponding image height, the central point of the virtual object plane and the picture plane

Central point coordinate, the coordinate and pair as the coordinate of corresponding points in plane of the virtual object Plane-point is calculated

It should be related to;

Step S24 generates corresponding virtual object Plane-point using as carrying out interpolation with the adjoining pixel of corresponding points in plane

Pixel.

Optionally, the fault image of the single width feature target figure, virtual object flat image, exist on perspective transform image with

The corresponding second feature point of fisrt feature point, third feature point and fourth feature point on the single width feature target figure, single width are special

Fisrt feature point on sign target figure is mapped to generation fifth feature point in perspective transform plane by the scaling,

The step S3 includes the following steps:

Step S31 extracts third feature point in the virtual object flat image of the single width feature target figure, in single width feature target figure

Equally distributed fisrt feature point is mapped one by one, the coordinate of third feature point after being mapped;

The coordinate of fisrt feature point in the single width feature target figure is mapped to perspective transform plane by scaling by step S32,

Obtain coordinate of the fifth feature point in perspective transform plane;

Step S33 is solved and is joined matrix outside classics, and the classical outer ginseng matrix is used for fifth feature point in perspective transform plane

Coordinate be transformed to mapping after third feature point coordinate;

Step S34 carries out perspective transform to the virtual object flat image of the single width feature target figure according to matrix is joined outside classics, obtains

To the perspective transform image of the single width feature target figure.

Optionally, the step S31 includes the following steps:

Step S311 is numbered the third feature point in the virtual object flat image of the single width feature target figure, and records

The coordinate of third feature point;

Step S312 takes the smallest third feature point of horizontal coordinate value, as gradient analysis vertex;

Remaining third feature point is calculated line and level respectively with gradient analysis vertex line according to number by step S313

The angle of half-quad on direction, selects the maximum multiple third feature points of angle, and the quantity of the third feature point of selection compares single width

The line number for the fisrt feature point being distributed in feature target figure is 1 few;

Step S314 carries out rearrangement from small to large according to vertical direction coordinate value to the third feature point of selection, and remembers

Record the coordinate of third feature point;

Step S315 moves reordered third feature point from the virtual object flat image of the single width feature target figure

It removes;When the third feature point in the virtual object flat image of single width feature target figure is not removed all, step S311 is continued to execute;

When the third feature point in the virtual object flat image of single width feature target figure is all removed, the third of obtained rearrangement is special

Sign point and its coordinate, the third feature point after as being mapped one by one with equally distributed fisrt feature point in former single width feature target figure

And its coordinate.

Optionally, the step S4 includes the following steps:

Step S41, by having an X-rayed for fourth feature point and the single width feature target figure on the perspective transform image of single width feature target figure

Fifth feature point on changing the plane is matched one by one;

Step S42 is flat to perspective transform according to the maximum value situation of the fourth feature point and fifth feature point distance that are mutually matched

Face carries out region division, carries out subregion fitting with the coordinate of the coordinate pair fourth feature point of fifth feature point, obtains subregion

Fitting function.

Optionally, the step S41 includes the following steps:

Step S411 successively chooses fifth feature point in numerical order, calculate fifth feature point and all fourth feature points away from

From;

The minimum value of calculated distance is compared by step S412 with preset threshold, is preset when the minimum value of distance is less than

When threshold value, the number and coordinate of the minimum value of recording distance and the corresponding fourth feature point, the minimum value of the distance are phase

Mutual matched fifth feature point is at a distance from fourth feature point;Otherwise, the minimum value of recording distance and fourth feature point

Coordinate is all 0;

Step S413 continues to choose next fifth feature point, executes step when all fifth feature points have not been selected all

S411;Otherwise, stop matching fifth feature point one by one with fourth feature point.

Optionally, the step S5 includes the following steps:

Step S51 carries out abnormal the fault image of the collected actual field image of camera using the optical distortion parameter

Become correction, obtains the virtual object flat image of actual field image;

Step S52 carries out perspective transform to the virtual object flat image of actual field image using the classical outer ginseng matrix and obtains

To the perspective transform image of actual field image;

Step S53 is respectively fitted the perspective transform image of the different zones of division using the subregion fitting function

Correction, obtains the fitting correction image in each region;

Step S54 is spliced to obtain correction of a final proof image to the fitting correction image in each region.

According to above-mentioned purpose, implement a kind of image correction apparatus of the invention, for carrying out to collected fault image

Correction, which is characterized in that described image means for correcting includes:

Including camera, distortion correction module, perspective transform module, fitting module;

The camera generates a distortion image for acquiring original image;

The distortion correction module generates virtual object for carrying out distortion correction to fault image using optical distortion parameter list

Flat image;

The perspective transform module, for carrying out outer ginseng calibration, and the warp obtained using outer ginseng calibration to virtual object flat image

Join matrix outside allusion quotation and perspective transform generation perspective transform image is carried out to virtual object flat image;

The fitting module is matched the point on original image and perspective transform image one by one by different zones and is fitted, meter

The subregion fitting function for calculating different zones, using the subregion fitting function to the perspective transform images of different zones into

Splice fusion after row fitting correction, generates correction of a final proof image.

It optionally, include field angle and corresponding image height in the optical distortion parameter list, the field angle is virtual object

For point in plane to the line of camera optical center and the angle of camera primary optical axis, the corresponding image height is as corresponding in plane

Point to the distance as planar central point, it is described as plane is the fault image that camera acquisition generates, the virtual object plane

For the plane for generating the virtual object flat image.

Optionally, the outer ginseng is demarcated as zooming to perspective according to the coordinate and original image at virtual object flat image midpoint

The coordinate of corresponding points in changing the plane, establishes the matrix equation of coordinate transform, solves transformation matrix, and the transformation matrix is

Classical outer ginseng matrix.

Optionally, the fitting module zooms to corresponding points in perspective transform plane using original image according to different zones

Coordinate, the coordinate of corresponding points on perspective transform image is fitted, subregion fitting function is obtained.

Using technical solution of the present invention, in view of the deficiencies of the prior art, individual is carried out based on single width feature target figure and is taken the photograph

Picture leader is fixed, automates the viewing conditions that calibration environmental simulation goes out actual installation environment according to camera, uniform using characteristic point

Covering normalizes the feature target figure of effective correcting area, has carried out uniformly to correcting area effective under actual installation environment visual field

Virtual feature coordinate label the distortion correction that carries out and outer proofreads correction method and actual installation environment after environment is demarcated in automation

Lower progress ground distortion correction is consistent with method is corrected using classical outer ginseng matrix.Since camera assembly precision leads to camera lens

LENS optical center and cmos sensor center, which are misfitted, or focal plane is with camera lens CMOS plane angle, will lead to distortion correction

The error accumulation of error and subsequent perspective transform, subregion be fitted by way of by perspective transform after extract characteristic point mapping

This individual camera assembly precision error is repaired to true characteristic point physical distribution position.With the assembly precision difference

Camera when actual installation environment is demarcated, adopt the fit equation, it can under actual installation environment correct

Body camera assembly precision error.

Detailed description of the invention

Fig. 1 is: the flow diagram of method for correcting image;

Fig. 2 is: the schematic diagram of normalization single width feature target figure;

Fig. 3 is: the schematic diagram that characteristic point is distributed in single width feature target figure;

Fig. 4 is: the schematic diagram of distorted image correction;

Fig. 5 is: subregion fitting obtains the schematic diagram of subregion fitting function;

Fig. 6 is: the schematic diagram of correction is fitted using subregion fitting function;

Fig. 7 is: the schematic diagram of single width feature target figure;

Fig. 8 is: the schematic diagram of subregion fitting function matched curve;

Fig. 9 is: the schematic diagram of the fault image of single width feature target figure;

Figure 10 is: the schematic diagram of the virtual object flat image of single width feature target figure;

Figure 11 is: the schematic diagram of the virtual object flat image mapping of single width feature target figure;

Figure 12 is: the schematic diagram of the perspective transform relationship of virtual object plane and perspective transform plane;

Figure 13 is: the schematic diagram of the perspective transform image of single width feature target figure;

Figure 14 is: collection in worksite demarcates the schematic diagram of the perspective transform image of target figure;

Figure 15 is: fitting correction imageSchematic diagram;

Figure 16 is: fitting correction imageSchematic diagram;

Figure 17 is: the schematic diagram of correction of a final proof image.

Specific embodiment

Technical solution of the present invention is further illustrated with reference to the accompanying drawings and examples.

Fig. 1 is the flow diagram of method for correcting image.The specific implementation step of method for correcting image in the present invention are as follows:

Step 1: effective correcting area feature target figure and placement position are normalized according to calibration scene.

The real space ruler of effective correcting area using the corresponding visual field of camera installation site is looked around according to practical 360

Spend coordinate by a certain percentageNarrowing down to allows to place within the scope of the space scale of target figure less than calibration scene, and normalization is effective

Correcting area target figure size and placement position are according to its ratioCustomization.

Fig. 2 is the schematic diagram to normalize single width feature target figure.Effective correction of the corresponding visual field of camera installation site

Region is rectangle, the rectangular area for looking around required actual scene ground to be shown in application for 360, and actual physical size is, wherein a length of, width is, camera mounting distance ground level is.Calibration scene allows to place the vertical of target figure

Side space region is, wherein a length of1, width is1, be highly1.Customization target figure length-width ratio Ra be, a length of2, width is2, meet relational expression,

, the fixed placement position of calibration scene target figure is apart from camera2, meet relational expression

Fig. 3 is the schematic diagram of characteristic point distribution in single width feature target figure.The single width feature target figure of customization is by both horizontally and vertically

Equally distributed characteristic point pattern composition, horizontal direction characteristic point areMA, characteristic point isNA, adjacent characteristic point exists

Spacing distance on feature target figure isK.LevelIt is a, verticalCoordinate of the characteristic point on feature target figure meets public

Formula:

,

,

Characteristic point is numbered according to from top to bottom, is from left to right sorted, and characteristic point coordinate and characteristic point are numberedIt maps one by one.

Step 2: the optical distortion parameter list provided using LENS supplier is passed through comprising field angle and corresponding image height

The corresponding field angle of virtual object plane coordinates and the mapping relations as plane image height, the fault image of Lai Jinhang camera acquisition

Correction.

Fig. 4 is the schematic diagram of distorted image correction.If virtual object planeUWith picture planeIFor parallel relation, whereinRrFor void

Certain point on quasi- object planePTo virtual object planar central pointODistance,DisFor the actual height of camera installation site from the ground,POAFor cameraLensPrimary optical axis isOWithoLine and virtual object plane and as plane is vertical,FcsForLensTo as plane

Distance,PrFor virtual object Plane-pointPIt corresponds to as the point in planepTo as planar central pointoDistance,ƟForPIt arrivesLens

Optical center connection withPOAAngle,ΦForPTo the angle of virtual object planar central point line and virtual object planar horizontal direction, withp

To as planar central point line with as planar horizontal angular separation is consistent, in virtual object planeUOnPCoordinate be,O

Coordinate be, as planeIOnpCoordinate be,oCoordinate be

Specifically, as plane is equivalent to the fault image of camera acquisition, camera lens CMOS resolution ratio correspondence image acquisition zone

Domain physical size is a length of, width is, unit is millimeter, and the distortion target figure image pixel resolution ratio of camera acquisition is length

ForLPixel, width areWPixel.

Rule is the optical distortion parameter list that supplier provides in the industryƟUnit is degree,PrUnit is millimeter.

Specifically, virtual object plane be equivalent to the acquisition of camera maximum field of view perpendicular to primary optical axis apart from cameraDis

Plane, virtual object planar horizontal direction length is, vertical-direction length is, unit is scalar units.

The single width feature target figure distortion school of camera acquisition is carried out using the optical distortion parameter list that LENS supplier provides

Just, key step is as follows:

Step 2.1: passing through each field angle for the optical distortion parameter list that LENS supplier providesƟWith corresponding image heightPrNumber

According to being fitted, the fitting formula of field angle and image height is obtained

Step 2.2: calculatingoThe coordinate of point, calculation is,

Step 2.3: calculatingOThe coordinate of point, calculation is,

Step 2.4: according to formulaIt calculates,Substitute into formulaIt obtainsPr, according to formulaIt calculatesΦ, substitute into formula, obtain virtual object planePThe corresponding picture of point

PlanepThe coordinate of point;

Step 25: calculating as planepThe corresponding fault image pixel coordinate of point

Step 2.6: virtual object planePPoint pixel value be camera acquisition fault image pixel coordinate () at arest neighbors rounded coordinate pixel interpolation.

Step 3: outer ginseng calibration and perspective transform are carried out using the single width feature target figure after distortion correction.Key step is such as

Under:

Step 3.1: the single width feature target image of virtual object plane after step 2 corrects is extracted by feature extraction algorithmM NA characteristic point coordinate.Using nearestTo gradient ranking method, by what is extractedM NA characteristic point and single width feature target figure are uniform

DistributionM NA characteristic point is mapped one by one.

RecentlyThe step of to gradient ranking method, is as follows:

Step 1: by what is extractedM NA characteristic point carries out initial number, and corresponding record its coordinate

Step 2: the smallest the characteristic point of horizontal coordinate in characteristic point is extracted, as gradient analysis vertexTAnd remember

Record its number

Step 3: to remainingM N- 1 characteristic point is according to its numberRespectively withTLine calculates in line and horizontal direction

The angle of half-quad, selectionIntermediate value is maximumNThe number of -1 characteristic point

Step 4: it is to numberCharacteristic point carry out sequence from small to large according to vertical direction coordinate value, it is according to order recording by characteristic point coordinate

Step 5: will be reorderedNA characteristic point is removed from characteristic point sequence, and remaining characteristic point is carried out

Initial number, and corresponding record its coordinate, repetition step 2 is resequenced until all characteristic points and is completed, every heavy

A multiple new sortNA characteristic point coordinate is subsequently placed at existingLater.

Step 3.2: the target figure characteristic point coordinate scaling obtained according to step 1S1Characteristic point coordinate is mapped to perspective

Changing the plane coordinate, meets formula,.Being numbered according to characteristic point will

Characteristic point of the target figure in perspective transform planeWith the characteristic point of virtual object flat target figure image zooming-out after distortion correctionCoordinate solution homography matrix, obtain joining matrix outside classics, that is, outer ginseng calibration completed, further according to outer ginseng matrix to distortion

Virtual object flat target figure image after correction carries out perspective transform, the perspective transform plane after obtaining distortion correction and perspective transform

Target figure imagePERS, with practical target figure size according to ratioIt is mapped to the image of perspective transform planeSize is identical.

Step 4: saturating by single width feature target figure subregional fact characteristic point physical distribution position and single width feature target figure

It is fitted to obtain subregion fitting function depending on transformed subregion character pair point.

Fig. 5 is that the schematic diagram for obtaining subregion fitting function is fitted for subregion.Key step is as follows:

Step 4.1: passing through the single width feature target figure after 3 perspective transform of feature extraction algorithm extraction stepPERSOn characteristic point, the characteristic point with the customization of practical target figure is according to ratioIt is mapped to perspective transform planeOn target figure it is special

Sign point, matched one by one, method is nearestTo gradient ranking method or minimum threshold of distance method.

The step of minimum threshold of distance method, is as follows:

Step 1: it is by number'sHorizontal coordinate, and it is allHorizontal coordinateDifference is carried out, is obtained;It is by number'sVertical coordinate, and it is allVertical coordinateDifference is carried out, is obtained.It calculatesWith it is allDistance

Step 2: it obtainsIn minimum value and numberIf the value is less than according to camera lens preset thresholdthre_manu, then willReference numeralCoordinate record enter withIn, it willIn minimum value record

EnterIn;IfIn minimum value be more than or equal tothre_manuThenCoordinate value is assigned to 0,It is assigned a value of 0.Whereinthre_manuIt is less than

Step 3: number isFrom adding 1, step 1 is repeated until all numbersTraversal is completed.

Step 4.2: according toMaximum extent value selection perspective transform plane rectangular area divide number, right

It include to divide in regionWith it is correspondingCoordinate withHorizontal coordinateAnd vertical coordinateIt is right respectivelyHorizontal coordinateAnd vertical coordinateIt is fitted, wherein

It is corresponding to record the number that coordinate is (0,0)WithCoordinate value be not involved in fitting, obtain the regiongIt is quasi-

Close function:

,

Step 5: the optical distortion parameter list that actual installation scene camera collection image is provided according to LENS supplier

Distortion correction is carried out, then the image of perspective transform is accurately corrected by subregion fitting function progress subregion again.

Fig. 6 is the schematic diagram to be fitted correction using subregion fitting function.Key step is as follows:

Step 5.1: being provided according to the practical camera installation calibration scene acquired image of step 2 pair by LENS supplier

Optical distortion parameter carries out distortion correction, obtains virtual object flat imageP

Step 5.2: according to the physical distribution characteristic point position of the practical target figure of step 3 pair practical camera installation calibration scene with

The character pair point position that distortion correction image extracts carries out outer ginseng calibration, is carried out according to outer ginseng matrix to distortion correction image saturating

Perspective transform image is obtained depending on transformationPERS

Step 5.3: respectively according to the fitting function for dividing region,Distortion school is carried out to the image after perspective transform

Just.

If number isFitting correction image be, perspective transform image isPERS, according to zone numberSelection

Corresponding fitting function,, according to formula,, obtainCoordinate () correspond toPERSCoordinate (), whereinFor scale factor, i.e.,PERSWithUnder the premise of length-width ratio is consistent,PERSWithRatio.Pixel value bePERS?The interpolation of closest surrounding pixel at coordinate.

Step 5.4: according to step 4.2 divide area coordinate distribution and, to the image after fitting correctionInto

Row region division.

If regionRectangle vertex be,,,, then correspondingIn regionCorresponding rectangle vertex is,,,

Step 5.5: splicing the correction of a final proof image of fusionFINRegionImage be number beIn area

DomainImage.

In addition, can be put by the multiple different location of feature target figure in another technical solution individually to be taken the photograph

As the internal reference calibration of head, the distortion correction of single camera, this method needs pair are then carried out according to internal reference correction equation

Automation calibration complexity is put forward higher requirements, such as manipulator translation and rotation and to set multiple target figures position more to acquire

Width image, while requirement larger is put with the target figure for meeting different location in the space for automating calibration station.

Embodiment

Step 1: effective correcting area target figure and placement position are normalized according to calibration scene.

There are assembly precision errors, 360 in its volume production to look around in application for a certain individual camera, and installation site is from the ground

Height is 1 meter, it is desirable that the camera provides 9.6 meters of level under its installation site, vertical 3 meters of ground region visual field.

Fig. 7 is the schematic diagram of single width feature target figure.Specifically, feature target figure can be formulated having a size of 0.96 meter of level, vertically

0.3 meter, 0.1 meter from feature target figure of camera under automatic Calibration scene, this size, which meets automatic Calibration board, can place target figure

Space, scalingIt is 10.Specifically, customizable features target figure is black and white gridiron pattern, 16 lattice of level, vertical 5 lattice, list

A grid width is 6 centimetres, and characteristic point is angle point, and level 15, vertical 4, characteristic point number is from top to bottom, from a left side

To the right side.

Step 2: the single width feature target figure of camera acquisition is carried out using the optical distortion parameter list that LENS supplier provides

Distortion correction.

The optical distortion parameter list that LENS supplier provides is discrete angularƟAnd its corresponding image height.According to step

2.1, pass through each field angle for the optical distortion parameter list that LENS supplier providesƟWith corresponding image heightPrData intended

It closes, obtains the fitting formula of field angle and image height, it is fitted specifically, following formula can be used:

The optical distortion parameter list solution over-determined systems for bringing camera LENS supplier offer into obtain fitting parameter, Fig. 8

For the schematic diagram of subregion fitting function matched curve, wherein * mark point is the data that optical distortion parameter list provides, abscissa

ForƟ, unit is degree, and ordinate is, unit is millimeter.

Virtual object planar horizontal direction lengthIt is 1000 centimetres, vertical-direction lengthIt is 300 centimetres, virtual object is flat

Identity distance is from cameraAs described above is 100 centimetres.Effective pickup area of the CMOS under 720p pixel resolution is horizontal wide

DegreeIt is 5.41 millimeters, vertical heightIt is 3.043 millimeters.

According to step 2.2, calculate as planar centraloCoordinate is (2.705,1.5215).

According to step 2.3, virtual object planar central is calculatedOCoordinate is (500,150).

According to step 2.4, formula, virtual object plane certain point are brought intoPIf coordinate is (0,0), corresponding field angle, about 79.15 degree.Virtual object planar horizontal angular separation, about 159.5 degree.Image height is calculated according to the fitting formula of field angle and image height it is if brings into

2.087。PCorresponding picture plane coordinate pointpFor, about

(0.7502,0.7906).

According to step 2.5, calculatepIt is corresponding as plane pixel coordinates are, about (177.5,187.06).

According to step 2.6, virtual object planePPoint pixel value, usespClosest picture element interpolation around point.It specifically can be with

Using bilinear interpolation, i.e. utilization (177,187), (177,188), (178,187), (178,188) corresponding pixel value

Gray1, gray2, gray3, gray4, according topWeight of the point apart from each degree of closeness carries out interpolation.

All the points on virtual object plane are brought into according to the above method and obtain distortion correction figure picture.

Fig. 9 is the schematic diagram of the fault image of collected single width feature target figure;Figure 10 be by step 2 distortion correction after

The schematic diagram of the virtual object flat image of single width feature target figure.

Step 3: outer ginseng calibration and perspective transform are carried out using the single width feature target figure after distortion correction.

Fig. 11 is a schematic diagram of the mapping of the virtual object plane image of the single feature target figure. According to step 3.1, a feature extraction algorithm is used to extract the coordinates of the M×N feature points of the single feature target figure on the virtual object plane corrected in step 2. Since the feature target figure is in checkerboard form, the feature points are corners, so a corner detection algorithm can be chosen, for example the common SUSAN or Harris corner detectors, to obtain the feature point coordinates. The nearest-to-gradient sorting method is then used to map the extracted M×N feature points one by one to the M×N uniformly distributed feature points of the single feature target figure.

Specifically, the extracted feature points are numbered in the order in which the feature extraction algorithm traverses the image: the feature point detected first is numbered 1, and the number of each subsequently detected feature point increases by 1, until the feature extraction algorithm has traversed the complete virtual object plane image. The feature target figure customized in this specific embodiment has 15×4 feature points, so M is 15 and N is 4.

The nearest-to-gradient sorting method proceeds as follows (a code sketch follows below). Since the detected feature positions present a trapezoidal profile, the first step selects the feature point with the smallest horizontal coordinate, which must be a vertex on the left side of the trapezoid, and records its feature point number, assumed here to be 1. After obtaining the coordinates of this feature point, the angle between the line from it to each remaining feature point and the 0-degree horizontal direction is calculated (the image is divided into quadrants with the upper right as the first quadrant). The calculation takes, by number, the coordinate differences between each remaining feature point and the point with the smallest horizontal coordinate; the vertical difference is given a negative sign, the horizontal difference keeps its computed sign, and the angle is calculated with the atan function, the angle values being numbered consistently with the feature point numbers. The N-1 feature point numbers with the largest angle values are selected, assumed here to be 16, 31 and 46. The feature points numbered 1, 16, 31 and 46 are sorted by vertical coordinate from small to large, the order assumed here to be 16, 31, 46, 1, and their coordinates are recorded in that order as the first matched column. The feature points numbered 16, 31, 46 and 1 are then removed from the feature point sequence, i.e. the 4 leftmost feature points of the trapezoidal profile are eliminated; the remaining feature points still form a trapezoidal profile, and the above sorting is repeated column by column until all the M×N feature points extracted from the virtual object plane have been mapped one by one to the M×N uniformly distributed feature points of the single feature target figure.
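A compact sketch of this column-peeling sort, under the stated assumptions (trapezoidal layout with the leftmost remaining corner at the bottom of its column; the helper name sort_columns is illustrative):

```python
import numpy as np

def sort_columns(points: np.ndarray, m: int, n: int) -> np.ndarray:
    """Group m*n detected corners into m columns of n points each,
    left to right, each column sorted by vertical coordinate."""
    remaining = [tuple(p) for p in points]
    ordered = []
    for _ in range(m):
        # 1. Leftmost remaining corner: a vertex on the trapezoid's left edge.
        base = min(remaining, key=lambda p: p[0])
        others = [p for p in remaining if p != base]
        # 2. Angle from the base to every other corner; the vertical
        #    difference is negated because image y points downward.
        angles = [np.arctan2(-(p[1] - base[1]), p[0] - base[0]) for p in others]
        # 3. The n-1 largest angles pick out the rest of the same column.
        column = [base] + [others[i] for i in np.argsort(angles)[-(n - 1):]]
        # 4. Sort the column by vertical coordinate, small to large.
        column.sort(key=lambda p: p[1])
        ordered.extend(column)
        # 5. Peel off the column and repeat on the remaining trapezoid.
        remaining = [p for p in remaining if p not in set(column)]
    return np.array(ordered)  # shape (m*n, 2), left column first
```

With the 15×4 target of this embodiment, sort_columns(detected, 15, 4) returns the corners ordered so that they pair one-to-one with the uniformly distributed target points.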

Fig. 12 is a schematic diagram of the perspective transform relationship between the virtual object plane and the perspective transform plane. According to step 3.2, the target figure feature point coordinates obtained in step 1 are scaled by S1 and mapped to perspective transform plane coordinates, satisfying (u, v) = S1·(Xt, Yt). Taking the target figure feature point numbered 1, whose coordinate as described in the specific embodiment of step 1 is (0.06, 0.06), the corresponding perspective transform plane coordinate is (0.6, 0.6): S1 is 10, and the perspective transform plane size is 9.6 horizontally by 3 vertically. According to the feature point numbers, the homography matrix is solved from the coordinates of the target figure feature points in the perspective transform plane and of the feature points extracted from the distortion-corrected virtual object plane, obtaining the classical extrinsic matrix, i.e. completing the extrinsic calibration. Specifically, the relationship between the distortion-corrected virtual object plane and the perspective transform plane is a perspective transform relationship satisfying the homography matrix relations:

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad \text{(formula one)}$$

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim H^{-1}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \qquad \text{(formula two)}$$

where (x, y) is a coordinate on the distortion-corrected virtual object plane, (u, v) is the corresponding coordinate on the perspective transform plane, and s is a nonzero scale factor. The matrix H is the classical extrinsic matrix. The solution procedure for the classical extrinsic matrix is to bring the horizontal and vertical coordinate values of the matched feature point pairs into the homography relation, with h33 normalized to 1, and to solve the resulting system of equations, as follows:

$$\begin{cases} h_{11}x_k + h_{12}y_k + h_{13} - u_k(h_{31}x_k + h_{32}y_k) = u_k \\ h_{21}x_k + h_{22}y_k + h_{23} - v_k(h_{31}x_k + h_{32}y_k) = v_k \end{cases} \quad k = 1, \ldots, MN \qquad \text{(formula three)}$$

Solving formula three yields the classical extrinsic matrix H; substituting it into formula two, the virtual object plane coordinates (x, y) corresponding to any perspective transform plane coordinates (u, v) can be obtained. The pixel value of the perspective-transformed image PERS at (u, v) is the interpolation of the virtual object plane pixels nearest to (x, y); bilinear interpolation can specifically be used. Fig. 13 shows a schematic diagram of the perspective transform image of the single feature target figure, i.e. the perspective-transformed target figure image PERS.
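As a worked sketch of formulas one to three, the following fragment stacks the two equations of formula three per matched pair and solves them in the least-squares sense (solve_homography and warp_point are illustrative names; NumPy's lstsq stands in for whatever solver an implementation actually uses):

```python
import numpy as np

def solve_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Stack formula three: two linear equations per matched pair
    (x, y) -> (u, v), with h33 fixed to 1, solved by least squares."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(rows, float),
                            np.asarray(rhs, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)   # classical extrinsic matrix H

def warp_point(H_inv: np.ndarray, u: float, v: float) -> tuple:
    """Formula two: perspective plane (u, v) -> virtual object plane (x, y)."""
    x, y, w = H_inv @ np.array([u, v, 1.0])
    return x / w, y / w
```

For example, H = solve_homography(ext_points, pers_points) calibrates from the matched pairs, and warp_point(np.linalg.inv(H), u, v) yields the virtual object plane coordinates to be sampled, as in formula two.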

Step 4: The subregion fitting functions are obtained by fitting, region by region, the actual physical distribution positions of the feature points of the single feature target figure against the corresponding feature points of the single feature target figure after perspective transformation.

According to step 4.1, the feature points of the perspective-transformed single feature target figure PERS are extracted by a feature extraction algorithm; a corner detection algorithm can be used. The extracted feature points are paired one by one with the theoretical feature points using the nearest-to-gradient sorting method or the minimum distance threshold method. The minimum distance threshold method proceeds as follows: the coordinate of the feature point numbered 1 is differenced against the coordinates of all extracted feature points, giving horizontal difference values and vertical difference values; the distances are then calculated from these according to the Pythagorean theorem, giving one distance per extracted feature point. The smallest of these distances is taken, and if it is less than a given threshold Thre_manu, here set to 0.3, the coordinate of the corresponding extracted feature point is recorded under number 1; if the threshold condition is not satisfied, the recorded coordinate is set to (0,0). This proceeds in the same way until all feature points have been traversed, which completes the one-by-one matching of the PERS feature points with the EXT feature points. The numbers at which the threshold condition is not satisfied usually correspond to corners that the chosen corner detection algorithm failed to detect, so the feature point information is simply ignored there.

According to step 4.2, the perspective transform plane is partitioned according to the size of the matching error: if the maximum of the matching distances obtained in step 4.1 is between 0 and 0.1, the rectangular perspective transform plane region is divided at its horizontal center into 2 regions, left and right; if the maximum is between 0.1 and 0.2, the rectangular perspective transform plane region is divided at its center point into four regions: upper left, upper right, lower left and lower right. Here the maximum is 0.06, so the perspective transform plane is divided at its horizontal center into 2 rectangular regions, left and right; the rectangular region on the left is numbered 1 and the rectangular region on the right is numbered 2. The feature point numbers contained in the rectangular region numbered 1 are then 1 to 30, and those contained in the rectangular region numbered 2 are 31 to 60.

Owing to camera assembly precision, after each camera is assembled its LENS primary optical axis is not completely perpendicular to the CMOS plane, and the LENS optical center does not coincide exactly with the center of the effective image capture region of the CMOS. Distortion correction with a unified distortion correction method therefore necessarily leaves distortion error caused by assembly precision, which in turn means that there are position errors between the feature points extracted from the target figure image after distortion correction and perspective transformation and the feature points of the theoretical physical distribution of the target figure mapped to the perspective transform plane. The fitting method can greatly reduce this error; and since fitting function forms are diverse and do not necessarily match the error model exactly, dividing the fitted area more finely according to the degree of error can greatly improve the fitting precision of each region.

The feature points extracted from the target figure image after perspective transformation are fitted against the feature points of the actual physical distribution positions of the target figure mapped to the perspective transform plane. Specifically, a fitting function is constructed, and the overdetermined system of equations derived from it is solved to obtain the fitting parameters, i.e. the fitting function of the region. Substituting the matched feature point pairs numbered 1 to 30 of the rectangular partition region numbered 1 into the fitting function and solving the overdetermined system gives the fitting parameters and fitting function of the region numbered 1; likewise, substituting the matched feature point pairs numbered 31 to 60 of the rectangular partition region numbered 2 gives the fitting parameters and fitting function of the region numbered 2.
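The patent leaves the concrete form of the fitting function open; the sketch below assumes, purely for illustration, a quadratic bivariate polynomial per output coordinate and solves the overdetermined system by least squares (fit_region and apply_fit are hypothetical names):

```python
import numpy as np

def fit_region(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Fit one region's mapping dst = f(src), where src holds the
    theoretical feature points mapped to the perspective plane and
    dst the feature points detected on the perspective-transformed
    image. Solves the overdetermined system in the least-squares sense."""
    u, v = src[:, 0], src[:, 1]
    # Design matrix: [1, u, v, uv, u^2, v^2] per matched feature point
    A = np.column_stack([np.ones_like(u), u, v, u * v, u ** 2, v ** 2])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # shape (6, 2)
    return params

def apply_fit(params: np.ndarray, u: float, v: float) -> tuple:
    """Evaluate the fitted mapping at a single point (u, v)."""
    basis = np.array([1.0, u, v, u * v, u ** 2, v ** 2])
    x, y = basis @ params
    return x, y
```

Region 1 would then be fitted from the pairs numbered 1 to 30 and region 2 from the pairs numbered 31 to 60, each call to fit_region yielding that region's fitting parameters.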

Step 5: The image collected by the camera at the actual installation scene is distortion-corrected according to the optical distortion parameter table provided by the LENS supplier, and the perspective-transformed image is then precisely corrected region by region by the subregion fitting functions.

According to step 5.1, the calibration target figure image is collected on site in the camera's actual installation calibration scene, and, following the embodiment of step 2, the virtual object plane image P of the on-site calibration target figure after distortion correction is obtained.

According to step 5.2, extrinsic calibration is carried out on the distortion-corrected virtual object plane image P following the embodiment of step 3, obtaining the classical extrinsic matrix; according to the classical extrinsic matrix, perspective transformation is applied to the distortion-corrected virtual object plane image P of the on-site calibration target figure, obtaining the perspective transform image PERS. Fig. 14 is a schematic diagram of the perspective transform image of the on-site calibration target figure. In order to show the error caused by individual camera assembly precision, the calibration target figure selected here for the actual installation calibration scene is a large checkerboard cloth covering the effective area of a 360-degree surround view application; the positions at which the actually uniformly distributed checkerboard feature points should lie on PERS are marked, together with the horizontal, vertical and connecting-line distances between the feature points detected on PERS and the uniformly distributed checkerboard feature points.

According to step 5.3, the perspective transform image PERS is fit-corrected using the fitting functions of region 1 and region 2 respectively. The perspective transform image PERS has a size of 9.6 meters horizontally by 3 meters vertically; if the image after fitting correction is set to 960 pixels horizontally by 300 pixels vertically, the scale factor is 9.6/960 = 0.01.

Fig. 15 is a schematic diagram of the first fitting correction image. Each coordinate of the fitting correction image, multiplied by the scale factor, is brought into the fitting function of region 1 to calculate the corresponding coordinate in the perspective transform image PERS; the pixel value of the fitting correction image is then the interpolation of the pixels of PERS closest to that coordinate, for which bilinear interpolation can specifically be used.

Fig. 16 is a schematic diagram of the second fitting correction image, obtained in the same way: each coordinate of the fitting correction image, multiplied by the scale factor, is brought into the fitting function of region 2 to calculate the corresponding coordinate in PERS, and the pixel value is the interpolation of the closest surrounding pixels of PERS, for which bilinear interpolation can specifically be used.
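A sketch of this resampling loop, reusing the apply_fit and bilinear_sample helpers sketched earlier; the PERS pixel resolution (here assumed to be 0.01 m per pixel, i.e. pixels_per_metre = 100) is an illustrative assumption:

```python
import numpy as np

K = 9.6 / 960  # 0.01: metres per pixel of the fit-corrected image

def fit_correct(pers_img: np.ndarray, params: np.ndarray,
                width: int = 960, height: int = 300,
                pixels_per_metre: float = 100.0) -> np.ndarray:
    """Build one fit-corrected image: every output pixel (i, j) is
    scaled by K into plane coordinates, pushed through the region's
    fitting function, and bilinearly sampled from the PERS image."""
    out = np.zeros((height, width), dtype=float)
    for j in range(height):
        for i in range(width):
            u, v = apply_fit(params, i * K, j * K)   # metres in PERS plane
            px, py = u * pixels_per_metre, v * pixels_per_metre
            if 0 <= px < pers_img.shape[1] - 1 and 0 <= py < pers_img.shape[0] - 1:
                out[j, i] = bilinear_sample(pers_img, px, py)
    return out
```

Calling fit_correct once with region 1's parameters and once with region 2's yields the two fitting correction images of Figs. 15 and 16.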

Step 5.4: Using the region coordinate distribution divided in step 4.2, the fit-corrected images are divided into regions. Specifically, the division described above splits the plane into 2 rectangular regions, left and right, at the horizontal center line. The rectangle vertices of the left rectangular region numbered 1 are EXT(0,0), EXT(0,3), EXT(4.8,0), EXT(4.8,3); the corresponding rectangle vertices of the region numbered 1 in the fit-corrected image are (0,0), (0,300), (480,0), (480,300). The rectangle vertices of the right rectangular region numbered 2 are EXT(4.8,0), EXT(4.8,3), EXT(9.6,0), EXT(9.6,3); the corresponding rectangle vertices of the region numbered 2 in the fit-corrected image are (480,0), (480,300), (960,0), (960,300).

Step 5.5: Fig. 17 is a schematic diagram of the generated final corrected image. The image of the final corrected image FIN within the rectangular region numbered 1 is taken from the first fitting correction image within the rectangular region numbered 1, and the image of FIN within the rectangular region numbered 2 is taken from the second fitting correction image within the rectangular region numbered 2.

In order to show the effect of the invention, feature point extraction is carried out on the final corrected image FIN, and the detected feature points are marked against the uniformly distributed feature points of the large checkerboard cloth covering the effective area of the 360-degree surround view application in the actual calibration scene, in terms of horizontal, vertical and connecting-line distances. It can be seen that the differences between the feature points of the final corrected image and the theoretical uniformly distributed feature points are at the sub-pixel level; compared with the perspective transform image of the on-site calibration target figure shown in Fig. 14, the individual camera distortion error caused by assembly precision is greatly reduced.

The image correction method provided by the present invention achieves high-precision distortion correction of the images collected by an individual camera, reduces the complexity of the 360-degree surround view image fusion stage, and substantially improves the accuracy with which the 360-degree surround view image matches the distribution positions of objects in the actual camera field of view. It reduces the number of collected images and the time cost of individual camera intrinsic calibration, and saves equipment costs, such as the complex station-assembly manipulators that move target figures on an automated calibration production line, as well as labor costs.
