Image stitching or photo stitching is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high-resolution image. Commonly performed through the use of computer software, most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results,〔Steve Mann and R. W. Picard. "Virtual bellows: constructing high-quality images from video", In Proceedings of the IEEE First International Conference on Image Processing, Austin, Texas, November 13–16, 1994〕 although some stitching algorithms actually benefit from differently exposed images by doing high-dynamic-range (HDR) imaging in regions of overlap.〔Steve Mann. "Compositing Multiple Pictures of the Same Scene", Proceedings of the 46th Annual Imaging Science & Technology Conference, May 9–14, Cambridge, Massachusetts, 1993〕〔S. Mann, C. Manders, and J. Fung, "The Lightspace Change Constraint Equation (LCCE) with practical application to estimation of the projectivity+gain transformation between multiple pictures of the same subject matter", IEEE International Conference on Acoustics, Speech, and Signal Processing, 6–10 April 2003, pp. III-481–4, vol. 3〕 Some digital cameras can stitch their photos internally.

Image stitching is widely used today in applications such as:
* the "image stabilization" feature in camcorders, which uses frame-rate image alignment;
* high-resolution photo mosaics in digital maps and satellite photos;
* medical imaging;
* multiple-image super-resolution;
* video stitching;
* object insertion.

==Process stages==
The image stitching process can be divided into three main components: ''image registration'', ''calibration'' and ''blending''.

;Possible issues with image stitching
Since the illumination in two views cannot be guaranteed to be identical, stitching two images could create a visible seam.
Other reasons for a seam include the background changing between two images of the same continuous foreground. In general, the major issues to deal with are the presence of parallax, lens distortion, scene motion, and exposure differences. For panoramic stitching, the ideal set of images has a reasonable amount of overlap (at least 15–30%) to overcome lens distortion and provide enough detectable features, as well as consistent exposure between frames to minimize the probability of a seam occurring. In the non-ideal, real-life case, however, intensity and contrast vary both across the scene and between frames. Lens distortion, motion in the scene and misalignment all cause ghosting. The ratio of width to height of the panorama image also needs to be taken into account to create a visually pleasing composite.

;General overview of image stitching algorithms
* First, algorithms are needed to determine the appropriate mathematical model relating pixel coordinates in one image to pixel coordinates in another. This is image alignment.
* Next, the correct alignments relating various pairs (or collections) of images must be estimated. Algorithms that combine direct pixel-to-pixel comparisons with gradient descent (and other optimization techniques) can be used to estimate these parameters.
* Distinctive features can be found in each image and then efficiently matched to rapidly establish correspondences between pairs of images. When multiple images exist in a panorama, techniques have been developed to compute a globally consistent set of alignments and to efficiently discover which images overlap one another.
* For the final composite, a compositing surface must be chosen onto which all of the aligned images are warped or projectively transformed and placed.
* Finally, algorithms are needed to seamlessly blend the overlapping images, even in the presence of parallax, lens distortion, scene motion, and exposure differences.

;Keypoint detection
Feature detection is necessary to automatically find correspondences between images. Robust correspondences are required in order to estimate the transformation needed to align an image with the image it is being composited on. Corners, blobs, Harris corners and differences of Gaussians of Harris corners (DoG) are good features, since they are repeatable and distinct.

One of the first operators for interest point detection was developed by Hans P. Moravec in 1977 for his research involving the automatic navigation of a robot through a cluttered environment. Moravec also defined the concept of "points of interest" in an image and concluded that these interest points could be used to find matching regions in different images. The Moravec operator is considered a corner detector because it defines interest points as points where there are large intensity variations in all directions, which is often the case at corners. Moravec was not specifically interested in finding corners, however, just distinct regions in an image that could be used to register consecutive image frames.

Harris and Stephens improved upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly. They needed it as a processing step to build interpretations of a robot's environment based on image sequences. Like Moravec, they needed a method to match corresponding points in consecutive image frames, but were interested in tracking both corners and edges between frames.

SIFT and SURF are more recent keypoint or interest point detector algorithms, but note that these are patented and their commercial usage is restricted.
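The Harris–Stephens idea can be illustrated concretely: the corner score compares the determinant and trace of the structure tensor of local image gradients. The following is a minimal NumPy sketch, not a production detector; the function name `harris_response` and the box window (standing in for a Gaussian) are illustrative choices:

```python
import numpy as np

def harris_response(img, k=0.05, radius=1):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients summed over a small window."""
    Iy, Ix = np.gradient(img.astype(float))    # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy  # gradient products

    def window_sum(a):
        # Box window as a simple stand-in for Gaussian weighting.
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = window_sum(Ixx), window_sum(Iyy), window_sum(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a synthetic image containing a bright square, the response is large and positive at the square's corners (intensity varies in all directions) and essentially zero in flat regions, which is exactly the behaviour the Moravec and Harris operators exploit.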
Once features have been detected, a descriptor method such as the SIFT descriptor can be applied so that they can later be matched.
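The remaining steps of the pipeline can be sketched as well: matching descriptors by nearest-neighbour distance with a ratio test, then estimating the homography relating pixel coordinates in the two images from the resulting correspondences via the direct linear transform (DLT). This is a minimal NumPy sketch assuming Euclidean descriptor distances and noise-free points; the names `match_descriptors` and `estimate_homography` are illustrative, and a real pipeline would add an outlier-rejection step such as RANSAC:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.75):
    """Nearest-neighbour matching with a ratio test: accept a match only if
    the best distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

def estimate_homography(src, dst):
    """Direct linear transform: solve for the 3x3 matrix H mapping src -> dst
    in homogeneous coordinates, from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With the homography in hand, one image can be warped into the other's coordinate frame and the overlap blended, completing the registration stage described above.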