Photogrammetry is the science of making measurements from photographs, especially for recovering the exact positions of surface points. It can also be used to recover the motion pathways of designated reference points on a moving object, on its components and in the immediately adjacent environment. Photogrammetry may employ high-speed imaging and remote sensing to detect, measure and record complex 2-D and 3-D motion fields (see also sonar, radar and lidar). It feeds the measurements from remote sensing and the results of imagery analysis into computational models, which successively estimate, with increasing accuracy, the actual 3-D relative motions within the field under study.

Its applications include the satellite tracking of relative positional changes in all Earth environments (e.g. tectonic motions), and the study of the swimming of fish, of bird and insect flight, and of other relative-motion processes (International Society for Photogrammetry and Remote Sensing). The quantitative results of photogrammetry are then used to guide and match the results of computational models of natural systems, helping to invalidate or confirm new theories, to design novel vehicles or new methods for predicting and/or controlling the consequences of earthquakes, tsunamis and other natural hazards, and to understand the flow of fluids next to solid structures, among many other processes.

Photogrammetry is as old as modern photography, dating to the mid-nineteenth century. Its detection component has emerged from radiolocation, multilateration and radiometry, while its 3-D positioning-estimation component (based on modeling) employs methods related to triangulation, trilateration and multidimensional scaling.
In the simplest example, the distance between two points that lie on a plane parallel to the photographic image plane can be determined by measuring their distance on the image, provided the scale (''s'') of the image is known: the measured distance is multiplied by 1/''s''. Algorithms for photogrammetry typically attempt to minimize the sum of the squares of errors over the coordinates and relative displacements of the reference points. This minimization is known as bundle adjustment and is often performed using the Levenberg–Marquardt algorithm.

== Photogrammetric methods ==
Photogrammetry uses methods from many disciplines, including optics and projective geometry. The data model on the right shows the types of information that can go into and come out of photogrammetric methods. The ''3-D co-ordinates'' define the locations of object points in 3-D space. The ''image co-ordinates'' define the locations of the object points' images on the film or on an electronic imaging device. The ''exterior orientation'' of a camera defines its location in space and its view direction. The ''inner orientation'' defines the geometric parameters of the imaging process: primarily the focal length of the lens, but it can also include a description of lens distortions. Further ''additional observations'' play an important role: ''scale bars'' (a known distance between two points in space) or known ''fix points'' establish the connection to the basic measuring units. Each of the four main variables can be an ''input'' or an ''output'' of a photogrammetric method.
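The two ideas above, the scale relation and the recovery of 3-D co-ordinates from image co-ordinates, can be sketched in a few lines. This is a minimal illustration only: the function names, the idealized two-camera pinhole geometry and the numbers are assumptions for the example, not part of any standard photogrammetric software.

```python
def object_distance(image_distance, scale):
    """Scale relation: real-world distance = image distance * (1/s)."""
    return image_distance / scale


def stereo_point(u_left, u_right, focal_length, baseline):
    """Toy two-view triangulation under an assumed, simplified geometry:
    two identical pinhole cameras, the left at x = 0 and the right at
    x = baseline, both looking along +z.  Each records the image
    co-ordinate u = f * x / z of the point; depth follows from the
    disparity (u_left - u_right)."""
    disparity = u_left - u_right
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    depth = focal_length * baseline / disparity   # z = f * b / (u_l - u_r)
    x = u_left * depth / focal_length             # back-project the left ray
    return x, depth


# A 2 cm separation on a 1:500 image corresponds to about 10 m on the ground.
print(object_distance(2.0, 1.0 / 500))

# Cameras with f = 0.05 m, baseline 0.2 m, observing a point at (1.0, 4.0):
# the image co-ordinates 0.0125 and 0.01 recover that position.
print(stereo_point(0.0125, 0.01, 0.05, 0.2))
```

In real systems, many such rays from many cameras are combined, and bundle adjustment refines all point positions and camera orientations at once by minimizing the squared reprojection errors, rather than intersecting two rays in closed form as above.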
Photogrammetry has been defined by the American Society for Photogrammetry and Remote Sensing (ASPRS) as "the art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena".〔(ASPRS online )〕