Lecture 5&6: Photogrammetry Flashcards
(25 cards)
Photogrammetry
= “Science of measuring in photographs” (Linder, 2009)
➢ Goal = measuring shapes, distances, surfaces and volumes in 3D, from “flat” photographs
➢ Invented in 1851 by Aimé Laussedat (Ecole Polytechnique in Paris); photographic technology dates from ≃ 1840
➢ 1st aerial photo = from a balloon, in 1858 (lost)
➢ Mostly developed for war purposes
First aerial surveys with balloons and kites
➢ 1st use of photogrammetry for topographic reconstruction: 1898-1908 (in Caucasus and Siberia)
➢ 1st archaeological application: 1908 (along the Tiber River, Italy)
Industrialisation of aerial photogrammetry
➢ Development of aircraft-based aerial photogrammetry: 1918-1939
➢ 1st aerial mapping tests in Belgium: 1937-1938
➢ Actual development of aerial photogrammetry for cartography in Belgium: from 1945
Structure-from-Motion (SfM) / Multi-View Stereo (MVS) Photogrammetry
SfM
➢ = automatic processing of images into 3D point clouds
▪ Based on multiple images acquired from different positions (around the object)
▪ Camera parameters and scene information are derived from the processing itself
➢ SfM is derived from Computer Vision
▪ Computer vision = high-level computational understanding of photos and videos to automatically extract information (3D reconstruction, object recognition, motion estimation, object tracking, etc.)
MVS
= dense image matching using imaging geometry determined by SfM
CAMERA MODEL (= interior orientation)
= set of parameters that define a camera as a spatial system consisting of a planar imaging area (i.e., the sensor) and a lens with its perspective centre and optical distortion.
The Brown’s distortion model (or Brown’s Conrady distortion model)
= set of equations defining the non-linear distortion associated with a camera and its lens.
Parameters:
f = focal length
cx, cy = coordinates of the centre of perspective
Kn = nth radial distortion coefficients (AMP = K1 → K4)
Pn = nth tangential distortion coefficients (AMP = P1 → P2)
Bn = nth skew (non-orthogonality) distortion coefficients (AMP = B1 → B2)
(X, Y, Z) = coordinates in the local camera coordinate system
(u, v) = projected point coordinates in the image coordinate system (in pixels)
w, h = image width and height (in pixels)
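A minimal Python sketch of this projection, using the parameters listed above; the Metashape-style pixel-mapping convention and the function name are assumptions, not from the lecture:

```python
def brown_conrady_project(X, Y, Z, f, cx, cy, K, P, B, w, h):
    """Project a point from local camera coordinates (X, Y, Z) to pixels (u, v)."""
    x, y = X / Z, Y / Z                            # normalised image coordinates
    r2 = x * x + y * y                             # squared radial distance
    radial = 1 + K[0]*r2 + K[1]*r2**2 + K[2]*r2**3 + K[3]*r2**4
    xd = x * radial + P[0] * (r2 + 2*x*x) + 2 * P[1] * x * y  # radial + tangential
    yd = y * radial + P[1] * (r2 + 2*y*y) + 2 * P[0] * x * y
    u = w * 0.5 + cx + xd * f + xd * B[0] + yd * B[1]         # skew terms B1, B2
    v = h * 0.5 + cy + yd * f
    return u, v

# With all distortion coefficients zero, an on-axis point maps to the
# principal point: (w/2 + cx, h/2 + cy)
print(brown_conrady_project(0.0, 0.0, 10.0, 2500.0, 2.0, -1.5,
                            [0]*4, [0]*2, [0]*2, 4000, 3000))
```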
Space resection
If four or more points of known 3-D coordinates are observed in an image, the camera position can be determined.
Commonly used to determine the exterior orientation parameters, based on GCPs.
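As a hedged illustration, resection from GCPs can be sketched with OpenCV's solvePnP; the GCP coordinates and camera matrix below are made-up values:

```python
import numpy as np
import cv2

# Hypothetical GCPs: known XYZ (e.g., from dGPS) and their pixel observations
object_points = np.array([[0., 0., 0.], [10., 0., 0.],
                          [0., 10., 0.], [10., 10., 2.]])
image_points = np.array([[1500., 1200.], [2500., 1210.],
                         [1490., 2200.], [2510., 2190.]])
K = np.array([[3000., 0., 2000.],      # interior orientation (camera matrix)
              [0., 3000., 1500.],
              [0., 0., 1.]])

# Exterior orientation (position + view angles) of the camera w.r.t. the GCP frame
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```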
Exterior orientation
= position and view angle related to a geographic coordinate system.
≃ link between camera geometry and “real world” geometry
Intersection
If a point is observed in two or more cameras of known relative positions and orientations, the 3-D coordinates of the point can be determined.
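A minimal linear (DLT) triangulation sketch in numpy, assuming two known 3×4 projection matrices P1 and P2 (all names and values here are illustrative):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) intersection of two image rays; returns XYZ."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenise

# Self-check with two cameras one unit apart and a known point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))   # ~ [0.2, 0.1, 5.0]
```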
Image matching
Matching automates the process of locating the same features in different images.
2 steps:
(1) Detection of key points
(= remarkable points in a single image)
(2) Detection of tie points
(= key points matched in an image pair)
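Both steps can be sketched with OpenCV; ORB is used here for brevity (SfM packages typically use SIFT-like detectors), and the file names are hypothetical:

```python
import cv2

img1 = cv2.imread("photo_1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("photo_2.jpg", cv2.IMREAD_GRAYSCALE)

# (1) Key point detection in each single image
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# (2) Tie points: key points matched across the image pair
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
print(f"{len(matches)} tie point candidates")
```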
Image residuals are the difference between the photogrammetric model and the data ≃ model error
Usually given only as RMS summary values
BUT their spatial distribution and directions are insightful (see the sketch below)
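A small sketch of why the RMS alone hides information: plotting the residual vectors over the image can reveal systematic patterns. The residuals below are synthetic, purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic residuals: observed vs reprojected tie point positions (pixels)
rng = np.random.default_rng(0)
obs = rng.uniform(0, 4000, size=(200, 2))
residuals = rng.normal(0.0, 0.5, size=(200, 2))

print("RMS residual (px):", np.sqrt((residuals ** 2).mean()))  # summary value
plt.quiver(obs[:, 0], obs[:, 1], residuals[:, 0], residuals[:, 1], angles="xy")
plt.title("Spatial distribution and directions of image residuals")
plt.show()
```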
Bundle adjustment
➢ Photo orientations and 3D object reconstruction are calculated using triangulation methods
➢ The most accurate one is bundle block adjustment
= A least squares minimisation of overall error (all image residuals) by simultaneous adjustment of all model parameters:
▪ tie point positions
▪ camera positions and orientations
If self-calibrating bundle adjustment:
▪ additionally adjusts camera model parameters
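A compact sketch of bundle adjustment as least-squares minimisation, using scipy.optimize.least_squares on a synthetic two-camera block; the simple pinhole model and the setup are assumptions (a real adjustment also fixes the gauge, e.g. via GCPs):

```python
import numpy as np
from scipy.optimize import least_squares

def rotate(points, rvecs):
    """Rodrigues rotation of each point by its paired rotation vector."""
    theta = np.linalg.norm(rvecs, axis=1)[:, np.newaxis]
    with np.errstate(invalid="ignore", divide="ignore"):
        v = np.nan_to_num(rvecs / theta)
    dot = np.sum(points * v, axis=1)[:, np.newaxis]
    return (np.cos(theta) * points + np.sin(theta) * np.cross(v, points)
            + dot * (1 - np.cos(theta)) * v)

def project(points, cams, f=1000.0):
    """Simple pinhole projection: rotate, translate, divide by depth."""
    p = rotate(points, cams[:, :3]) + cams[:, 3:6]
    return f * p[:, :2] / p[:, 2, np.newaxis]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs):
    """All image residuals for the current camera + tie point estimates."""
    cams = params[:n_cams * 6].reshape((n_cams, 6))
    pts = params[n_cams * 6:].reshape((n_pts, 3))
    return (project(pts[pt_idx], cams[cam_idx]) - obs).ravel()

# Synthetic block: 2 cameras, 4 tie points, every point seen in every image
pts_true = np.array([[0., 0., 10.], [1., 0., 11.], [0., 1., 12.], [1., 1., 10.]])
cams_true = np.array([[0., 0., 0., 0., 0., 0.], [0., 0.1, 0., -1., 0., 0.]])
cam_idx, pt_idx = np.repeat([0, 1], 4), np.tile(np.arange(4), 2)
obs = project(pts_true[pt_idx], cams_true[cam_idx])

x0 = np.hstack([cams_true.ravel(), pts_true.ravel()]) + 0.01  # perturbed start
sol = least_squares(residuals, x0, args=(2, 4, cam_idx, pt_idx, obs))
print("RMS reprojection error:", np.sqrt(np.mean(sol.fun ** 2)))
```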
Control measurement
The results can be scaled, rotated and/or georeferenced
Scaling = adding known distances between key points in the images.
Rotation = alignment with horizontal/vertical plane
Georeferencing = defining the geographic/projected coordinate system
➢ Use of ground control points (GCPs)
➢ Can also use camera position and pointing data if suitably precise data are available (= ‘direct georeferencing’)
Other option: co-registration
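Scaling, rotating and georeferencing together amount to a 7-parameter similarity transform. A hedged numpy sketch using the Umeyama least-squares solution (not necessarily what any particular SfM package uses):

```python
import numpy as np

def similarity_transform(src, dst):
    """Find s, R, t minimising ||dst - (s * R @ src + t)|| over point pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / len(src)                       # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = U @ D @ Vt                                 # proper rotation
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum(axis=1).mean()
    t = mu_d - s * R @ mu_s
    return s, R, t

# src = GCP coordinates in the arbitrary model frame, dst = surveyed XYZ
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = 2.0 * src + np.array([10., 20., 5.])   # known scale 2 + translation
s, R, t = similarity_transform(src, dst)
print(s, t)                                  # ~2.0, ~[10, 20, 5]
```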
SfM-MVS workflow
Structure-from-Motion (SfM)
Automatic processing of images into 3D point clouds
➢ Multiple images from different positions
➢ Determines camera information
➢ Produces a sparse surface point cloud (SPC)
Georeferencing
Scale, translate and rotate 3D model to real-world coordinate system
Multi-view stereo (MVS)
Dense image matching
➢ uses the imaging geometry determined by SfM
➢ Produces a dense surface point cloud (DPC)
By-products: mesh, DEM, orthophoto/orthomosaic
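For concreteness, the whole workflow as a script against the Agisoft Metashape Python API; the method names follow the 1.x API and are an assumption (they differ slightly across versions):

```python
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["img_001.jpg", "img_002.jpg"])  # hypothetical file names

# SfM: key/tie point detection, matching and bundle adjustment
chunk.matchPhotos()
chunk.alignCameras()        # camera parameters + sparse point cloud (SPC)

# Georeferencing would happen here (GCP markers and/or camera positions)

# MVS: dense image matching using the SfM imaging geometry
chunk.buildDepthMaps()
chunk.buildDenseCloud()     # dense point cloud (DPC); renamed in the 2.x API

# By-products
chunk.buildModel()          # mesh
chunk.buildDem()            # DEM
chunk.buildOrthomosaic()
doc.save("project.psx")
```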
Recommendations for good photogrammetry results
- High-quality imaging
➢ Static features
➢ Pixel-scale texture
➢ Good camera with fixed internal geometry
➢ Good image overlap with converging views
- Good ground control points (GCPs)
➢ Target clearly visible on images (choose appropriate size)
➢ High contrast targets (ideal = 2 by 2 black/white squares)
➢ Accurate XYZ positioning (dGPS, EDM, etc.)
➢ Good distribution (homogeneous, enough GCPs)
- Good field imaging conditions
➢ No high-contrast illumination
➢ No changing illumination
➢ No change in the weather conditions
Photogrammetric precision
➢ Determines the reproducibility of the surface shape
A function of:
▪ Image measurement precision
▪ Number of images each point is observed in
▪ Geometry of the image network
If a survey’s overall precision is limited by photogrammetric considerations, then precision varies irregularly, reflecting changes in the image content, imaging geometry, etc.
Precision deteriorates with:
▪ Less precise image matching
▪ Fewer observations of individual points
▪ Increasingly parallel viewing directions
Georeferencing precision
➢ Determines the reproducibility of the scale, translation and rotation of the survey within a geographic coordinate system
A function of:
▪ Number of GCPs
▪ Distribution of GCPs
▪ Precision of measurement + positioning
If a survey’s overall precision is limited by georeferencing, then precision varies gradually and systematically.
Precision deteriorates with:
▪ Increasing distance from the centroid of the control measurements
▪ Fewer or less well distributed control measurements
▪ Less precise control measurements
Improving survey precision: photogrammetric considerations
More image observations per point, from wider angles (include convergent imagery); more precise image observations (e.g., avoid areas of vegetation cover)
Improving survey precision: control considerations
Georeferencing with GCPs (Ground Control Points):
More GCPs, more widely distributed; more precise ground survey and image observations of the GCPs
Direct georeferencing:
More images, collected over a wider survey span; increased ratio of survey span to viewing distance (height above ground); more precisely measured camera positions
Shooting sharp photos
➢ Photogrammetry relies on photographs
➢ Image matching based on same details (points) across different photographs
➢ Blurred images prevent precise feature matching → inaccurate 3D reconstruction
e.g., shutter speed/exposure time too slow for the camera displacement or moving parts
e.g., need for a large f-number (small aperture) to get a long depth of field
e.g., ISO value too high → noisy image
Depth of field
DoF is controlled by the aperture (f-number, written n here)
➢ Aperture wide open (large) = small n (e.g., 2.8)
→ Produce a narrow depth of field
➢ Aperture very small = large n (e.g., 16)
→ Produce a wide depth of field
➢ Hyperfocal distance (H) = the focusing distance at which everything from H/2 to infinity is in focus
➢ H is inversely proportional to n
➢ A small n (e.g., 2.8) produces a large H (the hyperfocal distance is far away)
➢ A large n (e.g., 16) produces a small H (the hyperfocal distance is close)
H = f² / (n × e)
Where:
H = hyperfocal distance
f = focal length
n = aperture
e = circle of confusion
First/Last plane of sharpness (as a function of focusing distance p)
FPS = (H × p) / (H + p − f); at p = H: FPS = H² / (2H − f) ≈ H/2
LPS = (H × p) / (H − p + f); at p = H: LPS = H² / f ≈ ∞
DoF = LPS - FPS
→ Everything is in focus between H/2 and infinity
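A small numeric sketch of these formulas; the camera values (20 mm lens, f/8, 0.03 mm circle of confusion) are illustrative only:

```python
def hyperfocal_mm(f, n, e):
    """H = f^2 / (n * e); all inputs and the result in mm."""
    return f ** 2 / (n * e)

def sharpness_limits_mm(H, p, f):
    """First/last plane of sharpness for focusing distance p (all mm)."""
    fps = H * p / (H + p - f)
    lps = H * p / (H - p + f)   # negative result means "beyond infinity"
    return fps, lps

H = hyperfocal_mm(f=20.0, n=8.0, e=0.03)          # ~1667 mm
fps, lps = sharpness_limits_mm(H, p=H, f=20.0)
print(f"H = {H:.0f} mm, FPS = {fps:.0f} mm (~H/2), LPS = {lps:.0f} mm (~inf)")
```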
Horizontal and vertical fields of view
▪ Sensor size = Ws × hs
▪ HFOV = 2 tan⁻¹(Ws / (2f))
▪ VFOV = 2 tan⁻¹(hs / (2f))
Where:
Ws = sensor width (in mm)
hs = sensor height (in mm)
f = focal length (in mm)
HFOV = horizontal field of view (in radians)
VFOV = vertical field of view (in radians)
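A quick check of the FOV formulas; the 1-inch sensor (13.2 × 8.8 mm) and 8.8 mm lens are illustrative values:

```python
import math

def field_of_view_rad(sensor_mm, f_mm):
    """FOV = 2 * atan(sensor / (2 f)), in radians."""
    return 2.0 * math.atan(sensor_mm / (2.0 * f_mm))

hfov = field_of_view_rad(13.2, 8.8)   # ~1.29 rad (~74 deg)
vfov = field_of_view_rad(8.8, 8.8)    # ~0.93 rad (~53 deg)
print(math.degrees(hfov), math.degrees(vfov))
```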
Ground sampling distance
▪ Sensor pixel dimensions = Wp × hp
▪ HFP = HFOV × HAG
▪ VFP = VFOV × HAG
▪ GSD = HFP / Wp = VFP / hp
Where:
Wp = sensor width (in pixel)
hp = sensor height (in pixel)
HAG = height above the ground (in m)
HFP = horizontal footprint (in m)
VFP = vertical footprint (in m)
HFOV = horizontal field of view (in radians)
VFOV = vertical field of view (in radians)
GSD = ground sampling distance (in m per pixel)
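A numeric sketch of the footprint/GSD formulas above, which use the card's linear approximation (footprint = FOV × HAG; the exact form would be 2 × HAG × tan(FOV/2)). The flight values are illustrative:

```python
# Illustrative values: 1.29 rad HFOV, 5472 px image width, 100 m above ground
hfov_rad, wp, hag_m = 1.29, 5472, 100.0

hfp_m = hfov_rad * hag_m          # horizontal footprint ~129 m
gsd_m = hfp_m / wp                # ~0.024 m per pixel
print(f"HFP = {hfp_m:.1f} m, GSD = {gsd_m * 100:.1f} cm/px")
```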
Shutter speed (s)
OPTION 1: fixing s according to v and GSD
▪ s < GSD / v (e.g., s = GSD / (10 × v); see the helper below)
Where:
s = shutter speed (in sec.)
GSD = ground sampling distance (in m)
v = flight speed (in m/s)
OPTION 2: selecting v and HAG according to s
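Option 1 as a one-line helper; the example GSD and flight speed are illustrative:

```python
def max_shutter_s(gsd_m, v_ms, safety=10.0):
    """Exposure so the platform moves only GSD/safety during the exposure."""
    return gsd_m / (safety * v_ms)

print(max_shutter_s(gsd_m=0.024, v_ms=5.0))   # 0.00048 s, i.e. ~1/2000 s
```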
Comparison of independent surveys
➢ Each epoch (= dataset acquired at a given date) is processed separately, as an individual dataset
➢ Georeferencing needed for each epoch
➢ Direct DEM comparison
➢ Change detection highly dependent on georeferencing and 3D reconstruction quality
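A hedged sketch of a direct DEM comparison (DEM of Difference) with rasterio; the file names are hypothetical, and both epochs are assumed to share the same grid (extent, resolution, CRS):

```python
import rasterio

with rasterio.open("dem_epoch1.tif") as src1, rasterio.open("dem_epoch2.tif") as src2:
    dem1 = src1.read(1).astype("float64")
    dem2 = src2.read(1).astype("float64")
    profile = src1.profile

dod = dem2 - dem1                   # positive = surface raising, negative = lowering
profile.update(dtype="float64")

with rasterio.open("dod.tif", "w", **profile) as dst:
    dst.write(dod, 1)               # nodata handling omitted for brevity
```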
Multi-epoch co-alignment
➢ The images of all epochs are aligned all together
➢ Georeferencing can rely on a single epoch
➢ Camera modelling similar for each epoch
➢ Only works for similar surveys with similar equipment
➢ Does not work properly if the ground surface is significantly different
Point cloud co-registration
What to do when point clouds need to be co-registered
➢ Use Iterative Closest Point (ICP)
▪ Needs DPCs to be roughly co-registered already
▪ Creates a transformation matrix
▪ Can optionally rescale the DPC being aligned
➢ Best practice (see the sketch after this list)
1. Clone the DPCs
2. Filter the DPCs (resampling)
3. Remove areas with topographic change
4. Apply the ICP
5. Copy the transformation matrix and apply it to the original DPC to align
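The best-practice steps map onto Open3D's ICP roughly like this; the file names, voxel size and distance threshold are illustrative, and step 3 is left as a comment:

```python
import copy
import open3d as o3d

# Hypothetical input files; "source" will be moved onto "target"
source = o3d.io.read_point_cloud("epoch2_dpc.ply")
target = o3d.io.read_point_cloud("epoch1_dpc.ply")

# 1-2. Clone and resample working copies (voxel size in cloud units)
src_work = copy.deepcopy(source).voxel_down_sample(voxel_size=0.05)
tgt_work = copy.deepcopy(target).voxel_down_sample(voxel_size=0.05)

# 3. Areas of topographic change would be removed here (e.g., by cropping)

# 4. Run ICP (assumes the clouds are already roughly co-registered)
result = o3d.pipelines.registration.registration_icp(
    src_work, tgt_work, 0.2,   # max correspondence distance
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# 5. Apply the resulting transformation matrix to the original, full DPC
source.transform(result.transformation)
o3d.io.write_point_cloud("epoch2_dpc_aligned.ply", source)
```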