2. Background—Spatial Resolution of Imaging Systems
Spatial resolution is a fundamental property
of any imaging system used to collect remote sensing data, and directly
determines the spatial scale of the resultant information. Resolution
seems intuitively obvious, but its technical definition and precise application
in remote sensing have been complex. For example, Townshend (1980) summarised
13 different ways to estimate the resolving power of the Landsat MultiSpectral
Scanner (MSS). Simonett (1983, p. 20) stated 'In the simplest case, spatial
resolution may be defined as the minimum distance between two objects
that a sensor can record distinctly...[but] it is the format of the sensor
system that determines how spatial resolution is measured'. By focusing
attention on the properties of the system (and not the images acquired)
this definition can be applied to a variety of types of images provided
by different systems. In order to provide as complete a treatment as possible,
we provide background on several views of spatial resolution that are
relevant when considering the spatial resolution of astronaut photography.
2.1. Ground Resolved Distance
Photogrammetrists were measuring spatial
resolution of aerial photographs using empirical methods long before the
first scanning sensor was placed in orbit. These methods integrated properties
of the imaging system with the other external factors that determine spatial
resolution. Ground resolved distance (GRD) is the parameter of most interest,
because it measures the applicability of an image to a specific task.
The GRD of an image is defined as the dimensions of the smallest discernible
object. The GRD is a function of geometry (altitude, focal length of optics),
equipment (internal system spatial resolution of the camera or scanner)
and also on reflectance characteristics of the object compared to its
surroundings (contrast).
Performance of a film, film and camera,
or deployed aerial photography system is measured empirically using standard
targets that consist of black-and-white bars of graduated widths and spacings
(figure 6-9 in Slater et al. 1983). The area-weighted average resolution
(AWAR) in lp/mm (line-pairs/mm) at the film plane is determined by measuring
the smallest set of line pairs that can be discriminated on an original
film negative or transparency. Line pairs are quoted because it is necessary
to discriminate between one object and another, to detect it and measure
it.
Film resolving power is measured by manufacturers
under standard photographic conditions (also in lp/mm, e.g. Smith and
Anson 1968, Eastman Kodak Company 1998) at high contrast (object/background
ratio 1000/1) and low contrast (object/background ratio 1.6/1). For the purpose of estimating
film resolving power, most terrestrial surfaces recorded from orbit are
low-contrast. Kodak no longer measures film resolving power
for non-aerial photographic films (Karen Teitelbaum, Eastman Kodak Company,
personal communication), including films that NASA routinely uses for Earth photography.
AWAR can be measured for the static case
of film and camera, or for the camera-aircraft system in motion. For example,
AWAR for the National Aerial Photography Program includes effects of the
lens, resolving power of original film, image blur due to aircraft motion,
and spatial resolution of duplicating film (Light 1996). Given AWAR for
a system in motion, the GRD can be calculated by trigonometry (see equation
(A2.3) in the Appendix, where d = 1/AWAR and D = GRD).
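The trigonometric relation just cited can be sketched in a few lines: the smallest resolvable distance at the film plane, d = 1/AWAR, is projected to the ground by the ratio of altitude to focal length. The function name and the numeric values below are illustrative, not taken from the text.

```python
def grd_from_awar(awar_lp_per_mm: float, altitude_m: float,
                  focal_length_mm: float) -> float:
    """Ground resolved distance (m) for a system whose AWAR is given in lp/mm.

    d = 1/AWAR is the smallest resolvable distance at the film plane (mm);
    D = d * altitude / focal_length projects it to the ground
    (in the same units as altitude).
    """
    d_mm = 1.0 / awar_lp_per_mm  # smallest resolvable film-plane distance (mm)
    return d_mm * altitude_m / focal_length_mm

# Hypothetical example: 50 lp/mm system AWAR, 400 km orbit, 250 mm lens.
print(grd_from_awar(50.0, 400_000.0, 250.0))  # 32.0 (metres)
```

This makes explicit why altitude, focal length, and system resolving power are exactly the elements that must be standardised to estimate GRD.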
An impediment to similar rigorous measurement
of GRD for orbital remote sensing systems is the lack of a target of suitable
scale on the ground; thus, spatial resolution for most orbiting sensors
is described in terms of a less all-encompassing measure, instantaneous
field of view (see below). Additional challenges to measuring AWAR for
a complete astronaut photography system include the number of different
options for aspects of the system including different cameras, films,
and orbital altitudes. These elements that must be standardised to determine
AWAR provide a useful list of those characteristics of astronaut photography
that will most influence GRD.
2.2. Instantaneous Field of View
The instantaneous field of view
(IFOV) is generally used to represent the spatial resolution of automated
satellite systems, and is commonly used interchangeably with the term
spatial resolution when comparing different sensors. IFOV is a combination
of geometric, mechanical and electronic properties of the imaging system.
Geometric properties include satellite orbital altitude, detector size,
and the focal length of the optical system (Simonett 1983). The sensitivity
of each detector element at the wavelength desired plus the signal-to-noise
level desired are electronic properties that determine a minimum time
for energy absorption. For a linear sensor array (pushbroom scanning),
this minimum time is translated to areal coverage by the forward velocity
of the platform (Campbell 1996:97). Usually, each detector element in
the array corresponds to a pixel in the image. Thus for a given altitude,
the width of the pixel is determined by the optics and sensor size, and
the height of the pixel is determined by the rate of forward motion. When
magnified by the ratio of the sensor altitude to the focal length of the
optics of the sensor system, IFOV is the size of the area on the ground
represented by an individual detector element (pixel, Slater 1980:27).
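The geometry described above can be summarised in a short sketch: the across-track pixel width is the detector size magnified by the altitude-to-focal-length ratio, while for a pushbroom array the along-track pixel height is set by integration time and forward velocity. All numbers below are hypothetical.

```python
def ifov_ground_size(detector_size_mm: float, altitude_m: float,
                     focal_length_mm: float) -> float:
    """Ground dimension (m) represented by one detector element (pixel):
    detector size magnified by the ratio of altitude to focal length."""
    return detector_size_mm * altitude_m / focal_length_mm

def along_track_pixel(velocity_m_s: float, integration_time_s: float) -> float:
    """Along-track pixel height (m) for a pushbroom array: the distance the
    platform moves forward during one detector integration period."""
    return velocity_m_s * integration_time_s

# Across-track width: 0.01 mm detector, 700 km altitude, 350 mm focal length.
print(ifov_ground_size(0.01, 700_000.0, 350.0))  # 20.0 (metres)

# Along-track height: ~7 km/s ground velocity, 3 ms integration time.
print(along_track_pixel(7000.0, 0.003))          # 21.0 (metres)
```

Note that only the first function involves the optics; the second depends purely on platform motion, which is why pixel width and height can differ.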
Equating IFOV with spatial resolution
can be misleading because IFOV does not include factors other than geometry—factors
that largely determine the level of detail that can be distinguished in
an image. For example, IFOV does not include characteristics of the target
(contrast with surroundings, shape of an object, colour), atmospheric
conditions, illumination, and characteristics of the interpreter (machine
or human). Factors influencing spatial resolution that are included in
IFOV can be calculated from design specifications before the system has
been built, whereas the actual spatial resolution of an image captured
by the sensor will be unique to that image. IFOV represents the best spatial
resolution possible given optimal conditions.
2.3. Imagery from Scanners versus Film from Cameras
As with aerial photography systems, the GRD
of early remote sensing scanners (electro-optical imaging systems) was
calibrated empirically (e.g. figure 1-6 in Simonett 1983), but such methods
are impractical for routine applications. Satellites are now common platforms
for collecting remote sensing data, small-scale aerial photographs are
widely available within the USA and Europe, and astronaut photographs
have become available worldwide. Straightforward comparison of resolutions
of photographs to IFOVs of scanner images is necessary for many applications.
How can spatial resolutions of data from
different sources be compared using common units? The oft-quoted 'resolution
numbers' of digital scanning systems, which are actually IFOVs, should be at least
doubled for comparison to the standard method of expressing photographic
spatial resolution as GRD, simply because of the object contrast requirement.
The following rule of thumb has been used in geographic applications to
compare spatial resolution of scanners and aerial photography (e.g. Welch
1982, Jensen 1983): the photographic GRD of a low-contrast target may be converted
to an approximate IFOV by dividing it by 2.4. Such a rule of thumb
might also be reasonably applied to astronaut photography, making it possible
to convert observed GRD to IFOV, and IFOV to an estimated GRD.
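The rule of thumb from Welch (1982) and Jensen (1983) reduces to a pair of trivial helpers; the function names and the example GRD value are ours, not from the text.

```python
def ifov_from_grd(grd_m: float, factor: float = 2.4) -> float:
    """Approximate IFOV (m) equivalent to a low-contrast photographic GRD (m)."""
    return grd_m / factor

def grd_from_ifov(ifov_m: float, factor: float = 2.4) -> float:
    """Approximate low-contrast GRD (m) implied by a scanner IFOV (m)."""
    return ifov_m * factor

# Hypothetical example: a photograph with a 72 m GRD compares to a ~30 m IFOV.
print(ifov_from_grd(72.0))  # 30.0
print(grd_from_ifov(30.0))  # 72.0
```

Keeping the 2.4 factor as a parameter makes explicit that it is an empirical geographic-applications convention, not a physical constant.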
In this manuscript, we will discuss the
spatial resolution of astronaut photographs in terms of system properties
and compute the area on the ground represented by a single pixel, the
equivalent to IFOV. To complete our treatment of spatial resolution, we
will also provide estimates of the sizes of small features identifiable
in images, for several cases where this can be readily done.