You are at The Pumpkin, a library of selected documents.
The name of this site, "The Pumpkin", alludes to the hollowed-out pumpkin that figured in the infamous Whittaker Chambers/Alger Hiss espionage case in the late 1940s as a storage spot for "special" documents. I hope these documents are equally (although of course differently) "special".
The current areas are as follows; click on a topic area to jump to that section of the index:
Table of Contents
The documents require Adobe Reader or Adobe Acrobat to view in your browser or to view after download.
For information about the author, click here.
Readers are urged to comment on these articles. To contact the author by e-mail, click here:
Click on a topic area or article title to go to that item
Suppression in a Target Rifle Aperture Sight
Music and electronic music, film, and video
The “80-20 rule”
Art and craft
The ASCII Character "Octatherp" Withdrawn
Windows users: Click on the title to view the article, or right-click and make the appropriate selection to download the article.
Mac users: Beats me.
Note that the sizes of the PDF files are given in kilobytes (kB). In accordance with the recognized overall conventions of science and engineering, a kilobyte (kB) is defined as 1000 bytes. We are aware of the widely used, but unfortunate, convention in which a kilobyte (kB) is defined as 1024 bytes. This practice is deprecated by the general standards for scientific units.
There is an internationally recognized system of distinct "binary-based" multiple prefixes for units, under which a quantity of 1024 bytes is unambiguously designated a kibibyte (KiB). However, we find no justification for stating the sizes of files here in that unit.
This article gives more information on this matter.
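As a minimal illustration of the two conventions (the helper names and the sample byte count are mine):

```python
# Contrast of the SI decimal prefix with the IEC binary prefix.

def size_in_kB(n_bytes):
    """Size in kilobytes per the SI convention: 1 kB = 1000 bytes."""
    return n_bytes / 1000

def size_in_KiB(n_bytes):
    """Size in kibibytes per the IEC binary prefix: 1 KiB = 1024 bytes."""
    return n_bytes / 1024

n = 188_416  # byte count of some hypothetical PDF file
print(f"{size_in_kB(n):.1f} kB")    # 188.4 kB
print(f"{size_in_KiB(n):.1f} KiB")  # 184.0 KiB
```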
The Canon Speedlite 550EX flash unit provides vertical head tilt for bounce flash operation, but detent positions are provided only at angles of -7°, 0°, 60°, 75°, and 90°. Other tilt angles are useful for various work. This article describes the modification of the 550EX to add further positions to the detent.
Issue 1, 2004.10.04. 5 pages, 1270 words, 5 photographs, PDF format, 184 kB
Many digital image files accommodate metadata items we may describe as annotation, human-oriented information about the image or its circumstances. In this article we describe three classes of such annotation data items. We also discuss the way several image-manipulation software packages allow us to view, add, or change annotation data.
Issue 1, 2004.05.22. 16 pages, 3300 words. PDF format, 122 kB
The Additive System of Photographic Exposure (APEX) provides for stating several factors involved in photographic exposure in logarithmic form. In this way, calculation of the “proper exposure” for a given situation may be done manually using only addition. Although the importance of that has largely faded since the time the system was developed, the scheme is still widely used in technical work relating to photographic exposure, especially the quantity “exposure value” (EV). This article explains the APEX system, and gives cautions about irregularities in its usage that are often encountered.
Issue 7, 2007.08.04. 16 pages, 3700 words. PDF format, 153 kB
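The additive bookkeeping the article describes can be sketched numerically, using the standard APEX definitions of aperture value and time value (the variable names are mine):

```python
import math

# APEX states exposure factors in logarithmic form so that the
# "proper exposure" arithmetic becomes pure addition.

def Av(f_number):
    """Aperture value: Av = 2 * log2(N)."""
    return 2 * math.log2(f_number)

def Tv(t_seconds):
    """Time value: Tv = -log2(t), with t in seconds."""
    return -math.log2(t_seconds)

# f/8 at 1/125 s: exposure value Ev = Av + Tv
ev = Av(8) + Tv(1 / 125)
print(round(ev, 2))  # 12.97
```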
When presenting lens performance data in the form of a modulation transfer function (MTF), we often see separate curves for meridional and sagittal response. This primarily relates to a lens aberration called astigmatism. In this article, we discuss astigmatism and the significance of the terms meridional and sagittal.
Issue 2, 2014.06.04. 20 pages, 4270 words, 9 figures. PDF format, 99 kB
Canon, Inc. typically expresses the accuracy tolerance of the autofocus system in their EOS digital SLR cameras as a fraction of the depth of focus. Of interest in relating this specification to its impact on actual photographic work is how this relates to depth of field. In this article, we describe that relationship, as well as the basic significance of the specification, and of depth of focus itself.
Issue 1, 2005.06.06. 9 pages, 2900 words. PDF format, 114 kB
The quantity “assumed average scene reflectance” is widely mentioned in connection with the calibration of “reflected light” photographic meters. Understanding of its significance is elusive. In this paper, we examine the actual significance of this quantity and how it plays a role in deciding upon a calibration constant for a reflected light exposure meter. We also examine the significance of various oft-mentioned values of assumed average scene reflectance, such as 18% and 12.5%, and finally discuss the use of a gray card of known reflectance to perform “incident light” metering using a reflected light meter.
Issue 1, 2005.01.30. 19 pages, 6200 words. PDF format, 159 kB
Many digital cameras (including many Canon models) offer an optional form of output file in which the data does not directly represent a “finished” image but rather is a more-or-less verbatim transcript of the digitized output of the collection of individual sensor photodetectors. This is referred to as the “raw” output. Recent middle- and upper‑tier Canon dSLR cameras offer two alternatives to this output file type, described as the “sRaw” (“small raw”) and “mRaw” (“medium raw”) output files.
In this article I first give extensive background on various underlying topics. I next discuss the principles of the sensor arrangement that leads to the use of the raw output concept. Then I briefly review the “regular” raw output, what we do with it, and the advantages of operating in that mode. Next I describe the concept of the sRaw format, its important technical details, and why it is said to be beneficial. Finally, I introduce the mRaw format, and compare it with sRaw.
Issue 3, 2014.11.17. 14 pages, 4275 words. PDF format, 88.53 kB
The Canon EOS 20D, 30D, 5D, and certain 1-series cameras have a Custom Function (Custom Function 4, "C.Fn-04" or "CF04") that allows customization of the effects of half press of the shutter release button and press of the "*" button on the execution of automatic exposure (AE) and automatic focus (AF) functions. Four settings are available. The entire scheme is complex, with the implications varying with the combination of metering, autofocus, and drive modes in effect. This document features a chart that shows the implications of the four settings for those various combinations. The chart itself is accompanied by a narrative synopsis of the implications of the four settings and a brief discussion of where and how each might be useful.
Issue 4, 2007.11.26. 4 pages, 971 words (synopsis). PDF format, 104 kB
The shutter release switch on the Canon 20D digital SLR camera can fail completely or misbehave. In this article, we describe how to replace the switch. The required disassembly procedure is described in detail, with illustrations.
Issue 3, 2016.11.21. 12 pages, 2860 words, 12 illustrations. PDF format, 431 kB
We are often interested in quantifying the maximum “output” of a photographic flash unit. We often see descriptions in terms of guide number, beam candlepower seconds (BCPS), candela-seconds, and watt-seconds (or joules). In this article we explain the different properties we may wish to describe and the various metrics and units that apply to them.
Issue 7, 2013.10.19. 10 pages, 2445 words. PDF format, 91 kB
In connection with the definition of color in such fields as computer graphics, television systems, and digital still photography, we encounter the two similar-looking, and often-confused, terms chromaticity and chrominance. In this article we illuminate the distinction between these terms.
Issue 4, 2010.10.03. 5 pages, 1390 words. PDF format, 65 kB
The JPEG and TIFF digital still image formats, along with various digital video formats, have provision for recording the chrominance information (which conveys in a special way what the lay person would describe as the “color” of the pixels) in a resolution lower than that of the image being encoded. This concept, followed for over half a century in television broadcasting, takes advantage of the properties of the human perceptual system to reduce the amount of data required to convey an acceptable full-color image of certain pixel dimensions. There are various standard “patterns” for performing this “chrominance subsampling”, and a curious and confusing notation for indicating them. In this article we discuss the concept of chrominance subsampling and describe various systems of notation used in this area.
Issue 3, 2012.01.19. 11 pages, 3300 words, 4 illustrations. PDF format, 119 kB
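As a rough sketch of how the J:a:b subsampling notation translates into a data saving, under the usual interpretation of the notation (the helper names are mine):

```python
# The reference block for J:a:b notation is J pixels wide and 2 rows
# high: 2*J luma samples, with each chroma channel keeping a + b samples.

def chroma_fraction(J, a, b):
    """Fraction of full-resolution chroma samples kept for J:a:b."""
    return (a + b) / (2 * J)

def relative_data(J, a, b):
    """Total data relative to 4:4:4 (one luma plus two chroma channels)."""
    return (1 + 2 * chroma_fraction(J, a, b)) / 3

print(relative_data(4, 4, 4))  # 1.0
print(relative_data(4, 2, 2))  # ≈ 0.667 (two-thirds the data)
print(relative_data(4, 2, 0))  # 0.5 (half the data)
```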
Digital camera sensors typically have three sets of photodetectors, with differing spectral responses. When a small area of the sensor receives a certain light, the sensors in the three sets (referred to as “channels” in this context) deliver three output values. It would be nice if this set of three values would consistently tell us the color of the light, but for the sensors we commonly encounter, it doesn’t.
That makes it essentially impossible to determine accurately, from a set of these values for a pixel of the image, what color (under the representation defined for a certain color space) to record for the pixel.
It also makes complicated the matter of describing the response of the sensor (as we might look to some testing laboratory to do for us).
In this article, I discuss these interlocking issues.
An appendix discusses the reports of sensor behavior for various digital cameras published by a well‑respected testing laboratory (DxOMark) and discusses some conundrums in them.
Issue 1, 2015.10.18. 11 pages, 5285 words, 9 figures, one appendix. PDF format, 448 kB
A Color Space is a completely‑specified scheme for describing the color of light, ordinarily using three numerical values (called coordinates). An important color space, defined by the International Commission on Illumination (CIE, the initials of its French name) is the CIE XYZ color space. It is widely used in scientific work, and color descriptions in other color spaces are often related to their representation in this space. A derivative of this color space, the CIE xyY color space, is often used as a way to graphically present the chromaticity of colors.
The XYZ color space itself has a fascinating genesis. Its nature, history, and role in both theoretical and practical color science are described in this article, along with that of its cousin, the CIE xyY color space.
The article begins with a review of several important technical concepts that are involved in the story.
Issue 1, 2010.03.21. 16 pages, 4465 words, 7 illustrations. PDF format, 241 kB.
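The derivation of xyY from XYZ mentioned above is simple enough to sketch: the chromaticity coordinates x and y are the X and Y tristimulus values normalized by their sum (the helper name and the illustrative values are mine):

```python
# xyY from XYZ: chromaticity (x, y) plus the luminance value Y.

def xyY_from_XYZ(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s, Y

# Nominal D65 white point tristimulus values (Y normalized to 1)
x, y, Y = xyY_from_XYZ(0.95047, 1.0, 1.08883)
print(round(x, 4), round(y, 4))  # 0.3127 0.329
```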
Many digital still cameras today use a “CMOS” sensor array. The name comes from the fact that this type of sensor uses the same construction, and can be fabricated with much the same technique, as the familiar CMOS (complementary metal-oxide-semiconductor) integrated circuit chip. However, the designation also implies an architecture and readout technique dramatically different from that of the other popular sensor type, the CCD (charge-coupled device). In this article we discuss the principles and operation of an important form of the CMOS sensor, the active pixel sensor (APS) form. We also describe a specific application of the design in the sensor used by Canon, Inc. in various of its digital SLR cameras.
Issue 2, 2006.06.08. 14 pages, 4160 words, 2 figures. PDF format, 279 kB
The concept of color and color models (coordinate systems for defining a specific color); color model families: tristimulus family, luminance-chromaticity family, luminance-chrominance family; gamma precompensation; details of specific color models for computer graphics, for TV transmission, for still and moving images, including the RGB family, YIQ, YUV, YCbCr, CIE L*a*b* ("CIELAB"; no, "LAB" is not short for "laboratory"), and many others; the CIE chromaticity diagram; luma and chroma; chromaticity vs. chrominance.
Issue 8, 2005.11.08. 40 pages, 13,200 words. PDF format, 242 kB
Color temperature is a concept in which the chromaticity ("color") of a specific flavor of white light is described by reference to the chromaticity of the light emitted by a blackbody radiator at a certain temperature.
This article explains the concept and gives cautions regarding some widely-held misconceptions in this area.
Issue 4, 2005.11.08. 10 pages, 2700 words. PDF format, 976 kB
This Excel spreadsheet allows the user to calculate the near and far limits of the depth of field for focus at any specified distance; the hyperfocal distance; and the near limit of the depth of field for focus at the hyperfocal distance. Separate sheets are provided allowing distances to be stated in meters, millimeters, feet, or inches.
The format allows for the particulars of several "setups" to be entered on separate lines so that the results can be easily compared.
Issue 12, 2010.06.13. XLS format, 51.5 kB
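A minimal sketch of the kind of calculation the spreadsheet performs, using the common approximate depth-of-field formulas (all lengths in millimeters; the function names and worked values are mine):

```python
def hyperfocal(f, N, c):
    """Hyperfocal distance: H = f^2 / (N * c) + f."""
    return f * f / (N * c) + f

def dof_limits(f, N, c, s):
    """Near and far limits of depth of field for focus distance s."""
    H = hyperfocal(f, N, c)
    near = H * s / (H + (s - f))
    # Beyond the hyperfocal distance the far limit is infinite
    far = H * s / (H - (s - f)) if s < H else float("inf")
    return near, far

# 50 mm lens at f/8, circle of confusion 0.03 mm, focused at 3 m
near, far = dof_limits(50, 8, 0.03, 3000)
print(f"{near:.0f} mm to {far:.0f} mm")  # about 2340 mm to 4177 mm
```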
Although the image of an object created by a camera is only “perfectly focused” when the object is at the precise distance to which the camera has been focused, objects at other distances (over a certain range) will have images of what we consider “acceptable sharpness”, an honor for which we must adopt some quantitative, if arbitrary, definition. The range of object distances for which this occurs is spoken of as the depth of field of the camera. This article discusses the traditional concept by which depth of field is defined, quantified, and calculated, and describes the rationales of two outlooks often used to develop a criterion of “acceptable sharpness”. It also discusses the way in which the film frame or format size of a camera influences depth of field. The related topics of depth of focus and out of focus blur performance are also discussed.
Issue 10, 2006.05.15. 27 pages, 7600 words. PDF format, 191 kB
In a photographic system, for a given object luminance (brightness), the image illuminance on the film or equivalent declines as we move outward from the center of the image as a result of the geometric optics involved. The result is a relative darkening of the image toward its borders. If we consider a lens having certain ideal properties, it can be shown that the decline in relative illuminance goes very nearly as the fourth power of the cosine of the angle by which the object point is off the camera axis. Here we derive this relationship. We also discuss differing results given by other authors.
Issue 4, 2007.05.01. 12 pages, 3170 words. PDF format, 146 kB
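The cosine-fourth relationship can be illustrated numerically (the worked angles are mine):

```python
import math

# Relative image illuminance for a point theta degrees off the camera
# axis, per the cos^4 law for an ideal lens.

def relative_illuminance(theta_deg):
    return math.cos(math.radians(theta_deg)) ** 4

for theta in (0, 10, 20, 30):
    print(theta, round(relative_illuminance(theta), 3))
# At 30° off axis, the image receives only about 56% of the
# on-axis illuminance.
```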
Several curious conventions are used to describe the general size of sensors in digital cameras. We often find the size of sensors of “compact” digital cameras described with a notation such as 1/1.7” (which refers to a sensor size of about 0.32” x 0.24”). In a larger camera range, we find a sensor size described as 2/3” (that sensor size is about 0.35” x 0.26”). In a larger range yet, we may find an 0.89" x 0.59" (22.5 mm x 15.0 mm) sensor described as “1.6x”. That same sensor is sometimes described as being “APS‑C” size, or as “APS size”. In this article, we describe the premises, evolution, and definitions of these various systems of notation.
Issue 1, 2008.09.26. 7 pages, 1980 words. PDF format, 106 kB
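A sketch of the “1.6x” designation mentioned above: the ratio of the full-frame 35-mm diagonal to the diagonal of the sensor in question (the helper names are mine):

```python
import math

def diagonal(w_mm, h_mm):
    return math.hypot(w_mm, h_mm)

FULL_FRAME_DIAGONAL = diagonal(36.0, 24.0)  # the 36 x 24 mm 35-mm frame

def crop_factor(w_mm, h_mm):
    return FULL_FRAME_DIAGONAL / diagonal(w_mm, h_mm)

# The 22.5 mm x 15.0 mm sensor described in the article
print(round(crop_factor(22.5, 15.0), 2))  # 1.6
```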
In many types of technical work it is necessary to quantify the “potency” of light. The matter is complicated by the fact that there are many distinct circumstances in which the potency of light is a consideration, each having its own physical concept, dimensionality, and units of measure.
In this article we describe these circumstances and the way in which the potency of light is quantified for each.
Issue 3, 2013.08.19. 16 pages, 3500 words. PDF format, 108 kB
The modulation transfer function (MTF) of a photographic lens tells us how effectively the lens transfers a luminance variation in the scene (by which detail is conveyed) onto the focal plane, and in particular how that varies with spatial frequency (which we can think of as the “fineness” of the detail). This function indicates, objectively, the “resolving potential” of the lens.
We often read of the MTF being determined using a slant edge target test. In this article we review the concept of the MTF and the principles of this testing technique.
Issue 3, 2013.12.01. 11 pages, 3315 words, two illustrations. PDF format, 87 kB
We look to a digital camera sensor to discern the color of the light landing on it and report that in some system of “color coordinates”. It does this by essentially emulating the mechanism used by the human eye to discern color. But for practical reasons, that emulation is imperfect. In most practical sensor designs, two “instances” of light having different spectrums but nevertheless having the same color may be reported by the sensor as having different colors.
The story of this involves many concepts. Near its end, we see the motivations for, and the nature of, certain compromises in practical sensor design, and we see how their adverse effects are mitigated.
Language used to describe the color response properties of a sensor is explained. Included is a discussion of the role in all this of the concept of “white balance color correction”. Extensive background review is given to several areas pivotal to the overall presentation, such as the nature of color, the concept of color spaces, and so forth.
Issue 1.1, 2010.03.15. 27 pages, 8240 words. PDF format, 188 kB
If a flat glass plate is inserted, perpendicular to the optical axis, into the path of rays from a lens heading to form an image, its effect is to shift the point of convergence of the rays (the point at which an image is formed) away from the lens by an amount depending on the thickness of the plate and its index of refraction. In this article, we derive the expression for the amount of this shift.
Issue 1, 2007.07.13. 4 pages, 920 words, two figures. PDF format, 107 kB
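A small numerical sketch, assuming the standard paraxial result for the shift, d = t(1 − 1/n), for plate thickness t and index of refraction n:

```python
# Shift of the image convergence point caused by a flat glass plate
# in the converging ray path (standard paraxial approximation).

def focus_shift(t_mm, n):
    return t_mm * (1 - 1 / n)

# A 2 mm plate of ordinary glass (n ≈ 1.5)
print(round(focus_shift(2.0, 1.5), 2))  # 0.67 mm of shift
```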
In digital photography, a convention called “35‑mm equivalent focal length” is used to allow comparison, across cameras of different format size, of the field of view implications of the use of a lens of a certain focal length on a particular camera.
This article describes the issues that are involved and the working of this convention.
Issue 1, 2009.03.25. 4 pages, 1200 words. PDF format, 78.9 kB
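A minimal sketch of the convention (the helper name is mine): the equivalent focal length is the actual focal length scaled by the format's "crop factor" relative to the 35-mm frame.

```python
def equivalent_focal_length(f_mm, crop_factor):
    """35-mm equivalent focal length for a given format's crop factor."""
    return f_mm * crop_factor

# A 50 mm lens on a 1.6x-factor body frames like an 80 mm lens
# on a full-frame 35-mm camera
print(equivalent_focal_length(50, 1.6))  # 80.0
```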
The Canon Wireless Flash System allows freestanding Canon Speedlite flash units remote from a Canon EOS-type SLR camera to be controlled and triggered by optical signals transmitted from a master Speedlite flash unit or Speedlite flash control transmitter located at the camera. The system includes flexible provisions for adjusting the relative contribution to exposure of different flash units, including the topic often spoken of as “flash ratio setting”. In this article, we describe the system and its exposure control provisions.
Issue 1, 2006.06.12, 8 pages, PDF format, 80 kB
We often hear that “a standard photographic exposure meter [or automatic exposure system] is calibrated to a reflectance of 18% [or maybe 13%, or some other nearby number].” Sometimes the word “gray” appears in the description. What does this mean, and why the large variation in the numerical value? In this article, we look at several relevant ISO standards and see how a “standard exposure meter calibration” is implied by their interaction. In an appendix, we look into the calibration situation for Canon digital SLR cameras, as inferred from a test recommended by the manufacturer. We also discuss the related issue of incident light metering, including by way of the use of a “gray card”. A summary is included.
Issue 1, 2006.05.06. 23 pages, 6100 words. PDF format, 162 kB
The Canon EOS-300D (Digital Rebel) digital camera can employ a number of different exposure metering modes for both ambient and (where applicable) flash components of exposure. The user has no free choice of the mode in effect. Rather, it is chosen based on the user's choice of shooting mode and other operational options. This chart shows which modes come into play under what circumstances.
Issue 6, 2004.09.17. 1 page, PDF format, 40 kB
The field of view of a camera refers to the region in three dimensional space which is taken in by the camera’s view. It is basically an angular property. There are a number of ways in which its extent can be numerically stated. In this article we discuss the significance of field of view, various approaches to its quantification, and of terms used in that connection.
Issue 2, 2006.02.02. 11 pages, PDF format, 114 kB
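One common quantification discussed in the article is the angle subtended across a frame dimension; a sketch under a simple lens model with a distant subject (the names and worked values are mine):

```python
import math

def field_of_view_deg(d_mm, f_mm):
    """Angular field of view across frame dimension d for focal length f."""
    return math.degrees(2 * math.atan(d_mm / (2 * f_mm)))

# Horizontal field of view of a 50 mm lens on a 36 mm-wide frame
print(round(field_of_view_deg(36, 50), 1))  # 39.6
```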
In photography, the term “format size” describes the actual physical size of the image captured by the film frame, digital sensor, or equivalent. The widespread popularity of digital photography has brought to the user community a plethora of different format sizes, most of them unique to digital photography. In this article, we review the effect of format size on a number of camera behavior and performance issues. We also debunk various misconceptions that circulate in this area, and discuss terminology used to identify a significant numerical factor.
Issue 2, 2005.09.08. 17 pages, 7400 words. PDF format, 133 kB
A synopsis of this article is available here:
This is a synopsis of the principal topics covered by the tutorial article, Format Size in Digital Photography. Readers should be aware that, of necessity, this synopsis overlooks many important “if’s, and’s, and but’s”, and I have taken certain liberties with technical precision in the interest of conciseness. Readers wishing additional, more detailed, or more rigorous information on these topics should consult the article proper.
Issue 1, 2005.09.08. 3 pages, 1160 words. PDF format, 63 kB
The spreadsheet application Microsoft Excel includes a tool that will calculate the discrete Fourier transform (DFT) or its inverse for a set of data. Users not familiar with digital signal processing may find it difficult to understand the scales used for the input and output data for this process. In this article we review the concept of the discrete Fourier transform, summarize how the Excel tool is used, and explain the meaning of the scales for the data.
Issue 1, 2009.03.04. 11 pages, 3570 words, 5 figures. PDF format, 128 kB
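The scale conventions in question can be sketched with a direct DFT computation standing in for Excel's tool: for N samples taken at rate fs, output bin k corresponds to frequency k·fs/N, and the magnitudes come out scaled by N (the sample signal is mine):

```python
import cmath
import math

def dft(x):
    """Direct discrete Fourier transform (no scaling applied)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

fs, N = 1000.0, 64  # sample rate (Hz) and sample count
# A 125 Hz tone of amplitude 3 -- it falls exactly on a bin here
x = [3.0 * math.cos(2 * math.pi * 125.0 * n / fs) for n in range(N)]

X = dft(x)
mags = [abs(v) for v in X[:N // 2]]
k = mags.index(max(mags))  # strongest positive-frequency bin
print(k * fs / N)          # 125.0 -- the tone's frequency
print(2 * abs(X[k]) / N)   # ≈ 3.0 -- the tone's amplitude
```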
The international standard for photographic exposure meters, ISO 2720-1974, contains an inexplicable gaffe, leading to a curious situation regarding the exposure meter calibration constants, K and C. This article tells the story.
Issue 4, 2014.12.06. 7 pages, 2095 words, one figure. PDF format, 108 kB
Incident light exposure metering is a useful technique for planning photographic exposure in many situations. An important genre of incident light exposure meter uses a hemispherical receptor. While there are various facile explanations of what this implementation does and why that is beneficial, the underlying technical concepts are elusive.
In this article I give a concise overview of the basic operation and premises of this type of meter. Some basic background in the concept of incident‑light exposure metering is provided.
[This article is largely an excerpt from the article "The Secret Life of Photographic Exposure Metering", which provides more technical detail on the matter.]
Issue 2, 2014.08.05. 9 pages, 2690 words. PDF format, 123 kB.
The color models known as HSV and HSL (and specific color spaces based on them) are intended to provide ways of describing color that have a broad relationship to the easily‑grasped color attributes hue, saturation, and relative luminance. Almost invariably when an HSV color space is described, it is mentioned that “the color space can be described by a hexcone (a homey synonym for hexagonal pyramid)”. Similarly, we hear that the HSL color space can be “described by a bi‑hexcone” (meaning two hexagonal pyramids joined at their bases).
But these descriptions are paradoxical—the gamuts of these color spaces, “plotted” in their inherent cylindrical coordinate systems, are in fact full circular cylinders. So what might be meant by the allusion to these other, tapered solid figures?
In this article, we describe these two color models and reveal the rationales by which these unusual geometric figures are invoked as representing the associated color spaces.
The article begins with a review of the principles of color, color models, color spaces, and gamuts.
An appendix describes a “different” HSL color model used by Canon in some of its image manipulation software.
Issue 3, 2008.05.12. 30 pages, 8300 words, 10 figures. PDF format, 788 kB
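For readers who want to experiment, Python's standard colorsys module implements both models described above; note that it calls HSL "HLS" and orders those components hue, lightness, saturation.

```python
import colorsys

# Pure red: hue 0, full saturation, full value
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0)
# In HLS the same color sits at lightness 0.5
print(colorsys.rgb_to_hls(1.0, 0.0, 0.0))  # (0.0, 0.5, 1.0)
```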
The dynamic range of a digital camera can be simplistically defined as the ratio of the maximum and minimum luminance that a camera can “capture” in a single exposure. But when we try to quantify this property, we find that the establishment of an explicit definition is much more complicated than it seems on the surface. International Standard ISO 15739-2003 gives an explicit definition of dynamic range for a digital still camera and a procedure for determining it. This article explains the basic concept of dynamic range and discusses some of the complications in defining it. Then, the definition given by ISO 15739-2003 is discussed in detail.
Issue 2, 2008.02.06. 15 pages, 4580 words. PDF format, 129 kB
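The simplistic ratio definition above is often re-expressed in "stops" (a base-2 logarithm); a one-line sketch (the names and worked values are mine):

```python
import math

def dynamic_range_stops(max_luminance, min_luminance):
    """Dynamic range as a luminance ratio, expressed in stops."""
    return math.log2(max_luminance / min_luminance)

# A 1000:1 captured luminance ratio is about 10 stops
print(round(dynamic_range_stops(1000, 1), 1))  # 10.0
```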
With the Speedlite 580EX flash unit, Canon introduced an “image size compensation” feature intended to optimize the automatic beamwidth control (the automatic “head zoom” functionality) when the flash units are used with EOS digital SLR cameras having different image sizes. However, an anomaly in the scheme put into effect an insufficient beamwidth for lenses of shorter focal lengths.
In this article, we review the overall matter of beamwidth control, the Canon image size compensation scheme, the anomaly in the initial 580EX and its impact, and how the scheme works, without anomaly, in newer production of the 580EX and in later models. Charts are included to present the tedious details.
An appendix discusses the operation of the beamwidth control system when a Canon Speedlite 580EX II is used on a Canon Powershot SX20 IS compact camera.
Issue 2, 2011.09.01 17 pages, 4700 words, one figure. PDF format, 121 kB
A technique known as JPEG is widely used for the compression of digital data representing photographic still images. In this article, we explain how this technique works. Appendixes give tutorial insight into several technical concepts that are involved, including the Discrete Cosine Transform (DCT), Huffman coding, and run-length encoding, and discuss the difference between reversible ("lossless") and non-reversible ("lossy") compression.
Issue 1, 2003.08.16. 19 pages, 5740 words. PDF format, 232 kB
The La Crosse BC-900 is a charger, tester, and conditioner for AA and AAA size rechargeable cells of the Ni-Cd and Ni-MH types. It offers extremely flexible management of its numerous functions. In this article, we review the features of this unit and give instructions for its operation.
Issue 1.1, 2006.06.25. 6 pages, 2200 words. PDF format, 78 kB
In large‑format film cameras (and smaller-format cameras whose design is derived from large‑format cameras) there are three principal types of back used: the spring (Graphic) back, the Graflex back, and the Graflok back. This article describes these three types (with illustrations) and gives insights into their implications.
Issue 1, 2007.03.31. 9 pages, 2460 words. PDF format, 225 kB
In discussions of photographic lenses, we often hear of the importance of the principal points and nodal points of a lens. This article describes what these are and why they are significant.
Issue 3, 2004.01.21. 5 pages, 1315 words. PDF format, 53 kB
Often in connection with older cameras we see or hear of the markings "M", "X", and "V" on a lever on the shutter, or may see or hear of the flash sync connector on a camera labeled as "X". In this article, we explain the history and significance of these markings. In the process, we will visit other parts of the alphabet, including "F", "S", and "PC".
Issue 2, 2004.01.17. 4 pages, 1300 words. PDF format, 53 kB
Extension tubes are devices that are placed between an interchangeable camera lens and the camera body to shift the focusing range of the camera to embrace shorter subject distances. The object is generally to achieve a greater image magnification than can be had with the lens in its normal situation. In this article we review the optical principles involved with the use of extension tubes, and give various equations useful in their application.
Issue 1, 2013.10.10. 11 pages, 2735 words. PDF format, 78 kB
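A sketch of one standard thin-lens relation involved: with the lens itself set at infinity, adding a tube of length E gives an image magnification of E/f (the helper name is mine):

```python
def magnification(extension_mm, f_mm):
    """Magnification from an extension tube, lens set at infinity."""
    return extension_mm / f_mm

# A 25 mm tube behind a 50 mm lens gives half life-size
print(magnification(25, 50))  # 0.5
```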
The JPEG image encoding system provides for a representation of a digital image in far fewer bits than would be required by a "straightforward" representation. The system comprises many ingenious stages. In this article, I describe the working of these stages in considerable detail.
Issue 2, 2014.06.07, 20 pages, 6525 words. PDF format, 246 kB.
The color of light is defined wholly in terms of visual perception. The color of an instance of light perceived by the human visual system—by definition, that is the color of the light—is determined by the spectrum of the light. Any particular spectrum will “have” a specific color. But there can be many spectrums that will have the same color, a situation called metamerism.
Ideally, a digital camera imaging system would honor this, “recording” the same color for light of any spectrum having the same color. But various compromises in sensor design make our cameras fall short of this; they exhibit some degree of metameric error.
In this article we investigate human perception of color, the nature of metamerism, the operation of digital camera sensors, and why metameric error exists. We also discuss ways in which the impact of metameric error can be mitigated, and the way in which the residual metameric performance of a digital camera sensor can be “scored”.
Issue 1, 2010.03.22. 12 pages, 3575 words. PDF format, 106 kB
In many areas of photographic practice, we are concerned with the difference between two chromaticity values, especially in connection with “white balance color correction” matters. An example would be the departure of the recorded image of a “white” object from the reference white chromaticity of the color space in use, or the departure from “neutrality” of the reflective chromaticity of a “neutral target” (gray card).
In this article, the author suggests the use of the metric “du’v’” as a single-valued measure of the degree of a chromaticity difference. The metric is defined and a rationale given for its use. An appendix defines the way this metric can be calculated from the sRGB coordinates R, G, and B of the two colors whose chromaticity we wish to compare. An available spreadsheet is also described that can be used to perform this determination.
Issue 1, 2008.02.08. 9 pages, 1850 words, one figure. PDF format, 219 kB
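A sketch of the standard CIE 1976 u′v′ relations and the resulting du′v′ metric, computed here from CIE xy chromaticities rather than from sRGB values (the helper names and white-point values are mine):

```python
import math

def uv_prime(x, y):
    """CIE 1976 UCS coordinates u', v' from CIE xy chromaticity."""
    d = -2 * x + 12 * y + 3
    return 4 * x / d, 9 * y / d

def du_v(xy1, xy2):
    """Euclidean distance between two chromaticities in the u'v' plane."""
    u1, v1 = uv_prime(*xy1)
    u2, v2 = uv_prime(*xy2)
    return math.hypot(u2 - u1, v2 - v1)

# Chromaticity difference between the D65 and D50 white points
print(round(du_v((0.3127, 0.3290), (0.3457, 0.3585)), 4))  # ≈ 0.0228
```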
Many contemporary digital still cameras use a mosaic sensor array (often called a color filter array, or CFA, or a Bayer array) to develop a digital color image of a scene. In this article we describe this device, the principles of its operation, and its implications for the nature of digital camera images.
Issue 1, 2003.08.04. 7 pages, 2200 words, PDF format, 63 kB
CIPA, the technical association of the Japanese camera industry, introduced in 2006 two new measures of the “sensitivity” of a digital camera, recommended for use instead of the ISO speed rating to date used for the purpose. They are the standard output sensitivity (SOS) and recommended exposure index (REI). These new measures are also defined, as alternatives to the ISO speed, by the 2006 version of the relevant ISO standard itself. In this article, we discuss these two new measures and their significance. The article begins with background information on related topics encountered in the discussions. A summary is included.
Issue 2, 2007.08.30. 13 pages, 3900 words. PDF format, 107 kB
In the late 1930’s, Donald W. Norwood introduced a new principle of incident light photographic exposure metering in which a translucent hemispherical shell (a “dome”) collects the ambient light incident on the scene for measurement by a photoelectric cell. It was found that exposure meters following this principle could, with a single measurement, consistently develop a photographic exposure recommendation that would be highly appropriate over a range of lighting situations, especially those of interest in cinematography.
Today, the preponderance of “serious” incident light photographic exposure meters exploit Norwood’s principle.
But it is not at all obvious, even after considerable study, just how and why meters following Norwood’s principle give this widely‑acclaimed performance. In this article, we will look “under the dome” and see just what is going on.
Background is given in various pertinent aspects of the topic of photographic exposure metering. An appendix gives an analysis and critique of Norwood’s seminal paper on this system, and another gives the derivation of the theoretical directivity of a meter with a hemispherical receptor.
Issue 2, 2016.10.04. 22 pages, two appendixes, 5535 words. PDF format, 370 kB
In photography, the term “field of view crop factor” is sometimes used to describe the ratio of the format size of a full-frame 35-mm camera to the format size of a particular camera of interest. This article describes the underlying technical concept and why we are interested in the factor itself. The author also suggests that the term is not appropriate, and gives his reasons for that opinion, including a critical examination of the rationale often offered by the term’s proponents.
Issue 5, 2006.05.06. 4 pages, 1500 words. PDF format, 73 kB
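The factor itself is simple arithmetic: the ratio of the full-frame 35-mm format diagonal (about 43.3 mm) to the diagonal of the format in question. A small Python illustration (the function name is mine):

```python
import math

def crop_factor(width_mm, height_mm):
    """Ratio of the full-frame 35-mm diagonal to this format's diagonal."""
    full_frame_diag = math.hypot(36.0, 24.0)   # about 43.27 mm
    return full_frame_diag / math.hypot(width_mm, height_mm)

# A Canon APS-C sensor, nominally 22.2 mm x 14.8 mm:
aps_c = crop_factor(22.2, 14.8)   # about 1.62
```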
This Excel spreadsheet allows the user to determine the amount of blurring on the image (in terms of the actual diameter of the circle of confusion) for an object at a certain distance for focus at a different distance. Separate sheets are provided allowing distances to be stated in meters, millimeters, feet, or inches.
The format allows for the particulars of several "setups" to be entered on separate lines so that the results can be easily compared.
Issue 5, 2006.05.13. XLS format, 38.9 kB
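The spreadsheet's core calculation can be approximated from thin-lens geometry. The following Python sketch is my own reconstruction under the thin-lens model, not the spreadsheet's actual formulas:

```python
def blur_diameter_mm(f_mm, f_number, focus_dist_mm, obj_dist_mm):
    """Diameter of the blur circle on the sensor (thin-lens model).

    The lens is focused at focus_dist_mm; the object of interest is
    at obj_dist_mm (both measured from the lens).
    """
    aperture = f_mm / f_number                                   # pupil diameter
    v_sensor = focus_dist_mm * f_mm / (focus_dist_mm - f_mm)     # sensor plane
    v_object = obj_dist_mm * f_mm / (obj_dist_mm - f_mm)         # sharp-image plane
    # Similar triangles: the converging cone has diameter `aperture`
    # at the lens and zero at v_object; evaluate it at the sensor plane.
    return aperture * abs(v_sensor - v_object) / v_object

# 50 mm lens at f/2.8, focused at 2 m, object at 4 m:
blur = blur_diameter_mm(50.0, 2.8, 2000.0, 4000.0)   # about 0.23 mm
```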
The Packard Ideal shutter is a behind‑the‑lens shutter widely used in large‑format view cameras. It was introduced in the late 1800s and is still made and used today. This article describes the shutter, its features, and how it operates. An appendix describes, with illustrations, the detailed working of the shutter mechanism.
Issue 2, 2007.01.06. 15 pages, 4100 words. PDF format, 406 kB
Parallax Suppression in a Target Rifle Aperture Sight
With an aperture sight, often used on target rifles, the shooter looks through a small hole in a metal plate that is mounted on the rear of the rifle near the shooter’s eye, observes a front sight which is typically a small vertical post located near the front of the barrel, and adjusts the aim of the rifle until the top of that post is located on the desired location on the target. Additionally, users of these sights are always urged to position their eye so that the tip of the post is positioned precisely in the center of the field of view observed through the rear sight aperture. This is done in the interest of eliminating parallax shift between the front sight and target which would lead to uncertainty in aiming.
But we find that, as we look through such a sight and move our eye from side to side (with the aiming point of the rifle fixed), we see essentially no change in the relative position of the tip of the front sight and the target. The expected effect of parallax shift does not appear. This suggests that sight alignment, in the traditional sense, does not affect the point of aim.
In this article we look into this phenomenon. In the process, we will encounter various related matters in the fields of photographic optics and human vision. The results of both “live fire” and optical model tests are given and discussed.
An appendix presents a ray tracing exercise that demonstrates the phenomenon and its source.
Issue 4, 2007.05.30. 23 pages, 6140 words, numerous illustrations. PDF format, 311 kB
The following article is a companion to this one.
In the use of the common aperture sight on rifles in precision target shooting, common wisdom emphasizes the necessity for the shooter to carefully maintain his eye position so the tip of the front sight post appears centered within the circular field of view of the rear sight aperture. Otherwise, goes the wisdom, parallax shift will occur, which will disrupt the accuracy of the shooter’s aim.
A recent article by Robert J. Burdge and Douglas A. Kerr, P.E., points out that this parallax shift doesn’t seem to really occur in practice, and advances an explanation in terms of basic optical theory.
Subsequently, Kerr conducted tests in which the human eye is replaced by a digital camera in the interest of actually demonstrating the behavior involved. This article reports on these tests and discusses the results.
Issue 2, 2007.05.30. 12 pages, 3090 words, numerous illustrations. PDF format, 226 kB
This article is a companion to one listed just above.
This article gives insight into a number of aspects of the concept of pixel resolution in digital photographic practice. Topics include: What do we mean by resolution, and what is pixel resolution? What is the resolution indicator in a digital image file, and what does it mean? What are resizing, resampling, and interpolation? What do publishers mean by their resolution requirements for submitted digital photographs? What is the difference between a pixel and a dot? How do we accommodate the resolution appetite of a printer?
Issue 1, 2004.11.09. 13 pages, 4300 words. PDF format, 110 kB
We often hear it said that photographic exposure meters (including those forming part of camera automatic exposure systems) are “calibrated to 18% reflectance” (or maybe 12.8% or thereabouts). What does this mean? In this article, we discuss what this actually means in digital photography. We also discuss the closely-related matter of “18% gray card metering”.
Issue 1, 2004.07.19. 7 pages, 1800 words. PDF format, 98 kB
During the history of photography, few areas have been as fertile as that of photographic exposure meters. Hundreds of firms spawned thousands of models. Three historically important families of exposure meters have recently come under study here: the GE "DW" series, the "Weston Master" series, and the "Norwood Director" series. Each of them is discussed in some detail in separate articles, listed below.
General Electric made a line of photographic exposure meters with model numbers beginning with "DW" from 1937 through the 1950s. They were widely used, especially by amateur photographers. In this article, I describe the evolution of this line, pointing out interesting and curious technical wrinkles that emerged along the way.
Issue 2, 2014.10.28. 14 pages, 3195 words, 6 figures, one appendix. PDF format, 241 kB.
An important family of photographic exposure meters consists of meters which in the earlier days of the family had the name "Norwood Director". The family has a fascinating history with respect to the meters themselves and with respect to the various people and firms involved. In this article I try to paint the overall picture of this family and its story, and describe most of the meters in it. Basic background on the underlying technical theory of incident light exposure metering and related issues, and some specialized data, are given in appendixes.
Issue 5, 2014.12.07. 43 pages, 11,200 words, 24 figures, 6 appendixes. PDF format, 817 kB.
An important family of photographic exposure meters is the Weston Master family. The series has a fascinating history with respect to the meters themselves and with respect to the firms involved. In this article I try to paint the overall picture of this family and its story, and describe the meters in it (actually starting with one that is just before the family proper). Basic background on the underlying technical theory of incident light exposure metering and other specialized technical information is given in appendixes.
Issue 3, 2014.11.03. 27 pages, 6388 words, 21 figures, 3 appendixes. PDF format, 891 kB.
Photography deals with light, and we are concerned in many technical ways with light and its behavior. Especially in matters of exposure, exposure metering, and the like, our discussions often involve photometry, the discipline of describing the “strength” of light. Our discussions are often hampered by inadequate or incorrect understandings of the different concepts of the strength of light and the terms, quantities and units that are involved. This article provides a concise review of photometry as it applies to photographic matters. It also gives an introduction into how the f/number of a lens affects photographic exposure.
Issue 2, 2007.12.25. 12 pages, 3600 words. PDF format, 111 kB
Light is a form of electromagnetic radiation, and has the property of direction of polarization. This article discusses the concept of polarization and some of its implications. Both plane (linear) and circular polarization are covered. The operation of polarizers - optical components that manipulate the polarization of light - is described, and some applications discussed. Finally, a brief introduction is given to the use of polarizers in photography.
Issue 4, 2004.09.23. 9 pages, 2600 words, PDF format, 106 kB
Many single lens reflex (SLR) cameras are equipped with an arrangement in the viewfinder known as a split image focusing aid, intended to facilitate accurate visual determination of the point of proper focus when focusing manually. In this article, we explain the principle by which this arrangement operates. We also describe another related viewfinder manual focusing aid, the microprism field, and discuss the application of the split image principle to one type of automatic focus detection system, the phase comparison system.
Issue 5, 2005.08.28. 17 pages, 4700 words, 16 figures. PDF format, 151 kB
Many modern cameras, both film and digital, offer (usually as their basic mode of operation) a “programmed automatic exposure” mode. In this mode the camera, after measuring the luminance of the scene, sets both aperture and shutter speed with no further intervention on the part of the photographer. This article discusses the details of this operation as found in the Canon EOS 10D, 20D, 300D (Digital Rebel), and 350D (Digital Rebel XT) digital single-lens reflex (SLR) cameras. It also discusses the related matters of exposure compensation (exposure bias) and program shift, tools that allow the photographer to “tweak” the programmed automatic exposure control mode to deal with the special needs of a particular shot.
Issue 1, 2005.07.04. 13 pages, 4000 words, 4 figures. PDF format, 133 kB
In cartography (mapmaking) and photography, the term projection refers to the process of mapping an array of points in three‑dimensional space to locations on a flat (or flattenable) two‑dimensional surface. A projection is a particular algorithm for doing so. Any photographic process involves projection. Still, we don't often speak of projection in connection with “ordinary” photography, but we do often hear of the concept in connection with panoramic photography (photography with a large field of view). Because of the differing interests in cartography and photography, certain statements about the properties of a particular projection, applicable to one of these contexts, may not apply to the other. This is often a cause of bewilderment to those hoping to understand the technical matters involved.
In this article we introduce the concept of projection, clarify the differing outlooks of the cartographic and photographic contexts, and illustrate the implications of the use of four important projections on a certain representative situation in photographic imaging.
The article is not a treatise on panoramic photography or on the complicated issue of choosing a projection to be used as the premise for the preparation of panoramic images.
Issue 3, 2009.02.23. 22 pages, 6230 words, 10 figures. PDF format, 188 kB
When doing panoramic photography with a conventional camera, multiple, slightly‑overlapping shots of the overall scene are taken by pivoting the camera in steps, and the images are joined to make a single large-scope image. In order to be able to properly join the images, we must avoid parallax shift between them. To do so, the camera must be pivoted about the camera’s center of perspective, which turns out to be the center of the entrance pupil of the lens.
It is widely, but incorrectly, said that the proper pivot point is “the nodal point” of the lens.
In this article we discuss the optical principles involved, and demonstrate why the center of the entrance pupil is the proper pivot point.
Issue 3, 2008.02.23. 15 pages, 4975 words, 7 figures. PDF format, 315 kB
In a digital camera, when we take the voltage output of an individual sensor element and convert it to digital form, a phenomenon called quantizing (often called quantization) takes place. As a result, the digital representation does not exactly represent the voltage. At the end of the digital image chain, this results in a discrepancy between the reconstructed image and the original image.
In the area of waveform‑based digital representation of speech waveforms, we sometimes characterize this discrepancy between the original data and the reconstructed data as a special kind of pseudo‑noise, quantizing noise.
Some workers suggest that this concept is pertinent to the impact of quantizing error on digital images, and that quantizing noise should be reckoned among the ingredients of noise in a digital imaging system, a notion with which the author disagrees. In any case, the process of quantizing does have an effect on how noise already present in the sensor voltage is seen in the digital representation. In this article, both these matters are examined.
An appendix discusses a similar concept followed in the field of digital video engineering.
Issue 1, 2009.02.02. 18 pages, 6100 words. PDF format, 113 kB
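The basic behavior of a uniform quantizer is easy to demonstrate numerically. This Python sketch (mine, not the article's) checks the textbook result that, for an input uniformly distributed over the quantizer's range, no single error exceeds half a step and the RMS quantizing error is step/√12:

```python
import math
import random

def quantize(v, step):
    """Round v to the nearest quantizing level (a uniform mid-tread quantizer)."""
    return round(v / step) * step

# Empirical check with a uniformly distributed input
random.seed(1)
step = 0.01
samples = [random.uniform(0.0, 1.0) for _ in range(100_000)]
errors = [quantize(v, step) - v for v in samples]
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
# `rms` comes out very close to step / sqrt(12), about 0.00289
```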
Various suppliers provide alternative focusing screens for the Canon EOS 20D single lens reflex (SLR) camera, including such focusing aids as a split-image prism or a microprism field. This article gives step-by-step instructions, with illustrations, for replacing the focusing screen on the 20D digital SLR.
Issue 2, 2005.09.13. 8 pages, 2260 words, 7 figures. PDF format, 234 kB
The process of capturing a photographic image in digital form with a fixed number of pixels is equivalent to the sampling used in digital audio practice. This process can introduce a type of corruption in the reconstructed image known as aliasing. This article explains some fundamental principles of the sampling process, including the impact of the Nyquist-Shannon sampling theorem. The cause and nature of aliasing are described, along with its prevention by means of a pre-sampling low pass filter (anti-aliasing filter). The blur filter, an implementation of the low pass filter in a digital camera setting, is explained. Also discussed is the concept of the digital anti-aliasing filter.
Issue 3, 2003.11.14. 4 pages, 1400 words. PDF format, 56 kB
In most methods of representing a continuous “function” (such as an audio or video waveform or a photographic image) in digital form the first step is to capture the value of the signal at regular intervals, a process called sampling. This process can be viewed as a form of (amplitude) modulation. That outlook assists us in understanding how the reconstruction of the original function from its samples works, and helps us understand the phenomenon of aliasing, which corrupts the reconstructed function in cases where the original function does not follow certain constraints. An appendix explains the concept of the power spectral density (PSD) plot, which will be encountered extensively in the article.
Issue 1, 2011.11.21. 18 pages, 4700 words. PDF format, 124 kB
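The aliasing phenomenon the article analyzes can be shown in a few lines. In this Python sketch (an illustration of the general principle, not code from the article), a 900 Hz cosine sampled at 1000 samples per second produces exactly the same sample values as a 100 Hz cosine, so the two are indistinguishable after sampling:

```python
import math

fs = 1000.0   # sampling rate, samples per second
n = range(32)

def sample_cosine(freq_hz):
    """Sample a cosine of the given frequency at rate fs."""
    return [math.cos(2 * math.pi * freq_hz * k / fs) for k in n]

# 900 Hz exceeds the Nyquist frequency (fs / 2 = 500 Hz), so it
# "aliases" down to fs - 900 = 100 Hz.
hi = sample_cosine(900.0)
lo = sample_cosine(100.0)
assert all(abs(a - b) < 1e-9 for a, b in zip(hi, lo))
```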
We can tilt the lens of a camera so that the plane containing objects in perfect focus need not be parallel to the film plane, a desirable situation for many types of work, including architectural photography. It is often said that the required relationship between lens, film, and the desired plane of object focus is prescribed by “the Scheimpflug principle”. In fact, two criteria must be satisfied, the second of which may be stated in terms of the classical Gaussian focus equation. However, we may instead use as the additional criterion a second principle also articulated by Scheimpflug. In this article we describe this whole situation along with these two principles of Scheimpflug, and we show the equivalence of this additional principle of Scheimpflug to the Gaussian focus equation.
Issue 2, 2006.07.16. 18 pages, 3990 words. PDF format, 132 kB
In photographic exposure metering we make some photometric measurement of the scene or its environment, from which, combined with knowledge or presumption of the sensitivity of the film or digital sensor, we determine a photographic exposure (shutter speed and aperture) that we hope will fulfill our "exposure strategy".
There are many subtleties to the concept in its various forms, and many tricky details in its execution. There are many misconceptions and misunderstandings afoot about the area.
In this article I describe the principles of the various types of exposure metering and their implications and try to rectify some of the misunderstandings. The article is not a treatise on exposure metering practice.
Extensive background is given in many matters that are predicates of the process. The presentation is somewhat technically detailed, and some basic algebra is involved here and there. The various concepts are presented in layers, a given layer possibly being visited more than once, so that at any stage the reader will hopefully have all the necessary background to follow the presentation.
Issue 7, 2014.12.02. 41 pages, 10,800 words. 2 appendixes, 13 figures and photographs. PDF format, 512 kB.
In the 1950s, several camera and shutter manufacturers adopted systems for setting camera exposure through a single number that reflected the joint effect of both shutter speed and aperture. This quantity eventually came to be known as exposure value (symbolized Ev). In this article we see what this is all about and how it has been supported in practice.
Issue 2, 2007.05.15. 6 pages, 1754 words, one illustration. PDF format, 106 kB
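The definition is compact enough to state here: Ev = log2(N²/t), where N is the f-number and t the shutter time in seconds, so that f/1 at 1 second gives Ev 0. A small Python illustration (the function name is mine):

```python
import math

def exposure_value(f_number, shutter_s):
    """Ev = log2(N^2 / t); Ev 0 corresponds to f/1 at 1 second."""
    return math.log2(f_number ** 2 / shutter_s)

# Equivalent exposures land on (nearly) the same Ev:
ev_a = exposure_value(8.0, 1 / 125)    # about 13.0
ev_b = exposure_value(5.6, 1 / 250)    # about 12.9
```

(The values differ slightly only because the marked f-numbers and shutter speeds are themselves rounded from exact powers of √2 and 2.)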
In considering the behavior of a camera, we may be concerned with how a shift of the lens-to-focal-plane distance affects the distance to the object plane of best focus. In this article, we show how this can be calculated. The derivation of the relationship is given in the appendix, an opportunity for the reader to brush up on his freshman calculus.
Issue 1, 2004.10.20. 7 pages, 1230 words. PDF format, 103 kB
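The relationship follows from the Gaussian focus equation, 1/u + 1/v = 1/f. As a quick illustration (a thin-lens sketch of my own, not the article's derivation): for a 50 mm lens, increasing the lens-to-focal-plane distance from 51 mm to 52 mm pulls the plane of best focus from 2550 mm in to 1300 mm.

```python
def object_distance_mm(f_mm, image_dist_mm):
    """Object plane of best focus, from the Gaussian equation 1/u + 1/v = 1/f."""
    return 1.0 / (1.0 / f_mm - 1.0 / image_dist_mm)

# 50 mm lens: a 1 mm change in lens-to-focal-plane distance moves
# the plane of best focus by well over a meter.
far = object_distance_mm(50.0, 51.0)    # 2550 mm
near = object_distance_mm(50.0, 52.0)   # 1300 mm
```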
In this article, we review a number of areas of optics that are especially pertinent to the field of photography, including focal length, focus, magnification, exposure, aperture and f/number, field of view, and depth of field. Basic mathematical formulas for factors of interest are given.
Issue 2, 2004.09.05. 13 pages, 3380 words. PDF format, 144 kB
The sYCC color space is an alternative representation of the “sRGB” color space, but with a special wrinkle through which it can represent a larger color gamut than the sRGB space proper. In this article we review the definition, principles, and implications of the sYCC color space.
Issue 2, 2015.07.24. 10 pages, 3039 words. PDF format, 101 kB
An important class of photoflash unit automatically controls exposure by regulating the duration of the flash output pulse based on measurement, with a photosensor on the unit, of the light reflected from the main subject. Such flash units are often described by their manufacturers and others as “thyristor” flash units. In modern flash units that also offer more sophisticated modes of exposure control, the more basic mode is often spoken of as the “thyristor” mode. In this article, we discuss what a thyristor is and why its name has come to suggest a kind of flash unit and a particular exposure control mode.
Issue 1, 2007.06.28. 9 pages, 300 words. PDF format, 90 kB
Unsharp mask refers to a process used in both film and digital photographic processing to increase the apparent “sharpness” of an image. In this article we explain the working of both film and digital versions of the process, and discuss the meaning of the name. The article does not attempt to teach techniques for the effective use of the process.
Issue 2, 2010.06.05. 12 pages, 3300 words, 2 figures. PDF format, 102 kB
Many camera viewfinders are equipped with a lever or knob that controls “adjustable vision correction”, primarily to allow users who are nearsighted or farsighted to obtain a sharp view of the viewfinder image without their eyeglasses. In this article, we examine how this works and learn about the unit “diopter” which is used to quantify the amount of correction in effect.
Issue 2, 2015.03.23. 8 pages, 2489 words. PDF format, 65 kB
Variations in the chromaticity of the light under which a scene is photographed result in variations of the chromaticity recorded for different objects, often leading to an “unnatural” color appearance in the finished image as viewed. Avoiding this effect is the task of the white balance color correction process. In this article, we discuss the concepts involved, as well as some of the ways in which the process is performed in digital photography. We also discuss various “measurement tools” used in this connection. The discussion is detailed, but not mathematical.
Issue 2, 2009.12.13. 22 pages, 8080 words, three illustrations, two appendixes. PDF format, 146 kB
When we photograph an object illuminated by light whose chromaticity does not match the “reference white” chromaticity of the color space used to record the image, then when the “published” image is examined by a viewer, familiar objects will not seem to have their expected chromaticity. To overcome this undesirable effect, we apply color correction (often called “white balance correction”) to the captured image. In digital photography, we may actually have the camera do this for us “on the fly”. In order for the camera to do so, it must know the actual chromaticity of the incident light—the light that illuminated the subject during its photography.
Although we can measure this with a specialized laboratory instrument, we can also equip the camera temporarily with a special “front end” (often called a white balance diffuser) that will equip it to make the needed measurement itself. There is considerable misunderstanding about the technical principles involved in doing so. In this article we review and explain these principles and show how they pertain to the actual workings of this technique. The article does not discuss the operation or performance of specific available white balance diffusers.
Issue 1, 2008.02.17. 12 pages, 3850 words. PDF format, 108 kB
The focimeter (also called lensmeter, Lensometer, and Vertometer, the last two being trademarks) is an instrument for measuring the optical parameters ("prescription”) of an eyeglass lens. Although there exist today digital readout, and wholly automatic, focimeters, in this article we concentrate on the classical manual focimeter. After the instrument is introduced, background is given on various topics in lens optics and human vision correction. Then the operation of a typical focimeter is described. Appendixes give background in the scheme of measurement used for vision correction lenses, describe the actual optical principles of the focimeter, and give some history of the instrument.
Issue 5, 2016.05.24. 34 pages, 9800 words, 26 figures, 3 appendixes. PDF format, 732 kB
The refractive parameters of eyeglass lenses are specified (in a “prescription”) under a model that considers the refractive effect of the lens as a combination of the effect of both a spherical and a cylindrical lens. In this article we describe this model and its underlying optical principles, and describe how the parameters are usually stated in a prescription. We also look into the fact that a given lens can have its properties stated in two different, but equivalent ways, each used in separate branches of the eye care profession.
We also learn about the “optical cross”, a graphical presentation of the overall refractive behavior of an eyeglass lens.
Issue 3, 2010.12.05. 15 pages, 3975 words, 7 figures, one appendix. PDF format, 188 kB
In the field of vision care, refractor refers to an instrument (sometimes called a phoropter, although Phoropter is a tradename) used to examine a patient to determine the optimum properties of corrective lenses used to overcome various deficiencies in his vision.
In this article we describe the traditional “manual” form of this instrument and the basic way in which it is used.
Since corrective lenses are the heart of the overall activity here, we begin with a review of some principles of lens theory.
Next we consider the notation by which the properties of an eyeglass lens are specified in a “prescription”. We also look into the fact that a given lens can have its properties stated in two different, but equivalent ways, each used in separate branches of the eye care profession, a matter that has an effect on the design details of the refractors to be used in those two contexts.
An appendix discusses in detail one ingenious part of a manual refractor, the Jackson cross cylinder, used to optimize the parameters for astigmatism correction. A second appendix reviews how the power of a cylindrical lens in a certain direction varies with the angle of that direction with respect to the axis of the lens.
Issue 3, 2010.12.02. 21 pages, 5360 words, 12 figures, two appendixes. PDF format, 333 kB
When prescribing corrective lenses (eyeglass lenses), the prescriber may determine the optimum parameters for the lenses with a technique involving calibrated trial lenses, placed in an eyeglass‑like "trial frame”. In this article, this process and its apparatus are described. An introductory section gives background in some areas of lens theory and the nature and correction of certain vision defects.
Issue 2, 2016.12.18. 21 pages, 5955 words, 12 figures. PDF format, 246 kB
The ability of a lens to converge or diverge rays of light that arrive on separate, parallel paths is quantified as the refractive power (or just power) of the lens. In most optical work, the power is defined as the reciprocal of the focal length of the lens. In the case of ophthalmic (vision correction) lenses, the “rated” power is the reciprocal of the back focal length of the lens, a different quantity. The power reckoned that way is called the vertex power of the lens. The rationale for the use of this convention has been written of ad nauseam, but rarely is the basic justification for it clearly revealed. This article seeks to do that.
The article also discusses the use of a focimeter to determine the vertex power of ophthalmic lenses, including some special conditions pertaining to bifocal lenses.
Issue 6, 2010.12.07. 27 pages, 7890 words, 12 figures, two appendixes. PDF format, 147 kB
The Musical Instrument Digital Interface (MIDI) can link together, electrically, various entities involved in the generation, storage, or execution of musical performances. This interface is the centerpiece of an entire complex paradigm of note‑oriented digital representation of musical performances. This paper reviews the basic concepts of this interface, of the paradigm it spawned, and of the related matter of the Standard MIDI File, a format for storing in a computer file the MIDI instructions for executing a musical performance. It also discusses the related concepts of MIDI‑oriented musical notation software, the MIDI sequencer, and MIDI interface hardware. The “MIDI language”, and its repertoire of MIDI messages, is discussed, with full detail, in an appendix.
Issue 7, 2009.03.09. 33 pages, 9560 words, one appendix. PDF format, 174 kB
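As a flavor of the MIDI language detailed in the appendix: each channel message consists of a status byte (high nibble the message type, low nibble the channel number) followed by one or two data bytes, each limited to 7 bits. A small Python helper of my own devising, building a Note On message:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On channel voice message.

    channel is 0-15 (displayed to users as 1-16); note and velocity
    are 7-bit values, 0-127.  Status byte 0x9n means Note On, channel n.
    """
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# Middle C (note number 60) at velocity 64, on channel 1 (index 0):
msg = note_on(0, 60, 64)   # bytes([0x90, 60, 64])
```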
We may wish to be able to “observe” the MIDI message streams emitted by various electronic music programs in our computer, such as scoring programs or MIDI editors, perhaps in connection with troubleshooting. Various software tools (“MIDI monitors”) allow these message streams to be captured, parsed, decoded, and displayed and/or printed for us. Their use is sometimes complicated by a gender issue at the computer’s internal (logical) MIDI interface. In this article, we describe a typical MIDI monitoring tool (MIDI Ox); discuss the gender issue, and explain how it can be overcome; and give basic guidelines for the use of MIDI Ox.
Issue 3, 2008.09.20. 22 pages, 6080 words, 5 figures, 3 appendixes. PDF format, 241 kB
Various “electronic” music activities inside a PC involve the transmission between entities of “MIDI message streams”, streams of coded messages that describe a musical performance. The flow of these streams from one entity to another is not over the traditional MIDI electrical interface, but rather over a “logical interface”, similar to the ones between applications and I/O devices such as printers and keyboards (the overall arrangement of which can be described as “MIDI plumbing”). Because of the asymmetry of this interface, there arises the concept of entity gender, which may impede us from arranging certain useful information flows. In this article, we describe this architecture, explain the limitations it imposes, and discuss various ways to circumvent those impediments.
Issue 2, 2008.09.20. 12 pages, 3390 words, 4 figures. PDF format, 177 kB
SMPTE time codes are used to identify specific “points” in a video recording or film to a precision of the time of one frame, the smallest unit by which video recordings can be edited, trimmed, and so forth. The time codes may be embedded in the video tape or film or in the digital representation of the “production” in an editor. Musical notation programs intended for use in scoring for video productions often use these time codes as a way of coordinating the music with the video itself. The structure of the time codes is dependent on the frame rate of the production.
A common frame rate for video recordings, following from the North American analog TV broadcast format in use since the advent of color television broadcasting, is nominally 29.97 fps (frames/second). We can readily see that with a non‑integral number of frames per second a straightforward time designation system (working in terms of integral hours, minutes, seconds, and frames) is not feasible. Rather, for this frame rate, a rather tricky scheme is used. That scheme is described in this article.
An appendix explains where this peculiar frame rate came from. A second appendix shows the details of the time discrepancy of the system. A third appendix describes an algorithm that can be used to convert actual time to hours:minutes:seconds;frames notation under this special time code system.
The article only discusses this time code system from an abstract standpoint; there is no discussion of how the time code is physically embedded in a film or video medium.
Issue 3, 2017.06.30. 17 pages, 5600 words, three appendixes. PDF format, 170 kB.
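The "rather tricky scheme" can be summarized: no actual frames are dropped, but the frame labels ;00 and ;01 are skipped at the start of every minute except minutes 0, 10, 20, 30, 40, and 50. The conversion algorithm described in the appendix is not reproduced here, but the commonly published frame-count-to-label computation can be sketched in Python (my sketch, following the standard drop-frame conventions, not the article's own algorithm):

```python
def drop_frame_label(frame):
    """Time code label for a 0-based frame count at nominally 29.97 fps.

    Labels ;00 and ;01 are skipped at the start of every minute except
    minutes 0, 10, 20, 30, 40, and 50 (no actual frames are dropped).
    """
    frames_per_10min = 17982        # real frames in each 10 minutes
    frames_per_short_min = 1798     # labels in a "short" minute
    tens, rem = divmod(frame, frames_per_10min)
    # 18 labels are skipped per full 10-minute block (2 in each of 9
    # minutes), plus 2 per completed short minute within this block.
    if rem > 1:
        frame += 18 * tens + 2 * ((rem - 2) // frames_per_short_min)
    else:
        frame += 18 * tens
    ff = frame % 30
    ss = (frame // 30) % 60
    mm = (frame // 1800) % 60
    hh = (frame // 108000) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

For example, the frame following label 00:00:59;29 is labeled 00:01:00;02, while after one hour of real time (107,892 frames) the label reads exactly 01:00:00;00.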
U-verse telecommunication service, offered by AT&T through its operating telephone companies in many areas, uses a single very high speed digital subscriber line (VDSL), or a comparable fiber link, to deliver to a residence telephone service, high‑speed Internet access, and television programming. The system design provides great flexibility in the kinds of wiring over which signals may be carried between different system devices within the premises, facilitating deployment of the service. In this article, we review the basic concepts, architecture, and features of this service, with emphasis on these various interconnection modes. The story of a U‑verse installation at the author’s home is given. An appendix gives technical details of the VDSL transmission system used in one form of the U‑verse service.
Issue 2, 2009.06.05. 23 pages, 7060 words, five figures. PDF format, 280 kB
The first general-availability mobile telephone system (in the sense of a system that extended the telephone network to mobile users) in North America was introduced by the Bell Telephone System in 1946. It was called the Mobile Telephone System (MTS) and had two “versions”, one operating in the 30-50 MHz band and one in the 155 MHz band. There were a limited number of channels allocated, and even in a large city not all of them could be used, owing to the need to reserve some for nearby cities. There was no short‑distance frequency reuse, such as we have today in the various cellular systems. As a result, in the typical large city, there could only be perhaps 12 mobile telephone calls in progress at any one time. The system was wholly manual, the services of a special operator being required for both incoming and outgoing calls. This article describes the essential features of this system and its implementation. An appendix describes in detail the operation of the ingenious electromechanical selector used to recognize an incoming call for a mobile station.
Issue 1, 2015.11.16. 26 pages, 7285 words, one appendix, 8 figures. PDF format, 1.27 MB.
In the U.S., until about 1920, telephone switching was done with switchboards operated by human operators, an arrangement called manual switching. Over the following years, this arrangement was progressively superseded by automatic switching, in which the subscribers used a dial to tell a switching machine the number they wanted to reach.
By 1960, the preponderance of U.S. telephone service was on an automatic (“dial”) basis, but some manual switchboards continued in use well beyond that. (The last Bell Telephone System manual switchboard in the state of New Jersey was retired in the mid 1960s. I had the privilege of attending its dismantlement.)
It is tempting to think that the manual telephone switching system was “primitive”, but it was far from that. There was enormously detailed, clever, and precise work done in system engineering, circuit design, equipment design, manufacture and installation, and operational protocol.
In this article, I try to give the reader some idea of the basic (and some not‑so‑basic) concepts of the sphere of manual telephone switching, and in the process, perhaps give some idea of what a wondrous creature it was.
Issue 4, 2015.01.07. 39 pages, 10350 words, 18 figures. PDF format, 3.7 MB.
A multi-party telephone line (known to the general public as a “party line”) uses a single pair of conductors (today usually in a cable) from the telephone central office to serve two or more subscribers’ “stations”. The motive is to spread the capital and maintenance cost of the cable pair, and the equipment associated with it at the central office, over two or more subscribers’ service, allowing for lower rates than for an individual line (a single‑party line, often called by the general public a “private line”).
The technical arrangements for this type of service are varied and ingenious. In this article I describe many of these.
Considerable background is given on many principles of telephone line operation that set the context for the descriptions of multi‑party line techniques and practices.
Issue 2, 2013.01.01. 35 pages, 10500 words. 10 figures, one appendix. PDF format, 1.1 MB.
In 1950, the Bell Telephone System introduced a new family of general‑purpose telephone sets, made by their internal manufacturer, Western Electric Company, called the “500 type”.
In the past, the basic model of a telephone set family was given an apparatus code (“model number” to civilians) ending with “A” if the set did not have a dial and “B” if it did have a dial. But in the 500 family, the model (with dial) with which we were the most familiar over the years was designated 500D, not 500B.
This unexpected designation came about through a fascinating story of the evolution of this telephone set family in its early years—a true “war story”. This article tells that story.
Issue 1, 2014.12.04. 5 pages, 1390 words, one figure, one appendix. PDF format, 121 kB
Several types of insulin intended for injection by the diabetes patient are available in a convenient disposable administration device called an insulin pen. In three popular types, the Novo Nordisk FlexPen and FlexTouch pen and the Eli Lilly KwikPen, the device is dispensed pre‑filled with 3 ml of insulin solution, corresponding to 300 international units (IU) of insulin, which may be used for numerous “shots”. For each shot, the user attaches a disposable needle and sets the desired dose with a knob carrying a scale marked in IU. Then, after insertion of the needle at the injection site, the user presses an operating plunger or actuating button, accurately delivering the preset dose. This article describes these three popular types of insulin pen and discusses in detail, with illustrations, the operation of their ingenious and intricate mechanisms.
The reissue as Issue 3 was made principally to include information on the recently introduced Novo Nordisk FlexTouch pen. The reissues as Issues 4 and 5 primarily made editorial improvements.
Issue 5, 2015.11.30. 34 pages, 9397 words, 27 figures. PDF format, 380 kB
The “80-20 rule” New
We often hear, for example, that in a certain population, “80% of the wealth is held by 20% of the population”. (Almost always, by the way, with those particular numbers.) We hear the more generic formulation, “In many natural or societal situations, 80% of the overall outcome results from 20% of the causes.”
This “situation” is often called the “80-20 rule”. It is often used to “describe” a certain inequality of income, or wealth holdings, or the like, and is often thought to widely apply to such matters. But there are many misunderstandings about what this means. For example, in the matter of wealth, does this “description” actually completely define a certain “distribution of wealth”? And, if so, is that distribution of wealth “typical” for modern societies? This article looks into many aspects of this matter.
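One common way such a rule can be made to define a complete distribution (my illustration here, not necessarily the analysis in the article) is via the Pareto distribution, under which the richest fraction q of a population holds a share q^(1 − 1/α) of the total wealth, where α is the distribution's tail index:

```python
import math

def top_share(q, alpha):
    # Under a Pareto distribution with tail index alpha (> 1), the
    # richest fraction q of the population holds this share of the
    # total wealth.
    return q ** (1 - 1 / alpha)

# The tail index for which "80% held by 20%" holds exactly:
alpha_80_20 = math.log(5) / math.log(4)   # about 1.161
```

With α ≈ 1.161, top_share(0.2, α) comes out to exactly 0.8, the classic "80-20" statement.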
Issue 1, 2016.01.26. 15 pages. 3680 words, 6 figures. PDF format, 195 kB.
In converting quantities from one unit to another, we may know the applicable “conversion factor” but be uncertain as to whether to multiply or divide. The same uncertainty often arises in other basic mathematical calculations, such as those involving distance, time, and velocity. A technique called “dimensional analysis” can give foolproof guidance in these cases, and can even help us develop the formula needed to determine a certain quantity. This article describes the principles involved and illustrates the practical application of the technique.
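As a small illustration of the idea (my own, not drawn from the article): each conversion factor is written as a fraction equal to unity, and we choose multiply-or-divide so that the unwanted unit cancels. Writing the units in comments makes the cancellation visible:

```python
# Convert 65 miles/hour to meters/second by chaining unit fractions.
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600.0

speed = 65.0                        # miles / hour
speed_mps = (speed
             * METERS_PER_MILE      # x (m / mile): "miles" cancels
             / SECONDS_PER_HOUR)    # / (s / hour): "hours" cancels
# speed_mps is now in meters / second (about 29.06)
```

Had we divided where we should have multiplied, the leftover units (mile²/m, say) would immediately betray the mistake.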
Issue 1, 2010.02.06. 8 pages, 1850 words. PDF format, 78 kB
A recreational problem in statistics describes a situation in which an intelligence operative, wishing to know (on very short notice) the fraction of the inhabitants in a certain city who lived north of a river running through the city, contacted one inhabitant at random and determined that he lived on the north side of the river. From that, it was determined that “the best estimate” of the fraction of the inhabitants living north of the river was 2/3. In this article we discuss the possible meanings of that answer and, choosing one for further study, analytically derive that same value.
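The value 2/3 coincides with what Laplace's rule of succession gives for one "success" in one trial (the posterior mean of an unknown proportion under a uniform prior); whether that is the route taken in the article I leave open, but the computation can be sketched as:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Posterior mean of an unknown proportion under a uniform prior,
    # after observing `successes` in `trials` Bernoulli trials.
    return Fraction(successes + 1, trials + 2)

# One inhabitant sampled, one "lives north of the river" observation:
estimate = rule_of_succession(1, 1)   # 2/3
```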
Issue 1, 2005.12.19. 5 pages, 1385 words. PDF format, 96 kB
The Oliver no. 23-B reversible sulky plow is a horse‑drawn riding plow that can be set to turn the soil to either side, thus allowing the use of highly-efficient plowing “patterns”. The machine was manufactured by the Oliver Chilled Plow Works of South Bend, Indiana, over the period from about 1917 through 1934. It has a very ingenious mechanism, the crafty geometry of which obscures its principles of operation.
Many authorities consider this machine to represent the pinnacle of design for plows of its type.
In this article I describe the Oliver 23-B and explain its mechanism and the way it supports the many special features of the machine. Some background is first given on the concepts of plowing.
Issue 4, 2013.05.24. 23 pages, 6285 words, 20 figures. PDF format, 1159 kB.
The familiar kind of steam locomotive is propelled by a reciprocating steam engine. Central to the operation of any such engine is a valve system, which admits steam to the cylinder during certain portions of the cycle and exhausts the cylinder to the atmosphere during other portions.
In a steam locomotive, valve systems that are robust and free from the need for delicate adjustments are desirable and are generally used. They nevertheless follow ingenious principles to approach ideal engine performance over a range of operating conditions.
In this article, we postulate a simple mathematical model which, it will turn out, closely follows the actual operation of most of these valve systems, and we see how it produces a beneficial plan of steam admission and exhaust.
Two appendixes then review the construction and operation of two classical “valve gear” systems, showing analytically how they in fact closely fulfill the abstract mathematical relationship between piston and valve movement we assume in the body of the article, thus closing the circle.
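The article's model itself is in the PDF; as an indication of the kind of simple-harmonic valve model involved (the parameter values and names below are my own assumptions, chosen only for illustration), the valve is taken to lead the crank by 90° plus an "angle of advance", and steam is admitted while the valve displacement exceeds the lap:

```python
import math

TRAVEL = 0.12                  # total valve travel (arbitrary units)
LAP = 0.03                     # outside lap
ADVANCE = math.radians(15)     # angle of advance

def valve_displacement(theta):
    # Simple-harmonic model: valve leads the crank (angle theta)
    # by 90 degrees plus the angle of advance.
    return (TRAVEL / 2) * math.sin(theta + math.pi / 2 + ADVANCE)

def steam_admitted(theta):
    # The port is open while the displacement exceeds the lap.
    return valve_displacement(theta) > LAP
```

Note that at dead center (theta = 0) the displacement is (TRAVEL/2)·cos(ADVANCE), already greater than the lap: the port is slightly open before the stroke begins, which is the "lead" these gears provide.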
Issue 1, 2011.04.27. 32 pages, 9500 words, 15 figures. PDF format, 506 kB.
The axles of steam locomotives are supported by springs, for the obvious reasons and some less obvious. If the axle ends were each independently sprung, then owing to the high stiffness (spring rate) the springs must have, even small local undulations in the height of the rail would cause substantial dynamic variation in the distribution of the locomotive’s weight over the various wheels, with undesirable effect.
To mitigate this problem, locomotive designers soon developed an array of clever mechanisms for equalizing the wheel‑to‑track force over most of the driving wheels on a given side. Because of the special duties of the first set of driving wheels and the preceding unpowered “leading” wheels, there are different objectives there, fulfilled by a more complex mechanism. This article describes the principles of these equalization mechanisms.
Issue 1, 2011.10.17. 18 pages, 4486 words. 16 figures (including 6 photographs). PDF format, 682 kB.
Wayside rail signaling practice in the U.S. is a nightmarish web of operating rules, signal types, aspects, aspect names, and indications, differing between the different roads and even their individual divisions and locations. Much of the “vocabulary” involved is extremely curious and counter-intuitive, a result of the long historical evolution of this field and of the industry. In this article, after an examination of some of the history of this field, a consistent (if tortured) thread of syntax is identified for the mainstream of current practice, and its principles are discussed at length. Extensive charts illustrate the majority of the vocabulary with explanatory notes under two widely-used “dialects”.
Issue 3, 2007.03.20. 40 pages, 8600 words. PDF format, 267 kB
The Episcopal Church, the U. S. arm of the worldwide Anglican Communion, is embroiled in a controversy—a “civil war”—so virulent as to hold the potential of schism of the denomination, or even of the Anglican Communion itself. This article summarizes the principal presenting issues in this controversy, describes significant events along the way, and characterizes the present state of the matter.
Appendixes summarize, and report on the current state of, ensuing litigation with respect to The Episcopal Diocese of Fort Worth (Texas) and The Episcopal Diocese of Northern Virginia.
Issue 5, 2011.09.02. 50 pages, 15,800 words, one illustration, two appendixes. PDF format, 257 kB
The Anglican Communion is a world‑wide association of national or regional Christian churches whose ancestry is linked, in one way or another, to the Church of England, and which hold to some form or another of the faith of that church. A new church body of the “Anglican faith” has been formed which evidently aspires to be recognized as a component of the Anglican Communion, and consequently, interest arises as to how that could happen. The Communion has no constitution or similar instrument, in which we might expect to find the provisions for the embrace of a new member church.
Some commentators have asserted that the Anglican Consultative Council (ACC), a major advisory body within the Anglican Communion, has the power to confer membership in the Communion, through the working of a certain clause in its constitution.
In this article, the credibility of that assertion is investigated through forensic analysis of the constitution of the ACC. The author’s conclusion is that the assertion is not credible.
Issue 1, 2010.09.08. 9 pages, 2520 words. PDF format, 84 kB
Muslims worldwide are exhorted, with regard to their daily formal obligatory prayers, to face in the direction of The Kaaba, a stone structure located within the Sacred Mosque in the city of Makkah (often presented as “Mecca”), Saudi Arabia. It is considered the holiest place in Islam. The concept is often described in the popular press as praying “toward Mecca”, thus the title of this article.
While that mandate seems simple on the surface, when we consider that the Earth is not flat, we immediately run into the matter of how “in the direction of The Kaaba” should be interpreted when one is any significant distance from The Kaaba. Countless works have been written by Islamic scholars over the years on this matter. Two pragmatic premises for making the determination (giving quite different results) have been widely “taught” in modern times by different Islamic advisors. In this article we examine these two premises from a standpoint of the geometry they seem to imply. No attempt is made to judge which is the “most appropriate”, although some personal observations are offered.
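One of the two premises commonly encountered is the great-circle (shortest-path) interpretation, whose initial bearing can be computed with the standard spherical formula. The sketch below is my own illustration (coordinates approximate), not material from the article:

```python
import math

# Approximate coordinates of The Kaaba, in degrees.
KAABA_LAT, KAABA_LON = 21.4225, 39.8262

def qibla_bearing(lat_deg, lon_deg):
    """Initial great-circle bearing (degrees east of true north)
    from the given point toward The Kaaba, on a spherical Earth."""
    lat1 = math.radians(lat_deg)
    lat2 = math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon_deg)
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360
```

From New York City, for example, this premise gives a bearing of roughly 58°—east-northeast—whereas a constant-bearing (rhumb-line) interpretation points well south of east, which is exactly the divergence the article examines.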
Issue 1, 2011.08.13. 12 pages, 3160 words, one figure, one appendix. PDF format, 167 kB
A quilt is a fabric item, often in the form of a bed covering, with two fabric layers between which is a layer of insulating “batting”. In one style, the three layers are held together by a pattern of continuous stitching, a process known as quilting. This may be efficiently applied by a quilting machine, in which a sewing machine (“sewing head”) travels on a bidirectional carriage system over a portion of the entire quilt. In an automated machine quilting system, the sewing head is driven by a computer‑controlled servo system so as to automatically execute the desired pattern. In this article we describe a modern automated machine quilting system utilizing a newly introduced sewing head and a mature commercially‑available PC‑based computer control system.
Issue 1.1, 2008.10.23. 32 pages, 8500 words. PDF format, 786 kB
Most modern general-purpose sewing machines form a lockstitch by way of the rotary hook technique. The working of this system can be quite mystifying, and depends on a clever design worthy of a magician’s kit. In this article, we explain how the rotary hook mechanism produces a lockstitch.
Issue 3, 2008.10.28. 11 pages, 2365 words, numerous illustrations. PDF format, 369 kB
In 2013, the incandescent‑lamp based house lighting system of the Rohovec Theater at New Mexico State University–Alamogordo, in Alamogordo, New Mexico, was completely replaced by an LED based system. As with the earlier system, one of the two sets of lights in the new system—intended for use in connection with theatrical productions—was controlled by the theater’s theatrical light control system, and was intended to be dimmable, using a modern dimming interface system intended for such fixtures. But from the completion of the upgrade, that set of lights could not be dimmed, and in fact on some occasions one half of those lamps flickered. This article tells the story of how this malfunction was diagnosed and corrected. Extensive technical background is provided.
Issue 1, 2016.05.08. 18 pages, 4662 words, 8 illustrations. PDF, 770 kB
This article has been withdrawn.
It is essentially replaced by this new article.
In connection with personal computer technology we often hear reference to the "ASCII" and "ANSI" character sets. This paper describes what these are and explains the basis of the acronyms used to identify them, including why "ANSI" is an inappropriate designation for the second of them.
Issue 2, 2004.07.18. 4 pages, 1300 words. PDF format, 79 kB
In the mid 1960’s, the terms “octatherp” and “octotherp” began to be mentioned as names for the symbol “#”, and this practice continued for many years. These terms arose in a very interesting way. This article tells the story, as best it can be reconstructed at this later point in time.
The article also contains, in an appendix, information about the related term "octothorpe".
Issue 3, 2014.12.06. 13 pages, 3940 words, one appendix. PDF format, 84.8 kB
The standard coded character set ASCII was formally standardized in 1963 and, in its “complete” form, in 1967. It is a 7‑bit character set, including 95 graphic characters. As personal computers emerged, they typically had an 8‑bit architecture. To exploit that, they typically came to use character sets that were “8‑bit extensions of ASCII”, providing numerous additional graphic characters. One family of these became common in computers using the MS-DOS operating system. Another similar but distinct family later became common in computers operating with the Windows operating system.
In the late 1980s, a true second‑generation coded character set, called Unicode, was developed, and was standardized in 1991. Its structure provides an ultimate capacity of about a million graphic characters, catering thoroughly to the needs of hundreds of languages and many specialized and parochial uses. Various encoding techniques are used to represent characters from this set in 8‑ and 16‑bit computer environments.
In this article, we will describe and discuss these coded character sets, their development, their status, the relationships between them, and practical implications of their use.
Information is included on the entry, in Windows computer systems, of characters not directly accessible from the keyboard.
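A single character can illustrate the three families discussed. The sketch below (my own, using Python's codec names for the MS-DOS and Windows code pages) shows "é" in an MS-DOS code page, in a Windows code page, and as a Unicode code point encoded in UTF-8:

```python
ch = "é"
assert ord(ch) == 0xE9                     # Unicode code point U+00E9
assert ch.encode("cp437") == b"\x82"       # MS-DOS code page 437
assert ch.encode("cp1252") == b"\xe9"      # Windows code page 1252
assert ch.encode("utf-8") == b"\xc3\xa9"   # UTF-8 encoding of U+00E9
```

Note that the two 8-bit families assign the same character different code values, one source of the garbling familiar from files moved between the two environments.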
Issue 3, 2010.06.15. 19 pages, 5240 words. PDF format, 132 kB.
Early in the unfolding of the modern era of computer science, it was recognized that computer memory modules typically had location sizes that were integral powers of two. To allow sizes of commonly-encountered memory modules to be stated with modest-size integers, it became common to speak of the size of computer memory modules in terms of a multiple unit of 1024 (210) locations. Sadly, rather than adopting a distinct name and symbol for that multiple, the workers just hijacked the prefix term, “kilo”, and symbol, “k”, used in science and engineering (since 1795) for a unit multiple of 1000. This practice then escalated (in a not-always-consistent way) to larger multipliers such as mega (M) and giga (G). The result was ample opportunity for misunderstanding. Now, an international standard provides distinct, unambiguous prefix names and symbols for a series of “binary” multiples.
This article gives historical and technical background in this matter and describes the new (but, sadly, rarely-used) scheme of “binary” multiples.
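The practical difference is easy to show (a trivial sketch of my own):

```python
size = 65536   # bytes

kB  = size / 1000   # SI kilo (k = 1000):      65.536 kB
KiB = size / 1024   # IEC kibi (Ki = 1024):    64.0 KiB
```

The discrepancy grows with the multiplier: at "giga" versus "gibi" it is already over 7%.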
Issue 1, 2009.02.25. 7 pages, 2400 words. PDF format, 81.3 kB
Often when editing images in Picture Publisher 10 we may wish to add annotations, which might consist of circles or squares to call attention to certain features, lines to lead to “callout” text, and so forth. It is desirable that these annotations be objects, so that they can be moved, resized, rotated, otherwise modified, or removed.
There are no provisions in Picture Publisher 10 for directly generating geometric figures as objects. However, there are straightforward, although not trivial, procedures for doing so indirectly.
This article gives the details of several such procedures.
Issue 1, 2011.08.28. 8 pages, 2235 words. PDF format, 69.0 kB
Scientific notation is a scheme for presenting numbers over a wide range of values that avoids the consumption of page space and the other inconveniences of long and unprofitable strings of leading or trailing zeros. A related convention provides for the convenient entry of numbers over a wide range into calculators or their introduction as constants into computer programs. A closely‑related concept, floating point representation, provides for the compact representation of numerical values over a wide range inside computers or in data recording structures. In this article we will examine these concepts and then give details (some very tricky) of various standardized conventions for their use. The metrics range and precision are discussed. Background is given on pertinent basic number theory concepts.
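As a small illustration of floating point representation (my own, not taken from the article), Python can expose both a scientific-notation-style decomposition of a number and the underlying IEEE 754 double-precision fields:

```python
import math
import struct

x = 6.02214076e23   # a constant entered in scientific notation

# Decompose as x = m * 2**e, with 0.5 <= |m| < 1.
m, e = math.frexp(x)

# The IEEE 754 double-precision fields behind the same value:
bits = struct.unpack(">Q", struct.pack(">d", x))[0]
sign     = bits >> 63
exponent = (bits >> 52) & 0x7FF        # biased by 1023
fraction = bits & ((1 << 52) - 1)      # 52 stored significand bits

# Reassemble the value from its fields (normal numbers only):
value = (-1) ** sign * (1 + fraction / 2**52) * 2.0 ** (exponent - 1023)
```

Both decompositions reconstruct the original value exactly, since every step involves only powers of two.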
Issue 1, 2011.07.20. 19 pages, 4680 words. PDF format, 125 kB
Many of my friends and colleagues have expressed interest in how I got together with my wife, the lovely and fabulous Carla. Here's a brief account of that deal.
Issue 2, 2004.09.25. 4 pages, 1100 words. PDF format, 42 kB
This Web site maintained by
Complaints? Bug reports? Location of buried treasure?