Photography – Bernie Sumption's Blog

The making of “Vistagraphs”

My second big photography project, Vistagraphs, is an experiment in making photos that reflect the way that humans perceive the world. This is an explanation of what I mean by that.

First a little theory

As a digital camera focuses light through its lens and captures an image on a silicon sensor, so the eye focuses light through its lens and captures an image on the retina. For a traditional photograph, this moment of capture is the defining point in the production of the final image. This is very different to the way that the human visual system works – for us, a single image viewed by the eye at one point in time is just one of a stream of images that we perceive as we look around a scene. This has a number of repercussions:

  • Fixed direction of view. A photo must be taken from a single position, but a human can walk round a scene appreciating it from several angles. We can stoop down to look closely at a footprint then turn around to take in the sky; a photograph must do one or the other.
  • Single focal length. The focal length of the lens used for a photo determines the angle of view. A photo can be wide angle to take in more of the scene as a whole, or telephoto to focus on a particular part, but humans can happily go from appreciating a whole scene to focusing on a specific detail.
  • Limited depth of field. A photograph has a sharp field – an area where objects are in focus. In a photo with both near and distant objects, some must be out of focus. Eyes have a sharp field too – the object we are looking directly at is sharp, but other objects are viewed at lower resolution, out of focus, in double vision, or all three. Our brain does a wonderful job at hiding this from us. As we look round a scene, we remember an image of each object when we look directly at it, and put together a mental collage of a scene in which everything is sharp.
  • Limited exposure range. A photograph has a limited dynamic range – if a scene contains both very light and very dark objects (say both clouds and shadows) then either the bright objects will be overexposed or the dark objects will be underexposed or both. This means that the whole cloud or shadow will be rendered as an area of pure white or black, with no details. Eyes can quickly adjust from viewing dark shadows to bright lights in the same scene.

A vistagraph is an image that combines elements of several photographs in an attempt to more accurately represent the feeling of being in the place and time that the photographs were taken.

An example

This is one of my favourite Vistagraphs. I feel it captures the elements of the scene that I liked – the lush fields of young wheat with mountains rising dramatically behind them:

Spain, Spring 2009
Wheat grows in the plains below the mountains. Spring 2009, Spain

The above image represents what my brain saw better than any single image I was able to capture on my camera that day. It is in fact a composite of two images:

Vistagraphs technique 1
A shot taken with a telephoto lens captures the mountains but completely loses the foreground
Vistagraphs technique 2
A shot taken with a wide-angle lens captures the texture of the wheat but makes the mountains appear tiny

Vistagraphs are composed of photographs taken in different directions, at different focal lengths and exposure settings, but always in the same place and at the same time. This last point is important – the same technique can be used to create fantastical landscapes that bear no relation to a place in the real world, but this is not what I’m aiming for.

When I take the photos, I try to visualise how they will fit into the final vistagraph, and compose them so that the features of each photo will blend into the next without the join between them being obvious.

Incubator: The Birds

This is an “Incubator” post. Like the others, I know that there’s an idea for a big project in here, but I don’t know what it is yet.

Birds are one of the most photographed subjects, so it’s hard to do anything that hasn’t been done many times before. I’d like to show a slightly quirky and unusual side of birds, but I don’t know what that is yet.

Here are a few photos that contain elements I’d like to expand on:

Incubator: Still

This is an “Incubator” post. Like the others, I know that there’s an idea for a big project in here, but I don’t know what it is yet. Each photo would use a fast shutter speed or flash to freeze motion, capturing a view of a subject that people don’t get to see with the naked eye.

This connecting thread will link together a disparate set of subjects that wouldn’t otherwise find themselves together in a book. It might work beautifully, or it could be an incoherent mess. I need to shoot more pictures to find out.

Here’s a slideshow of the images in question, and a shot of the HiViz DIY microphone flash trigger used for a couple of the shots.

Incubator: Somewheres

This is an “Incubator” post. Like the others, I know that there’s an idea for a big project in here, but I don’t know what it is yet.

While walking for a month across northern Spain, I passed thousands of doors. All of them lead somewhere. I started a collection of doors, whimsically limiting myself to ones bearing the number 8 (as soon as I had decided upon this limitation, every other street seemed to have a demolished house between 6 and 10).

Since then, I see magnificent doors everywhere. Graffitied metal doors, ancient thick wooden doors, ostentatious gold-plated McMansion doors. They arouse the stamp collector in me: I want to collect around 50 of the most interesting doors into a book, or perhaps an installation.

Here are some of the ones from Spain, all bearing a certain Spanish fingerprint:

Incubator: Portraits on Location

This is an “Incubator” post. Like the others, I know that there’s an idea for a big project in here, but I don’t know what it is yet.

In order to be included in this project, the portraits must have been taken out and about, as part of the subject’s life.

Project: “Vistagraphs” book

Humans are nothing like cameras.

While the human eye itself is somewhat like a camera, the eye is just one part of the human ability to perceive the visual world, and cameras suffer from several limitations that do not affect humans.

Vistagraphs are an experiment in producing images that reflect the way that humans perceive the world.

Get the book

Download the book PDF (free)

View the pictures

This slideshow contains a selection of my favourite photos from the book.

Project: “Eyes” book

When I look at a portrait I first see the expression on the subject’s face, and after a few moments of appreciation I find my gaze inevitably drawn to the eyes. The eyes reveal the personality of the subject, confirming the expression of the face or contradicting it. The eyes are the most important part of the portrait, yet the smallest. This is a pity because eyes contain beautiful details that are lost in most portraits.

This project consists of inside-out portraits. The intention is that after the first few moments regarding the eye, your gaze is inevitably drawn inwards to the subject’s face.

Get the book

Purchase the book in hardcover

View the pictures

View the photoset on flickr

A better depth of field table

Traditional DoF tables apply to one focal length only.

Normal depth of field tables list the depth of field for any combination of aperture and subject distance for a single focal length. I find this a pain in the ass for several reasons: firstly, I have to carry around several charts; secondly, if I'm using a zoom lens I have to guess what focal length I'm using; and finally, I generally find it easier to gauge the size of the subject than the distance to it.

This chart is my solution.

For photos of subjects that are relatively close compared to the lens' hyperfocal distance, depth of field depends only on subject size and aperture. A portrait taken with a 35mm lens at f/2.8 has around the same depth of field as one taken with a 100mm lens at f/2.8, because although the 100mm lens' aperture is physically larger, you have to stand further away to get the whole subject in the frame, and the two cancel out. This is one of the natural laws of depth of field.

This chart lists the depth of field for any combination of subject size and aperture. It therefore applies to all focal lengths: you don't need to know how far away your subject is or what focal length you're using, just how big the subject is.

I have used the equation from Wikipedia:

DOF equation

This equation becomes inaccurate near the lens' hyperfocal distance, so I have marked the hyperfocal distance on the chart as four light-grey to dark-grey bands representing 24mm, 50mm, 100mm and 200mm lenses. Stay out of the grey area for your lens and this chart is accurate. If you print this chart, tell your web browser to print backgrounds so that you get the grey shades (in IE, Tools > Internet Options > Advanced > Printing > Print backgrounds).
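
If you'd like to see roughly how the numbers in a chart like this are produced, here is a minimal sketch in Python. It assumes the close-up approximation DOF ≈ 2·N·c·(1+m)/m² (which only holds when the subject is well inside the hyperfocal distance), a 36mm-wide full-frame sensor, and the 1/1250-of-the-image-width circle of confusion described in the caveats below; it is an illustration of the idea rather than the exact code behind my chart.

    # Sketch: depth of field from subject size and aperture alone.
    # Assumes the close-up approximation DOF ~= 2*N*c*(1+m)/m^2, which only
    # holds when the subject is close relative to the hyperfocal distance.
    SENSOR_WIDTH_MM = 36.0                 # full-frame sensor (assumption)
    COC_MM = SENSOR_WIDTH_MM / 1250.0      # "acceptably sharp" criterion (see caveats)

    def close_up_dof_mm(subject_width_mm, f_number):
        """Approximate total depth of field when the subject fills the frame width."""
        m = SENSOR_WIDTH_MM / subject_width_mm      # magnification
        return 2 * f_number * COC_MM * (1 + m) / (m ** 2)

    for subject_width_mm in (100, 500, 2000):       # a flower, a head and torso, a person
        for N in (2.8, 5.6, 11):
            dof = close_up_dof_mm(subject_width_mm, N)
            print(f"subject {subject_width_mm}mm wide at f/{N}: ~{dof:.0f}mm depth of field")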

Click here to open in a new window.

Enjoy...

Caveats:

  • The calculations in this chart only work when a subject is close relative to the lens' hyperfocal distance. This is generally the case for small subjects and long lenses. The light to dark grey shaded areas represent the boundary for 24mm, 50mm, 100mm and 200mm lenses at which inaccuracies start to become significant (more than 50%).
  • This chart uses a circle of confusion of 1/1250 of the image width to produce "acceptably sharp" images. If it is important to you to get images as sharp as your camera is capable of capturing, read the sidebar titled "super sharp shooter" on this article: www.berniecode.com/writing/photography/depth-of-field.
  • This chart implies that the depth of field is symmetrical, which it isn't: close to the hyperfocal distance there is a lot more depth of field behind the focal plane than in front of it. However, close to the hyperfocal distance this chart doesn't work anyway because focal length becomes important, so that's not really an issue.
Bernie’s Better Guide to Depth of Field for Geeks Who Want to be Digital Artists

Being a guide to portrait photography cleverly masquerading as a technical analysis

Like the topics we covered in the beginner’s guide last month, depth of field might initially seem complex, but behind it is some relatively simple logic and maths. Don’t worry if maths isn’t your strong point: long equations are the crutch of the inarticulate, and there’s nothing in this article more complicated than division.

Speaking of division, I like to divide photography into two broad disciplines. Portraits are photos of a particular object (not necessarily a person), designed to capture something about that object. Landscapes are photos of a scene (not necessarily outdoors), designed to capture the sensation of being in a place. I say that this is a disguised guide to portrait photography because mastering depth of field is one of the main skills you need to take great portraits. Whether you’re keeping the whole scene in focus for an environmental portrait, or using a very shallow depth of field to emphasise just the eyes of your subject, you need to understand how to get exactly the right depth.

On categorisation


My definition of a portrait is not the conventional one. Wikipedia defines it as a photo that “captures the likeness of a person or a small group of people, typically in a flattering manner”, but I think that is too restrictive.

John Shaw once said that he considers himself to be a portrait photographer whose subjects are animals and plants. I like this way of thinking: if your photo has a subject, approaching the photo as you would a portrait of a person will put you in the right frame of mind to make photos that will convey the reason that you wanted to photograph your chosen subject.

Likewise, approaching a photo of any scene in the same way that you would a landscape offers a similar advantage.

Of course as soon as you try to impose a neat 2-way classification onto a subject such as photography / sexual orientation, you notice problems. Not all photos / people can be neatly classified as portrait or landscape / gay or straight.

For example, architectural photographs / bisexuals often have a definite subject / attraction to men as well as a compelling vista / taste for ladies.

On the other hand, abstract photographs / transgender individuals may technically be portrait or landscape / gay or straight, depending on whether you decide to classify them as being of a subject or a scene / male or female, but are sufficiently different that it might make more sense to consider them a genre / orientation in their own right.

This difficulty says more about the classification system than it does about the photos / people being classified. As a portrait photographer / conservative politician, don’t let your ideas of what kind of photography / public lifestyle you should be engaging in get in the way of your fun. Put aside your preconceptions / wife, grab your camera / dark mackintosh and head for the nearest national park / public toilet where I’m sure…

OK, I think this analogy has gone far enough :o)

Depth of field terminology

The following image depicts a (fake) flower about 50cm away, with a hedge around 5 meters behind it. This is an example of narrow depth of field: notice how some of the petals are sharp, others are slightly soft, and the hedge is almost blurred out of all recognition. Below it is a version with very wide depth of field.

Wide aperture photo
A photo taken at wide aperture
Narrow aperture photo
A photo taken at narrow aperture
Depth of field
The range of distances in a photo within which the image appears sharp. In this photo the sharp region starts about 49cm from the camera lens and extends to 51cm, giving a depth of field of 2 cm. Any point outside of this range appears blurred into a disc.
Blur
The size of the blur discs as measured in the real world. This is tied to depth of field: the larger the blur, the narrower the depth of field.
Circles of confusion
The size of the blur as measured on the photographic image.

Blur vs circles of confusion: Consider the rightmost petal and the leaves in the bottom right. The blur – measured in the real world – is around 2mm and 20cm respectively, and the circles of confusion – measured on the image – are around half a millimetre and 5mm respectively. When measured in the real world the leaves are 100 times more blurred than the petals, but measured on the actual photo they are only 10 times more blurred. This is because of perspective: the leaves are 10 times further away from the camera than the petals are, so the two ratios differ by a factor of 10.

I just measured those circles of confusion by counting pixels in the above image. This is valid, but inconvenient because the values would change if I altered the size of the image. For this reason, circles of confusion are usually measured on the camera’s sensor.

Over the rest of this article I’ll explain the physics behind blur and sharpness, so that you can learn to precisely control the look of your photos. An explanation of depth of field often starts with a list of rules. These do exist, and I list them at the end of the article, but the whole premise of a Geek’s Guide is that the rules are easier to learn if you understand the mechanism behind them first.

Onwards…

What causes blur in a photo?

First of all, consider a pinhole camera with no focusing capability and infinite depth of field:

DoF diagram

Light from each point in the scene enters the pinhole, forming an upside-down image on the sensor. Unfortunately, the vast majority of the light is wasted. Each point on the vase is emitting light in every direction, but only the light that happens to be emitted exactly in the direction of the pinhole will be captured. So little light comes in through the pinhole that you need long shutter speeds to expose the image. Also the small pinhole creates a very soft image due to diffraction. In order to allow more light in, the pinhole is replaced by a wide aperture and a glass lens to bend the light so that it still forms a sharp image on the sensor:

DoF diagram

The lens works by bending light so that the rays emitted from any point on the subject have a larger target to hit, but still form a sharp image on the sensor. So far so good, but there is a cost. In the pinhole camera, all objects are equally in focus. With the lens, the focus must be adjusted for a subject. You tell the lens how far away the subject is, and it will make sure that light radiating from that distance is brought back to a single point on the sensor. The slice of the world that is in focus is called the focal plane.

So what about objects that are not on this focal plane? Light rays from points on these other objects will still be bent back to a single point by the lens, but this point will be slightly in front of or behind the sensor. If the object is in front of the focal plane then the light rays will not have enough distance to converge; if the object is behind the focal plane the rays will converge before the sensor and cross over. Either way they appear in the image as a blurred disc the same shape as the aperture. This disc is called a circle of confusion.

DoF diagram

I find this model a bit hard to use to visualise the depth of field in a photo, but fortunately there is a much simpler one for predicting the size of the blur: imagine a pair of lines starting at the edges of the aperture and crossing over at the plane of focus. Any object not on the plane of focus will be blurred into a disc as wide as the gap between the lines at that object's distance.

DoF diagram

An example

Port bottle
A lovely bottle of port. 1977 is just perfect for consumption now.

This photo was taken with an 85mm lens at f/1.8, which means that it has an aperture width of 47mm.

The bottle is one meter from the camera, and half a meter behind the bottle are two candle flames which are blurred into discs.

According to the simple model above, since the candles are half as far from the focal plane as the camera is, the blur disc should be half the aperture width, or 23.5mm. Measuring the discs in Photoshop, I find them to indeed be just over 23mm wide. Not bad.

The rule for calculating the blur according to the model in figure 4 is that if your subject is x meters away, an object that is x meters away from the subject will be blurred into a disc the size of the aperture, an object 2x meters behind the subject will be blurred into a disc twice the size of the aperture, etc…

The size of the circle of confusion created by the candle flames is 2.14mm measured on the camera’s 35mm sensor.
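
As a quick sanity check, here is the candle calculation as a couple of lines of Python. The only assumption is the geometry sketched in figure 4: the blur disc, measured in the real world, grows in proportion to how far the object sits behind the plane of focus.

    # The crossing-lines rule: an object d metres behind a subject focused at s metres
    # is blurred (measured in the real world) into a disc of width aperture * d / s.
    focal_length_mm = 85.0
    f_number = 1.8
    aperture_mm = focal_length_mm / f_number        # ~47mm physical aperture width

    subject_distance_m = 1.0                        # the port bottle
    candle_offset_m = 0.5                           # the candles, half a metre behind it

    blur_disc_mm = aperture_mm * candle_offset_m / subject_distance_m
    print(f"blur disc ~{blur_disc_mm:.1f}mm")       # ~23.6mm, close to the ~23mm measured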

Aperture shape and bokeh

Bokeh, pronounced with the ‘bo’ of ‘bombastic’ and the ‘keh’ of ‘Ken Livingstone’, is a term used to refer to the quality of the blur in the out of focus areas.

Circular bokeh is generally perceived to be the most pleasing. Any lens will produce circular bokeh when wide open, but when stopped down the bokeh takes on the polygonal shape of the aperture. Many people spend a lot of money on lenses that have great bokeh because they have wide apertures, are sharp even when wide open, and use many curved aperture blades to round out the polygonal shapes even when they are stopped down. Such people are called bokeh whores, and can be easily observed in their natural habitat of the flickr forums.

If you were so inclined, you could modify the shape of the aperture using a piece of card to produce any shape of bokeh you want:

Heart-shaped bokeh

Photo from a tutorial on DIY Photography.net, used with permission. More examples.

Obtaining sharp images

So far we’ve been talking about the size of the blur in the out of focus areas; now let’s turn to the sharp area.

Firstly, we need a definition of what qualifies as ‘sharp’. Technically speaking, only the 2D slice of the world lying exactly on the plane of focus is perfectly sharp (and even then, only if the lens is a perfect optic, which it isn’t). However, since our eyes are even less perfect, we can use the more useful definition that a part of an image appears sharp when its resolution exceeds that of the human eye.

geeky aside: how to measure the size of the blur

Measuring real-world blur size

I measured the distance between the flames with a ruler: 38mm.

I took a second, underexposed photo with the same aperture, in which the circles were clearer:

With these measurements we can calculate the size of each pixel at the distance of the flames: 38mm / 354 pixels = 0.107 millimetres per pixel.

The discs are therefore 23.1mm wide (0.107mm x 215 pixels).

So what is the resolution of the human eye?

DoF flower photo
Depth of field at f/1.4

In this image you can see a band crossing the flower petals and rising up the side of the vase in which everything seems to be sharp (it is clearer in the large version – click to open it). This is the range either side of the focal plane where the circles of confusion are so small that they appear as points.

How small do the circles have to be before they look like a single point? In an exhaustive analysis, Wikipedia tells me that, assuming you're going to make 25cm wide prints and view them from a distance of 25cm, you will perceive any circle less than 0.2mm wide as a single point.

This means that a circle of confusion must be no bigger than 1/1250 of the image width if the image is to appear sharp. This translates into a maximum acceptable circle of confusion of 1/1250th of the sensor width. For 35mm cameras, the value 0.03mm is often used.
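
The arithmetic behind those numbers, as a tiny sketch (the 36mm sensor width is my assumption for “35mm cameras”):

    print_width_mm = 250.0           # a 25cm print viewed from 25cm
    finest_detail_mm = 0.2           # smallest circle the eye resolves at that distance
    fraction = finest_detail_mm / print_width_mm        # = 1/1250 of the image width
    sensor_width_mm = 36.0
    print(fraction, sensor_width_mm * fraction)         # 0.0008, ~0.029mm -> the usual 0.03mm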

geeky aside: fact checking Wikipedia

So, the minimum detail we can perceive from 25cm is 0.2mm eh? That sounds about right but you should always distrust figures you find on the internet, except for this article of course.

Unfortunately I didn’t have a 5 line pair per millimetre grid lying around the house, but I do have a nit comb, presented here with a tiny little chilli for no good reason:

The prongs on the nit comb are 1.2 mm apart. I walked backwards with one eye closed until the individual prongs started to merge together at 170cm away. 170 is around 7 times further away than 25cm, so my feat is equivalent to resolving a 0.17mm detail at 25cm: pretty close to the claimed value.

OK Wikipedia. You win this time.

Note that this is the minimum acceptable sharpness. If you’re a perfectionist or plan to make larger than A4 sized prints, you should strive to get your images even sharper.

Super sharp shooter shooting super super super sharp shots*

Most digital cameras are capable of resolving finer detail than this 1/1250 of the image width, and just because you can't see detail finer than 0.2mm from 25cm away, it doesn't mean that any sharpness beyond that point is wasted. Firstly, you might want to print the image larger, or view it closer, than the values assumed above. Secondly, you may want to crop an image to enlarge a small portion of it. Finally, random visitors on flickr might pixel-peep your images on maximum magnification.

Here are some tips for obtaining images even sharper than ‘sharp’.

  • Always use a tripod if possible, or alternatively a shutter speed of twice the normal recommendation of 1/<focal length>: e.g. for a 100mm lens, try to use a shutter speed of 1/200 second (see the sketch after this list).
  • By default, shoot at your lens’ sharpest aperture (typically f/8 or f/11 on SLR lenses) even if you don’t need all that depth of field. Especially avoid shooting wide open with cheap lenses: budget lenses tend to improve greatly when closed down 2 stops
  • When using hyperfocal focussing (more on this later), double the recommendation of your depth of field calculations: if you reckon you need f/5.6 to get a sharp image, use f/11
  • Avoid f-numbers higher than f/16: diffraction will degrade the image quality.
  • Where the two previous rules contradict, consider recomposing the shot so that it doesn’t require quite so much depth of field, or using focus stacking (see the last section of this article)
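
To make the first tip concrete, here is the shutter speed rule of thumb as a one-line function (the function name is mine; crop-sensor shooters may prefer to feed it the 35mm-equivalent focal length):

    def min_handheld_shutter_s(focal_length_mm, safety_factor=2):
        """Slowest recommended handheld shutter speed in seconds:
        twice as fast as the traditional 1/<focal length> rule of thumb."""
        return 1.0 / (safety_factor * focal_length_mm)

    print(min_handheld_shutter_s(100))   # 0.005s, i.e. 1/200 second for a 100mm lens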

* obscure British drum and bass reference

Visualising the depth of field

It’s imaginary line time again. Picture a cone extending from your lens to infinity, always 1/1250th the width of your image. I’ll call this the sharpness cone because as far as I know there’s no accepted name for it. If the blur discs are smaller than this cone, the image appears sharp. Using a 24mm lens which has a 72 degree angle of view on a full frame DSLR, this cone will be around a millimetre wide one meter from the camera, and a meter wide one kilometre from the camera.

DoF diagram

Between the near and far boundaries where the blur is smaller than the sharpness cone, the image will be perceived as sharp. The distance between the two boundaries is the depth of field.
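
Here is a quick sketch of those two figures, assuming a 36mm-wide full-frame sensor and the arctan angle-of-view formula from my beginner's guide:

    import math

    sensor_width_mm = 36.0
    focal_length_mm = 24.0
    half_angle = math.atan((sensor_width_mm / 2) / focal_length_mm)   # half the angle of view

    def sharpness_cone_width_m(distance_m):
        """Width of the sharpness cone (1/1250 of the image width) at a given distance."""
        image_width_m = 2 * distance_m * math.tan(half_angle)
        return image_width_m / 1250

    print(sharpness_cone_width_m(1))      # ~0.0012m: about a millimetre at one metre
    print(sharpness_cone_width_m(1000))   # ~1.2m: about a metre at one kilometre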

Calculating the size of the depth of field

There are equations for calculating the near and far boundaries, which I shall include here even though I don’t fully understand them so that we may share a sigh of relief when I introduce a nifty gadget that calculates them for you:

DoF equation

Yuck. Since pausing for a minute to use a calculator interrupts the creative process a bit, people use depth of field charts. I might not fully understand the above equation, but I can count that it has 4 variables. A single chart applies to one combination of focal length (f) and sensor size (c), leaving two variables: subject distance (s) and aperture (N). The chart is a grid of the result of the above equations for every permutation of subject distance and aperture.
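
If you would rather script your own chart than laminate one, here is a sketch using the textbook form of those equations (my notation, but the same four variables: focal length f, circle of confusion c, subject distance s and f-number N); it should match the equation image above up to algebraic rearrangement.

    def dof_limits_m(f_mm, N, c_mm, s_m):
        """Near and far limits of acceptable sharpness, from the standard formulas:
        H = f^2/(N*c) + f, near = s(H-f)/(H+s-2f), far = s(H-f)/(H-s)."""
        f = f_mm / 1000.0
        c = c_mm / 1000.0
        H = f * f / (N * c) + f                  # hyperfocal distance
        near = s_m * (H - f) / (H + s_m - 2 * f)
        far = float("inf") if s_m >= H else s_m * (H - f) / (H - s_m)
        return near, far

    # One row of a chart: 85mm lens on 35mm film (c = 0.03mm), subject at 3 metres
    for N in (2.8, 5.6, 11):
        near, far = dof_limits_m(85, N, 0.03, 3.0)
        print(f"f/{N}: sharp from {near:.2f}m to {far:.2f}m")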

Go generate charts for all your lenses here: www.dofmaster.com/doftable.html. You use the chart by finding the row representing the subject distance, then selecting an aperture from that row that gives you the depth of field you need.

Nice laminated depth of field charts
For extra geek points, get them laminated!

If you look at the bottom row of these charts, you will see that they have a figure for each f-number called hyperfocal distance.

Hyperfocal distance

Sometimes, especially during landscape photography, you want the whole of a scene to be in focus. In order to do this you use hyperfocal distance focussing. This can be explained in terms of concepts already covered in this article: the hyperfocal distance is the closest distance you can focus on at which the blur discs behind the plane of focus are always smaller than the sharpness cone. In other words, the red and grey lines in figure 5 never cross over and there is no far focus boundary: very distant objects are in sharp focus.

When you are focussed on the hyperfocal distance, everything from half that distance to infinity will be sharp. For example, the hyperfocal distance for an 85mm lens at f/8 is 30 meters; focus on 30 meters and everything from 15 meters to infinity will be sharp.

Bear in mind when using hyperfocal focussing that it will produce “acceptably sharp” images according to the 1/1250th rule. However, if you don’t actually need all that depth of field it is possible to get sharper images. If the subject you’re shooting with the 85mm lens is 100 meters away and there is no foreground that needs to be sharp, just focus on the subject!
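
The formula lurking in the bottom row of those charts is simple enough to check by hand; a minimal sketch, using the usual 0.03mm circle of confusion for 35mm cameras:

    def hyperfocal_m(focal_length_mm, f_number, coc_mm=0.03):
        """Hyperfocal distance in metres: H = f^2 / (N * c) + f."""
        f = focal_length_mm / 1000.0
        return f * f / (f_number * coc_mm / 1000.0) + f

    H = hyperfocal_m(85, 8)
    print(f"H = {H:.0f}m, so everything from {H / 2:.0f}m to infinity is sharp")   # ~30m, ~15m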

The aesthetics of depth of field

The effect of focal length and aperture on depth of field has been mentioned above but a picture, as they say, is worth a thousand words.

The effect of focal length

Assuming that you change position to keep the subject filling the frame, focal length does not affect depth of field. However, the perspective causes the image to look different.

Flower 1
30mm @ f/2, taken from 40cm
Flower 2
85mm @ f/2, taken from 110cm

Look at these two photos large (click them to view the large size). At first glance the 85mm one looks like it has a thinner depth of field, but in fact this is not the case: in both photos the sharp area is about two squares of tablecloth thick, just enough to get the petals and the leaf sharp. In the second shot the perspective compresses the scene, causing the same depth of field to take up less space on the image.

Likewise the flower in the background is just as blurred in the second shot, but the circles of confusion are larger because the flower is larger. If you resize the flowers to compensate for perspective, you can see that the blur is identical.

See? Just the same.

The effect of aperture

DoF flower photo
30mm focal length, f/1.4 aperture
DoF flower photo
30mm focal length, f/4 aperture
DoF flower photo
30mm focal length, f/16 aperture

’nuff said.

The natural laws of depth of field…

There are some laws governing the relationship between these factors, which I shall call the Natural Laws of Depth of Field, highlighted in bold to convey an appropriate sense of gravitas. If you are one of those people who imagines the author speaking as they read an article, you may cast my voice in an “Ian McKellen as Gandalf” tone for this section.

Each of these rules is explained in terms of the grey and red lines in figure 5, so here it is again for reference:

DoF diagram
1. Larger apertures cause narrower depth of field
Increasing the size of the aperture increases the angle of the grey lines.
2. Closer subjects cause narrower depth of field
Bringing the subject closer to the lens increases the angle of the grey lines. Remember: Ian McKellen voice.
3. Cameras with larger sensors give more blur with a given focal length and f-number
This is a corollary of the previous rule: because full frame DSLRs have a wider angle of view than cropped DSLRs, you have to get closer to the subject to take the same picture with a particular lens.
The other way of looking at it is that with a full frame DSLR you have to use a longer focal length lens to get the same angle of view as a cropped DSLR, and that longer lens will have a physically larger aperture. Either way the effect is the same.
4. As long as your subject fills the frame, depth of field depends only on f-number, not focal length.
Increasing the focal length and then moving back to keep the subject filling the frame keeps depth of field and blur constant, but increases the size of the circles of confusion because of perspective. Increasing the focal length increases the physical size of the aperture, but at the same time you move backwards, so the angle of the grey lines in figure 5 does not change.
This is why telephoto lenses are good for background control in portraits: they make the background appear more blurred without sacrificing depth of field on the subject. (Also, because the telephoto lens includes less of the background, it is easier to select a less complicated bit of background for the composition)
5. Zooming in on a subject (increasing focal length while maintaining the same position) massively narrows the depth of field
Two effects combine to produce this: the longer focal length has a physically wider aperture (hence the angle of the grey lines in figure 5 becomes steeper) and the longer focal length magnifies the image (so the angle of the red lines becomes narrower)


Philosophical aside: A thought-experiment proof of the 4th law

The 4th law is important since it means that you only have to remember one number per subject. If you shoot a lot of head/torso portraits, you might experiment and discover that f/2.8 gives you the depth of field you need for your personal style. This number applies regardless of your position and focal length, as long as you’re still shooting a head/torso composition.

The law depends on two relationships. Firstly, as you move away from your subject you need to use a longer focal length, which in turn has a physically larger aperture, so the grey lines in figure 5 do not change angle. Secondly, as the focal length increases, the red lines become narrower, so that the sharpness cone is no wider at the focal plane even though the focal plane is farther away.

At first I thought that this was a bit too convenient: a “just so” story that was probably just an approximation. I wished that I was better at maths so I could combine the equations for angle of view and depth of field and see if it was true. Then I realised that this is not a coincidence but is logically necessary:

First, the grey lines. An f-number represents the size of aperture that will capture a constant amount of light from a subject so that the same shutter speed gives you the same exposure regardless of focal length. This means that with a constant f-number, a constant proportion of the total light radiated from a point on the subject is captured. Say in one photo at 30mm f/2.8 the aperture captures any light emitted within a 5 degree cone from each point on the subject. When you shift to a 90mm lens and stand further away, in order for it to still capture the same amount of light, it has to still cover that cone of 5 degrees, otherwise it would capture a different amount of light. In other words, the angle of the grey lines can’t change without changing the f-number.

Secondly, the red lines. Because you are adjusting your position to keep the same subject in the frame, the composition is going to have the same width and height at the focal plane. Since the sharpness cone is defined as 1/1250th of the image width, the red lines have to be the same distance apart at the focal plane, because they can’t change without the composition changing.

Since neither the angle of the grey lines nor the distance between the red lines can change, the depth of field must be the same. The 4th law is proven, and we didn’t use a single equation. Awesome.
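
If you prefer numbers to logic, here is a small sketch that puts the thought experiment to the test with the standard near/far formulas: three lenses at f/2.8, each moved so that a 60cm-wide subject fills a 36mm-wide frame (the subject size and constants are my own illustrative choices).

    def total_dof_m(f_mm, N, c_mm, s_m):
        """Total depth of field from the standard near/far limit formulas."""
        f, c = f_mm / 1000.0, c_mm / 1000.0
        H = f * f / (N * c) + f
        near = s_m * (H - f) / (H + s_m - 2 * f)
        far = s_m * (H - f) / (H - s_m)
        return far - near

    sensor_mm, subject_mm, N = 36.0, 600.0, 2.8
    coc_mm = sensor_mm / 1250                   # the 1/1250 sharpness criterion
    m = sensor_mm / subject_mm                  # magnification needed to fill the frame

    for f_mm in (35, 50, 100):
        s_m = (f_mm / 1000.0) * (1 + 1 / m)     # distance at which the subject fills the frame
        dof_mm = total_dof_m(f_mm, N, coc_mm, s_m) * 1000
        print(f"{f_mm}mm lens from {s_m:.2f}m: depth of field = {dof_mm:.1f}mm")

All three come out within a fraction of a millimetre of each other, exactly as the proof predicts.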

… and how to break them

So now you know the rules of depth of field.

[Morpheus voice:] What you must understand is that these rules are no different than the rules of any computer system. Some of them can be bent, others can be broken.

Focus stacking

A technique called focus stacking can be used on macro photographs to increase the depth of field beyond the limits of optics. You take a series of exposures, changing the focal plane by a tiny amount each time so that every point on the subject is sharp in at least one exposure.

You then run the series of images through some software that generates a composite image made from only the sharp parts of each image.

Focus stacking example
The first photo, with only the antennae sharp
Focus stacking example
The last photo in the stack
Focus stacking example
The resulting image, obtained from stacking the above two with 35 other exposures in Helicon Focus. There is an excellent page of samples here.

images copyright Charles Krebs, 2005, taken from heliconsoft.com.

Helicon Focus is commercial software that makes it very easy to create your own focus stacks. The best free program to do the same is probably CombineZM, and there are some more listed on Wikipedia.

Tilt-shift miniature fakes

If your aperture is big relative to your subject then you will get a narrow depth of field, and if not then you won’t. Since diffraction prevents you from having microscopically sized apertures, macro photographs have a narrow depth of field relative to the subject size. Likewise, since they don’t make lenses with apertures a meter wide, you won’t get a narrow depth of field when you shoot something the size of… well, a field.

This rule is so engrained in the minds of people who look at photographs that you can actually make a scene look like a tiny architectural model by adding fake depth of field. Your mind tells you that it must be small, because large subjects never have a narrow depth of field. This effect is called a ’tilt-shift miniature’, because it was first done using tilt shift lenses that can tilt their plane of focus to achieve this effect without digital processing.

First, take an image shot from above. The higher the better, since you normally look down on a model. Then use a fake blur – for greatest realism a lens blur like the one in Photoshop – and selectively blur the regions behind and in front of an imaginary plane of focus. Here are some I found earlier; click on them to see the large versions:

Cars and racetracks are great subjects for this effect, since it's easy to believe that you're looking at a hobbyist's model. I find the bushes particularly amusing.
Without the fake depth of field added to the original photo, the huge hand added in Photoshop would have looked (even more) out of place! Note how the artist has also changed the white balance of the original. The warmer tone makes the scene look as if it was shot under indoor lighting; very clever.

Images by Dingo2494 and Photo Munki Deluxe on flickr. Hooray for Creative Commons and for silly usernames.

The end!

That was my second photography article, I hope it was a better sequel to its predecessor than the Matrix Reloaded was. Feel free to post comments or ask questions below.

Bernie’s Better Beginner’s Guide to Photography for Computer Geeks Who Want to be Digital Artists

Illustrated with photos wot I ‘ave taken

This is a beginner’s guide for computer geeks who want to be digital artists. Specifically:

Venn diagram

Roll over a section of the diagram

You are a beginner: you have little experience with photography beyond point-and-shoot cameras and mobile phones. If you are not a beginner then why are you reading this? Shoo! Go outside and play with your camera.

You want to be a digital artist: you intend to make a small number of photos or illustrations that are as close to perfection as you can get them.

If you want to take large numbers of shots, to document weddings or sports events for example, then you won't want to edit them all on a computer afterwards, so you have to get everything perfect when you take the shot, just like in ye olden days of film photography. This guide may well help you, but ignore the section on digital manipulation. Then practice. A lot.

On the other hand if you don’t care about making each shot perfect then save yourself a lot of money and buy a point-and-shoot camera.

You have a computer and know how to use it. If you are reading this, I’m guessing that you do. If you are not reading this then something very strange is happening right now.

You are a geek: The fact that you’re reading this article already gave you a 90% chance of being a geek, and taking the time to roll over all these little bits guarantees it. If you think Venn diagrams are interesting, you’re a geek, end of story. I like to define a geek as someone who cares enough about something that they want to get good at it for their own sake, not to impress others or earn more (though being a geek helps you with those two goals too).

Moot point – all digital artists are computer geeks

You are a computer geek: you enjoy using computers and can learn a piece of software by playing with it for a day or two. If you are not a computer geek then it may be for the best to use a digital camera as if it was a film camera: forget digital retouching and just capture the best image you can when you shoot. This article will still be useful, but ignore the section on digital manipulation.

If all of the above apply, come on in!

There is a lot of material in this article, so I suggest you have your camera with you as you read it and try out the techniques as you go along. If you don’t have a camera yet then you can still enjoy this article; however if you do intend to buy a camera sooner or later, I suggest doing so before you read. Check out the buying advice at the end of the page.

Introduction

An extended apology: Most authors of photography guides are experienced professionals, and speak with the authority of the published photographer. Instead I write this beginner’s guide with the authority of a beginner. I flatter myself that I am better placed to advise the beginner geek on how to learn to use a camera than the professional photographer is: I have just been a beginner myself, so what was confusing and what was simple is fresh in my mind.

Speaking of my being a beginner, this is the first long article I’ve written. Do e-mail me and tell me what you think of it. bernie at berniecode dot com.

</apology>

This is the guide I wish someone had written for me when I started 3 months ago. It’s much shorter than photography books that cover the same topics because it’s a computer geek’s guide. I skip right over the basics of using a camera because you can guess your way through the basics or even read your camera manual (wimp!). I skip any advice about composition or artistic technique because there are better guides that cover those (though I might give it a shot next month). I use terms without defining them because I assume you can use Wikipedia if you need more detail.

For further reading covering field technique and composition I unreservedly recommend John Shaw’s Nature Photography Field Guide. Also, the National Geographic field guides are said (by my sister) to be good.

If you want to be a digital artist then you’ll need to be so comfortable using your camera that the exposure controls are second nature to you, so you can focus yourself on composing the scene that you want. This guide tries to get you to that point as quickly as possible. Some otherwise excellent photography guides take ages walking through the basics of exposure before gradually eking out the advanced details. This will never do: you’re a geek and can be dropped in at the deep end.

This guide doesn’t even try and address how to create a composition that qualifies as art, but this one does, and the book Photography and the Art of Seeing goes further.

Onwards…

Digital SLR systems

For this article I’ll be assuming that you have an SLR camera*. The distinguishing feature of an SLR is that when you look through the viewfinder you see through the lens. This means that you can view the picture pretty much exactly as it will look when you take it. You can also change the lens mounted on the camera body to alter the look of the photo. The technical details are quite interesting, but you don’t need to know them to use the camera.

* The Point & Shoot alternative


Almost all professional photographers use SLR cameras, rather than point and shoot cameras. SLRs offer very high image quality, a choice of lenses that affect the look of the photo, and most importantly easy access to the exposure controls that are the subject of this article.

However, top-of-the-range point and shoot cameras are very good these days. Some of the photos in this article could have been shot with a P&S camera, others could not. If you’re willing to work within the limitations of a P&S camera, you can take beautiful photographs.

The single most limiting aspect of P&S cameras is, in my opinion, the depth of field. It is very hard to get good background blur (“bokeh”) with a point and shoot. If you search on flickr for bokeh, you will see many beautiful photos with backgrounds blurred into such simple washes of colour that they look like they might have been taken against a studio backdrop. They were all taken with SLRs.

If you decide to get a P&S camera, seriously consider the Canon G9 for its RAW image support (the importance of this is discussed later). As an absolute minimum, make sure that the camera gives you access to the three important exposure modes: aperture priority, shutter priority and full manual. Also, some P&S cameras have a “hot shoe” to which you attach an external flash. This is indispensable if you want to take photos of moving subjects at night, since on-camera flashes produce shockingly poor results.

When you take a picture with a digital SLR you allow an amount of light through a lens, focusing it onto a bit of silicon called an image sensor that contains light-sensitive cells that record an image.

The amount of light that you allow in is called the exposure. Getting the correct exposure is most of the effort of learning photography, and hence the main thrust of this article. Playing with creative effects like long exposure is much easier once you have exposure down to second nature.

Focal length

Focal length is the most obvious way in which a lens affects a photo: it controls the angle of view, and hence how much of the scene is included in your photo. The reason that it is measured as a focal length rather than in degrees is that the angle of view yielded by a certain focal length depends on the size of the camera's image sensor. This relationship is easy to see in a diagram of a pinhole camera, where the focal length is the same as the distance between the pinhole and the film:

Focal length explanation

With a drum roll to celebrate the first time in my life that trigonometry has had any practical purpose, the angle of view is given by the formula arctan((<sensor size>/2) / <focal length>) x 2, for reasons that should become obvious if you split the diagram above into right-angled triangles.

If this doesn’t sound like an intuitive way of working out the angle of view, try this: you can visualise how focal length will affect the angle of view by imagining looking through a piece of card with a rectangular hole in it the same size as your camera sensor (36mm x 24mm for a full-frame camera, 22.5mm x 15mm for an APS-C camera). If you hold the card 200mm from your eye, that’s the view through a 200mm lens. Hold the card twice as far from your face and you’ll see half as much through it.
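
For the trigonometrically inclined, here is that formula as a couple of lines of Python, with the card analogy thrown in as a check:

    import math

    def angle_of_view_deg(sensor_width_mm, focal_length_mm):
        """Horizontal angle of view: arctan((sensor width / 2) / focal length) x 2."""
        return math.degrees(2 * math.atan((sensor_width_mm / 2) / focal_length_mm))

    print(angle_of_view_deg(36, 50))              # ~39.6 degrees: a 50mm lens on full frame
    # The card analogy: a 36mm-wide hole held 200mm from the eye covers the same
    # angle as a 200mm lens does on a full-frame camera.
    print(angle_of_view_deg(36, 200))             # ~10.3 degrees
    print(angle_of_view_deg(22.5, 50))            # ~25.4 degrees: the same 50mm lens on APS-C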

So doubling the focal length is just like cropping the photo to half of its width and height and blowing up the result to full size, except without the loss of resolution that would occur if you did that in Photoshop. Everything else about the picture remains exactly the same.

landscape photo at 18mm
A landscape at 18mm, the white box marking 1/5 of the width and height
landscape photo at 90mm
The same landscape at 90mm: the focal length is 5 times longer so the area marked by the white box fills the whole scene

Geeky aside – camera body crop factors


The landscape shots above were taken with a Canon 30D digital SLR with an APS-C image sensor about 22mm wide. Traditionally, lenses are designed for film cameras with a 35mm film size, so the lens will project a 35mm wide image onto the back of the digital camera. Because of the smaller sensor size, the camera body effectively crops the image down to 0.625 of the width and height. Recall that cropping an image to 1/2 its size is exactly the same as multiplying the focal length by 2 (1/0.5 = 2), so cropping an image to 0.625 of its size is exactly the same as multiplying the focal length by 1.6 (1/0.625 = 1.6). This is why the 30D is said to be a ‘1.6x crop factor camera body’.

It is often said that these cameras multiply the focal length of a lens by 1.6. Now that you understand focal length you know that they do nothing of the sort – the focal length of the lens stays the same, but the smaller sensor size yields a narrower angle of view, equivalent to a lens 1.6 times longer on a 35mm camera.

A smaller sensor may seem bad, and it's true that they offer slightly lower quality and resolution than full size sensor cameras like Canon's 5D, but they are popular with photographers who use telephoto lenses a lot since they boost the effective length of their lenses for free, saving them money and weight compared to buying a lens 1.6 times longer. With the growing popularity of APS-C camera bodies, lens manufacturers have begun to make lenses that don't cast a full 35mm image on the camera, such as Canon's EF-S series or Sigma's DC series. This means that they can be made smaller, lighter and cheaper for the same image quality, the only downside being that they can't be used on cameras with full size image sensors like Canon's 5D. You therefore save money twice: once because you can buy shorter lenses, and again because you can use cheap EF-S and DC lenses.

On the other hand however, high crop factor bodies limit your choice of wide angle lenses. For APS-C camera users there is simply no lens as wide as the Sigma 12-24 or as fast as the Canon 24mm f/1.4. If you want the highest quality wide-angle photography, especially in low light, you have little choice but to buy a full-frame camera.

Canon and Nikon offer 1.6x and 1.5x bodies respectively, while Olympus and other members of the four thirds group offer 2x bodies.

Focal length and perspective: OK, backpedaling time. If two photos are taken from the same position at different focal lengths, then the longer focal length photo will look like a crop from the middle of the shorter focal length photo. However, often a photographer will change position as she changes focal length. When you’re shooting a specific subject you will use a wide angle lens and get right up close to the subject, or a telephoto lens and stand back; either way, the subject fills the whole frame, but the perspective will look very different:

Focal length and perspective diagram

Using a wide angle lens means that the camera is much closer to the subject than the subject is to the background. This exaggerates perspective and makes the background seem small and distant. The reverse is true with the telephoto shot, which includes less of the background while making it appear closer to the subject. This thistle was shot with 3 different focal lengths:

10mm 20mm 40mm

Stops and exposure settings: the basics

When you take a picture you allow an amount of light through the lens, focusing it onto the image sensor. The amount of light you let in is measured in stops. Stops are a relative measure of lightness: you can't say “there are three stops of light coming from that surface”, but you can say that one surface is three stops brighter than another. Adding one stop means doubling the amount of light that the sensor records. In fact, ‘a stop’ is really just photographic slang for a doubling. On old cameras, stops were literally dents in a dial that made it easy to stop when you reached the desired setting. We measure light like this because the human eye perceives each doubling to be an equal increase in light.

Using a relative measure makes sense because there is no such thing as a standard amount of light that equals grey. How bright grey is depends on how strongly lit the scene is; a dark granite rock in bright sunshine actually has more light reflecting off it than snow at twilight. It is not the absolute brightness of objects in your scene that matters, but their brightness relative to each other, or how many stops apart they are. When photographing these objects you adjust the exposure settings to make sure that the twilight snow still looks white and the sunlit rock looks dark.

The amount of light you record is controlled by the camera's exposure settings: aperture, shutter speed and sensitivity. Opening the aperture by a stop, decreasing the shutter speed by a stop or increasing the sensitivity by a stop all have the effect of brightening the recorded image by a stop. However, the shutter speed and aperture have other aesthetic effects on how your picture looks that are very hard to remove or replicate in Photoshop, so you must make a decision when you shoot.

Shutter speed

The shutter speed is considered an exposure setting because opening the shutter for twice as long lets in twice as much light which increases the exposure of the whole scene by a stop. However you can also use it aesthetically: faster shutter speeds freeze a moving subject, slower speeds record a motion blur. Neither is ‘correct’: a photo of a stream with a 1/800 second shutter would record each sharp sparkling droplet of water frozen in mid-air, whereas a 4 second exposure would render the stream as a softly flowing ethereal smoke. Either can look beautiful.

Fast shutter speed
A shutter speed of 1/800 second freezes this baseball in mid-air
Slow shutter speed
A 10 second exposure produces streaked lines of headlights and a ghost of a car that was parked for half of the exposure.

Aperture, or 1, 1.4, 2, 2.8, 4, 5.6, 8, erm, what the f***?

Lenses have an aperture to control the amount of light entering them. This is an iris that can open and close to allow more or less light in. Aperture is measured in ‘f numbers’ – written f/x where x is the ratio of the focal length to the aperture width. Low f-numbers mean wide apertures letting in more light. Aperture has a reputation for being complicated so some guides suggest that you just memorize the f-number sequence and ignore the internal details. Being a geek, you’ll find it much simpler when you understand why it is measured like this.

The first supposedly confusing thing about aperture is that it is not measured as a width but as a ratio of focal length to width. This makes more sense if you consider that the scene you're photographing is a light source. Recall that doubling the focal length will halve the width and height of the bit of the scene that you project onto the sensor. Therefore at double the focal length, only 1/4 of the scene area is providing light, so the aperture area must be 4 times as large to compensate (i.e. the aperture width must double). A constant f-number means a constant amount of light entering the aperture regardless of the focal length.

The next supposedly confusing thing about aperture is that the f-number sequence goes in stop increments: 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32. There is a logic to this. A 50mm lens with a 50mm aperture will have an f-number of f/1 (the ratio of the focal length to the aperture diameter: 50/50 = 1). If you want to halve the amount of light reaching the sensor you must halve the area of the aperture. To halve the area of a circle you divide the diameter by 1.4 (give or take), and since diameter is the denominator in the f-number equation, this means that the f-number is increased by a factor of 1.4. Each f-number is 1.4 times the previous one and lets in half as much light. When someone says “close”, “reduce” or “stop down” the aperture, they mean increase the f-number.
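
Here's that logic as a short sketch: the marked f-numbers are just rounded powers of √2, and each step halves the aperture area and therefore the light.

    import math

    # Each full stop multiplies the f-number by sqrt(2); the marked values
    # (5.6, 11, 22) are conventional roundings of 5.7, 11.3, 22.6.
    f_stops = [round(math.sqrt(2) ** i, 1) for i in range(11)]
    print(f_stops)                                  # 1.0, 1.4, 2.0, 2.8, 4.0, 5.7, ...

    focal_length_mm = 50.0
    for N in (2.8, 4.0):
        diameter_mm = focal_length_mm / N           # aperture width = focal length / f-number
        area_mm2 = math.pi * (diameter_mm / 2) ** 2
        print(f"f/{N}: {diameter_mm:.1f}mm wide, area {area_mm2:.0f}mm^2")
    # f/2.8 -> f/4 roughly halves the area (exactly half with the unrounded values): one stop.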

Like shutter speed, aperture affects the look of the photo, specifically the depth of field. At narrow apertures the whole of a scene will be in focus, whereas at wide apertures only the bit of the scene that you focus on will be in focus, as is clear in the case of these cheap fake flowers:

Narrow aperture
At f/16 the background is distracting
Wide aperture
At f/1.4 the background is reduced to a blur, but not all of the subject is in focus either.

The nature of the out-of-focus blur that an aperture produces is called bokeh, a term coined by a magazine editor sick of hearing people mispronounce the Japanese word ‘boke’ (meaning blur) to rhyme with smoke. Good on him, but I’m still not sure how I’m supposed to pronounce it.

Long focal lengths and bokeh: Using a long focal length lens appears to make the background more blurred. In fact the background is just as blurred, but is larger. This is easier to see in a photo that only contains the out of focus background:

Wide angle bokeh
A blurred leafy background at 30mm, f/2.8
Telephoto bokeh
The same shot at 85mm, f/2.8.

In both shots each leaf is just as blurred relative to its own size, but in the wide angle there are more leaves and each one is smaller. In either shot you would reposition the camera so that the subject filled the whole frame. The long focal length therefore increases the size of the background relative to the subject, increasing the apparent blur. This is useful in portraits, when background detail only serves to distract from your subject.

Depth of field in greater… ahem… depth (sorry)

Depth of field is a huge topic, so I’ve written another article exclusively about it.

Sensitivity

Historical aside


Film photographers had to choose a sensitivity when they loaded a new roll of film into their camera, and some professional photographers carried two camera bodies loaded with different film to give them a choice of sensitivity when shooting a scene. As a digital photographer you can change the sensitivity for each shot. Which is nice.

While we’re on the subject of choosing film, many old school photographers loved a film called Fuji Velvia which boosted the saturation in their photos. Digital SLRs usually come with a saturation setting which allows you to emulate this. I often shoot with +50% saturation on my Canon 30D, which gives a similar intense colour to scenes.

The sensitivity of the camera’s plate is measured in ISO sensitivity units which were originally used to measure the sensitivity of chemical film. Most digital SLRs offer a range from 100 to 1600, with 100 being the least sensitive. Some offer lower or higher ISOs; as of September 2007 the champion is the £3,400 Nikon D3 with a maximum ISO setting of 25,600.

Sensitivity is a very useful exposure setting, because it (almost) doesn’t affect the look of the final image, so can be used to help you achieve a combination of aperture and shutter speed that gives you the look you need. Take this shot for example:

Sensitivity example
Large depth of field (click for a larger version)

The extreme depth of field required a narrow aperture of f/22, ensuring that the grass and mountains were sharp, and my camera’s meter decided that a shutter speed of 1/15 second was required to correctly expose the image. A breeze was causing the grass to sway so much that a shutter speed of 1/60 was required to freeze it. 1/60 is 4 times faster than 1/15, so the scene would be underexposed by 2 stops. I increased the sensitivity by 2 stops from 100 to 400 and the scene was correctly exposed.
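
The arithmetic behind that decision is nothing more than counting doublings; here it is spelled out in Python, using the numbers from the shot above:

    import math

    metered_shutter = 1/15   # what the meter asked for at ISO 100 and f/22
    needed_shutter = 1/60    # what was needed to freeze the swaying grass

    stops_short = math.log2(metered_shutter / needed_shutter)
    compensating_iso = 100 * 2 ** stops_short
    print(stops_short, int(compensating_iso))   # 2.0 stops short, so ISO 400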

There is a caveat: noise. At very high sensitivities the picture becomes noisy. This is because at higher ISOs you are making an image from a smaller amount of light, so the signal to noise ratio drops. As a last resort you can try to remove this noise in Photoshop, but this can also remove fine detail so it is better to get a clean photo in the first place.

The following set of magnified images shows individual pixels from a photo of a lamp fitting at various ISOs. These results will hold true for most digital SLRs. However, top of the line professional models will have lower noise at high ISOs.

Photo at ISO 100
At ISO 100, no noise is visible
Photo at ISO 400
At ISO 400 the picture is still excellent
Photo at ISO 800
At ISO 800, noise becomes visible
Photo at ISO 1600
At ISO 1600 the image is very noisy

However, noise is less obvious in print than it is on screen, so you may well be able to get away with high sensitivities.

As a rule of thumb you should shoot in the lowest ISO that gives you the shutter speed and depth of field that you need. If you need more depth of field but don’t want to reduce the shutter speed, increase the ISO and reduce the aperture. If you need a faster shutter speed and don’t want to lose depth of field by opening up the aperture, increase the ISO and the shutter speed. If you’re shooting a still landscape on a tripod at ISO 800 and 1/100 second shutter speed, you’re just wasting image quality: reduce the ISO to 100 and the shutter speed to 1/12 second. Some SLRs and most Point and Shoot cameras have an Auto ISO setting, which selects the lowest ISO that will give you a reasonable shutter speed. What qualifies as “reasonable” is an exercise left to the manufacturer, so you may still need to set the ISO manually if your camera’s choice isn’t appropriate.
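
That rule of thumb is easy to express as code. This is only a sketch of the reasoning, not what any camera’s Auto ISO actually implements:

    def lowest_iso_for_shutter(metered_shutter_at_100, wanted_shutter,
                               available_isos=(100, 200, 400, 800, 1600)):
        """Pick the lowest ISO that lets you shoot at wanted_shutter or faster,
        given the shutter time the meter asks for at ISO 100 (same aperture)."""
        for iso in available_isos:
            # Each doubling of ISO lets you halve the exposure time.
            achievable_shutter = metered_shutter_at_100 * 100 / iso
            if achievable_shutter <= wanted_shutter:
                return iso
        return available_isos[-1]   # even the top ISO isn't enough; accept some blur

    print(lowest_iso_for_shutter(1/15, 1/60))    # 400, as in the grass example above
    print(lowest_iso_for_shutter(1/100, 1/100))  # 100 - no need to waste image quality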

I find that far more of my shots are ruined by motion blur caused by slow shutter speeds than by noise, so don’t hesitate to crank up the sensitivity if you need to. In addition, it is often possible to remove much of the noise in processing. The following crop is from a picture that had to be taken at my camera’s highest sensitivity, then processed to remove noise:

Noise reduction example
The top half of this image is processed with the Photoshop plugin Noise Ninja

Metering

Digital SLRs have built-in light meters that calculate the required exposure settings to expose the object you’re pointing the camera at as a medium tone. However, the camera doesn’t know what you’re pointing at, and will happily expose a white subject as grey unless you correct the exposure settings. You use the exposure dial to tell the camera to render the object that you are pointing at as a lighter or darker tone.

There are around 5 stops between apparent black and white in a typical photo, so black is 2.5 stops below mid-toned and white is 2.5 stops above mid-toned (take this as read for now, I cover it in more detail in the next section). Strangely, my Canon 30D’s exposure dial only covers 2 stops, so I have to use manual mode if I need absolute whites or blacks.
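
As a toy model (a straight line through the stops, which is cruder than any real camera’s tone curve), the relationship between metering, the exposure dial and the final tone looks something like this:

    def tone_from_stops(stops_from_mid_grey, visible_stops=5):
        """Very rough tone on a 0 (black) to 1 (white) scale, assuming ~5 visible
        stops spread evenly either side of mid-grey."""
        tone = 0.5 + stops_from_mid_grey / visible_stops
        return max(0.0, min(1.0, tone))   # anything beyond the range is clipped

    print(tone_from_stops(0))      # 0.5 - the metered object comes out mid-toned
    print(tone_from_stops(2.5))    # 1.0 - white
    print(tone_from_stops(2.0))    # 0.9 - the most my +2 stop dial can ask for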

Metering dial image
The dial at the default setting: the metered object will be mid-toned
Metering dial image
The metered object will be near-white
Metering dial image
The metered object will be near-black

You can set your camera to spot metering which meters a small area in the centre of the scene, centre-weighted metering which meters the whole scene but pays more attention to the middle, or evaluative metering which meters the whole scene. Especially for evaluative metering, check the histogram (see the next section) right after shooting to make sure that the exposure came out correctly.

The metering lock button lets you meter a specific object, lock the exposure settings for that meter reading, and then point the camera somewhere else to take the picture. This is how you meter an object that is not right in the middle of your composition.

Waterfall image
This waterfall looked white, so I spot-metered it and dialed in 2 stops of overexposure to make sure that it looked like it appeared in real life
Cloud image
You can also change the suggested exposure values for creative effect. This moody nebulous image was actually a bright cloudy sky. I metered the cloud at the bottom, used the metering lock button to record that reading, then dialed in 2 stops of underexposure to render it near-black

Digital SLRs have four useful exposure modes that work with metering. Program mode chooses an aperture and shutter speed for you, leaving you free to think about composition. Aperture priority mode lets you choose an aperture, and the camera will set the shutter speed to correctly expose the scene; this is the most useful mode because it makes it easy to get the best depth of field possible (set to minimum aperture) or the fastest available shutter speed for the current lighting (set to maximum aperture). Shutter priority mode lets you pick a shutter speed and the camera will set the aperture. In all of these automatic modes, you point the camera at an object and then use the exposure dial to tell the camera how light or dark that object should be.

Metering dial image
The exposure dial indicating that with the current settings, the metered object will be 2/3 of a stop above mid-toned

In manual mode the exposure dial works the other way round: you choose an aperture and shutter speed, and the metering system will set the exposure dial to tell you how light or dark the object you’re pointing at is.

When I’m taking time to work a subject, carefully setting up shots with specific effects in mind, I like to use manual mode since it forces me to think about the exposure settings. When I’m walking around looking for interesting moments to take snap shots of, I stick to the automatic modes.

Histograms

Digital SLRs come with a histogram display so that you can tell how an image is exposed. Set your camera to show you an RGB histogram of each shot after you take it so you can tell if it is correctly exposed and retake the shot if necessary. Later in this guide I show you how to correct a poor exposure on a computer, but you’ll get better results and a smug feeling of competency if you get it right in the field.

Incorrectly exposed images produce histograms with large spikes at either end; correctly exposed images look like smooth bell curves. There is an example of each in the next section.

Looking at the histogram after each shot is the fastest way to get a feel for correct exposure.

Stops and exposure: advanced stuff

Recommended article:

Notes on the Resolution and Other Details of the Human Eye.

A fascinatingly geeky comparison of the dynamic range and other optical properties of the human eye with those of a camera.

Every device for capturing light has a dynamic range – the number of stops between the darkest black and the lightest white that can be captured. Shades outside this range will be clipped, appearing featureless black or white. This is why, when somebody shines a torch at you at night, you can’t see their face – the human eye can perceive 15 stops of dynamic range, and the torch bulb is more than 15 stops lighter than their face.

On a film camera there are 5 stops between the darkest black and the lightest white. This is a much smaller dynamic range than the human eye can detect. This means that if you have a scene with say a bright cloudy sky and a dark shaded valley, you can see both in detail at the same time but a camera can not. If the shadows in the valley are more than 5 stops darker than the white of the clouds, then either the clouds will be a wash of overexposed white or the shadows will be a mass of underexposed black.

Digital SLR camera sensors actually capture much more information than just the 5 stops that you see on your screen. My Canon 30D captures 9 stops in total: 2 stops on each side of the 5 stops you can see. It uses this information internally to adjust white balance, but in order to reproduce the rich, high-contrast look of traditional film the 9 stops are clipped down to 5 to produce a JPEG file that looks like a traditional film print.
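
A toy model of that clipping, using the 2-stops-each-side split described above, looks like this in Python:

    def clip_to_jpeg(stops_above_sensor_black, sensor_range=9, visible_range=5):
        """Toy model: the sensor records sensor_range stops but the JPEG keeps only
        the middle visible_range; everything else is crushed to black or white."""
        margin = (sensor_range - visible_range) / 2    # 2 stops trimmed from each end
        visible = stops_above_sensor_black - margin
        if visible <= 0:
            return "pure black"
        if visible >= visible_range:
            return "pure white"
        return f"{visible:.1f} stops above black in the JPEG"

    for stops in (1, 3.5, 4.5, 6, 8):
        print(stops, "->", clip_to_jpeg(stops))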

Traditional film photographers got around the 5-stop limit by using graduated neutral density filters – attachments for the front of a lens that shaded the sky, decreasing its brightness so that the sky and shadows could both be properly exposed. Don’t bother: the digital artist has two tools not available to the film photographer that are far more flexible. By using RAW image adjustment and combining multiple shots in Photoshop, you can create your perfect exposure back in the office, leaving you free in the field to focus on choices that can’t be changed later like motion blur and depth of field.


Software

Your camera may come with image editing software, but really you need something more powerful. Personally I can’t live without Photoshop; I hear good things about other programs, but after 10 years of learning Photoshop I’m not interested.

Photoshop has had RAW import options since Photoshop CS released in 2003, but Photoshop CS3 has by far the best RAW support to date. If you balk at the cost, save 50% by purchasing an old copy of Photoshop on Ebay, transferring the license to you and upgrading. It saves you so much money that you almost feel it must be dodgy, but it’s perfectly legal. Make sure that it is a legally owned copy and that the seller is willing to transfer the license to you.

If you still balk at the cost, try downloading The GIMP and then the RAW conversion plugin UFRaw. Both are free.

Another useful bit of software is Genuine Fractals. Photos from an 8 megapixel camera will print at A4 or even A3 size without modification. However, if you need to crop an image to centre on a small area, you can find yourself with a much smaller image that will become pixelated if you print it large. Genuine Fractals has a scaling algorithm that detects hard edges and preserves them in the scaled-up version. The following crops are from a picture resized with Photoshop’s standard bicubic interpolation, and with Genuine Fractals:

Bicubic interpolation example

Genuine Fractals example

Click for larger versions.

RAW image adjustment

Digital cameras actually capture 9 stops of dynamic range and then clip it down to 5 stops when the image is converted to JPEG. However, if you set your camera to shoot in RAW, all the clipped information will be saved so you can change your mind about how you want it to be clipped later.

Here’s an example of a tree that I shot against a bright sky on a sunny day:

Underexposed tree
The foreground is underexposed but the sky is correct

The camera’s automatic metering set the aperture to f/10 and shutter speed to 1/250 second which recorded the sky correctly as a light blue with bright white clouds. However when I looked at the scene in person the tree was a brilliantly backlit bright green, but here it is a dark silhouette – around 2 stops too dark compared to how my eyes saw it. This histogram of all individual red, green and blue pixel values shows the problem clearly; the spike to the left is caused by all the detail darker than the lowest of the 5 stops being clipped to plain black:

Incorrectly exposed histogram

If I manually increased the exposure of the whole scene by 2 stops, say by decreasing the shutter speed to 1/60, the sky would have lost all detail and become a wash of white. The solution is to use a RAW adjustment program to selectively lighten the underexposed shadows without lightening the correctly exposed highlights. Your camera should come with a program that does this, but if Canon’s program is anything to go by it won’t be nearly as usable as Photoshop’s RAW file import dialogue. Canon’s program is said to produce a higher image quality; personally I can’t tell the difference.

Photoshop gives you a ‘Fill light’ slider that increases the brightness of the shadows selectively:

Underexposed tree
The underexposure is corrected without overexposing the sky

And as you can see from the new histogram, the spike at the left is gone and replaced with a nice smooth bell curve:

Correctly exposed histogram

Of course there is a cost – loss of contrast in the highlights, which had to be compressed to make room for the shadow detail. Compare the second histogram to the first. The three peaks for red, green and blue to the right of the graph correspond to the gradient across the sky. They exist in both histograms, but in the second one they are narrower: the difference between the lightest and darkest bit of sky is smaller than in the first exposure, and hence the gradient across the sky is less dramatic. In this case, the trade-off is easily worthwhile.
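
For the curious, here is the shape of the idea as a few lines of Python. This is nothing like Adobe’s actual Fill light algorithm, just a gamma-style curve that lifts shadows a lot and highlights hardly at all:

    def fill_light(pixel, amount=0.6):
        """Toy shadow lift on a 0-1 pixel value: the darker the pixel, the more it
        is brightened; values near 1 are barely touched."""
        return pixel ** (1 - amount * (1 - pixel))

    for value in (0.05, 0.2, 0.5, 0.9):
        print(value, "->", round(fill_light(value), 2))
    # 0.05 -> 0.28, 0.2 -> 0.43, 0.5 -> 0.62, 0.9 -> 0.91: the shadows open up while
    # the highlights are only slightly compressed.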

Combining multiple shots

RAW image adjustment works well when you have no more than a couple of stops of underexposure or overexposure: go more than 2 stops beyond the 5 stops that end up in the JPEG and you exceed the 9 stop dynamic range of your camera’s sensor, and any detail in the poorly exposed areas is lost for ever.

Outside the window of my Norwegian holiday cabin where my wife is sunbathing it is a bright day; inside where I am hunched over a laptop it is much darker:

Partially overexposed window
In order to get a good exposure of the inside, I needed a shutter speed of 1/3 second at f/4
Partially underexposed window
Exposing the outside correctly required 1/80 second at the same aperture

This 5 stop difference is far more than we can hope to recover with RAW image adjustment. If you shoot both exposures, you can combine them in Photoshop using a layer mask to create an image that would be impossible using a film camera:

Photoshop mask
The mask used to combine the 2 exposures …
Photoshop UI
… in Photoshop …
Combined image
… yields an image that looks more like what my eyes saw at the time.

I created the layer mask by inverting the dark image, blurring it, increasing the contrast and retouching a few areas with the brush tool.

Make sure you shoot with a tripod so that the two exposures overlay accurately (unlike in my hurried attempt, where blurring from hand-holding shows up in the interior shot and rotating / resizing was necessary to realign the images). Then take both photos into Photoshop as layers, add a layer mask, and use the brush tool on the layer mask to literally paint detail into the shadows. It’s surprising how well it works.
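
If you would rather script it than click it, the same blend is only a few lines of Python. This is a sketch that assumes numpy and Pillow are installed, and the file names are made up; the per-pixel maths is exactly what the Photoshop layer mask is doing:

    import numpy as np
    from PIL import Image

    # Two exposures shot from a tripod, plus a greyscale mask: white where the
    # bright exposure should show through, black where the dark one should.
    dark = np.asarray(Image.open("exposed_for_outside.jpg"), dtype=np.float32)
    bright = np.asarray(Image.open("exposed_for_inside.jpg"), dtype=np.float32)
    mask = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float32) / 255.0

    blended = bright * mask[..., None] + dark * (1.0 - mask[..., None])
    Image.fromarray(blended.astype(np.uint8)).save("combined.jpg")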

White balance

Artificial light is much warmer than sunlight, with more red and less blue in it. Your eyes adjust to the current light temperature and after a while you won’t notice it. Cameras, however, do not adjust automatically:

Incorrect white balance
This portrait was taken under sodium street lighting, rendering it unusable without correction
Correct white balance
Adjusting the white balance to the lowest temperature that Photoshop’s RAW import dialogue supports was enough to correct this extreme lighting

Cameras have a setting to correct white balance as you take the shot, but I find it easier to leave the camera alone and correct the white balance on my computer.
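
If you are curious what the correction amounts to, here is a crude “grey world” version in Python (assuming numpy and Pillow, with a made-up file name). Real RAW converters are far cleverer, but the principle of scaling the colour channels by different amounts is the same:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("sodium_portrait.jpg"), dtype=np.float32)

    # Scale each channel so the average colour of the whole frame comes out neutral.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    corrected = np.clip(img * gains, 0, 255).astype(np.uint8)
    Image.fromarray(corrected).save("corrected.jpg")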

For a detailed technical explanation of what’s actually happening, check out this article: Understanding White Balance.

Buying kit


This box used to contain a guide to choosing kit, but I deleted it: there are already plenty of places on the net that will tell you how to spend your money (like the flickr Canon and Nikon groups, and photo.net) and this page is not going to become another one. Instead I want to share a philosophy for buying kit:

Firstly, there are people in the world who are taking better photos than you or I will ever take, with worse gear than you or I will ever own.

Secondly, digital camera technology moves quickly: the cheapest digital SLR camera you can buy today produces better images than the £5000 professional cameras of a few years ago, and the point-and-shoot cameras of 5 years time will probably match the SLRs of today.

Finally, while expensive gear is a pleasure to own, it will not improve the artistic value of your photos. In fact, plenty of artists would agree that having to work within the limitations of a small amount of kit produces better results than owning the ‘dream bag’ of every lens you want.

Here are two ways of spending three grand:

• Spend all £3000 on a 300mm f/2.8 IS lens: an object of beauty and arguably the finest lens Canon have ever made.

• Spend £300 on a 70-300 f/4.5-5.6 IS zoom: not as glamorous, but very sharp, 1/10 the price and only 2 stops slower. Spend the remainder on a month-long holiday to an exotic location taking pictures and learning how to use your equipment well.

Guess which one will yield better photos?

With this in mind, I recommend this path:

The body

First, choose Canon or Nikon. If you have friends who are into photography, choose the same as them so you can swap lenses. If not then choose Canon because nobody ever got fired for picking the market leader.

Buy a body with a high crop factor (see previous sidebar) because the body and the lenses will be cheaper. If you’re on a tight budget this means (as of early 2008) the Canon 450D or Nikon D40 / D40x. If you want to spend a bit more on a larger, sturdier camera with big controls that you can use with gloves on – the image quality is essentially the same – then go for a Canon 40D or Nikon D300.

The lenses

Buying the lenses is harder because there is so much choice, so start with the cheap kit lens that comes with the camera and use it for a month or so until you notice its limitations.

As you grow your kit, only ever buy a new lens when you have a specific kind of photography that you can’t do with your current lenses. Don’t try and anticipate what this might be, or you might end up with expensive lenses that you hardly use.

The most expensive lenses are ones with large apertures for shooting in low light. Before you buy these, buy a good tripod and an off-camera flash unit (or two), and learn how to use them.

My kit

I bought a Canon 30D with the standard kit lens.

I covered the focal length range that I use by buying another two middle of the range, light-weight zooms:

  • Sigma 10-20mm (£220)
  • Canon EF 70-300mm f/4.5-5.6 IS DO (£650)

Then for low-light photography I purchased a couple of wide-aperture prime lenses:

  • Sigma 30mm f/1.4 (£250, and actually my big sister bought it for me, thanks Freddie)
  • Canon 85mm f/1.8 (£225)

The latter is also an excellent portrait lens because of the wide aperture / long focal length combination’s effect on bokeh.

Finally I replaced the kit lens with a Canon EF-S 17-85mm f/4-5.6 IS (£345), because the kit lens produces some artifacts like chromatic aberration that were tedious to remove in Photoshop.

Your mileage may vary: start with the kit lens and buy new ones only when you feel the limitations of your current kit.

Filters

If you have multiple lenses, buy polarising and ND filters for the largest lens and then a set of step-up rings to fit them onto your smaller lenses. You won’t be able to attach lens hoods, but this doesn’t really matter if you’re using a tripod since you can shade the lens with a hand or a hat. If you must shoot with filters and a lens hood then you have to buy filters for each lens size.

Where to buy

I buy my kit from Ebay, which despite its reputation for dodgy sellers is quite safe if you’re careful. Use a specialist camera shop with high feedback and Paypal buyer protection (a guarantee that if the item does not arrive, Paypal will arrange a refund up to £500). If you buy anything that costs more than £500, test the seller with some purchases under £500 first. Choose a seller from your country to avoid being stung by import duties.

Filters

Filters were an important part of the prehistoric photographer’s equipment. Coloured filters could enhance a scene, warming or cooling it to compensate for different kinds of lighting. Graduated neutral density filters decreased contrast within a scene, allowing a bright sky and dark land to be captured in one exposure.

The digital artist doesn’t need most of the filters because the effects can be applied digitally – white balance settings on your camera or in Photoshop affect the scene warmth, and the advanced exposure techniques covered above are much more flexible than graduated neutral density filters.

There are 2 filters that are very useful, however, because they change the image in ways that can’t be reproduced by a computer:

Polarising filters

If you take any photos outdoors, you need one of these.

Light scattered through the upper atmosphere becomes polarised by ice particles, or something like that, I forget the details. This polarisation survives being reflected off shiny surfaces like sweaty foreheads. However, when light is absorbed and re-emitted from a surface as coloured light, it loses its polarisation. Because of this a polarising filter can do two things: remove white haze from the sky rendering it a deep blue, and remove white reflections from surfaces revealing their true colour. Alternatively, if it is the reflections you are trying to photograph, you can rotate the filter 90 degrees to increase their brightness.

Photography on sunny days can sometimes be disappointing because the scene never looks as colourful as it seemed to when you were there. Polarising filters help capture bright scenes as they appear to the human eye.

No polarising filter
Photo taken without a polarising filter
Polarising filter
The same shot with a polarising filter. The sky is darkened, and the reflections from the petals are removed.

Neutral Density filters

Neutral density (ND) filters are dark filters that reduce the brightness of a scene. You may need them if you like to play with long exposures for artistic effect. Even at the narrowest aperture, a 5 minute exposure will overexpose anything but the darkest night scene. Adding a strong ND filter can allow you to use these extreme settings. An ND filter can also allow you to take photos of very bright subjects without hurting your eyes.

Photo taken with ND filter
An ND filter allowed me to get the 30 second exposure I needed to render this babbling brook as a serene glassy flow
Photo taken with an ND filter
Taking a photo directly into the sun would have hurt my eyes without an ND filter

ND filters are just another way of affecting exposure, so it should come as no surprise by now that they’re measured in stops. How strong a filter you need depends on your requirements. I just metered a daylight scene at my camera’s minimum sensitivity of ISO 100 and minimum aperture of f/32 and was told that I needed a shutter speed of 1/50 to expose it properly. That is therefore the longest shutter speed I could achieve without an ND filter. If I wanted to take a 5 second exposure, I would need an 8 stop ND filter (1/50 doubled 8 times = 5). If I wanted to do a 5 minute exposure, I’d need a 14 stop filter.

Some filters are sold as, e.g., “8x” filters, which reduce brightness by a factor of eight. This is equivalent to three halvings of the brightness, so it is actually a 3 stop filter.

ND filters can be stacked together and their stop values add together.
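
The arithmetic is simple enough to put in a function; this is just the sums from the paragraphs above, nothing more:

    import math

    def nd_stops_needed(metered_shutter_seconds, wanted_shutter_seconds):
        """How many stops of ND are needed to stretch the metered shutter speed out
        to the exposure time you want, at the same aperture and ISO."""
        return math.ceil(math.log2(wanted_shutter_seconds / metered_shutter_seconds))

    print(nd_stops_needed(1/50, 5))        # 8 stops for a 5 second exposure
    print(nd_stops_needed(1/50, 5 * 60))   # 14 stops for a 5 minute exposure
    print(math.log2(8))                    # 3.0 - an "8x" filter is a 3 stop filter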

A warning about filters

With digital cameras it is especially important to buy filters with non-reflective coating, because otherwise light reflected from the sensor can bounce back onto it, causing ghosting. If you spend a lot of money on lenses, the best way to ruin their quality is to put a cheap filter in front of them. I use the Hoya PRO1 super-hard multi-coated range (over £50 for a polarising filter) and have no issues with them.

Accessories

Tripod

You can hand-hold a photograph at a shutter speed of around 1/focal length, i.e. with my Sigma 30mm lens on my 1.6x crop factor body, I must have a shutter speed of at least 1/50 second to reliably hand hold it, and even then the occasional shot may have noticeable blurring from camera shake. Buying an image stabilised lens (and they aren’t cheap) can let you hand-hold a photo at 2 or 3 stops slower than usual. For shots that require slower shutter speeds, you’ll require a tripod and a remote shutter release button to avoid shaking the tripod as you press the shutter (though you can use the camera’s self timer for this).
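
The rule of thumb, with the crop factor and a stabilisation allowance folded in, looks like this. The 1/focal-length rule is only a rough guide, and how many stops stabilisation really buys you varies by lens:

    def slowest_handheld_shutter(focal_length_mm, crop_factor=1.6, stabilisation_stops=0):
        """Rule-of-thumb slowest shutter time (in seconds) you can reliably hand-hold."""
        return (1.0 / (focal_length_mm * crop_factor)) * 2 ** stabilisation_stops

    print(1 / slowest_handheld_shutter(30))  # ~48, i.e. roughly 1/50 for my Sigma 30mm
    # A 3-stop image stabilised lens makes a 300mm focal length usable at about 1/60:
    print(1 / slowest_handheld_shutter(300, stabilisation_stops=3))  # ~60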

Monopods are one-legged tripods (unless tripods are three-legged monopods) that offer less stability but greater freedom of movement, which makes them more suitable for action and event photography.

Another benefit of a tripod is that it makes it easier to compose a shot. Especially in low light and with telephoto lenses, framing and focusing a shot is hard. Using a tripod lets you carefully set up the shot so that you don’t accidentally clip off part of your scene or introduce a wonky horizon.

In fact, I’d go so far as to say that if you don’t have a good tripod, you are wasting your money buying expensive lenses. I have a Manfrotto 458(B) Neotec (£215) and 468 MGRC2 head (£165). This is expensive stuff, but it increases the proportion of my usable shots far more than a new lens five times that price.

Macro dioptres

Macro dioptres are magnifying glasses that screw onto the filter thread at the end of a lens and enable it to focus on very close objects. They are called dioptres because Jessops wouldn’t be able to charge £50 for a magnifying glass, but for a dioptre, now that sounds like a bargain. Long zoom lenses typically have a minimum focusing distance of 1 to 2 meters. With a Macro dioptre attached they can focus much closer, enabling you to fill the whole photo with an insect for example.

Macro image
When working with small subjects close to the lens the depth of field is very narrow – note how only the petals at the front of the flower are in focus
Crop from macro image
An ‘actual pixels’ crop from the image shows that it is extremely sharp. I thought that you would need a macro lens for this kind of quality. I was wrong. Score 1 for my philosophy of trying to get away with cheap kit before getting expensive stuff

Make sure you buy a dual element dioptre, like those from Canon or Nikon. They are optically far superior to the single element ones, and don’t cost much more. A good macro dioptre mounted on a sharp lens produces results just as good as a dedicated macro lens, for a fraction of the price.

The end!

I hope you’ve found this article entertaining.

To subscribe to future photography articles, add this link to your RSS reader.
