Post Production Workflows for High Speed Footage
The following is from an article we had published under the title 'workflows for high speed cameras' in the February 2011 edition (issue 46) of HD Magazine, a UK publication covering all matters concerning the professional High Definition video world.

WORKFLOWS FOR HIGH SPEED CAMERAS
Focusing issues with digital cameras
The following are some internet forum comments regarding the focusing issues people are experiencing with another digital video camera - the Red - together with some possible explanations and workarounds:
ISSUE: "I had a fairly serious shoot a couple of weeks ago with two Red cameras - we took a back up camera along - and we had real difficulty getting our sharps - with both cameras. We were using a just-before-checked set of master primes. We checked back focus (which was about 6 microns out) - and were helped in this by an extremely helpful lens technician from Arri. There were two issues:
Firstly, we could really only get the centre of the frame sharp - it got very soft as soon as you moved away from dead centre.
Secondly, the depth of field appeared to be a matter of a few inches when it should have been several feet. We took focus at T1.3 by eye (zoomed in on an HD monitor) and closed down to 5.6 to shoot - that alone should have given us enough leeway to be sharp - but obviously we also set focus with a tape. Eye and tape seemed to match what the lens said they should.
We then looked at the 4k tiffs and they were soft - and I'm not talking about a lack of sharpening. I've shot handheld, pulling focus by eye, using the Red (a different one) on a doc with super speeds wide open and it was all pin sharp."
EXPLANATION 1:
"There may be a small focus shift with the lens stop used when there is an OLPF and sensor cover glass+micro lens array behind the lens, even if you adjust for the Paraxial Ray path length.
There may also be a small focus shift when the angle of the light varies from parallel: some lenses have their ray crossing point closer to the sensor than others, so the distance through the glass plates will be longer for some lenses and shorter for others. The focus shift is about 1/3 of the path distance difference - maybe a few 1/10000", just maybe enough to be a little soft in the corners.
With f/1.2 lenses the sides of the ray cone take a longer path through the glass plate than the centre rays, and in the corners one side of the cone is shorter than the centre and the other side, so the "best focus" distance shifts not only from centre to corner, but from full open to stopped down.
To avoid this shift, only use f/8 and re-mark the lenses. Otherwise there will be a small shift - not much for a film camera, but at 4K small shifts might be a little soft.
The mount should be set for your zoom, since the zoom cannot be re-marked; then re-mark your prime lenses.
Anamorphic lenses are VERY fussy about the backfocus and cannot be re-marked to give correct focus, so if you shoot scope, set the mount for the lens at the stop you will be shooting at.
The lens and f/stop used when the mount is set up may not be the same as what you are shooting at, so you should check the backfocus at the stop and with the lens you will shoot with. The error is small, and if you focus on the monitor at 1:1 pixel the marks do not matter (except for INF); the focus will just be a little soft at large stops, which is what the OLPF is meant to do anyway - blur the image over four pixels so that two green, one blue and one red pixel are always exposed, to avoid coloured spots. If the corners go softer than the centre, it does not matter except on charts, since those parts of the frame do not contain in-focus subjects most of the time anyway.
I should add that if you use lenses whose focus marks were aligned on a lens projector or collimator, those devices do not have the optical plates (OLPF and cover glass + micro lens array), so the alignment could show a slight shift when the "best focus" is checked on the camera. If you adjust the lens mount for one lens aligned on a lens projector or collimator, then put on another that is optically at a different distance for the rays to cross, the "best focus" can be a little different ON THE CAMERA even though both lenses were right on the lens projector or collimator, due to the shift in optical path length with angle through the OLPF and cover glass + micro lens array. So if you rent lenses rather than own them, the rental house would probably not have aligned the lenses' focus marks on a RED ONE (tm) of the same serial number as yours, and therefore you would need to re-mark the lenses for each stop if you are going to focus by tape, and hope that you can get to INF when you need to.
The reverse can also happen: if you mark the lenses for use on a RED ONE (tm) and then put them on a film camera, you might get some soft shots on the film camera, but film being what it is, you might not notice so much. Anyway, if you are going to a filmout, the lens in the projector will probably be far enough out of focus that the camera lens issues will not be visible from the back of the theater(?)."
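As a rough sanity check of the "1/3 the path distance" rule quoted above: a plane-parallel glass plate of thickness t and refractive index n moves the best-focus plane back by approximately t x (1 - 1/n). For typical optical glass (n of about 1.5) that is about t/3, so a 3mm filter pack - the thickness one commenter estimates below - would shift best focus by roughly 1mm. These figures are illustrative, not measurements from any particular camera.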
EXPLANATION 2:
"What some people seem to get confused about is that they seem to think there is one setting of the backfocus that will fix this problem for all lenses that are made for a film camera, there is no single setting of the backfocus that will correct for this plate thickness since the length of the rays through the plate varies with the f/stop, the angle of the lens (ray crossing point), and from the center to the edge of the frame introducing curvature of field as well as negative spherical aberration.
The "plates" do three things: 1) there is no longer a single "best focus" for all stops (not that there was, its just worse now). The lens needs a new focus mark for each stop. 2) the "best focus" error is larger more for larger stops. (unless lens had positive sperical aberation before). 3) the "best focus" for the center and corner are not the same, i.e. the lens is no longer "flat field" (not that it ever was, just maybe worse now, although it might be better for some odd lens.)
The thicker the "plates" behind the lens the more this shows up. The faster the lens stop, the more this shows up.
If the "plates" are less than 0.01" total its not so bad, maybe just a little shift of the "best focus" on the different lens stops, but thicker you might see something in the end result. Does anyone know the total thickness and index of the OLPF and coverglass+microlens array?
How much shift in backfocus are people seeing between lenses, in 1/10000" (I guess they check wide open)? Most lenses are retro-focus or telephoto, so the point where the rays cross is not as different as it was when "short" focus wide angle and "long" focus lenses were used."
EXPLANATION 3:
"At 4K there is almost no DOF, being just a little off focus means that you are not getting a 4K image of what you want in focus, something may be in focus like the actors ear, but you might be focusing in front of his noise as well.
To get a 4K image, you need the subject in focus. The point of the OLPF is not to get 4K with a high MTF, but to fuzz the image up so that the high MTF comes below 2K. Nonetheless, some detail comes through in the LUMA from the de-Bayer near 4K, and that is what makes the image look in-focus or out-of-focus.
When shooting at f/1.2 using a 75mm at 4K, DOF is very short; the charts made for film cameras relate to something closer to a 1K image, so check to see what size spot the DOF table was made using.
When you look at the circle of confusion at high magnification you can see the shift in "best focus" for the stops. For film cameras a lens stop shift of 0.001" might just look like the follow focus guy was a little off, but at 4K, if you see the image in all its glory, 0.001" may be enough shift to notice.
The simple fix is to set the backfocus short so that all lenses will go to INF, then focus by the 1:1 pixel display, and only use prime lenses that focus by moving the whole lens in and out. You cannot set the backfocus too short, though, or you will run out of travel on wide angle lenses. If you have a fixed focus wide angle lens you would need to fiddle with the mount, so try to use only lenses that move, focus on the image rather than the marks, or tape a new mark on for the stop/lens on the camera...
The thickness of the OLPF can only be properly compensated for if the light is coming telecentrically through the filter. In most cases an OLPF has a thickness of about 3mm! So you are talking about a significant amount of glass here. It has about the same dimensions as a regular Tiffen; it just sits in the optical path behind the lens (I have heard of DPs who would be concerned about this amount of glass in front of the lens). I haven't really understood the details of the OLPF, but it seems that it is hard to design a filter which is much thinner than this. It's almost impossible to judge the thickness of the filter in the Red without demounting it, but I would assume it is also in the range of about 3mm.
In addition, those filters often sit rather close to the sensor. In a Red the OLPF is pretty far out - it's hard to guess, but I would assume it is about 8-10mm away from the sensor.
This helps to keep dust invisible, but the downside is that it seems to increase focus shifts and chromatic aberrations. If the exit pupil of a lens is rather short, the light enters and exits the filter at an angle. As there is a significant increase in the length of the optical path if light travels through glass rather than air (from what I recall, about x1.5), the distance to the sensor is not the same if the light is going through the filter at 90° or at an angle. This could also explain a backfocus shift when stopping down, as the rays of light are more bundled.
As film lenses are always designed for a film plane with no glass in between, focus issues and chromatic aberrations will be much more critical when such a heavy amount of glass is introduced into the optical path. I've had very ugly magenta and green edges on highlights, which I have never seen on film before. On 16mm lenses this is much more obvious, as they have a very, very short exit pupil.
It is very disappointing, but it looks like these are the options: 1) Only work with telecentric lenses (get a set of Uniqoptics). 2) Only focus by eye. 3) Collimate each lens with shims to match the set and create a Red set (which wouldn't be good for film any more). 4) Use a much, much thinner OLPF (if only one were available). 5) Make different marks on the adjustable mount for different lenses. 6) Mark the scales of your lenses individually if needed."
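To put some numbers on the depth of field point, here is a minimal Python sketch of the standard thin-lens DOF formula, showing how strongly the answer depends on the circle of confusion assumed. The 75mm / f1.2 figures echo the example above, while the two CoC values are illustrative assumptions rather than figures from any chart or camera maker:

    # Standard thin-lens depth of field calculation (a sketch, not a
    # manufacturer's table). All distances are in millimetres.
    def depth_of_field(f, stop, subject, coc):
        """f: focal length, stop: f-number, subject: distance, coc: circle of confusion."""
        hyperfocal = f * f / (stop * coc) + f
        near = subject * (hyperfocal - f) / (hyperfocal + subject - 2 * f)
        far = subject * (hyperfocal - f) / (hyperfocal - subject) if subject < hyperfocal else float("inf")
        return near, far

    # 75mm lens at f/1.2, subject at 3m, with two circles of confusion:
    # 0.025mm (a common 35mm film chart value) and 0.006mm (roughly one
    # 4K-sensor photosite - an illustrative assumption).
    for coc in (0.025, 0.006):
        near, far = depth_of_field(75.0, 1.2, 3000.0, coc)
        print(f"CoC {coc}mm: depth of field {far - near:.0f}mm ({near:.0f} to {far:.0f})")

Run as written, the film-chart CoC gives roughly 94mm of depth of field while the pixel-sized CoC gives around 22mm - the same lens and stop, with the 'in focus' zone shrinking to roughly a quarter simply because the acceptable blur spot changed.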
EXPLANATION 4:
"It may hurt some of you to hear this, as you glance over at your cases of Nikons or antique PL mount lenses. An OLPF is necessary in front of a digital sensor. These filters are made of layers of crystal, alternately oriented north/south & east/west, so that they do not astigmatically reduce sharpness (meaning that they do reduce sharpness evenly). That's fine & dandy for the center of the image, but as one moves off to the corners, the path of light on non-telecentric lenses becomes increasingly oblique, or at an angle. This means that the light is passing through the OLPF at an angle and therefore passing through more of the filter. How much more does this diffuse the light?
That depends on the grade of OLPF needed for the sensor size and design, and of course the angle of said light. But this is what RED is referring to with their "optimized for digital" lens designs.
I can tell you from my experience with RED and other Digital Cinema cameras that generally this is not an issue, or an incredibly minor one at best - and this is with testing on optical benches and various metrics charts. And I can also tell you that it should have little to no bearing on why popping one lens on comes up soft while the next appears sharp: this would affect corner-to-corner (edge fall-off) sharpness only."
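As a rough geometric answer to the "how much more" question in the quote above (an illustration, not a figure from RED): light crossing a plate of thickness t at an internal angle theta travels a path of t / cos(theta) through the glass, so at an internal angle of 20 degrees the path is about 6% longer than at normal incidence. It is that angle-dependent extra path, converted into a focus shift by the t x (1 - 1/n) relation given earlier, that the adjustable mounts and per-stop re-marking described above are trying to absorb.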
Editing High Speed Footage in Final Cut Pro (FCP)
'Out of the box' both Phantoms and Photrons can deliver 16bit .tif sequences. These can be delivered by Pirate, or you can download the relevant software and transcode the RAW data yourself. For simple edits, or where budgets dictate an in-house edit, here is a no-cost method for using them in FCP (a scripted alternative is sketched after the list):
- Connect the external hard disk to the Macintosh computer and mount it. It is then best to copy the data to a Mac volume.
- Open QuickTime Pro and choose 'Open Image Sequence', navigating to the directory and selecting the first .tif frame. Choose the frame rate (usually 25fps).
- Save the sequence as a reference movie (important) in the same folder; that way the reference movies will not get separated from their .tifs.
- Open FCP, then import the reference movies you've just made.
- Set the sequence settings to match the resolution of the RAW data and select the "TIFF" compressor; otherwise FCP will try to render whilst you are editing. Slow. Bad.
- Do your edit (the hard part) and colour correction etc.
- Render the file and export it as an uncompressed QuickTime, but not as a reference movie this time.
- Use whatever software you like to convert / compress the QuickTime master file.
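If you would rather script the conversion than click through QuickTime Pro, a command-line tool such as ffmpeg can do the same job. The following is a minimal Python sketch, assuming ffmpeg is installed with ProRes support and that the frames are named frame_00001.tif onwards - adjust the pattern, frame rate and codec to suit:

    # Wrap an image sequence into a QuickTime movie via ffmpeg (a sketch -
    # assumes ffmpeg is on the PATH and frames are named frame_00001.tif, ...).
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-framerate", "25",        # the playback rate chosen for the edit
        "-i", "frame_%05d.tif",    # input image sequence pattern
        "-c:v", "prores_ks",       # ProRes encoder; swap codecs to taste
        "-profile:v", "3",         # ProRes 422 HQ
        "master.mov",
    ], check=True)

Unlike the reference movie trick, this writes a self-contained movie, so it trades disk space for portability.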
Applications for high speed video
Although Pirate have developed their kit specifically for the TV & film market, other markets are served:
- 'Motion Analysis' of machine tools and industrial processes - for example, at Sara Lee's factory in Slough a capping problem on a bottling line was solved in an afternoon, and at Bentley Motors in Crewe automatic door mechanisms were analysed - they were so pleased with the results that they have since bought two cameras like Pirate's!
- 'Sports Analysis' of the performance of athletes, horses (equine analysis) and sports equipment. Analysing movements in slow motion, in both training and competition, using instant video playback is a simple and immediate means of improving performance. Typical shots include horses running, ball impact on rackets and bats, and athletic movements (notably golf swings). An established company in this field is SportHorizon, whose high speed video camera is the Photron Super 10KC: this is a low-sensitivity camera with a resolution of only 512 x 480 pixels at speeds of up to only 250 fps - a poor second to Pirate's.
- 'Ballistics' - analysing the trajectory of projectiles in flight and their impact. Pirate would like to tell you what's been shot, but would have to kill you.
- 'Natural History' - although we've all enjoyed Natural History on TV, some people, such as the Royal Veterinary Society, use high speed video to learn from nature some tricks to incorporate in future machines.
An excellent resource on the web giving an overview of 'High Speed Photographic Imaging' belongs to Andrew Davidhazy. It outlines both the uses and the cameras available.
Lighting for High Speed TV and Film Shoots
When shooting high speed both the quantity and the quality of light are important considerations: as the frame rate increases, there is less time to expose the film or video frame, so more light is needed, and flickering not seen by the naked eye or by cameras shooting at 25fps becomes all too apparent.
To address the quantity issue, one cannot simply add more lights - not only can the heating effect cause practical problems, but the flicker effect will also remain. For those who have not seen it, flicker looks like the lights are throbbing or pulsating - which indeed they are, as the filaments heat and cool in time with the 50Hz mains cycle.
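A back-of-envelope calculation shows why flicker that is invisible at 25fps becomes so obvious at high frame rates. On 50Hz mains a filament's brightness actually peaks twice per cycle, i.e. at 100Hz; the Python sketch below (illustrative frame rates, nothing camera-specific) compares how finely each frame rate samples that cycle:

    # How finely do different frame rates sample a light flickering at
    # 100 Hz (50 Hz mains, two brightness peaks per cycle)?
    FLICKER_HZ = 100.0

    for fps in (25, 500, 1000, 2000):
        frames_per_cycle = fps / FLICKER_HZ
        print(f"{fps:>5} fps: {frames_per_cycle:>5.2f} frames per flicker cycle")

    # At 25 fps each frame spans four whole brightness cycles, so every
    # frame integrates the same amount of light. At 1000 fps, ten
    # consecutive frames sample a single cycle, so the brightness visibly
    # pulses from frame to frame on playback.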
Using Strobe Lights
When shooting with high speed film cameras, one often uses strobe lights, such as those made by Unilux. They produce very sharp images because the very short exposure time (each flash lasts about 1/100,000th of a second) practically eliminates any motion blur. When using a strobe light the shutter angle is irrelevant in terms of determining exposure - in fact the shutter acts really as a sort of capping shutter - and exposure is independent of frame rate, so 'ramping' special effects shots (in which the frame rate changes during a take) are easily achieved. However, it is important to arrange for each flash to occur at some point during the open period of the shutter: this synchronisation is achieved by using the widest possible shutter angle (i.e. having the shutter open for as long as possible) and triggering the light source from the camera. The method of triggering varies from camera to camera, but is usually a pick-up attached to the camera motor shaft. Apart from dramatically reducing motion blur, strobe lights produce far less heat - which can be important if shooting living things (e.g. people!) and delicate models.
In fact, the images produced by strobe lighting can have so little motion blur that they look rather unpleasant - in the way that one can tell the difference between film and video because of a lack of motion blur, the footage can look like jarring 'bad' video. The effect is particularly noticeable if the objects are moving rapidly, because they have moved so far between frames. This can be alleviated by adding some motion blur back in - using a small shutter angle and introducing an additional steady light source (see below) of the right intensity.
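The scale of the difference is easy to quantify. This short Python sketch compares the blur trail left by a 1/100,000s strobe flash with that of a conventional 1/1000s exposure, for a subject crossing the frame at 10m/s - the speed and shutter time are illustrative assumptions, not figures from the article:

    # Motion blur is roughly subject speed x effective exposure time
    # (a sketch with an illustrative subject speed of 10 m/s).
    speed_mm_per_s = 10_000

    for label, exposure_s in [("strobe flash (1/100,000s)", 1 / 100_000),
                              ("shutter exposure (1/1,000s)", 1 / 1_000)]:
        blur_mm = speed_mm_per_s * exposure_s
        print(f"{label}: {blur_mm:.2f} mm of blur")

The strobe leaves a 0.1mm trail - effectively pin sharp - where the longer exposure smears the subject across 10mm, which is exactly why strobe-lit footage can look like jarring video until some blur is reintroduced.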
Although daunting at first, metering strobe lights is quite simple. Most strobe lights have a 'metering' setting giving a steady 60 flashes/sec, so by setting a light meter's gate to 1/60s in the flash (strobe) mode, one can measure the light from a single strobe flash. Of course, one would take several readings to ensure that a single full flash was measured.
A minor detail to bear in mind is the 6000K colour temperature, which can be corrected using an 85 or 85-B gel, or of course one can colour correct at the lab or in post.
The main drawback of using Unilux lights is the cost. A typical shot, say a food commercial featuring falling coffee granules, would use two heads running off a single control unit. In the UK, hiring from Panavision, this would cost £1380/day for the first head (an H3000) and control unit, plus £485/day for the second head and £385 for the operator - £2250/day in all. The other drawback is that it is not unknown for there to be a problem: when the film rushes return the next day, an error in synchronisation can have dire consequences.
Steady Light Sources
There are three choices of steady lighting available to the TV & film industry: hydrargyrum medium-arc iodide (HMI), direct current (DC) lights such as the Longstrike, or high wattage tungsten. The last might be surprising, but I will explain: at Pirate we have tested a wide range of tungsten lights using our digital high speed camera. Generally, the lower the wattage, the worse the effect. However, we have found that any tungsten light over 10kW gives a steady enough light that the effect cannot be seen - we believe this is due to the thermal inertia of such large filaments. In any event, most high speed shoots in our studio using our Phantom and Photron cameras have been lit using our 10kW lights, which give sufficient light to work with our 300ASA equivalent cameras - typically, we use two 10kW lights to illuminate a table top set-up. We have also used 5kW lights, but they do flicker marginally and should be avoided if possible.
Other flicker-free lights which we have used are the Briese 4kW and Kino Flo's VistaBeam 600.
Another approach is to use a triplet of lights with each light on a different phase of the three-phase mains supply, which produces a steadier overall light than an 18kW HMI, for example; the approach is good for lighting large areas evenly. Whilst this is a very cost-effective solution to flicker-free lighting for high speed, the heat output can be a problem for people and models. Hence, at Pirate, we generally build rigs to perform pours, drips etc. so that actions can be performed reliably and repeatably without stress.
And yet another approach is to use LED lights, which will not flicker as long as they are not dimmed.
And, finally, the Sun doesn't flicker ...
Why Video Artifacts Occur in High Speed Digital Video Images
All high speed digital video cameras, in common with most digital stills cameras, have a single image sensor, the surface of which is divided into a grid of light sensors. These sensors are only 'black & white' - they only measure brightness. To 'see' colour a red, green or blue filter is fixed in front of each sensor. On our cameras, the colour arrangement of these filters is in a particular pattern known as a 'Bayer' pattern.
The light falling on each sensor, through its own colour filter, is measured by electronics to produce a 'raw' computer data file of red, green and blue values. In the case of the Phantom HD, the file format is called 'cine raw'. After downloading the .cine file, the raw data is converted (de-Bayered) into a sequence of colour images, usually .tif files, using a particular algorithm (a sophisticated mathematical recipe for interpreting the colour information in a RAW file into 'real' colour).
A consequence of this technique is that when viewing finely detailed scenes, some algorithms are better than others at interpreting the fine detail: all algorithms generate 'artifacts', or errors, in these finely detailed areas. At Pirate, we actually use this inherent fault for focusing - with a focus chart containing fine lines, the appearance of these artifacts on the chart indicates pin-point focus.
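For the curious, here is what the simplest possible de-Bayer looks like in Python - a naive bilinear interpolation, offered purely as an illustration and certainly not the algorithm used by the Phantom software or by Pirate. It assumes an RGGB filter layout and relies on NumPy and SciPy:

    # A naive bilinear de-Bayer (illustration only - real converters use
    # far more sophisticated algorithms). Assumes an RGGB mosaic layout.
    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """mosaic: 2-D float array of raw sensor values. Returns an H x W x 3 RGB image."""
        h, w = mosaic.shape
        red = np.zeros((h, w)); red[0::2, 0::2] = 1      # red sample sites
        blue = np.zeros((h, w)); blue[1::2, 1::2] = 1    # blue sample sites
        green = 1 - red - blue                           # green checkerboard
        kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0
        rgb = np.empty((h, w, 3))
        for channel, mask in enumerate((red, green, blue)):
            # Interpolate each colour from its own sparse samples...
            estimate = convolve(mosaic * mask, kernel) / np.maximum(convolve(mask, kernel), 1e-9)
            estimate[mask == 1] = mosaic[mask == 1]      # ...keeping measured values exact
            rgb[..., channel] = estimate
        return rgb

Feed a chart of fine lines through a routine like this and the coloured 'zipper' patterns it produces at the detail limit are precisely the artifacts that make such a chart so useful as a focus target.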
A great explanation of the Bayer Pattern by Sean McHugh, with pictures(!)