Understanding Stereoscopic 3D in After Effects

This guide helps you understand how to use stereoscopic 3D in After Effects.

Understanding stereopsis and stereoscopy

To understand what stereoscopic 3D is, it's necessary to understand perceived depth. There are many cues that help us perceive depth.

Perspective, occlusion, and relative size are good indicators of depth. An object is interpreted by our brain as being farther away if it appears much smaller than another object next to it, because our brain already knows how big those objects should be in relation to one another. If two objects are roughly the same size in our field of view, and one occludes the other, our brain infers that the occluding object is in front. (Occlusion means that one object is laid on top of another and obscures it.) Paintings and games can appear 3D because they obey these rules, and After Effects obeys them too when you create a 3D composition with a camera.

Another important depth cue is lens blur. If our eyes (or a camera lens) focus on a specific object, and another object next to it appears blurred, our brain knows that the other object is either in front of or behind the object in focus. If there is no blur, our brain assumes that the two are at a similar distance. You can clearly see this phenomenon as your eyes focus on different objects and the out-of-focus objects in the background blur on your retinas. Our brain interprets this as a depth cue without us realizing it; the effect is subtle, filtered seamlessly into our perception, and usually goes unnoticed. However, you can train your eyes and brain to experience depth of field consciously by relaxing the eye muscles and using the following (or a similar) technique: look through a windshield with water droplets on it at night. When you focus beyond the windshield, the water droplets turn into little halos of color called bokeh. Similarly, when you focus on the droplets, the streetlights in the background turn into bokeh. You can do this with one eye closed, so it has nothing to do with stereopsis; it is the eye's lens focusing, much as a camera lens focuses. Understanding how depth of field relates to perceived depth is important when attempting to create realistic images, and it works hand in hand with stereoscopic 3D in After Effects, especially with the new and improved Camera Lens Blur effect and related features in After Effects CS5.5.

Finally, arguably the most powerful depth cue is stereopsis. Stereopsis is the ability of our brain to take two input images from different perspectives and gain an understanding of how far away two different objects are in relationship to each other. The key point is that because our eyes are spaced apart on our heads, each eye views a slightly different perspective of the world in front of us. Look at an object nearby and close one eye, then switch eyes back and forth several times. Then try the same exercise on an object that is far away. You notice that the nearby object jumps from side to side in your field of view much more drastically than the faraway object. If the close object is in the same general direction as the faraway object, the close object switches sides of the faraway object. This is the basis of how stereopsis works: your brain takes the relative horizontal distance between objects in your field of view and compares them to gain an understanding of where those objects are in relationship to each other in terms of depth. It is theorized that pigeons bob their heads in order to gain depth perception (since their eyes are on opposite sides of their head and they can’t see depth otherwise). If you look through only one eye, you lose the stereopsis depth cue; however, if you bob your head from side to side with that eye still closed, you can get a sense of depth again. This separation between the eyes, providing two different perspectives, is the key to stereopsis.

It is important to keep all these depth cues in mind when constructing a stereoscopic 3D composition in After Effects. In the real world, it is possible to give contrary information to the brain and trick it. Optical illusions like the Ames Room, the Infinite Staircase, or tilt-shift photography are all examples of how depth cues can be manipulated and our brains tricked. (Tilt-shift photography is a method in which a post-process depth-of-field blur is added to an image to give a broad landscape the feeling of a miniature.) Since After Effects gives you control over all of these depth cues, it's important to manage their interaction and make sure that they are not giving our brains too many contradictory depth cues. In real life, one can manipulate our surroundings in intelligent ways to create optical illusions, but more often than not, inconsistencies in the digital realm feel unnatural and can even cause eyestrain or brain pain. Stereopsis, being the most powerful depth cue, is no exception. It's important to make sure that the stereoscopic result is not painful to look at on different screens. One's viewing experience can change depending on how big the screen is and how far away the viewer is from the screen.

Stereoscopy is a technique for delivering stereopsis to our brain by tricking it: each eye is presented with a different image. The left eye is presented with a view of a scene from a virtual or real camera that shows the left perspective, and the right eye is presented with an image of the right perspective. Each eye sees its image independently, our brain puts the two together, and we perceive depth. When viewing a stereoscopic 3D scene on a monitor, the elements in the scene have a tendency to pop out of or sink into the screen: stereopsis is telling us that an object is closer or farther away from us than the monitor actually is.

Many different devices and systems exist for delivering stereopsis to our brains, but the principle behind all of them is the same: get one eye to see one view, and the other eye to see a different perspective of the same scene. Anaglyph glasses are the oldest method, and by far the cheapest. Differently colored lenses filter each eye's view: red-blue glasses filter out blue on the left eye and red on the right eye. On the display side, the left image is colored red, the right is colored blue, and the images are overlapped, so each eye sees only its associated image. Because of the inherent color distortion, it is difficult to see all the colors accurately using anaglyph, but the setup is very easy and works accurately for judging depth and convergence. Polarized glasses work on a simple principle: two images are displayed on a screen, one emitting only horizontally polarized light and one emitting only vertically polarized light, and each lens of the glasses lets through light polarized in only one direction. Active shutter glasses work by blocking one eye at a time at a high rate (usually 60 fps), switching the left and right images every frame in sync with the monitor. Some TVs, such as those from Alioscopy, use no glasses at all. Alioscopy uses lenticular technology, in which the lens on the monitor itself refracts the light in different directions so that each eye gets a different perspective simply by being in a different location relative to the TV. There are many more methods for stereoscopy. Here is a parody of the topic that shows a very unconventional method, by Jonathan Post: http://www.jonathanpost.com/

When dealing with stereopsis in the real world, the only things that can vary are the positions of the objects in front of you, and the perspective from each eye changes only based on that. The only way to make an object look closer through stereopsis is to actually place it closer. You can’t easily change the distance between your eyes, your field of view, or the aperture of your eyes (at least not voluntarily) to modify the depth of field you perceive. In the digital realm, however, there are many more variables, since all of these things can be changed. It is therefore easy to introduce confusing, contradictory depth cues that cause pain when viewing.

3D depth cues in After Effects

Perspective, occlusion, and relative-size depth cues are all handled automatically by After Effects, since it places objects in a virtual 3D space. Moving an object farther away along the camera's z axis makes that object smaller and places it behind other objects. Changing the camera's field of view changes the perspective of the scene; a wide-angle lens gives you more perspective depth-cue information than a telephoto lens, for example. Turning on Depth Of Field in the camera layer and modifying the aperture adds lens blur according to the focus distance. Stereopsis can also be added to any 3D composition in After Effects. In short, the concept is simple: create a left camera view and a right camera view of a 3D scene, render them out, and then use some sort of stereoscopic display to view the composition in stereo.

Creating a stereoscopic scene in After Effects

Start out with any composition that has some 3D layers positioned along the z axis. Right-click a layer and choose Camera > Create Stereo 3D Rig. After Effects creates a Left Eye composition and a Right Eye composition driven by left and right cameras. It also creates an output composition that puts the two views together into a format recognized by some stereo viewing method. If you applied the command to a camera layer, that camera becomes the master camera that controls your stereo cameras.

At this point, you can put on red-blue anaglyph glasses and see your composition in stereo. Objects pop out of or sink into the screen according to their distance from the camera. You can now go back to your starting composition and modify your camera position, depth of field, placement of layers, or anything else about the scene; when you switch back to the stereo view, it updates in stereoscopic 3D. Play with your scene. It's very easy to see stereoscopic 3D in action if you animate a camera move, animate objects moving closer to the camera, or animate the depth of field (camera aperture, focus distance, and zoom).

Controlling stereoscopy in After Effects

Once your scene is complete, you can begin to tweak the stereoscopic 3D controls for your scene; no further changes are required in your main composition. Switch to the Stereo 3D composition and find the Stereo 3D Controls layer. All of the controls necessary for stereoscopic 3D are in two effects on this layer.

Stereo scene depth

This is the main control for changing the interaxial separation of the cameras. Increasing this value spreads the cameras apart, which is like moving your eyes farther apart. Since that is very difficult and unnatural in real life, this control can have very painful results if used improperly: our eyes and brain are not used to converging much more than the distance between our eyes allows. The last thing you want is for your viewer to go cross-eyed trying to converge on an object that is too close or too far away. Usually, to get the most pleasant results, you want your camera separation to match the separation of your eyes. However, this is very difficult to do, because your final output could be on a (relatively) small 50-inch 3D TV or a very large IMAX screen. In both cases, the distances between objects on screen could vary drastically and could cause eyestrain or cross-eye on one viewing screen but be fine on another. For this reason, the Stereo Scene Depth property is measured in % of composition width. That way, if you change the size of your stereo composition, the stereoscopic calculation remains unchanged relative to the new size.
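If you prefer to think of the separation in pixels, a minimal expression sketch like the following could be set on the Stereo Scene Depth property. The value desiredSeparationPx is a placeholder assumption; replace it with your own number or link it to a Slider Control.

Example Stereo Scene Depth expression (separation in pixels):

// Hypothetical sketch: drive Stereo Scene Depth from a separation expressed in pixels.
desiredSeparationPx = 58; // placeholder value; substitute your own
desiredSeparationPx / thisComp.width * 100;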

Changing the Stereo Scene Depth value makes the stereoscopic 3D scene appear to pop out of or sink into the screen more. Setting it back to 0 removes all stereoscopy, and everything sits on the plane of the screen.

To understand what this control is doing, keep in mind that moving the cameras apart moves all objects in the scene away from one another horizontally, thus increasing the amount of perceived depth separation. In this way, you can increase the depth without moving an object farther from or closer to the camera. Increasing this value increases the maximum amount an object can stick out of or sink into the monitor.

Understanding convergence

When our eyes converge on an object, if there is a difference in the object's horizontal position between the left eye's image and the right eye's image, our mind fuses the two into one object and our brain perceives it at a certain distance (due to parallax).

When an object appears in the same horizontal location in both the left and right frames, that object's distance from the camera defines the plane of convergence. Any layer at that same distance from the camera is converged upon, and converged-upon objects appear to reside on the surface of the screen being viewed. Everything closer to the camera than that plane appears to pop out of the screen, and everything farther away appears to be pushed deeper into the screen. For example, if the convergence plane sits 500 pixels from the camera along its z axis, a layer at a distance of 500 appears to lie on the screen surface, a layer at 250 appears to pop out, and a layer at 1000 appears pushed in.

Think of the convergence plane as an anchor point for the stereoscopic 3D space. In this way, you can shift your 3D objects back and forth and directly control whether objects all sink into the screen, all pop out, or do a mix of both. To understand how far objects stick out in either direction relative to the plane, see the section on stereo scene depth.

Toe-in or parallel cameras and convergence point

Our eyes angle slightly inward toward the object we are looking at. This effect is known as toe-in, and After Effects simulates it when you select Converge Cameras in the Stereo 3D Controls effect. Using toe-in can give you more control, but there are several factors to be aware of. When the cameras converge, they are rotated, so the perspective of each view changes and distortion is introduced; the perspectives of the left and right cameras no longer line up exactly. When you capture live stereoscopic video, you almost never want your camera rig to have a toe-in, because you would have to correct for the perspective distortion to change the convergence point in post-production. Real scenes are almost always shot with parallel cameras. Keep this in mind if you are trying to mix and match live footage with digital elements. If your scene consists only of 3D elements in After Effects, then it is probably safe, and preferable, to use converged cameras.

Converged cameras

In After Effects, it is much easier to change the convergence point of your stereoscopic 3D camera rig, because you can easily change where the cameras are pointing. Make sure that Converge Cameras is selected, and change the Convergence Z Offset property. Increasing this value pushes the convergence point away from the camera, so objects in the scene pop out toward you when you view it on a 3D monitor. You can set what the cameras converge on by changing the Converge To property. Usually, it is easiest to have the left and right cameras converge on your master camera's point of interest (the default). But it is useful to change it to the camera position (plus, for example, the focus distance as an offset) when trying to match the convergence point and depth of field. Likewise, you can tie the convergence point to the camera zoom to automatically keep the convergence the same while doing a perspective shift (changing the field of view of the camera while doing a dolly in). See the section on depth of field for more information.

Parallel cameras

You can also use parallel virtual cameras. This technique is useful if you need to match live footage and add digital elements to that scene. Keeping the virtual camera orientations consistent with the cameras used in the footage helps keep the perspectives of the digital elements and the stereo footage aligned.



Changing the convergence plane with live footage is as simple as changing the horizontal alignment of the left and right images. Conceptually this makes sense: each object in the left and right images has a different horizontal offset depending on its depth, due to parallax. If you align the left and right images so that a specific object in your footage appears in the exact same location when the images are overlapped, your convergence point is now located at the depth of that object from the camera when you shot your footage (or however far that object is from your virtual cameras).

You can change the 3D Glasses effect's Scene Convergence property to change the convergence plane of parallel cameras. Keep in mind, though, that because it simply offsets the final images, it acts as an additional change to the convergence if you have already converged using the Converge Cameras property with an offset. In general, only change the 3D Glasses effect's Scene Convergence property when using live footage or when Converge Cameras is turned off.

Increasing the Scene Convergence property moves the convergence plane farther away from the camera, so the scene appears to pop out of the screen toward the viewer.

In general, your convergence plane with parallel cameras should ideally be at your camera's zoom distance. However, when your cameras are parallel, there is an offset to take into account: the cameras are spaced apart, so the two perspectives are also spaced apart. To get the correct convergence plane, you must change the scene convergence to counteract the separation of the cameras. Subtracting the stereo scene depth (interaxial separation) does this and keeps the convergence point from moving when using parallel cameras and virtual 3D elements. Don't do this, however, when using converged cameras.

Set an expression on the 3D Glasses effect's Scene Convergence property to automatically account for this. Also make sure that the Units property in the 3D Glasses effect is set to % Of Source to match the units of Stereo Scene Depth in the Stereo 3D Controls effect; otherwise, an additional calculation is necessary. After doing this, you can change the Stereo Scene Depth property without changing your scene convergence. As a test, try changing the Stereo Scene Depth property with the 3D View property in the 3D Glasses effect set to Difference: you should not see the black areas move back and forth, only the separation of the objects in front of or behind them. With the following expression for parallel cameras, and the value of Scene Convergence set to 0, the convergence plane is at the zoom distance of the camera.

3D Glasses effect Scene Convergence property expression

try {
    // Counteract the camera separation only when the cameras are parallel.
    cameraOffset = effect("Stereo 3D Controls")("Stereo Scene Depth");
    if (effect("Stereo 3D Controls")("Converge Cameras") == false) {
        value - cameraOffset;
    } else {
        value;
    }
} catch (e) {
    value;
}

Preview convergence plane with parallel cameras

When working with converged cameras, it is much easier to know how far away your convergence plane is, because you have direct access to set the convergence point and offset. See the section on converged cameras for details.

When dealing with parallel cameras, it is difficult to tell how deep in the scene the convergence plane is. To preview it, change the 3D View property in the 3D Glasses effect to Difference. Objects that are aligned turn black, and any objects that are aligned are on the convergence plane. If you then change the Scene Convergence property by dragging the property value, you should see a darker band move through the scene; this band is the convergence plane moving back and forth. If you switch back to the 3D view and put on your glasses, objects on this convergence plane appear to be on the plane of the TV screen.

Match cameras to Maya

A good thing to remember is that our eyes are normally about 6 to 6.5 cm apart. This fact is useful if you are trying to match camera separation from another program, like Maya. If you import cameras (or nulls) from Maya and they are not lining up with the stereo rig camera positions, try adding the following expression to the interaxial separation (the Stereo Scene Depth property) to handle the conversion to After Effects units. In this case, the Maya default units are centimeters, which are absolute units, so it's necessary to counteract the composition-width percentage calculation. Note that you may have to rework any keyframes if you change your output size. Using this expression still lets you drag the property value as you normally would; it takes that value and modifies it as needed.

Stereo Scene Depth (interaxial separation) expression to match Maya cameras:

value * (100.0 * 6.5 / thisComp.width);

If your cameras are in the wrong location, make sure to verify where the master camera from Maya is in relationship to the left and right. Remember that you can change the configuration in your Stereo 3D Controls effect in After Effects such that the master camera is centered between the left and right cameras, or in the same location as the left (hero left), or the same location as the right (hero right).

Match depth of field to convergence

To get any sort of realistic scene, you usually want to add depth of field, though it is usually subtle unless you are using a telephoto or macro lens. Usually, you want your focus to match the convergence plane of the cameras. With parallel cameras, this is more difficult, and a little bit of eyeballing is required. (See the sections on ETLAT and on previewing the convergence plane with parallel cameras for more information.)

When working with converged cameras, it is very easy to match your focus distance and convergence planes. Here are a few methods.

If you want your focus distance to simply follow your point of interest, use the new command by right-clicking the camera layer in the timeline. Choose Camera > Link Focus Distance To Point Of Interest. Then make sure that your Stereo 3D Controls effect properties are set to converge to camera point of interest with a 0 offset.

If you have already keyframed your main camera’s focus distance and want your convergence point to match it, make your cameras converge to camera position. Set an expression on the Convergence Z Offset property to match the focus distance of the camera. Now your convergence point follows your focus distance.

Make sure to replace YourCompName with the correct name for your main composition.

Expression to set on the Convergence Z Offset property:

comp("YourCompName").layer("Master Cam").cameraOption.focusDistance

If you have keyframed your Convergence Z Offset property, you can set an expression on your focus distance to match the convergence z offset. Make sure to remember what your convergence point is anchored to. If you are converging to the camera position, no further work is required beyond linking your focus distance to the point of interest as described earlier. However, if you are converging to the camera point of interest, add the distance between the camera point of interest and the camera position to the z offset in the focus distance expression, using the length function. If you are converging to the camera zoom, add the camera's zoom value to the z offset in the focus distance expression.

Make sure to replace YourCompName with the correct name for your stereoscopic 3D composition.

Expression to set on the Focus Distance property:

stereo_comp = comp("YourCompName Stereo 3D");
s3d_controls = stereo_comp.layer("Stereo 3D Controls").effect("Stereo 3D Controls");
converge_to = s3d_controls("Converge To");
convergence_z_offset = s3d_controls(8); // property 8 is Convergence Z Offset

converge_to_pos = (converge_to == 1);
converge_to_poi = (converge_to == 2);
converge_to_zoom = (converge_to == 3);

if (converge_to_pos) {
    convergence_z_offset;
} else if (converge_to_poi) {
    convergence_z_offset + length(transform.position, transform.pointOfInterest);
} else if (converge_to_zoom) {
    convergence_z_offset + cameraOption.zoom;
}

Composite digital 3D elements with stereo footage from real-life cameras

You can work with real footage and integrate 3D elements in After Effects, though the workflow currently requires a little bit of manual work. In general, use your stereo footage as a background plate, and then composite your 3D elements on top of it. The reverse could be the case if, for example, you are trying to put a stereoscopic video (like a TV screen replacement) into a virtual stereoscopic scene and the convergence of the scene needs to be different from the convergence of the footage.

For simplicity, here is the workflow using stereoscopic footage as a background plate.

First, start out with your 3D scene and create a stereoscopic 3D rig (Camera > Create Stereo 3D Rig). Import your stereoscopic left-eye and right-eye footage items. Drag your left-eye footage item into your Left Eye Comp composition and your right-eye footage item into your Right Eye Comp composition at the very bottom of your layer stack and leave them as 2D layers. Now, if you switch to your stereo 3D view, you should see your 3D elements composited with your stereoscopic 3D footage. Great!

One final thing needs to be done in order to truly control the convergence of the footage. Add a Slider Control expression control effect to the Stereo 3D Controls layer in your Stereo 3D composition and name it Footage Convergence. Then set an expression on the X Position of the left and right footage layers. (You'll need to separate the dimensions of Position first: Animation > Separate Dimensions.) The left layer adds the slider value, treated as a percentage of the composition width and converted to pixels, and the right layer subtracts it. Make sure to replace YourCompName with the correct name for your stereoscopic 3D composition.

Expression to set on the left-eye footage layer's X Position property:
transform.xPosition + (comp("YourCompName Stereo 3D").layer("Stereo 3D Controls").effect("Footage Convergence")("Slider") / 100 * width)

Expression to set on the right-eye footage layer's X Position property:
transform.xPosition - (comp("YourCompName Stereo 3D").layer("Stereo 3D Controls").effect("Footage Convergence")("Slider") / 100 * width)

Now you can drag your Footage Convergence slider to change the convergence plane of your stereoscopic 3D footage, and use the Stereo 3D Controls effect to control the convergence of your 3D elements. The 3D Glasses effect's Scene Convergence property changes the convergence of both together. In this situation, it is best to get the two convergence planes to match as closely as possible.

You can’t change the stereoscopic scene depth of footage after you've shot it; doing so would require changing the interaxial separation of the cameras and reshooting the footage with a new perspective for each eye. It is very difficult to synthesize different perspectives from an image that has already been recorded (though there is research happening in this area). Your best option is to set the Stereo Scene Depth property of your 3D elements to match as closely as possible the separation of the cameras used on the shoot. Matching it can be somewhat difficult. Normally, cameras are spaced 6.5 cm apart to approximate eye separation, but this can vary depending on the camera size (especially if the camera bodies are wide and it is not possible to place them that close together). Some calculation is necessary to compensate for the dimensions of the footage, and remember to take the units into account as mentioned previously, since After Effects operates in pixels, not centimeters. In this situation, it can be easiest just to adjust the value manually.
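If you do want a calculated starting point, a back-of-the-envelope sketch like the following could be set on the Stereo Scene Depth property of your 3D elements. Both values are assumptions you must replace with measurements from your own shoot and from the scale at which you matched your 3D scene to the footage; this is only a rough approximation, not an exact match.

Example Stereo Scene Depth expression to approximate a physical rig (values are assumptions):

// Back-of-the-envelope sketch only; both values below are assumptions.
rigSeparationCm = 6.5; // interaxial separation of the physical camera rig
cmPerPixel = 0.2; // how many real-world centimeters one pixel represents in your matched scene
(rigSeparationCm / cmPerPixel) / thisComp.width * 100; // convert to % of composition width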

Remember that to get the convergence point of the footage to match the camera zoom value, it's necessary to subtract the cameras' separation amount from your footage convergence. Using the difference mode is probably the easiest and fastest way to align the object you want to be on the convergence plane. To have the best composite possible (and least painful), make sure to match the convergence plane of your 3D elements with that of your stereo footage.

 

ETLAT (edit this, look at that) 

When editing in stereoscopic 3D, it is invaluable to be able to see exactly what is happening and how the parameters you are changing affect your stereoscopic 3D rig. There is a simple way to get a sense of this in After Effects:

  • Open a second composition viewer so that one viewer shows your initial scene composition and the other shows your final stereoscopic 3D composition. Make sure to lock both viewers so that they do not switch.
  • In your Stereo 3D composition, select the Stereo 3D Controls layer and lock the Effect Controls panel so that it stays visible when you switch compositions.
  • Go back to your initial composition and turn on camera wireframes (View > View Options > Camera Wireframes > On). Then switch to a custom view so that you can see your cameras in 3D space.

At this point, you should be able to see three cameras: your master camera, as well as your left and right ones. Changing your settings under Stereo 3D Controls should update the cameras in your initial scene. Try changing the Stereo Scene Depth property to see the cameras separating or tweak your convergence options to see where the cameras are pointing.

This technique is especially useful when debugging problems, and when trying to match your depth of field to the convergence distance. Both the focus distance and the convergence point are shown when the cameras are converging. With parallel cameras, you can still see your focus distance or point of interest and you can see how this lines up with the perceived convergence point in your final output using the difference mode technique as described earlier.

Hook After Effects up to a 3D TV

It's pretty simple to edit while previewing the stereoscopic 3D effects that you are changing. Anaglyph mode is an inexpensive way to do this. If you happen to have a 3D TV accessible, follow these steps to see your composition and edit in stereoscopic 3D live.

  • Connect your 3D TV to your computer as a second monitor (DVI or HDMI).
  • Make sure that your composition dimensions exactly match the resolution of the 3D TV. Check your resolution settings for the second monitor.
  • Change the 3D View property in the 3D Glasses effect to match one that your 3D TV supports: either Stereo Pair (Side By Side), Over Under, or Interlaced Upper L Lower R.
  • Create a new composition viewer for your stereoscopic 3D scene, and drag it out of the After Effects frame onto the 3D TV. Make sure to lock this viewer.
  • Make sure that your Magnification Ratio in the viewer is set to 100%. 
  • Press Ctrl+\ (Windows) or Command+\ (Mac OS) twice to make the size of the viewer full-screen on the 3D TV.
  • Turn on the associated 3D mode on your 3D TV.
  • Put on your glasses, and you should be viewing your composition in stereoscopic 3D.

Lights and cameras and the rig

The Left Eye Comp and Right Eye Comp compositions can produce different camera views because your original scene composition is nested inside each of them with Collapse Transformations turned on. The nested scene composition's own camera and light data are not used; instead, each eye composition renders the scene through its own left or right camera. This is good, because the cameras automatically create the correct angles for the stereoscopic view without any manual work.

However, this approach introduces two limitations:

You cannot use multiple cameras, since each stereoscopic 3D rig is always linked to only one master camera. If you need multiple cameras, create a separate stereoscopic 3D rig for each camera and then edit the stereoscopic 3D scenes together in another composition.

Lights do not transfer into the precompositions with collapsed transformations. If you create a light in your main composition, that light isn't used in your Left Eye and Right Eye compositions, and therefore not in your Stereo 3D composition either. If you need lights, manually copy your lights into the Left Eye and Right Eye compositions, and make sure that they are identical to the original lights in the main composition; otherwise, you can get different shadows or colors in each eye, which can cause visual discomfort. Adobe recommends that if you need to add lights, you connect the lights in the left and right compositions to their counterparts in the master composition via expressions. Make sure to link all properties of the lights, including position, orientation, and light options. You can do this easily with the pickwhip: open two timelines so that you can simultaneously see your main composition and either the left or the right composition, Alt-click (Windows) or Option-click (Mac OS) the stopwatch for each property of the copied light, and drag the pickwhip to the associated light property in the main composition.
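The linking expressions look something like the following sketch. The composition name "YourCompName" and the light name "Key Light" are assumptions; replace them with the names in your own project, and repeat the link for every property the light uses.

Expression to set on a copied light's Position property (names are placeholders):

comp("YourCompName").layer("Key Light").transform.position

Expression to set on the same light's Intensity property:

comp("YourCompName").layer("Key Light").lightOption.intensity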

Ghosting

When viewing your composition through glasses, you may see areas that appear twice; this is called ghosting. You can test for it by closing your right eye: if you can still see any part of the image that only the right eye should be able to see, you have a problem. Ghosting is usually an issue with the way the display is showing your content, so in general you should try to minimize the areas that ghost. Sometimes it happens when there are sharp contrasts of color and the glasses are not able to entirely block the image from the incorrect eye, but most likely it is a synchronization issue or similar problem with the 3D TV or display device.

Avoiding stereoscopic problems

As you can tell, there are many moving parts when working with stereoscopic 3D. As first discussed, you have access to many more variables than in real life, so there is much more opportunity for them to become misaligned, providing contradictory depth cues and causing eyestrain or brain pain. The following are some general principles to keep in mind.

  • Make sure that your depth cues are not giving contradictory information.
  • Check your camera zoom; wide-angle lenses cause more distortion if your cameras are converging (toe-in).
  • Match the master camera's focus distance to the distance to the convergence plane; it can be subtly confusing if they do not match (it can give you the sense that something is wrong without being able to tell what).
  • If integrating live footage, confirm that your camera angles match those of the cameras used for the footage (usually parallel), and that your convergence distance also matches that of the footage.
  • Avoid introducing an extreme amount of parallax. In difference mode, look at the horizontal spacing of the left and right eyes between the closest and farthest object and make sure this is not too extreme.
  • If your eyes cannot converge, or it is painful to look at the image, you might try these solutions:

    • Move farther away from the viewing screen when looking through your 3D glasses.
    • Make sure that your convergence point is somewhere predictable and not far off in the distance or very close to the camera where your eyes would go cross-eyed.
    • Reduce the Stereo Scene Depth (interaxial separation). Even if your convergence plane is reasonably located, an object far from the convergence point can still force your eyes to cross, which is painful. Remember that it is the relationship between the objects in the scene that matters; compare the horizontal separation of the closest object to that of the farthest. If the two overlaid images look drastically different, this could cause strain.

Ghosting can be caused by things that are out of your control, such as hardware synchronization between the glasses and monitor, the battery power of the glasses, the dynamic range of the monitor, or the refresh rate. But there are some things you can do to reduce it. If you are getting ghosting, try the following:

  • reducing the high-contrast areas
  • increasing brightness
  • reducing scene depth so the separation between elements is reduced
  • checking your display's stereoscopic 3D troubleshooting guide

A final experiment

One interesting experiment is to reverse the depth cues on purpose to get a sense of what happens when things go wrong. You can easily contradict your occlusion and stereoscopic 3D depth cues to produce an interesting illusion. Selecting Swap Left-Right in the 3D Glasses effect reverses all the convergences, so everything that was sticking out is now pushed in. The result is counterintuitive: an object that is in front of another according to occlusion, relative size, and perspective appears to be behind it according to the stereo depth cue. It looks as if the background layer is cut out and the foreground layer is sinking into it. The effect is strange, but experiencing it helps you understand how important these depth cues are and how important it is to make sure that they are all in alignment and agree.
