Light Field Viewer Software

[Screenshots: the Garfield light field and the plastic light field]


I recently wrote a light field viewer that runs in real-time by exploiting the graphics hardware (especially multitexturing support); the principles behind it are described below.  The program is written in C++, and uses OpenGL for all the rendering.  I have tested this software on a Dell laptop (Latitude C840), with an NVIDIA GeForce4 440 Go graphics card, running Red Hat Linux 8.0.

This viewer works specifically on the hemispherical/spherical data sets that we are using in our experiments.  (To work with non-hemispherical data sets, you will have to hack the code a bit, specifically the ViewHash class.)  If you do have a hemispherical light field, then you will have to get it into the light field format described below.  You will also have to specify a geometry for this light field, in the geometry file format described below.

Download source code and example files
lfviewer.tar.gz (includes Makefile, source and header files)
lfPlastic256.tar.gz (example light field directory - a textured plastic ball)
geoPlastic.txt (example geometry file - for the above light field)

Please note the following:

LFViewer Version 0.1 - light field viewer
Copyright (c) 2003 Prashant Ramanathan
This program comes with ABSOLUTELY NO WARRANTY,
including any implied warranty of fitness for any purpose,
or of merchantability.

Unpacking, compiling and running the code

To unpack, type at the prompt:
> gunzip lfviewer.tar.gz
> tar xvf lfviewer.tar
> gunzip lfPlastic256.tar.gz
> tar xvf lfPlastic256.tar

To compile:
> make

To run:
> LFViewer Plastic256/ geoPlastic.txt
where the first argument is the directory containing the light field image and parameter files, and the second argument is the file that specifies the geometry.

To navigate: hold down the left mouse button and drag to rotate around the object; hold down the right mouse button and drag to zoom in or out; hold down the middle mouse button and drag to shift the object up, down or sideways.  The controls are very simplistic right now, and don't always do the intuitive thing, so feel free to change them.  One other note: since this is a static light field, the object is stationary and you are moving around it.  Likewise, the lights are fixed.

Light field format

The LFViewer program requires you to specify a directory that contains the files related to the light field, specifically: an info file, a lightfield.parm file, and all the image files in PPM format.

info file format:

[focal length in x (float)] [focal length in y (float)]
[parameter file name (string)]
[number of light field images (int)]

[scan order - integers from 0 to number of light field images - 1]
[filename of light field image 0 (string)] 0
[filename of light field image 1 (string)] 0
[filename of light field image 2 (string)] 0
...

Here is an example file.  Also see the class InfoFile for more details.
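Since the example link above may not survive, here is what an info file following this layout might look like.  The focal lengths, filenames and scan order below are made up purely for illustration:

```
525.0 525.0
lightfield.parm
4

2 0 3 1
img000.ppm 0
img001.ppm 0
img002.ppm 0
img003.ppm 0
```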

lightfield.parm format:

[image 0 anglex] [image 0 anglez] [image 0 angley] [image 0 tx] [image 0 ty] [image 0 tz]
[image 1 anglex] [image 1 anglez] [image 1 angley] [image 1 tx] [image 1 ty] [image 1 tz]
...

The quantities anglex, anglez, angley (note the order) specify the rotation matrix for the view, and the quantities tx, ty, tz specify the translation vector.  The rotation matrix R and the translation vector T give us the transformation between a point in world coordinates (pw) and the same point in view coordinates (pv):
    pv = R*pw + T
You can use the Matlab file AxAzAy_from_R.m to obtain the angles from a rotation matrix.  The translation vector is T = [tx ty tz] (transposed, i.e. a column vector).  Here is an example lightfield.parm file.
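To make the convention concrete, here is a minimal C++ sketch of the world-to-view transform above.  The type and function names are my own, not the viewer's:

```cpp
#include <array>
#include <cmath>

// World-to-view transform as described above: pv = R*pw + T.
// R is a row-major 3x3 rotation matrix; T, pw and pv are 3-vectors.
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

Vec3 worldToView(const Mat3& R, const Vec3& T, const Vec3& pw) {
    Vec3 pv{};
    for (int i = 0; i < 3; ++i) {
        pv[i] = T[i];                       // translation
        for (int j = 0; j < 3; ++j)
            pv[i] += R[i][j] * pw[j];       // rotation
    }
    return pv;
}
```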

image format:

The filenames of the images are all specified in the info file.  The files must be PPM files.  (You can modify the code to work with your own file format, too.)  Currently, because of OpenGL/graphics hardware limitations, only image sizes that are powers of 2 are usable by the viewer.  Here is an example light field (including the images and the info and lightfield.parm files).
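If you are preparing your own images, the power-of-2 restriction is easy to check programmatically.  A small sketch (helper names are mine):

```cpp
// The viewer can only use textures whose width and height are powers of 2
// (an OpenGL / graphics-hardware restriction at the time).
bool isPowerOfTwo(unsigned int n) {
    return n != 0 && (n & (n - 1)) == 0;
}

bool usableImageSize(unsigned int width, unsigned int height) {
    return isPowerOfTwo(width) && isPowerOfTwo(height);
}
```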

Geometry file format

The geometry is specified as a set of vertices and faces, in a format that is very similar to the OBJ format.  (If I had known about OBJ at the time, I wouldn't have invented my own version.)  The format is as follows:

[V] [F]
[vx 0] [vy 0] [vz 0] [nx 0] [ny 0] [nz 0]
[vx 1] [vy 1] [vz 1] [nx 1] [ny 1] [nz 1]
...
[vx V-1] [vy V-1] [vz V-1] [nx V-1] [ny V-1] [nz V-1]
[vertex index (face 0)] [vertex index (face 0)] [vertex index (face 0)]
[vertex index (face 1)] [vertex index (face 1)] [vertex index (face 1)]
...
[vertex index (face F-1)] [vertex index (face F-1)] [vertex index (face F-1)]

where V is the number of vertices, F is the number of faces, (vx,vy,vz) are the x,y,z-coordinates of the ith vertex, and (nx,ny,nz) are the components of the ith vertex's normal.  The program ignores the normal values, so you can set them to anything.  A vertex index refers to the order in which the vertices are specified, from 0 to V-1.  See the example geometry file geoPlastic.txt, or the Geometry class, for more details.
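A minimal sketch of a reader for this format (the struct and function names are mine; the viewer's own Geometry class is the authoritative version):

```cpp
#include <sstream>
#include <vector>

// Reads the geometry format described above: V and F counts, then V lines
// of vertex position + normal, then F lines of 3 vertex indices each.
struct SimpleGeometry {
    std::vector<double> vx, vy, vz;   // vertex positions
    std::vector<int> faces;           // 3 indices per face, flattened
};

bool readGeometry(std::istream& in, SimpleGeometry& g) {
    int V, F;
    if (!(in >> V >> F)) return false;
    for (int i = 0; i < V; ++i) {
        double x, y, z, nx, ny, nz;   // normals are read but discarded,
        if (!(in >> x >> y >> z >> nx >> ny >> nz)) return false;
        g.vx.push_back(x); g.vy.push_back(y); g.vz.push_back(z);
    }
    for (int i = 0; i < F; ++i) {
        int a, b, c;
        if (!(in >> a >> b >> c)) return false;
        g.faces.push_back(a); g.faces.push_back(b); g.faces.push_back(c);
    }
    return true;
}
```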

Principles of this light field viewer

The basic idea with this viewer is that we can use the hardware-supported texture-mapping available in many mid- to high-end graphics cards to perform most of the rendering work.  In this sense, it is very similar to the view-dependent texture mapping work of Debevec et al.

We treat each light field image as a texture map.  We assume a geometry model, so that we have something to texture-map onto.  For instance, we could choose a geometry model that is a plane coincident with the focal plane, and we would end up with Levoy and Hanrahan's light field rendering (assuming we could texture-map with that many textures at once, and had a large enough number of light field images).  Better yet, we can use a geometry closer to that of the object or scene, which is what I have tested so far.

The texture coordinates for each vertex in the geometry, for each image, can be generated automatically by OpenGL (check out glTexGen), using the camera view parameters.  The only difficulty with this approach is that texture coordinates are also generated for vertices that are not visible from that view, so we need to explicitly perform occlusion checking.  We pre-compute the visibility of all vertices from all views, and store it for use during rendering.

During rendering, when the user selects a new virtual view, the program first selects the K best images with which to render that view.  Here K depends on the number of texture units available on your graphics card.  The code right now only supports K=2 texture units, which, surprisingly, gives fairly good rendering quality.  Note that using the same 2 views for all of the geometry is not exactly right: it is accurate when the virtual view is near the hull of the capturing cameras, but becomes a poorer approximation as we move away from that hull.  You could potentially mitigate this by using a different pair of textures in different parts of the scene or object, and rendering in multiple passes; I haven't tried this, and don't know how it would affect performance.  The other option is to get a graphics card with more texture units.
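Selecting the K best reference images can be as simple as ranking the capture cameras by the angle between their viewing direction and the virtual one.  A sketch for K=2 (names are mine, and this is one plausible criterion, not necessarily the viewer's exact one):

```cpp
#include <algorithm>
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

static double dot3(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Return indices of the 2 camera directions closest in angle to the
// virtual view direction (all directions assumed unit length).
std::array<int, 2> pickBestTwo(const std::vector<Vec3>& camDirs,
                               const Vec3& virtualDir) {
    std::vector<int> idx(camDirs.size());
    for (size_t i = 0; i < idx.size(); ++i) idx[i] = static_cast<int>(i);
    std::partial_sort(idx.begin(), idx.begin() + 2, idx.end(),
        [&](int a, int b) {
            // larger dot product = smaller angle to the virtual view
            return dot3(camDirs[a], virtualDir) > dot3(camDirs[b], virtualDir);
        });
    return {idx[0], idx[1]};
}
```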

Once we have the K=2 images that we are using for texturing, we have to figure out how to blend between them.  I calculate blending on a per-vertex basis.  The weights (based loosely on Buehler et al.'s unstructured lumigraph rendering work) depend on visibility, the angle between the virtual camera and each of the K=2 reference cameras as seen from the vertex, and the relative distances to each of the K=2 cameras.  Since we have pre-computed visibility, this is actually fairly quick.
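The per-vertex blending can be sketched as follows: each visible reference camera gets a weight that falls off with the angle it makes with the virtual camera as seen from the vertex, and the weights are normalized to sum to 1.  This is my own simplified reading of the scheme described above (it ignores the distance term), not the viewer's exact code:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

static Vec3 unitTo(const Vec3& from, const Vec3& to) {
    Vec3 d{to[0]-from[0], to[1]-from[1], to[2]-from[2]};
    double len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    return {d[0]/len, d[1]/len, d[2]/len};
}

// Angle-based weights for the reference cameras at one vertex.  Invisible
// cameras get weight 0; the rest are normalized to sum to 1.
std::vector<double> blendWeights(const Vec3& vertex,
                                 const Vec3& virtualCam,
                                 const std::vector<Vec3>& refCams,
                                 const std::vector<bool>& visible) {
    Vec3 v = unitTo(vertex, virtualCam);
    std::vector<double> w(refCams.size(), 0.0);
    double sum = 0.0;
    for (size_t k = 0; k < refCams.size(); ++k) {
        if (!visible[k]) continue;
        Vec3 r = unitTo(vertex, refCams[k]);
        double cosAngle = v[0]*r[0] + v[1]*r[1] + v[2]*r[2];
        w[k] = std::max(0.0, cosAngle);   // smaller angle -> larger weight
        sum += w[k];
    }
    if (sum > 0.0)
        for (double& x : w) x /= sum;
    return w;
}
```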

That's pretty much it!  Getting it to work is a bit tricky, but the principles are fairly straightforward.


I gratefully acknowledge the mountain of resources out there on the web.  One site in particular had exactly the code snippet I was looking for.  Thank you!

Last modified: May 22, 2003    Prashant Ramanathan