
Image Gallery

See also the Tutorials section, which contains several examples and images of Visualization Library in action.

Volume rendering of a section of the Visible Woman (http://www.crd.ge.com/esl/cgsp/projects/vw/).
A 3D plot generated with the vlVolume::VolumePlot class. A 3D function is evaluated on a scalar field, which is then passed to a vlVolume::MarchingCube object that extracts the isosurface.
Human skull extracted from CT data using the vlVolume::MarchingCube class.
Multiple isosurfaces and transparency management.
The vector graphics example shows the capabilities of the new VectorGraphics class, which brings the power of OpenGL to the 2D vector graphics world.
The new KdTree-based simple terrain scene manager can handle very large data sets by storing only the heightmap in GPU memory and generating all the geometry on the fly!
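For the curious, the core of the trick can be sketched in a few lines of GLSL (shown here as a C++11 raw string, in modern GLSL for clarity; all names are illustrative, this is not VL's actual shader): a single flat grid is reused for every terrain patch and displaced in the vertex shader by fetching the heightmap texture.

    // Hedged sketch: a vertex shader that displaces a flat grid patch using
    // a heightmap kept in GPU memory (vertex texture fetch). Uniform and
    // attribute names are illustrative, not VL's actual shader interface.
    const char* terrain_vs = R"(
      #version 330 core
      uniform sampler2D heightmap;  // the only per-terrain data stored on the GPU
      uniform mat4  mvp;            // model-view-projection matrix
      uniform float height_scale;   // vertical exaggeration
      in  vec2 grid_xy;             // flat grid vertex, reused for every patch
      out vec2 uv;
      void main()
      {
        uv = grid_xy;               // assume the grid spans [0,1]^2
        float h = textureLod(heightmap, uv, 0.0).r;
        gl_Position = mvp * vec4(grid_xy.x, h * height_scale, grid_xy.y, 1.0);
      }
    )";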
The KdTree test gives visual feedback on how the KdTree nodes are organized.
The lights demo shows how to use the standard OpenGL lights: point, directional and spot lights. Any light can follow a transform or the camera.
The OpenGL Shading Language image processing example uses the GPU to implement several image processing effects.
The famous “Stanford Bunny” from The Stanford 3D Scanning Repository http://graphics.stanford.edu/data/3Dscanrep/ rendered with a geometry shader that creates polygonal fur.
The famous “Stanford Bunny” from The Stanford 3D Scanning Repository http://graphics.stanford.edu/data/3Dscanrep/. The original model is on the right; on the left you can see its simplified version. Visualization Library simplifies the model from the original 70,000 polygons to 1,400 polygons (2%) in 4.05 seconds (Intel Core 2 Duo 2GHz). Lowering some quality settings makes it even faster. The polygon reduction algorithm is based on the “Quadric Error Metrics Surface Simplification” algorithm by Michael Garland and Paul S. Heckbert.
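The heart of the Garland-Heckbert algorithm is easy to sketch: each triangle's plane contributes a 4x4 quadric Q = p*p^T to its vertices, and the cost of placing a vertex at v = (x, y, z, 1) is v^T * Q * v; the edges whose contraction yields the lowest cost are collapsed first. The following minimal C++ sketch is our own illustration, not VL's implementation, and shows only the quadric accumulation and error evaluation; pair selection and optimal vertex placement are omitted.

    // Minimal illustration of quadric error metrics (not VL's implementation).
    // Each triangle plane a*x + b*y + c*z + d = 0 (with a^2+b^2+c^2 = 1)
    // contributes the quadric Q = p*p^T to its three vertices; the error of
    // placing a vertex at v = (x, y, z, 1) is v^T * Q * v.
    struct Quadric
    {
        double m[4][4];
        Quadric() { for (int i = 0; i < 4; ++i) for (int j = 0; j < 4; ++j) m[i][j] = 0; }

        // accumulate the quadric of one supporting plane
        void addPlane(double a, double b, double c, double d)
        {
            double p[4] = { a, b, c, d };
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    m[i][j] += p[i] * p[j];
        }

        // sum of squared distances of (x, y, z) from all accumulated planes
        double error(double x, double y, double z) const
        {
            double v[4] = { x, y, z, 1.0 }, e = 0;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    e += v[i] * m[i][j] * v[j];
            return e;
        }
    };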
Volume rendering of the CT scan of a human head. The volume is rendered using 512 screen-aligned slices. Note that the volume can be freely rotated and the camera can look at it from any point of view, since the slices maintain a screen-aligned profile and the UV texture coordinates are dynamically adjusted in real time.
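The slicing idea can be outlined as follows (a hedged sketch using GLM for the math; the function and parameter names are our own, not VL's API): each slice is a camera-facing quad at a fixed eye-space depth, and its corners are mapped back into the volume's [0,1]^3 texture space every frame.

    #include <vector>
    #include <glm/glm.hpp>

    // Hedged sketch of screen-aligned slicing (function and parameter names
    // are ours, not VL's API). A camera-facing quad at eye-space depth eye_z
    // is mapped back into the volume's [0,1]^3 texture space every frame.
    std::vector<glm::vec3> sliceTexCoords(const glm::mat4& modelview,
                                          const glm::vec3& vol_min,
                                          const glm::vec3& vol_max,
                                          float eye_z, float half_size)
    {
        // quad corners in eye space, always facing the camera
        const glm::vec4 corners[4] = {
            glm::vec4(-half_size, -half_size, eye_z, 1.0f),
            glm::vec4( half_size, -half_size, eye_z, 1.0f),
            glm::vec4( half_size,  half_size, eye_z, 1.0f),
            glm::vec4(-half_size,  half_size, eye_z, 1.0f),
        };
        glm::mat4 eye_to_object = glm::inverse(modelview);
        std::vector<glm::vec3> tex(4);
        for (int i = 0; i < 4; ++i)
        {
            glm::vec3 obj = glm::vec3(eye_to_object * corners[i]);
            // remap object coordinates into 3D texture coordinates
            tex[i] = (obj - vol_min) / (vol_max - vol_min);
        }
        return tex;
    }

Rendering the slices back to front with alpha blending then composites the volume.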
The Mandelbrot fractal, animated and rendered in real time on the GPU using a GLSL fragment shader!
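A fragment shader similar in spirit to the demo's looks like this (embedded as a C++11 raw string; the uniform names and the palette are assumptions): each fragment iterates z = z^2 + c and is colored by the escape count.

    // Sketch of a Mandelbrot fragment shader in the same spirit as the demo
    // (uniform names and color palette are assumptions).
    const char* mandelbrot_fs = R"(
      #version 330 core
      uniform vec2  center;  // animated pan
      uniform float scale;   // animated zoom
      in  vec2 frag_uv;      // full-screen quad coordinates in [-1,1]^2
      out vec4 color;
      void main()
      {
        vec2 c = center + frag_uv * scale;
        vec2 z = vec2(0.0);
        int i;
        for (i = 0; i < 256; ++i)
        {
          z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;  // z = z^2 + c
          if (dot(z, z) > 4.0)
            break;  // escaped
        }
        float t = float(i) / 256.0;
        color = vec4(t, t*t, sqrt(t), 1.0);  // simple escape-time palette
      }
    )";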
The famous “Stanford Bunny” from The Stanford 3D Scanning Repository http://graphics.stanford.edu/data/3Dscanrep/ rendered with a shader that makes it look like a panther.
A PLY model of a Chinese dragon from The Stanford 3D Scanning Repository: http://graphics.stanford.edu/data/3Dscanrep/ rendered with a “heat” shader. The model has 871,414 triangles and 437,645 vertices.
A PLY model of the bones of a hand from The Stanford 3D Scanning Repository: http://graphics.stanford.edu/data/3Dscanrep/ rendered with a contour-enhancing shader. The model has 654,666 triangles and 327,323 vertices.
In this picture the 2D text labels follow the on-screen positions of the planets rotating around the Sun. Note the “Moon” label, drawn with a transparent background and a black border. The circular lines are rendered with line smoothing.
This demo shows how to create applications like 3D editors by using four cameras/rendering pipelines, smoothed lines, wireframe polygon filling and double-sided lighting. The object in the picture is the well-known monkey primitive from Blender (http://www.blender.org) loaded from a 3DS file.
Here you can see five objects drawn with five different OpenGL Shading Language vertex and fragment shaders. The top left one uses toon-shading (also known as cel-shading), which imitates the look and feel of a cartoon by quantizing the light intensity. The top right one shows a per-pixel shader, where the lighting equation is computed per-fragment rather than per-vertex, achieving much higher realism. The bottom left one shows a “heat” shader that simulates what you would see through a heat sensor: the range goes from blue through green and yellow to red, where blue represents the coldest surfaces and red the warmest ones. The bottom right one applies an interlaced effect on top of the per-pixel lighting shader. The one in the middle uses a geometry shader to simulate polygonal fur.
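To give an idea of how little code the toon effect needs, here is an illustrative cel-shading fragment shader (not the demo's actual source; all names are assumptions) that quantizes the diffuse term into four bands:

    // Illustrative cel-shading fragment shader (not the demo's source; all
    // names are assumptions). The continuous N.L diffuse term is snapped to
    // four discrete bands, which produces the cartoon look.
    const char* toon_fs = R"(
      #version 330 core
      uniform vec3 light_dir;   // normalized light direction in eye space
      uniform vec3 base_color;
      in  vec3 eye_normal;
      out vec4 color;
      void main()
      {
        float d = max(dot(normalize(eye_normal), light_dir), 0.0);
        d = floor(d * 4.0) / 4.0;  // quantize the light intensity
        color = vec4(base_color * (0.25 + 0.75 * d), 1.0);
      }
    )";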
Here you can see a possible use of the legacy render-to-texture technique, available since OpenGL 1.1. This demo renders the right camera's view first, then copies the framebuffer to a texture, which in turn is applied to the plane and the box in the left camera's view.
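Stripped of everything else, the legacy path boils down to a single call (a minimal sketch; the helper name, and the assumption that the texture was created with glTexImage2D at the given size, are ours):

    #include <GL/gl.h>

    // Minimal sketch of the OpenGL 1.1 copy path: draw the scene normally,
    // then copy the framebuffer into a bound texture. Assumes 'tex' was
    // created with glTexImage2D at tex_w x tex_h and that the viewport is
    // at least that large.
    void copyFramebufferToTexture(GLuint tex, int tex_w, int tex_h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        // copy the lower-left tex_w x tex_h pixels into mipmap level 0
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, tex_w, tex_h);
    }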
This demo uses the Frame Buffer Object extension to render some yellow rotating rings directly onto a 2048×2048 texture, which is then applied to the plane and the box in the next rendering pipeline. On the plane it is evident that the texture quality of this demo is superior to the previous one's.
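A minimal FBO setup along these lines might look as follows (a hedged sketch using the core-style entry points exposed by an extension loader such as GLEW; not VL's internal code):

    #include <GL/glew.h>  // FBO entry points come from an extension loader

    // Hedged sketch of a render-to-texture FBO (not VL's internal code).
    // The color buffer is the texture itself, plus a depth renderbuffer so
    // the rotating rings are depth-tested correctly.
    GLuint makeRenderTargetFBO(GLuint color_tex, int w, int h)
    {
        GLuint fbo = 0, depth_rb = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color_tex, 0);
        glGenRenderbuffers(1, &depth_rb);
        glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth_rb);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return 0;  // incomplete: fall back to the copy technique
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }

The key difference from the previous demo is that rendering goes straight into the texture, so its resolution is not limited by the window size.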
This demo shows a mixed usage of Frame Buffer Objects, Multiple Render Targets and the OpenGL Shading Language. First a Frame Buffer Object with two 2D texture attachment points is created; then, with a single OpenGL Shading Language fragment shader, we write to both of them at the same time using two different techniques, a “heat” shader and a per-pixel lighting shader. Finally, the two textures are applied to the two boxes in the next rendering pipeline.
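The multiple-render-target part reduces to selecting two draw buffers and declaring two fragment shader outputs. Below is a hedged sketch (the names and the heat ramp are our own assumptions, and the FBO is set up as in the previous example, this time with two color attachments):

    #include <GL/glew.h>

    // Route the fragment shader outputs to both color attachments.
    void bindTwoTargets(GLuint fbo)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // has two color attachments
        const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
        glDrawBuffers(2, bufs);
    }

    // One fragment shader, two simultaneous outputs (names are assumptions):
    const char* mrt_fs = R"(
      #version 330 core
      layout(location = 0) out vec4 heat_out;  // -> GL_COLOR_ATTACHMENT0
      layout(location = 1) out vec4 lit_out;   // -> GL_COLOR_ATTACHMENT1
      uniform vec3 light_dir;
      in vec3 eye_normal;
      void main()
      {
        float d = max(dot(normalize(eye_normal), light_dir), 0.0);
        heat_out = vec4(d, 1.0 - abs(2.0*d - 1.0), 1.0 - d, 1.0);  // "heat" ramp
        lit_out  = vec4(vec3(d), 1.0);                             // per-pixel lit
      }
    )";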
This demo demonstrates the use of shader LOD, geometrical LOD, multipassing, LOD-aware geometrical animation and LOD/multipassing-aware shader animation. Note the outlines of the spheres, which are drawn using multipassing. In order to lower the amount of work the GPU has to do, Visualization Library lets you use a more detailed geometrical level and heavier shaders for closer objects, while using lighter shaders and simpler geometries for objects far away. The whole system keeps working even while the shaders and the geometries are being animated!
This demo loads an MD2 (Quake 2) model and creates 4000 instances of it that share their geometry. The geometry is uploaded to the GPU using VBOs (Vertex Buffer Objects) and the models are animated by interpolating their vertices directly on the GPU with a vertex shader; interpolating this many vertices on the CPU would be at least an order of magnitude slower. The model in the picture is the Perelith Knight by James Green.
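The interpolation itself is a one-liner in the vertex shader. Here is an illustrative sketch (attribute and uniform names are assumptions; normals would be blended the same way):

    // Illustrative sketch of the GPU keyframe interpolation (attribute and
    // uniform names are assumptions). Both keyframe positions live in VBOs
    // and are fed to the shader as vertex attributes.
    const char* md2_vs = R"(
      #version 330 core
      uniform mat4  mvp;
      uniform float frame_t;  // 0..1 blend factor, animated per instance
      in vec3 pos_frame0;     // vertex position in keyframe N
      in vec3 pos_frame1;     // vertex position in keyframe N+1
      void main()
      {
        gl_Position = mvp * vec4(mix(pos_frame0, pos_frame1, frame_t), 1.0);
      }
    )";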
In this demo the title is drawn in red with a black border to make it stand out more from the background. The four parts of the poem are drawn with different text alignments: left, center, right and justified.
Here you can see how the text rendering engine of Visualization Library handles Unicode fonts and characters. Note that the Hebrew and Arabic texts are rendered using the right-to-left layout.
This demo shows a possible use of point sprites to create a high-performance, highly dense scatter plot.
This demo shows a combination of alpha blending, alpha testing, multitexturing and double-sided lighting.
Here you can see how Visualization Library lets you manage complex clipping-plane interactions. The right box is not clippable. The center and left yellow boxes are clippable, but inside the left yellow box there is a green box which is not clippable. The central yellow box is inside a cylinder to which two opposite clipping planes are applied via multipassing, thus creating a slice-shaped hole in it. The grayish plane represents the clipping plane applied to the yellow boxes. Note that in the demo both the visible plane and the clipping plane are attached to a Transform and dynamically animated.
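The slice cut on the cylinder can be reduced to plain fixed-function OpenGL (a minimal sketch, not VL's multipassing machinery; the plane constants are made up): each pass keeps the half-space where a*x + b*y + c*z + d >= 0, so two opposite planes leave only the thin slab between them cut away.

    #include <GL/gl.h>

    // Minimal fixed-function sketch of the two-pass slice cut. Each pass
    // keeps the half-space where a*x + b*y + c*z + d >= 0.
    void drawWithSliceCut(void (*drawCylinder)(void))
    {
        const GLdouble plane_up[4]   = { 0.0,  1.0, 0.0, -0.1 };  // keep y >=  0.1
        const GLdouble plane_down[4] = { 0.0, -1.0, 0.0, -0.1 };  // keep y <= -0.1

        glEnable(GL_CLIP_PLANE0);
        glClipPlane(GL_CLIP_PLANE0, plane_up);    // pass 1: part above the slab
        drawCylinder();
        glClipPlane(GL_CLIP_PLANE0, plane_down);  // pass 2: part below the slab
        drawCylinder();
        glDisable(GL_CLIP_PLANE0);
    }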
This demo demonstrates how to use hierarchical transforms to simulate the movement of a mechanical/robot arm.
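The essence of a hierarchical transform is that each joint's world matrix is its parent's world matrix times its own local matrix, so rotating the shoulder carries everything below it along. A minimal sketch using GLM (joint names and lengths are made up; this is not VL's Transform API):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Minimal sketch of a two-joint arm hierarchy using GLM. Each world
    // matrix is the parent's world matrix times the joint's local matrix.
    void robotArmMatrices(float shoulder_deg, float elbow_deg,
                          glm::mat4& upper_world, glm::mat4& lower_world)
    {
        glm::mat4 base(1.0f);  // identity: the arm's root
        glm::mat4 shoulder = glm::rotate(glm::mat4(1.0f),
                                         glm::radians(shoulder_deg),
                                         glm::vec3(0, 0, 1));
        glm::mat4 upper_len = glm::translate(glm::mat4(1.0f), glm::vec3(1, 0, 0));
        glm::mat4 elbow = glm::rotate(glm::mat4(1.0f),
                                      glm::radians(elbow_deg),
                                      glm::vec3(0, 0, 1));
        upper_world = base * shoulder;                  // upper arm
        lower_world = upper_world * upper_len * elbow;  // forearm inherits it
    }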