VRSG's Core Image Generator Features
MetaVR Virtual Reality Scene Generator™ (VRSG™) supports the features typically required of image generators used in flight, ground vehicle, and infantry training simulators, as well as many other applications. An image generator is typically driven by the user's simulation host model, such as a flight model; VRSG renders the virtual world as specified by host-supplied parameters such as eyepoint location and field of view.
VRSG's core image generator features include:
VRSG supports full-featured light points, developed with input from subject matter experts such as commercial and military pilots. All light points, including directional light points with per-FOV edge attenuation behavior, are processed entirely in vertex shader programs downloaded to the graphics hardware, providing exceptional performance.
VRSG provides realistic light lobes with per-pixel radial attenuation and per-vertex axial attenuation, flexible enough to support landing lights, taxi lights, headlights, and searchlights. Light lobes do not require multiple database render passes or hardware that can store alpha information in the frame buffer; instead they are rendered in a single pass, so enabling a light lobe incurs minimal performance cost and no drastic penalty in fill rate or geometry processing. You can configure multiple concurrent, independent, steerable light lobes on video cards that support Shader Model 5.0 pixel shaders.
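The following is a minimal sketch, in C++, of the radial and axial attenuation behavior described above; the structure, function, and parameter names are illustrative assumptions, not VRSG code or shader source.

```cpp
// Minimal sketch (not VRSG code) of the attenuation a light lobe combines:
// axial falloff along the beam direction and radial falloff away from the
// beam axis.
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static double length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Returns an intensity scale in [0, 1] for a point lit by a lobe at 'origin'
// pointing along unit vector 'axis'.  'halfAngleCos' is the cosine of the
// lobe's half angle; 'range' is its axial reach in meters.  All names are
// illustrative, not VRSG API names.
double lobeAttenuation(const Vec3& origin, const Vec3& axis,
                       double halfAngleCos, double range, const Vec3& point)
{
    Vec3 toPoint{point.x - origin.x, point.y - origin.y, point.z - origin.z};
    double dist = length(toPoint);
    if (dist <= 0.0 || dist > range) return 0.0;

    Vec3 dir{toPoint.x / dist, toPoint.y / dist, toPoint.z / dist};
    double cosOff = dot(dir, axis);             // angular offset from the axis
    if (cosOff < halfAngleCos) return 0.0;      // outside the cone entirely

    // Radial attenuation: smooth falloff from the beam axis to the cone edge.
    double radial = (cosOff - halfAngleCos) / (1.0 - halfAngleCos);
    // Axial attenuation: linear falloff with distance along the beam.
    double axial = 1.0 - dist / range;
    return std::clamp(radial * axial, 0.0, 1.0);
}
```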
VRSG supports a highly optimized dynamic lighting pipeline, which blends per-vertex color with per-polygon material properties and combines the result with ambient lighting conditions and directional light sources for efficient, convincing dynamic lighting effects.
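As an illustration of that blend, the sketch below computes a lit color from per-vertex color, per-polygon material, an ambient term, and one directional light using a simple Lambert diffuse term; it is a generic example under those assumptions, not VRSG's actual pipeline.

```cpp
// Generic per-vertex lighting blend: vertex color modulated by material,
// lit by an ambient term plus one directional light (Lambert diffuse).
#include <algorithm>
#include <cmath>

struct Color { double r, g, b; };
struct Vec3  { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// 'normal' and 'lightDir' are unit vectors; all names are illustrative only.
Color shadeVertex(const Color& vertexColor, const Color& material,
                  const Color& ambient, const Color& lightColor,
                  const Vec3& normal, const Vec3& lightDir)
{
    double diffuse = std::max(0.0, dot(normal, lightDir));
    auto lit = [&](double v, double m, double a, double l) {
        return v * m * (a + diffuse * l);   // blend color and material, then light
    };
    return { lit(vertexColor.r, material.r, ambient.r, lightColor.r),
             lit(vertexColor.g, material.g, ambient.g, lightColor.g),
             lit(vertexColor.b, material.b, ambient.b, lightColor.b) };
}
```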
VRSG supports environment and weather effects such as:
VRSG uses an ephemeris model to calculate the positions of the sun, moon, and stars, and the phase of the moon, from date, time, and geographic location. Lighting conditions can also be calculated automatically from the same inputs.
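For a sense of what such an ephemeris calculation involves, the sketch below computes an approximate solar elevation from day of year, local solar time, and latitude using standard low-precision formulas; it is not VRSG's own model.

```cpp
// Approximate solar elevation from date, time, and observer latitude, using
// the common low-precision declination and hour-angle approximations.
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;
constexpr double kDegToRad = kPi / 180.0;

// dayOfYear: 1-365, localSolarHour: 0-24 (solar time, not clock time),
// latitudeDeg: observer latitude.  Returns sun elevation in degrees.
double sunElevationDeg(int dayOfYear, double localSolarHour, double latitudeDeg)
{
    // Approximate solar declination (degrees).
    double decl = -23.44 * std::cos(kDegToRad * (360.0 / 365.0) * (dayOfYear + 10));
    // Hour angle: 15 degrees per hour away from solar noon.
    double hourAngle = 15.0 * (localSolarHour - 12.0);

    double sinElev = std::sin(kDegToRad * latitudeDeg) * std::sin(kDegToRad * decl)
                   + std::cos(kDegToRad * latitudeDeg) * std::cos(kDegToRad * decl)
                     * std::cos(kDegToRad * hourAngle);
    return std::asin(sinElev) / kDegToRad;
}

int main() {
    // Example: solar noon on day 172 (near June 21) at 35 degrees north.
    std::printf("Sun elevation: %.1f degrees\n", sunElevationDeg(172, 12.0, 35.0));
    return 0;
}
```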
Using CIGI or DIS Set Data PDUs, users can instantiate multiple volumetric clouds from a library of more than 13 cloud models. A cloud can be positioned, oriented, scaled, and moved over time. Volumetric clouds are particle masses that model light absorption, creating a realistic reduction in visibility when flown through. VRSG features a real-time dynamic lighting model for clouds that models light absorption as a function of particle depth into the cloud along the line of sight to the sun. Clouds can have an optional precipitation effect modeling either rainfall or snow. The precipitation effect is also a volumetric mass extending from the cloud base to ground level, creating a realistic reduction in visibility during flight. Because the rain or snow precipitation effect is generated dynamically, it can be applied to any cloud instance at any cloud altitude.
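Depth-dependent absorption of this kind is commonly expressed in the Beer-Lambert form sketched below; the extinction coefficient and depth value are assumptions for illustration, not VRSG's actual cloud-lighting parameters.

```cpp
// Illustrative only: the general Beer-Lambert form that volumetric cloud
// lighting models of this kind typically use.
#include <cmath>

// Fraction of sunlight reaching a point inside a cloud, given the particle
// depth (meters) traversed along the line of sight to the sun and an assumed
// extinction coefficient (per meter).
double cloudTransmittance(double depthToSunMeters, double extinctionPerMeter)
{
    return std::exp(-extinctionPerMeter * depthToSunMeters);
}
```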
VRSG supports multiple mechanisms for adding 2D overlays to the 3D display. Static overlays can be described in an ASCII file without any coding, and can be made dynamic through a UDP-based interface to the visual system: a simulation host can send commands to enable, disable, scale, rotate, and translate overlay primitives.
MetaVR provides a plug-in mechanism for users who want to generate overlay graphics using a low-level graphics API. The end user develops a dynamic link library (DLL), which the visual system loads at run time. The visual system calls functions exported by the DLL, passing the thread of execution to user-written code. From within the DLL, the user can use Direct3D to render customized overlays.
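The skeleton below suggests what such a plug-in DLL might look like; the exported function names, signatures, and the Direct3D objects passed in are placeholders, since the actual VRSG plug-in interface is defined by MetaVR's headers and documentation.

```cpp
// Skeleton of a user-written overlay plug-in DLL of the kind described above.
// The exported names and signatures are hypothetical placeholders.
#include <d3d11.h>

extern "C" {

// Hypothetical: called once after the visual system loads the DLL.
__declspec(dllexport) bool OverlayInit(ID3D11Device* device)
{
    // Create vertex buffers, shaders, and state objects for the overlay here.
    return device != nullptr;
}

// Hypothetical: called every frame after the 3D scene is rendered, so the
// plug-in can draw 2D symbology (for example, a HUD) on top of it.
__declspec(dllexport) void OverlayDraw(ID3D11DeviceContext* context)
{
    if (!context) return;
    // Issue Direct3D draw calls for the overlay primitives here.
}

// Hypothetical: called once before the DLL is unloaded.
__declspec(dllexport) void OverlayShutdown()
{
    // Release any Direct3D resources created in OverlayInit.
}

} // extern "C"
```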
MetaVR also provides a limited OpenGL emulation layer that allows legacy OpenGL-based HUD implementations to be ported to a VRSG plug-in with minimal effort. This mechanism allowed the Lockheed Martin F-22 HUD, originally developed for SGI platforms, to be ported to VRSG.
VRSG supports an unlimited number of viewports per channel. Multiple viewports on a single visual channel may overlap or be spatially disjoint. Viewports can be horizontally mirrored to support applications that demand it (such as a rear-view mirror) or display systems whose optics impose a horizontal reversal of the image.
VRSG supports the basic mission-function requirements of simulators ranging from ground-based vehicles to fast-moving fixed-wing aircraft. Each VRSG channel supports one laser-range request per frame at 60 Hz, with single-frame latency, and an Above Ground Level (AGL) response per channel at 60 Hz. A library that can be integrated into the simulation host provides features such as point-to-point intervisibility, terrain height lookup, and collision detection with terrain or dynamic model geometry.
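The sketch below illustrates how a simulation host might use such a query library; every type and function name is a placeholder with a stub body so the example compiles, and the real library's API will differ.

```cpp
// Illustrative host-side usage of the kind of query library described above.
// All names are hypothetical; stubs stand in for the real library calls.
#include <cstdio>

struct GeoPoint { double lat, lon, altMeters; };

// Stand-ins for the kinds of queries the host-integration library provides.
bool   queryIntervisibility(const GeoPoint&, const GeoPoint&) { return true; }   // line of sight clear?
double queryTerrainHeight(double, double)                     { return 120.0; }  // terrain height, meters MSL
bool   queryCollision(const GeoPoint&, double)                { return false; }  // hit terrain or a model?

int main()
{
    GeoPoint ownship{33.0, -114.3, 450.0};
    GeoPoint target {33.1, -114.2, 300.0};

    double agl = ownship.altMeters - queryTerrainHeight(ownship.lat, ownship.lon);
    bool canSeeTarget = queryIntervisibility(ownship, target);
    bool crashed      = queryCollision(ownship, /*radiusMeters=*/2.0);

    std::printf("AGL %.1f m, target %svisible, collision %s\n",
                agl, canSeeTarget ? "" : "not ", crashed ? "yes" : "no");
    return 0;
}
```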
Mission rehearsal applications require the ability to see long distances (that is, a far horizon) and to process large amounts of geospecific imagery draped over terrain elevation data. MetaVR's round-earth visual database format, Metadesic, meets these and other mission rehearsal requirements.
MetaVR VRSG real-time scene of an A-10C entity flying over the geospecific 2 cm per-pixel resolution synthetic 3D terrain of the Prospect Square area of the U.S. Army Yuma Proving Ground (YPG).
VRSG provides the ability to create a wire-frame threat dome, in a bubble or cylindrical shape, that represents the detection and lethal ranges of a surface-to-air missile (SAM) or similar threat system. You specify the radius of the dome in meters and its color as red, green, and blue components.
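As a hypothetical illustration of those parameters, the sketch below bundles a dome's shape, radius in meters, and RGB color into a structure; the field names and values are assumptions, not VRSG's actual threat-dome syntax.

```cpp
// Illustrative parameter set for a threat dome: shape, radius in meters, and
// a color given as red, green, and blue components.  Names are placeholders.
#include <cstdio>

struct ThreatDome {
    bool   cylindrical;      // false = bubble (dome), true = cylinder
    double radiusMeters;     // detection or lethal range of the threat
    int    red, green, blue; // 0-255 color components for the wire frame
};

int main()
{
    ThreatDome sam{ /*cylindrical=*/false, /*radiusMeters=*/10000.0,
                    /*red=*/255, /*green=*/0, /*blue=*/0 };
    std::printf("dome radius %.0f m, color (%d, %d, %d)\n",
                sam.radiusMeters, sam.red, sam.green, sam.blue);
    return 0;
}
```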