Oculus Rift may be leading the pack currently, but I'm sure there will be more contenders for the virtual reality throne shortly. So, while the Oculus Rift plugin was a good start, I think it is time to look into what it would take to support more devices. The architecture established for the Oculus Rift plugin is good enough, and I decided to see how much effort it would be to implement a basic virtual reality API for Android. After all, the low-budget Google Cardboard is probably the most accessible device of all.
Usage
It's implemented with ease of use in mind. An application wishing to use it needs to do two things: create a VRAppState instance and supply a suitable HeadMountedDisplay (currently either a DummyDisplay or an AndroidDisplay).
AndroidDisplay display = new AndroidDisplay();
VRAppState vrAppState = new VRAppState(display);
stateManager.attach(vrAppState);
For controls, get the StereoCameraControl from the VRAppState and add it as a Control to a Spatial. The stereo camera will then follow the Spatial through the world.
Node observer = new Node("");
observer.addControl(vrAppState.getCameraControl());
rootNode.attachChild(observer);
See AndroidVRTest for an example implementation.
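For context, here is what those two snippets could look like inside a minimal application. This is a sketch only; the imports for the VR classes depend on the plugin's package layout and are omitted.

import com.jme3.app.SimpleApplication;
import com.jme3.scene.Node;

public class VRUsageExample extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        // Create the Android display abstraction and the app state
        // that sets up the two stereo viewports.
        AndroidDisplay display = new AndroidDisplay();
        VRAppState vrAppState = new VRAppState(display);
        stateManager.attach(vrAppState);

        // Attach the stereo camera control to a node; the cameras
        // will follow this node through the world.
        Node observer = new Node("observer");
        observer.addControl(vrAppState.getCameraControl());
        rootNode.attachChild(observer);
    }
}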
Method
As I stated in the beginning, it follows closely what has already been implemented in the Oculus Rift plugin, but the classes have been abstracted to allow for more diverse future implementations.
It revolves around a class called VRAppState. This class sets up two viewports and a StereoCameraControl which handles the two different views. The StereoCameraControl class gets its data (currently only rotation) from a class implementing the HeadMountedDisplay interface. In this example it's called AndroidDisplay.
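The interface itself isn't shown in this post, but based on how it is used, a minimal sketch might look something like the following. The method names here are my assumptions, not the actual plugin API.

// Hypothetical sketch of the abstraction; the real interface may differ.
public interface HeadMountedDisplay {
    // Current head orientation, polled by StereoCameraControl each frame.
    com.jme3.math.Quaternion getOrientation();

    // Static data about the device (lens separation, resolution, etc.).
    HeadMountedDisplayData getData();
}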
The AndroidDisplay class accesses the Android application and registers itself as a SensorEventListener for the accelerometer and magnetometer. The default update delay is way too slow, so it uses SENSOR_DELAY_GAME instead.
sensorManager = (SensorManager) JmeAndroidSystem.getActivity().getApplication().getSystemService(Activity.SENSOR_SERVICE);
accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
sensorManager.registerListener(this, magnetometer, SensorManager.SENSOR_DELAY_GAME);
Once sensor data is updated, it's received by the onSensorChanged method. It updates our local values and confirms that data has been received before getting the rotational data of the device in the form of a matrix. This is stored in a temporary field, and the orientation is then interpolated towards it, since using the raw data directly was much too jittery.
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        gravity = event.values;
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        geomagnetic = event.values;
    }
    if (gravity != null && geomagnetic != null) {
        // Combine both sensor readings into a rotation matrix.
        boolean success = SensorManager.getRotationMatrix(R, I, gravity, geomagnetic);
        if (success) {
            SensorManager.getOrientation(R, orientationVector);
            // Convert the device orientation to a quaternion...
            tempQuat.fromAngles(orientationVector[2], -orientationVector[1], orientationVector[0]);
            // ...and interpolate towards it to smooth out sensor jitter.
            orientation.slerp(tempQuat, 0.2f);
        }
    }
}
It also needs to know the physical size of the screen, which is used by the distortion shader. With some conversion, it can be deduced from the Android application's WindowManager.
DisplayMetrics displaymetrics = new DisplayMetrics();
JmeAndroidSystem.getActivity().getWindow().getWindowManager().getDefaultDisplay().getMetrics(displaymetrics);
float screenHeight = displaymetrics.heightPixels / displaymetrics.ydpi * inchesToMeters;
float screenWidth = displaymetrics.widthPixels / displaymetrics.xdpi * inchesToMeters;
This and other information is stored in a class inspired by the Oculus Rift's HMDInfo, called HeadMountedDisplayData. It contains data on the HMD itself, like the distance between the lenses, the distance from screen to lens, the resolution, etc.
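As a rough illustration, such a data holder could look like the following. The field names are guesses modeled on the Oculus HMDInfo struct, not the plugin's actual fields.

// Hypothetical sketch, loosely modeled on the Oculus Rift HMDInfo.
public class HeadMountedDisplayData {
    public float hScreenSize, vScreenSize;   // physical screen size in meters
    public float lensSeparationDistance;     // distance between lens centers, meters
    public float eyeToScreenDistance;        // distance from screen to lens, meters
    public int hResolution, vResolution;     // screen resolution in pixels
    public float interpupillaryDistance;     // distance between the user's eyes, meters
}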
The shader uses the same principle established early in the Oculus Rift plugin, which itself was inspired by an example implementation on the Oculus Developer website (it seems to have since been removed; if anyone has a link, please let me know). Each display has a post-processing filter, and the necessary distortion correction is done in a fragment shader. It begins with the class called BarrelDistortionFilter, which is instantiated in the VRAppState class.
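In VRAppState, that could amount to something like the following sketch. The BarrelDistortionFilter constructor arguments and the viewport names are assumptions; the plugin's actual wiring may differ.

// Hypothetical wiring; one filter per eye, attached to that eye's viewport.
FilterPostProcessor leftProcessor = new FilterPostProcessor(assetManager);
leftProcessor.addFilter(new BarrelDistortionFilter(hmdData, true));   // left eye
leftViewPort.addProcessor(leftProcessor);

FilterPostProcessor rightProcessor = new FilterPostProcessor(assetManager);
rightProcessor.addFilter(new BarrelDistortionFilter(hmdData, false)); // right eye
rightViewPort.addProcessor(rightProcessor);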
The BarrelDistortionFilter takes the information from the HeadMountedDisplayData and creates a projection matrix for the Camera associated with its ViewPort. It also prepares some variables for the shader. The scaleFactor value is an arbitrary number used to fit a specific screen; it most likely needs a formula to handle different screen sizes.
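For reference, the distortion correction from that old Oculus example is usually written as a radial polynomial: a point at radius r from the lens center is sampled at r * (k0 + k1*r^2 + k2*r^4 + k3*r^6). A plain-Java version of what the fragment shader computes per pixel might look like this; the coefficient names follow the old Oculus documentation, and the plugin's actual shader may differ.

// Hypothetical CPU-side version of the shader's warp, for illustration only.
// Coordinates are in lens-centered units, not raw texture coordinates.
static float[] warp(float x, float y, float[] k, float scaleFactor) {
    float rSq = x * x + y * y; // squared distance from the lens center
    float distortion = k[0] + rSq * (k[1] + rSq * (k[2] + rSq * k[3]));
    // scaleFactor compensates for the image shrinking toward the center
    return new float[]{x * distortion * scaleFactor, y * distortion * scaleFactor};
}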
References
jMonkeyEngine Oculus Rift plugin:
Sensors overview:
Registering sensors and reading orientation data:
Google Cardboard: