Saturday, December 3, 2016

Playing video in jMonkeyEngine 3 using vlcj

A little while ago I made a basic example of how to play video on a texture in jMonkeyEngine 3. I used vlcj as the backing video player due to its relative ease of use and good tutorials. I'm writing this companion post to describe the method (code) and how to set it up (config).

This describes how to set it up in NetBeans on Windows (the process should be similar for other Maven projects).
First, set up vlcj and build it:
  1. Download vlcj.
  2. Set it up as a new project in NetBeans.
  3. Change the configuration to 'release'.
  4. Download a suitable VLC.
  5. Place libvlc and libvlccore in the project root/win32-x86-64 folder.
  6. The distribution ends up as a zip in the project root/target folder.
Now you can set up a project that will use the vlcj build and point it to the .jars in the target folder as libraries.

I created a JmeRenderCallbackAdapter class based on this Direct Rendering tutorial. The class takes a Texture2D as input and updates its ImageBuffer on every onDisplay call.
The method is very basic, and there's an improvement to be made by using the native buffer directly.
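To give an idea of its shape, here's a minimal sketch of such an adapter (not the verbatim class from the repository; it assumes vlcj 3.x's RenderCallbackAdapter and a BGRA8 Image of the same size as the video surface):

import java.nio.ByteBuffer;

import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;
import com.jme3.util.BufferUtils;

import uk.co.caprica.vlcj.player.direct.DirectMediaPlayer;
import uk.co.caprica.vlcj.player.direct.RenderCallbackAdapter;

public class JmeRenderCallbackAdapter extends RenderCallbackAdapter {

    private final Image image;
    private final ByteBuffer byteBuffer;

    public JmeRenderCallbackAdapter(Texture2D texture, int width, int height) {
        // RenderCallbackAdapter copies each decoded frame into this int[].
        super(new int[width * height]);
        this.image = texture.getImage();
        this.byteBuffer = BufferUtils.createByteBuffer(width * height * 4);
    }

    @Override
    protected void onDisplay(DirectMediaPlayer mediaPlayer, int[] rgbBuffer) {
        // Called from a vlcj thread; a production version would synchronize
        // this hand-over with the render thread.
        byteBuffer.clear();
        byteBuffer.asIntBuffer().put(rgbBuffer);
        // setData flags the Image as changed so jME re-uploads it to the GPU.
        image.setData(byteBuffer);
    }
}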

The TestVlcj class shows how to display the result.
The setupView method creates a Quad and an Unshaded material to display the video on.
It also creates a Texture2D (which is passed to JmeRenderCallbackAdapter) and an Image the same size as the application.
The way BufferFormatCallback and DirectMediaPlayerComponent work is again taken directly from the Direct Rendering tutorial.
TestVlcj takes a path to a video as an input argument. The input argument can be overridden and mediaFile hard-coded for testing.
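In outline, that setup amounts to something like this (a sketch, not the verbatim class; width and height are the application's dimensions, and BGRA8 roughly matches the RV32 pixel format used in the tutorial):

Image image = new Image(Image.Format.BGRA8, width, height,
        BufferUtils.createByteBuffer(width * height * 4));
Texture2D texture = new Texture2D(image);

Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setTexture("ColorMap", texture);

Geometry screen = new Geometry("video screen", new Quad(width, height));
screen.setMaterial(mat);
guiNode.attachChild(screen);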

Monday, July 13, 2015

Running jMonkeyEngine 3 on Android using AndroidHarnessFragment

There's a new way of running jMonkeyEngine 3 on Android, using fragments. Since I couldn't find a description of how to do it or a use case, I thought I'd write down how I did it.
The old (and still functional) way used a class called AndroidHarness that extended Activity and contained all the app-specific information. This post describes how to use the new AndroidHarnessFragment.
  • First of all, AndroidHarnessFragment contains a lot of app-specific settings. The most important of these is appClass, a String containing the qualified name of the Application to run. This and many other fields are protected, and it seems the intended way of using the class is to extend it. That way you can set all of them in the constructor of the new class, even if there are other ways more aligned with Android conventions (such as using a Bundle). For example:
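// A minimal sketch of such a subclass; the package and game class name are illustrative.
public class MyAndroidHarnessFragment extends AndroidHarnessFragment {
    public MyAndroidHarnessFragment() {
        // Qualified name of the jME3 Application implementation to run.
        appClass = "com.mycompany.mygame.Main";
    }
}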
  • Previously, the auto-generated MainActivity class extended AndroidHarness. Now it should just extend Activity.
  • To create the fragment, we can create a layout file for the activity:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <fragment android:name="com.mycompany.mygame.MyAndroidHarnessFragment"
        android:id="@+id/app_fragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</LinearLayout>
  • Finally we tell MainActivity to use the layout file we just created.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
}
That's all that seems to be needed to run jMonkeyEngine inside a fragment!

Sunday, February 22, 2015

Google Cardboard support for jMonkeyEngine

I've just committed support for Google Cardboard to jMonkeyEngine 3. I plan on doing a post with more details on the inner workings, but in the meantime here's a brief outline and how to use it.

First of all, it's a completely separate integration from the Android VR support project I started. This one uses the Google Cardboard API directly, in a way that didn't fit with the architecture of that project.

Why is it useful?

It enables those who wish to use a complete game development package to deploy their application as a Google Cardboard app.

So, how do I use it?

Download the jme-cardboard.jar from the repo and add it to your jMonkeyEngine project.
  1. Turn on Android deployment for your project (Properties/Application/Android).
  2. In the generated MainActivity.java (Important Files/Android Main Activity), have it extend CardboardHarness instead of AndroidHarness.
  3. Change the appClass in the same file to your project's application class.
That should be it. For an example application, check out the CardboardStarTravel example in the test package.
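In sketch form, steps 2 and 3 amount to something like this (assuming CardboardHarness keeps AndroidHarness's protected appClass field; the game class name is illustrative):

public class MainActivity extends CardboardHarness {
    public MainActivity() {
        // Point the harness at your project's application class.
        appClass = "com.mycompany.mygame.Main";
    }
}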

If you wish to build the sources, you need to have an android.jar attached to the project.

Known issues:
  • Drift with the accelerometer is pretty bad. I don't know if there is anything to be done about it on the application side.
  • Movement is fairly jittery. Adding a filter for the accelerometer values might be desirable; a sketch of one option follows.
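For instance, a simple exponential low-pass filter over the raw values could look like this (purely a sketch, not something in the current code; ALPHA trades smoothing against latency):

// Hypothetical smoothing of raw accelerometer values before sensor fusion.
private static final float ALPHA = 0.15f;
private final float[] filtered = new float[3];

private float[] lowPass(float[] raw) {
    for (int i = 0; i < 3; i++) {
        filtered[i] += ALPHA * (raw[i] - filtered[i]);
    }
    return filtered;
}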

Like I said, there will be more to come!

Saturday, January 24, 2015

Designing a virtual reality HMD for smart phones

Update: Design is now available on Thingiverse.

This blog is supposed to be about software, I know. But without proper hardware, it's impossible to write any software. When putting together the Android VR library for jMonkeyEngine 3.0, I realized I had nothing to test it on.
The quickest solution to that problem would be cutting out a Google Cardboard. I'm not particularly fond of cardboard, however, and without laser-cutting it would look terrible. I have the benefit of owning a 3D printer (a Prusa i3), so I thought I'd have a go at designing my own HMD, inspired by the Google Cardboard schematics. Those can only be used so far, though, since they're meant to be cut and folded. A 3D printer has the benefit of being able to print complex geometry directly.
I decided to build it in three pieces, each as simple as possible to avoid overhang problems. The first one would be the cradle where the phone rests.
I based the measurements on my Samsung Galaxy S4 and tried to design it to allow access to the buttons on the side of the phone as well as the USB and audio ports. In general, I tried to leave as much space as possible on the sides for different phone types.
Printing time is always an issue with hobby printers, which is why I left the back side open. That, and to allow the battery some fresh air. There are holes in it as well (in the design, anyway; the printer doesn't actually make them). These are in case one would at some point like to mimic the Oculus Rift DK2's positional tracking by placing some LEDs there.

Next, I moved to the piece next to the eyes, as the middle part would just be about creating some distance between the lenses and the screen (or so I thought). I happen to have an Oculus Rift DK1 which isn't seeing much use now, so I decided to butcher two of its eye cups for lenses. They are 36 mm in diameter, which should give some additional FOV compared to the 25 mm lenses recommended for the Google Cardboard. Apart from making good fittings for the lenses, the biggest challenge with this piece was making it fit well around the face.

I think I actually spent most of the time on the middle piece. Modelling a good cup for the nose was a big challenge, and I scrapped several prints because it didn't print well. It's very spacious and should suit most nose shapes and sizes. The other thing about it is that it's slightly wider at the bottom than at the top. This is because I made the piece next to the face slightly narrower than the phone cradle. In case I build a Note-sized cradle, this would be even more pronounced.

Below is the current state of the prototype. It works very well with the Google Cardboard demos. I can see some 5 mm outside my S4 screen at the top and bottom, so maybe an S5 would be perfect for these lenses.

I want to add a fitting for a magnet as well, but I seem to have lost the magnets I bought, so that will have to wait.
I plan on sharing the design files as well as a BOM for a complete HMD once I'm happy with the design. Stay tuned for more.

Friday, January 2, 2015

Virtual Reality for Android using jMonkeyEngine

Oculus Rift may be leading the pack currently, but I'm sure there will be more contenders for the virtual reality throne shortly. So, while the Oculus Rift plugin was a good start, I think it's time to look into what it would take to support more devices. The architecture established for the Oculus Rift plugin is good enough, and I decided to see how much effort it would be to implement a basic virtual reality API for Android. After all, the low-budget Google Cardboard is probably the most accessible device of all.

You can find the repository for the project here: https://github.com/neph1/jme-androidvr

Usage

It's implemented with ease of use in mind. An application wishing to use it needs to do two things.
  1. Create a VRAppState instance and supply a suitable HeadMountedDisplay (currently either a DummyDisplay or AndroidDisplay).
AndroidDisplay display = new AndroidDisplay();
VRAppState vrAppState = new VRAppState(display);
stateManager.attach(vrAppState);
  2. For controls, get the StereoCameraControl from the VRAppState and add it as a Control to a Spatial. It will then follow the Spatial through the world.
Node observer = new Node("observer");
observer.addControl(vrAppState.getCameraControl());
rootNode.attachChild(observer);
See AndroidVRTest for an example implementation. 

Method

As stated in the beginning, it follows closely what has already been implemented in the Oculus Rift plugin, but classes have been abstracted to allow for more diverse future implementations.
It revolves around a class called VRAppState. This class sets up two viewports and a StereoCameraControl which handles the two different views.
The StereoCameraControl class gets its data (currently only rotation) from a class implementing the HeadMountedDisplay interface. In this example it's called AndroidDisplay. The AndroidDisplay class accesses the Android application and registers itself as a SensorEventListener for the accelerometer and magnetometer. The default update delay is way too slow, so it uses SENSOR_DELAY_GAME instead.
sensorManager = (SensorManager) JmeAndroidSystem.getActivity().getApplication().getSystemService(Activity.SENSOR_SERVICE);
accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
sensorManager.registerListener(this, magnetometer, SensorManager.SENSOR_DELAY_GAME);
Once sensor data is updated it's received by the onSensorChanged method. This updates our local values and confirms that data has been received before getting the rotational data of the device in the form of a matrix. The result is stored in a temporary field, and the orientation is then interpolated towards it, because using the raw data directly was much too jittery.
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        gravity = event.values;
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        geomagnetic = event.values;
    }
    if (gravity != null && geomagnetic != null) {
        boolean success = SensorManager.getRotationMatrix(R, I, gravity, geomagnetic);
        if (success) {
            SensorManager.getOrientation(R, orientationVector);
            tempQuat.fromAngles(orientationVector[2], -orientationVector[1], orientationVector[0]);
            // Interpolate towards the new rotation to smooth out sensor jitter.
            orientation.slerp(tempQuat, 0.2f);
        }
    }
}

It also needs to know the physical size of the screen. This is used by the distortion shader. With some conversion, it can be deduced from the Android application's WindowManager.
DisplayMetrics displaymetrics = new DisplayMetrics();
JmeAndroidSystem.getActivity().getWindow().getWindowManager().getDefaultDisplay().getMetrics(displaymetrics);
float screenHeight = displaymetrics.heightPixels / displaymetrics.ydpi * inchesToMeters;
float screenWidth = displaymetrics.widthPixels / displaymetrics.xdpi * inchesToMeters;
This and other information is stored in a class inspired by the Oculus Rift HMDInfo, called HeadMountedDisplayData. It contains data about the HMD itself: distance between the lenses, distance from screen to lens, resolution, etc.

The shader uses the same principle established early on in the Oculus Rift plugin, which itself was inspired by an example implementation on the Oculus developer website (it seems to have since been removed; if anyone has a link, please let me know). Each display has a post-processing filter, and the necessary distortion correction is done in a fragment shader. It begins with a class called BarrelDistortionFilter, which is instantiated in the VRAppState class.
The BarrelDistortionFilter takes the information from the HeadMountedDisplayData and creates a projection matrix for the Camera associated with its ViewPort. It also prepares some variables for the shader.
The scaleFactor value is an arbitrary number used to fit a specific screen. It most likely needs a formula for different screen sizes.
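The heart of such a correction is the radial scale computed from the DistortionK coefficients, the classic Rift barrel-distortion formula. Sketched here in Java for clarity (the real work happens in the fragment shader):

// scale = k0 + k1*r^2 + k2*r^4 + k3*r^6, where rSq is the squared distance
// from the lens center in normalized view coordinates.
static float distortionScale(float[] k, float rSq) {
    return k[0] + rSq * (k[1] + rSq * (k[2] + rSq * k[3]));
}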

References

jMonkeyEngine Oculus Rift plugin:
Sensors overview:
Registering sensors and reading orientation data:
Google Cardboard:

Saturday, December 6, 2014

Free floating VR menu in jMonkeyEngine with Oculus Rift

The recommended way of displaying GUIs and menus in VR is to not have them static, tied to the "screen". This despite the fact that the first thing greeting you when you put on the Oculus Rift is a big warning screen pasted on your face.
The idea behind a free-floating GUI is that it's more natural and adds to the comfort of the user (which is a big thing in VR!). When it comes to menus, this also has the benefit of letting the user make selections without any controller other than the HMD itself.
In this tutorial we'll create a menu that lets the user select menu items simply by looking at them for a certain amount of time. I've chosen to write it in the style of Packt's Cookbooks, since that's something I'm familiar with, and it will hopefully make owners of the jMonkeyEngine 3.0 Cookbook feel right at home!

Disclaimer: The API for the Oculus Rift plugin is in no way final, and it might change after this tutorial is written.

Preparation

We'll need the Oculus Rift plugin for jMonkeyEngine. How to download and set it up can be found here: https://code.google.com/p/jmonkeyengine-oculus-rift/
You can also find the full code for this tutorial in the "examples" folder.
This tutorial won't explain how the OVRApplication works; for that, it's recommended to check out the project site's wiki. The tutorial will be implemented in two classes: one application class containing the basics, and an AppState which will contain all the menu-specific code.

Implementation

We'll start with the application class which is implemented in 5 steps.
  1. For this example we start by creating a class that extends OVRApplication.
  2. In the simpleInit method we start by removing the default GUI elements like this:
setDisplayFps(false);
setDisplayStatView(false);
  3. Next we set up the GUI node for manual positioning and scale it somewhat.
ovrAppState.getGuiNode().setPositioningMode(OculusGuiNode.POSITIONING_MODE.MANUAL);
ovrAppState.getGuiNode().setGuiDistance(0.5f);
ovrAppState.getGuiNode().setGuiScale(0.5f);
  4. Like in the example application for OVRApplication, we create an observer node and assign the StereoCameraControl to it.
Node observer = new Node("Observer");
observer.setLocalTranslation(new Vector3f(0.0f, 0.0f, 0.0f));
observer.addControl(ovrAppState.getCameraControl());
rootNode.attachChild(observer);
  5. Now we add two lines to the simpleUpdate method to keep the GUI in place.
guiNode.setLocalTranslation(cam.getLocation().add(0, 0, 0.5f));
guiNode.lookAt(cam.getLocation(), Vector3f.UNIT_Y);
Apart from revisiting it to add the AppState, that's it for the application class! The basics of the menu will be done in the following nine steps.
  1. We start by creating a class called MenuAppState that extends AbstractAppState.
  2. We define a number of colors we'll use to apply to the menu items to indicate their status.
private static ColorRGBA DEFAULT_COLOR = ColorRGBA.DarkGray;
private static ColorRGBA SELECT_COLOR = ColorRGBA.Gray;
private static ColorRGBA ACTIVATE_COLOR = ColorRGBA.White;
  3. Its constructor should receive a width, a height and the OculusGuiNode, all of which it stores in private fields.
  4. We also create and store a Vector2f called screenCenter, which should be initialized with width * 0.5f and height * 0.5f.
  5. In the initialize method we start by creating and storing a Node called menuNode and a Ray called sightRay.
  6. We set up a simple unshaded material to apply to the menu items and give it the DEFAULT_COLOR.
Material mat = new Material(app.getAssetManager(), "Common/MatDefs/Misc/Unshaded.j3md");
mat.setColor("Color", DEFAULT_COLOR);
  7. Next we create a number of menu items in the form of Quads. They're set up in a grid, and each gets its own clone of the material before being attached to the menuNode.
for (int x = -2; x < 2; x++) {
    for (int y = -2; y < 2; y++) {
        Geometry menuQuad = new Geometry("Test", new Quad(width * 0.25f, height * 0.25f));
        menuQuad.setMaterial(mat.clone());
        menuQuad.setLocalTranslation(x * width * 0.25f, y * height * 0.25f, 0);
        menuNode.attachChild(menuQuad);
    }
}
  8. Then we attach the menuNode to the guiNode.
  9. Now we override the setEnabled method and add some logic so that the menuNode is added to the scene graph when the AppState is enabled and removed when it's disabled.
if (enabled && !this.isEnabled()) {
    guiNode.attachChild(menuNode);
} else if (!enabled && this.isEnabled()) {
    guiNode.detachChild(menuNode);
}
Now we have a basic menu showing but no way of selecting anything. Adding this functionality consists of an additional 17 steps.
  1. First of all we'll add another static field called ACTIVATE_TIME which is a float and set to 5f.
  2. In addition we need another float field called timer, a CollisionResults called collisionResults and a Geometry called selectedGeometry.
  3. We create a new method called selectItem which takes a Geometry called g as input.
  4. Inside, we set selectedGeometry to g and the color of its material to a clone of SELECT_COLOR.
selectedGeometry.getMaterial().setColor("Color", SELECT_COLOR.clone());
  5. Lastly we set timer to 0.
  6. Now we create another method called unselectItem.
  7. Inside, we check if selectedGeometry is not null; if so, we set the color of its material to a clone of DEFAULT_COLOR before setting selectedGeometry to null.
  8. Next we override the update method and start by setting up the Ray we created in the initialize method. It should originate from the camera.
sightRay.setOrigin(app.getCamera().getWorldCoordinates(screenCenter, 0f));
sightRay.setDirection(app.getCamera().getDirection());
  9. We do a collision check between the Ray and menuNode to see if any of the Geometries inside are being looked at. If one of them is, we perform the following piece of code to make it the selected Geometry.
menuNode.collideWith(sightRay, collisionResults);
if (collisionResults.size() > 0) {
    Geometry g = collisionResults.getClosestCollision().getGeometry();
    if (g != selectedGeometry) {
        unselectItem();
        selectItem(g);
    }
}
  10. If instead none is being looked at, we call unselectItem to null a potentially selected Geometry.
  11. After performing this check we call the clear method on collisionResults.
  12. Before doing the last bit in this method, we create another (empty for now) method called activateItem.
  13. Continuing in the update method, we check if selectedGeometry is not null.
  14. If it is assigned, we increase timer by tpf.
  15. Then we perform the following piece of code to make the color of the selectedGeometry brighter.
float interpolateValue = timer / ACTIVATE_TIME;
((ColorRGBA)selectedGeometry.getMaterial().getParam("Color").getValue()).interpolateLocal(SELECT_COLOR, ACTIVATE_COLOR, interpolateValue);
  16. Lastly, we check if timer is greater than ACTIVATE_TIME, in which case we call the activateItem method.
  17. The final thing we need to do is return to the application class and create an instance of the MenuAppState class.
MenuAppState menuAppState = new MenuAppState(settings.getWidth(), settings.getHeight(), (OculusGuiNode) guiNode);
stateManager.attach(menuAppState);

Explanation

In the beginning of the application class, we tell the OculusGuiNode that we'll manage the positioning of the GUI ourselves. AUTO mode would otherwise always place it in front of the camera and rotate it with it. We still want it in front of the camera, which is why we manually place it there. The difference is that the camera's rotation won't be propagated to the guiNode's translation. We also make the GUI always face the camera, even if it's not strictly necessary for this example.
In the MenuAppState we create a bunch of placeholder menu items in the form of Quads, which we align in a grid. In this example we attach and detach the menuNode via the setEnabled method, but depending on the rest of the application it might be neater to do it using the stateAttached and stateDetached methods.
In the update method we use ray casting originating from the camera to detect whether any of the menu items is in the center of the screen. When this occurs, we change the color of the item to indicate the selection to the user and also start a timer. We then interpolate the color of the selected item to indicate that something is happening. When the timer exceeds ACTIVATE_TIME (5 seconds), it triggers an activation of the item. It's not as quick as using a mouse, but control methods are different in VR, and this lets the user navigate menus without an additional controller.
The one thing the example leaves out is what to do when an item is activated. Since the example uses plain instances of Geometry, there's not much to do. A simple approach would be to create a MenuItem class that extends Geometry, or, even better, add a custom Control to the Geometry which is triggered through the activateItem method.
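As a sketch of that last idea (the class and its names are illustrative, not part of the example code):

// Attach one of these to each menu item Geometry; activateItem can then
// look the control up on the selected Geometry and trigger it.
public class MenuItemControl extends AbstractControl {

    private final Runnable action;

    public MenuItemControl(Runnable action) {
        this.action = action;
    }

    public void activate() {
        action.run();
    }

    @Override
    protected void controlUpdate(float tpf) {
        // No per-frame work needed.
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
    }
}

With that in place, activateItem reduces to selectedGeometry.getControl(MenuItemControl.class).activate().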

Thursday, May 9, 2013

Java C++ wrapper for Oculus Rift

I've wanted a VR helmet ever since I tried one for the first time in the early 90s. It looks like we're finally getting somewhere close to a commercial variant! Even if we don't, I'll be happy with the devkit I've ordered. Now, I'm a Java developer, and thus I'm not too comfortable developing with the C++ SDK. Given that I have plenty of time before it arrives, I decided to make a wrapper so I could continue to use jMonkeyEngine, which I'm more proficient with.

The wrapper itself is not dependent on jME in any way. To build the C++ part, I recommend following the "Minimal Oculus Application" tutorial in the developer section of the Oculus website.

Let's start with the java side.

HMDInfo.java
This is pretty much a direct mapping of the HMD (Head Mounted Display) data, straight from the C++ code; hence the naming convention is not 100% Java.

 package oculusvr.input;  
 public class HMDInfo {  
      protected int HResolution;  
      protected int VResolution;  
      protected float HScreenSize;  
      protected float VScreenSize;  
      protected float VScreenCenter;  
      protected float EyeToScreenDistance;  
      protected float LensSeparationDistance;  
      protected float InterpupillaryDistance;  
      protected float[] DistortionK = new float[4];  
      protected int DesktopX;  
      protected int DesktopY;  
      protected String DisplayDeviceName = "";  
      protected long DisplayId = 0;  
   public int getHResolution() {  
     return HResolution;  
   }  
   public void setHResolution(int HResolution) {  
     this.HResolution = HResolution;  
   }  
   public int getVResolution() {  
     return VResolution;  
   }  
   public void setVResolution(int VResolution) {  
     this.VResolution = VResolution;  
   }  
   public float getHScreenSize() {  
     return HScreenSize;  
   }  
   public void setHScreenSize(float HScreenSize) {  
     this.HScreenSize = HScreenSize;  
   }  
   public float getVScreenSize() {  
     return VScreenSize;  
   }  
   public void setVScreenSize(float VScreenSize) {  
     this.VScreenSize = VScreenSize;  
   }  
   public float getVScreenCenter() {  
     return VScreenCenter;  
   }  
   public void setVScreenCenter(float VScreenCenter) {  
     this.VScreenCenter = VScreenCenter;  
   }  
   public float getEyeToScreenDistance() {  
     return EyeToScreenDistance;  
   }  
   public void setEyeToScreenDistance(float EyeToScreenDistance) {  
     this.EyeToScreenDistance = EyeToScreenDistance;  
   }  
   public float getLensSeparationDistance() {  
     return LensSeparationDistance;  
   }  
   public void setLensSeparationDistance(float LensSeparationDistance) {  
     this.LensSeparationDistance = LensSeparationDistance;  
   }  
   public float getInterpupillaryDistance() {  
     return InterpupillaryDistance;  
   }  
   public void setInterpupillaryDistance(float InterpupillaryDistance) {  
     this.InterpupillaryDistance = InterpupillaryDistance;  
   }  
   public float[] getDistortionK() {  
     return DistortionK;  
   }  
   public void setDistortionK(float[] DistortionK) {  
     this.DistortionK = DistortionK;  
   }  
   public int getDesktopX() {  
     return DesktopX;  
   }  
   public void setDesktopX(int DesktopX) {  
     this.DesktopX = DesktopX;  
   }  
   public int getDesktopY() {  
     return DesktopY;  
   }  
   public void setDesktopY(int DesktopY) {  
     this.DesktopY = DesktopY;  
   }  
   public String getDisplayDeviceName() {  
     return DisplayDeviceName;  
   }  
   public void setDisplayDeviceName(String DisplayDeviceName) {  
     this.DisplayDeviceName = DisplayDeviceName;  
   }  
   public long getDisplayId() {  
     return DisplayId;  
   }  
   public void setDisplayId(long DisplayId) {  
     this.DisplayId = DisplayId;  
   }  
   @Override  
   public String toString() {  
       return "HMDInfo [HResolution = " + HResolution +   
           ", VResolution = " + VResolution +   
           ", HScreenSize = " + HScreenSize +   
           ", VScreenSize = " + VScreenSize +   
           ", VScreenCenter = " + VScreenCenter +   
           ", EyeToScreenDistance = " + EyeToScreenDistance +  
           ", LensSeperationDistance = " + LensSeparationDistance +   
           ", InterpupillaryDistance = " + InterpupillaryDistance +   
           ", DistortionK = [" + DistortionK[0] + ", " + DistortionK[1] + ", " + DistortionK[2] + ", " + DistortionK[3] + "] " +   
           ", DesktopX = " + DesktopX +   
           ", DesktopY = " + DesktopY +   
           ", DisplayDeviceName = " + DisplayDeviceName +   
           ", DisplayId = " + DisplayId + "]";  
   }  
 } 

OculusRift.java
The wrapper on the Java side. It should be pretty self-explanatory, with initialization and getter methods. One note: due to the overhead involved in calling native methods, I decided to grab all the rotation and acceleration data in one go (update()).
Please note that you might need to change the lib names depending on how you name your C++ project.

 package oculusvr.input;  
 /**  
  *  
  * @author Rickard  
  */  
 public class OculusRift {  
   static {  
     if(System.getProperty("sun.arch.data.model").equals("32")){  
       System.loadLibrary("OculusLib");  
     } else if (System.getProperty("sun.arch.data.model").equals("64")){  
       System.loadLibrary("OculusLib64");  
     }  
   }  
   public native boolean initialize();  
   /**  
    *   
    * @return array of [roll, pitch, yaw, acc x, acc y, acc z]  
    */  
   public native float[] update();  
   // HMDInfo  
   public native int getHResolution();  
   public native int getVResolution();  
   public native float getHScreenSize();  
   public native float getVScreenSize();  
   public native float getVScreenCenter();  
   public native float getEyeToScreenDistance();  
   public native float getLensSeparationDistance();  
   public native float getInterpupillaryDistance();  
   public native float[] getDistortionK();  
   public native int getDesktopX();  
   public native int getDesktopY();  
   public native String getDisplayDeviceName();  
   public native long getDisplayId();  
   public native void destroy();  
 }  
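For reference, here's a hypothetical usage sketch (it assumes the native libraries are on java.library.path and a Rift is connected):

OculusRift rift = new OculusRift();
if (rift.initialize()) {
    // update() returns [roll, pitch, yaw, acc x, acc y, acc z]
    float[] data = rift.update();
    System.out.println("yaw = " + data[2]);
    rift.destroy();
}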

That's all the Java that's needed. On to the C++ code!

OculusRift.h
The much-needed header file, generated with javah along these lines (the classpath is illustrative):
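 javah -classpath build/classes -d . oculusvr.input.OculusRift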

 /* DO NOT EDIT THIS FILE - it is machine generated */  
 #include <jni.h>  
 /* Header for class oculusvr_input_OculusRift */  
 #ifndef _Included_oculusvr_input_OculusRift  
 #define _Included_oculusvr_input_OculusRift  
 #ifdef __cplusplus  
 extern "C" {  
 #endif  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  initialize  
  * Signature: ()Z  
  */  
 JNIEXPORT jboolean JNICALL Java_oculusvr_input_OculusRift_initialize  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  update  
  * Signature: ()[F  
  */  
 JNIEXPORT jfloatArray JNICALL Java_oculusvr_input_OculusRift_update  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getHResolution  
  * Signature: ()I  
  */  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getHResolution  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getVResolution  
  * Signature: ()I  
  */  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getVResolution  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getHScreenSize  
  * Signature: ()F  
  */  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getHScreenSize  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getVScreenSize  
  * Signature: ()F  
  */  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getVScreenSize  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getVScreenCenter  
  * Signature: ()F  
  */  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getVScreenCenter  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getEyeToScreenDistance  
  * Signature: ()F  
  */  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getEyeToScreenDistance  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getLensSeparationDistance  
  * Signature: ()F  
  */  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getLensSeparationDistance  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getInterpupillaryDistance  
  * Signature: ()F  
  */  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getInterpupillaryDistance  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getDistortionK  
  * Signature: ()[F  
  */  
 JNIEXPORT jfloatArray JNICALL Java_oculusvr_input_OculusRift_getDistortionK  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getDesktopX  
  * Signature: ()I  
  */  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getDesktopX  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getDesktopY  
  * Signature: ()I  
  */  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getDesktopY  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getDisplayDeviceName  
  * Signature: ()Ljava/lang/String;  
  */  
 JNIEXPORT jstring JNICALL Java_oculusvr_input_OculusRift_getDisplayDeviceName  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  getDisplayId  
  * Signature: ()J  
  */  
 JNIEXPORT jlong JNICALL Java_oculusvr_input_OculusRift_getDisplayId  
  (JNIEnv *, jobject);  
 /*  
  * Class:   oculusvr_input_OculusRift  
  * Method:  destroy  
  * Signature: ()V  
  */  
 JNIEXPORT void JNICALL Java_oculusvr_input_OculusRift_destroy  
  (JNIEnv *, jobject);  
 #ifdef __cplusplus  
 }  
 #endif  
 #endif  

OculusRift.cpp
This is pretty much an adaptation of the "Minimal Oculus Application" found on the Oculus site.

 #include <jni.h>  
 #include <iostream>  
 #include <stdio.h>  
 #include "OVR.h"  
 #include "OculusRift.h"  
 using namespace OVR;  
 using namespace std;  
 Ptr<DeviceManager> pManager;  
 Ptr<HMDDevice> pHMD;  
 Ptr<SensorDevice> pSensor;  
 SensorFusion FusionResult;  
 HMDInfo Info;  
 bool InfoLoaded;  
 //#define STD_GRAV 9.81 // What SHOULD work with Rift, but off by 1000  
 #define STD_GRAV 0.00981 // This gives nice 1.00G on Z with Rift face down !!!  
 JNIEXPORT jboolean JNICALL Java_oculusvr_input_OculusRift_initialize  
  (JNIEnv *env, jobject thisObj) {  
   printf("Initializing Rift...\n");  
   System::Init();  
   bool initialized = false;  
   pManager = *DeviceManager::Create();  
   pHMD = *pManager->EnumerateDevices<HMDDevice>().CreateDevice();  
   if (pHMD) {  
     printf("pHMD created\n");  
     InfoLoaded = pHMD->GetDeviceInfo(&Info);  
     pSensor = *pHMD->GetSensor();  
   } else {  
     pSensor = *pManager->EnumerateDevices<SensorDevice>().CreateDevice();  
   }  
   if (pSensor) {  
     printf("Attaching sensor\n");  
     FusionResult.AttachToSensor(pSensor);  
     initialized = true;  
   }  
   return initialized;  
 }  
 JNIEXPORT void JNICALL Java_oculusvr_input_OculusRift_destroy  
  (JNIEnv *env, jobject thisObj) {  
   // The name must match the Java declaration (destroy) and the generated header.  
   pSensor.Clear();  
   pHMD.Clear();  
   pManager.Clear();  
   System::Destroy();  
 }  
 JNIEXPORT jfloatArray JNICALL Java_oculusvr_input_OculusRift_update  
  (JNIEnv *env, jobject thisObj) {  
   jfloat data[6];  
   Vector3f acc=FusionResult.GetAcceleration();  
   // GetEulerAngles outputs yaw (Y), pitch (X) and roll (Z), so data ends up as [roll, pitch, yaw]  
   FusionResult.GetOrientation().GetEulerAngles<Axis_Y, Axis_X, Axis_Z>(&data[2], &data[1], &data[0]);  
   data[3] = acc.x / STD_GRAV;  
   data[4] = acc.y / STD_GRAV;  
   data[5] = acc.z / STD_GRAV;  
   jfloatArray result;  
   result = env->NewFloatArray(6);  
   if (result == NULL) {  
     return NULL; /* out of memory error thrown */  
   }  
   env->SetFloatArrayRegion(result, 0, 6, data);  
   return result;  
 }  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getHResolution  
  (JNIEnv *env, jobject thisObj) {  
   return Info.HResolution;  
 }  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getVResolution  
  (JNIEnv *env, jobject thisObj) {  
   return Info.VResolution;  
 }  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getHScreenSize  
  (JNIEnv *env, jobject thisObj) {  
   return Info.HScreenSize;  
 }  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getVScreenSize  
  (JNIEnv *env, jobject thisObj) {  
   return Info.VScreenSize;  
 }  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getVScreenCenter  
  (JNIEnv *env, jobject thisObj) {  
   return Info.VScreenCenter;  
 }  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getEyeToScreenDistance  
  (JNIEnv *env, jobject thisObj) {  
   return Info.EyeToScreenDistance;  
 }  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getLensSeparationDistance  
  (JNIEnv *env, jobject thisObj) {  
   return Info.LensSeparationDistance;  
 }  
 JNIEXPORT jfloat JNICALL Java_oculusvr_input_OculusRift_getInterpupillaryDistance  
  (JNIEnv *env, jobject thisObj) {  
   return Info.InterpupillaryDistance;  
 }  
 JNIEXPORT jfloatArray JNICALL Java_oculusvr_input_OculusRift_getDistortionK  
  (JNIEnv *env, jobject thisObj) {  
   jfloat* distortion = Info.DistortionK;  
   // DistortionK holds four coefficients, so copy exactly four floats.  
   jfloatArray result = env->NewFloatArray(4);  
   if (result == NULL) {  
     return NULL; /* out of memory error thrown */  
   }  
   env->SetFloatArrayRegion(result, 0, 4, distortion);  
   return result;  
 }  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getDesktopX  
  (JNIEnv *env, jobject thisObj) {  
   return Info.DesktopX;  
 }  
 JNIEXPORT jint JNICALL Java_oculusvr_input_OculusRift_getDesktopY  
  (JNIEnv *env, jobject thisObj) {  
   return Info.DesktopY;  
 }  
 JNIEXPORT jstring JNICALL Java_oculusvr_input_OculusRift_getDisplayDeviceName  
  (JNIEnv *env, jobject thisObj) {  
   char *name = Info.DisplayDeviceName;  
   jstring result = env->NewStringUTF(name);  
   return result;  
 }  
 JNIEXPORT jlong JNICALL Java_oculusvr_input_OculusRift_getDisplayId  
  (JNIEnv *env, jobject thisObj) {  
   return Info.DisplayId;  
 }  

Edit: Updated the code and formalized it a bit.