Saturday, 6 March 2010

...More than meets the eye...

Prompted by a visit from Global, where some of my real-time applications on our Skunkwerks box failed to launch, I've been doing a bit of bit-rot fighting.

I got all the OpenSceneGraph stuff working. Furnished with confidence and knowledge from a fortnight and a gig hacking Fluxus and Scheme, I decided to add a 'camera tickle' feature to our Panda3D Flickr equirectangular viewer. The 'camera tickle' is just a bit of sin/cos-based movement on the fore/aft and left/right axes of the camera, inspired by a conversation I once had with D'nardo from Elumenati: the idea is that the movement stops the image 'flattening' onto the dome surface.
...well that turned out to be a can of pythons
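The 'camera tickle' above amounts to very little code. A minimal sketch of the offset maths (the amplitude, frequencies, and function name here are my own assumptions, not the original code):

```python
import math

def tickle_offset(t, amp=0.05, freq_x=0.7, freq_y=1.1):
    """Small sin/cos drift for the camera's left/right (x) and
    fore/aft (y) position at time t (seconds).  Unequal frequencies
    keep the path from closing into an obvious repeating loop."""
    dx = amp * math.sin(freq_x * t)   # left/right
    dy = amp * math.cos(freq_y * t)   # fore/aft
    return dx, dy

# In Panda3D this would be applied from a per-frame task, roughly:
#   dx, dy = tickle_offset(task.time)
#   base.camera.setPos(dx, dy, cam_height)
```

The offsets stay within `amp`, so the camera never strays far from the dome's sweet spot.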

While trying to ascertain the effect of the 'camera tickle' - judging it by the amount of vection induced - we realised that the panoramic images weren't 'unfolding' properly onto the dome. I tried hacking the variables in the code; some settings were better, some worse, but none were right. In particular, objects that should appear vertical as they move behind you were lying down. One thing we did realise is that tiny adjustments to the dome-to-fisheye calibration have a huge effect on vection induction.

So the code was a bit crufty, and I'd been hacking on it since sometime in 2007/8, so I decided to rewrite it properly. I made my fisheye script into a proper class and got rid of the invisible RoamingRalph. I changed the code so that instead of creating a sphere per equirectangular file it loads, it swaps the texture on a single sphere. I also remodelled the sphere to use more polygons, and reused it as the cubemap sphere.
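The single-sphere refactor boils down to caching textures and swapping them on one node, instead of building geometry per panorama. A sketch of the pattern, with the Panda3D calls passed in as callables so it stands alone (the class and names are mine, not the actual rewrite; in Panda3D `load_texture` would be `loader.loadTexture` and `sphere` a NodePath):

```python
class EquirectViewer:
    """Keeps one textured sphere and swaps textures on it,
    rather than creating a new sphere per equirectangular image."""

    def __init__(self, sphere, load_texture):
        self.sphere = sphere              # e.g. a Panda3D NodePath
        self.load_texture = load_texture  # e.g. loader.loadTexture
        self._cache = {}                  # filename -> loaded texture

    def show(self, filename):
        tex = self._cache.get(filename)
        if tex is None:                   # load each file only once
            tex = self.load_texture(filename)
            self._cache[filename] = tex
        self.sphere.setTexture(tex, 1)    # replace the current texture
        return tex
```

One sphere means one set of geometry in the scene graph, and switching panoramas is just a texture change.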

Things were looking good, we loaded it up in the dome...and still had the same issues. Now, with the 'knobs' for camera position and tilt cleanly exposed, the issues were more obvious. After a good couple of hours of head scratching, hacking, reading and modelling, I decided to go back to the old testament.

Paul Bourke

Now, my approach had been to use a dynamic cubemap, mapped to a sphere, with an orthographic camera looking at it. I culled the front face of the sphere - a hangover from testing the approach with perspective lenses, where it makes a difference. My approach produced a fisheye like this:

...You should be able to notice that the concentric circles get closer together towards the edge. Compare Paul's render of an equiangular fisheye:

...Here the concentric circles are evenly spaced from centre to edge. Suddenly it made perfect sense - a threshold-concept moment. Of course an orthographic camera's rings get narrower towards the edge. The effect can be mitigated a bit by using a perspective camera, but as you can see from Paul's site, the variation in concentric ring spacing is just shifted to somewhere else between the centre and edge:

This was taken with a 60 degree camera.

This was a 90 degree camera.
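The difference is easy to see numerically. For a point at angle θ off the dome axis, an equiangular fisheye puts it at radius r ∝ θ, while an orthographic camera looking at the textured sphere puts it at r ∝ sin θ, so equal angular steps bunch up towards the rim. A rough sketch of that geometry (not the actual render code):

```python
import math

def equiangular_r(theta):
    """Equiangular fisheye: radius proportional to the angle itself."""
    return theta

def orthographic_r(theta):
    """Orthographic camera viewing a sphere: the point at angle theta
    projects straight down onto the film plane at radius sin(theta)."""
    return math.sin(theta)

# Radii of the 'concentric circles' at 10-degree steps out to 90 degrees
steps = [math.radians(d) for d in range(10, 100, 10)]
eq = [equiangular_r(t) for t in steps]
ortho = [orthographic_r(t) for t in steps]

# Gap between successive rings under each mapping
eq_spacing = [b - a for a, b in zip([0.0] + eq, eq)]
ortho_spacing = [b - a for a, b in zip([0.0] + ortho, ortho)]
# equiangular spacing is constant; orthographic spacing shrinks
# towards zero at the rim, which is exactly the 'squashed edge' above
```

A perspective camera replaces sin θ with a different trigonometric mapping, which is why it only moves the uneven spacing around rather than removing it.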

So, close, but no cigar...
This was quite a blow. I have a lot of code written using Panda3D, and these subtle fisheye differences make a big difference to the vection induced - so, as a platform to base a series of presence experiments on, it was looking shaky. Panda3D has a fisheye class built into it - I think Elumenati were involved with Carnegie Mellon's ETC and helped them develop it. But I'd never got it going...

I think I was suffering from poor Panda3D/Linux interaction at the time. I tried again with the code and got it to work. About 8 hours later, I'd managed to get the geometry correct for our 4:3 rear-truncation, applied a mask, and had my old code running on the new class. I had some issues with the interaction of setFilmSize(), setFilmOffset() and our 25 degree tilt - but you can instantly see that the fisheyes are equiangular.
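The rear-truncation geometry is really just arithmetic on the film plane: the full fisheye circle matches the frame width, and the circle's centre is pushed off-centre so part of the circle (the rear of the dome) falls off the frame. A sketch of the numbers for a 4:3 frame (the frame dimensions and helper are my own illustration; the actual setup used Panda3D's setFilmSize() and setFilmOffset() on its fisheye lens, and the sign conventions there may differ):

```python
def truncation_offset(film_w=4.0, film_h=3.0):
    """For a fisheye circle whose diameter equals the frame width,
    return the vertical film offset that puts one edge of the circle
    on the frame edge, cropping the opposite side of the circle."""
    radius = film_w / 2.0              # fisheye circle radius, film units
    offset_y = radius - film_h / 2.0   # shift needed to reach the frame edge
    return offset_y

# For a 4:3 frame: radius 2.0, half-height 1.5 -> offset 0.5 film units.
# The circle then spans -1.5 to +2.5 vertically, so 1.0 film unit of the
# far side of the circle is cropped off - the rear truncation.
```

With a square frame the offset comes out as zero, i.e. an untruncated fisheye, which is a quick sanity check on the arithmetic.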

This is what the finished item looks like:

NLEquiAutoFlickr from domejunky on Vimeo.