Last February at Ignite Sydney we thought we'd try something a little different to get the crowd involved.
While most people were downing a few drinks, a bunch of lovely lads and lasses with iPads circulated through the audience, asking people to draw (with their fingers) what inspires them. Each iPad ran a drawing application that recorded the time and position of every finger stroke, and that data was used to create a 3D time-lapse visualisation of each drawing on the big screen behind the stage. I've put together a little video of the end result:
It was actually quite exciting to see what people came up with. And equally exciting was seeing the looks on their faces as they interacted with the iPads and then waited with anticipation to see how their drawings would be interpreted by the foreign shapes appearing on the screen.
To pull off this stunt we used open web technologies throughout: a web page running my "DrawPad" Canvas application, which let people draw and captured the movements of their fingers; storage of the stroke data as JSON on the backend (thanks to Tim Lucas); and visualisation of those strokes in 3D using WebGL, via Mr. Doob's wonderful three.js library.
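To give a feel for the data flowing between the pieces, here's a rough sketch of what a recorded drawing might look like on the wire. This is illustrative rather than DrawPad's actual schema, and the `/drawings` endpoint is hypothetical; the point is just that each stroke is an ordered list of timed points, which is all you need to replay a drawing later.

```javascript
// Hypothetical shape of one recorded drawing; the real DrawPad
// schema may differ. Each point carries position plus a time
// offset (ms) so the stroke can be replayed at its original speed.
const drawing = {
  strokes: [
    {
      points: [
        { x: 120, y: 84,  t: 0 },
        { x: 131, y: 90,  t: 16 },
        { x: 145, y: 101, t: 33 }
      ]
    }
  ]
};

// Posting it to an (assumed) backend endpoint as JSON:
fetch('/drawings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(drawing)
});
```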
To get people drawing, I first looked at 37signals' Chalk, but it lacked one important feature: multi-touch drawing. So I decided to write my own drawing app.
If you've never done multi-touch event handling, it can be a mysterious process (it certainly was to me), but once you get your head around the notion of a single event object containing multiple points of interaction, it's actually quite fun.
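Here's a minimal sketch of that idea (not the actual DrawPad source): each touch in the event's `changedTouches` list carries an `identifier`, so you can track several fingers at once by keying each in-progress stroke on that identifier.

```javascript
// Minimal multi-touch drawing sketch. Assumes a <canvas id="pad">
// element on the page; each finger gets its own stroke, keyed by
// Touch.identifier, so simultaneous strokes don't get tangled.
const canvas = document.getElementById('pad');
const ctx = canvas.getContext('2d');
const strokes = {};            // identifier -> array of {x, y, t} points

function point(touch) {
  const rect = canvas.getBoundingClientRect();
  return { x: touch.clientX - rect.left,
           y: touch.clientY - rect.top,
           t: Date.now() };    // record time alongside position
}

// { passive: false } lets preventDefault() stop scrolling/zooming.
canvas.addEventListener('touchstart', (e) => {
  e.preventDefault();
  for (const touch of e.changedTouches) {
    strokes[touch.identifier] = [point(touch)];
  }
}, { passive: false });

canvas.addEventListener('touchmove', (e) => {
  e.preventDefault();
  for (const touch of e.changedTouches) {
    const stroke = strokes[touch.identifier];
    if (!stroke) continue;
    const prev = stroke[stroke.length - 1];
    const next = point(touch);
    stroke.push(next);
    ctx.beginPath();           // draw the newest segment of this stroke
    ctx.moveTo(prev.x, prev.y);
    ctx.lineTo(next.x, next.y);
    ctx.stroke();
  }
}, { passive: false });

canvas.addEventListener('touchend', (e) => {
  for (const touch of e.changedTouches) {
    // stroke complete; this is where it could be sent to the backend
    delete strokes[touch.identifier];
  }
});
```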
I've uploaded the source code for the drawing app (and the 3D visualiser) to a DrawPad GitHub project if you want to take a closer look (or improve it in the countless ways it could be improved). The 3D visualiser, in particular, shows the effects of a tight deadline: I would have loved to build the stroke paths as true 3D meshes, but had to settle for a series of spheres following the path of each stroke instead. (Kind of like voxels.)
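For the curious, the spheres-along-a-stroke approach boils down to something like the sketch below. It's a hedged illustration rather than the project's actual code: the sphere size, the coordinate offsets, and the idea of mapping each point's timestamp onto the depth axis are all my own illustrative choices.

```javascript
// Sketch of the "spheres along the stroke" visualiser with three.js.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  50, innerWidth / innerHeight, 0.1, 1000);
camera.position.z = 300;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// One shared geometry and material keep rendering cheap
// even with many points per drawing.
const geometry = new THREE.SphereGeometry(2, 8, 8);
const material = new THREE.MeshNormalMaterial();

// One small sphere per recorded point; using the point's time
// offset as depth (z) is what produces the time-lapse effect.
function addStroke(points) {
  for (const p of points) {
    const sphere = new THREE.Mesh(geometry, material);
    sphere.position.set(p.x - 160, -(p.y - 120), -p.t * 0.05);
    scene.add(sphere);
  }
}

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```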
But at the end of the day the technology didn't matter. What mattered was the outcome: an easy-to-use way for people to draw what they wanted, and a pretty-as-a-picture translation of what they drew. The fact that it was done in a browser made no difference at all.