Leap Motion and Processing

Over the last couple of weeks I've been exploring the Leap Motion as a tool for creating generative art. The device seemed inherently suited to drawing images, or perhaps sculpting them, out of thin air. I didn't really have a "concept" starting out beyond this notion of building some sort of drawing tool with the technology. Another thought is to use the device as a controller for live visual performance, something I'll probably pursue later on.

I started by downloading the various Leap libraries for Processing (my coding environment of choice for art things) and found a couple of things to watch out for.

One is that all the Leap libraries for Processing share the same package name, so you can't install more than one of them at a time without Processing getting confused about which one you want to use. This makes trying each library a bit annoying: you have to remove the library you just tried (move it to the desktop or somewhere similar), drop in the next one you want to try, restart Processing, and then run the examples that come with it.

The other crunchy bit is that most of the libraries only support the Leap 1.0 API, probably because most of them were created during the "land rush" when the Leap was first released. The 1.0 API is pretty good, but it doesn't have a very robust model of what a hand should be; it seems to just look for shapes that are roughly hand-like. The result is that tracking isn't as stable as it could be.

The Leap 2.0 API, however, does have a good idea of what a hand should be, and matches what it sees against a model of an ideal hand. That provides much greater stability, particularly when the user holds their hands at odd angles or flips them over entirely.

The library I settled on does support the 2.0 API and, as of this writing, is in beta. It is fairly stable and, for the most part, completely usable. It is aptly named Leap Motion for Processing (LMfP), and its author goes by the handle Voidplus. To get the beta version of the library, check it out on GitHub here:

Leap Motion for Processing

After settling on the library of choice, I started playing around with the example files. Essentially, one creates an instance of a Leap object in setup() and then checks for the presence of Hand objects during the draw() loop. Each hand has an array of Finger objects; these fingers in turn have joints.

Interestingly enough, the joints for each finger do not come back as an array, making them a bit awkward to loop through. What the LMfP library provides (following the Leap API itself) are methods you can call on each finger to get the XYZ coordinates, in estimated world space, of each joint. To get the location of a fingertip, you call finger.getPositionOfJointTip(), which returns a 3D PVector you can store in a variable, or a PVector array, for later use.

This in and of itself took some time to puzzle out, so here's a little cheat sheet for folks who arrive here wondering:


PVector[] joints = new PVector[4];
joints[0] = finger.getPositionOfJointTip(); // tip of the finger
joints[1] = finger.getPositionOfJointDip(); // next joint down
joints[2] = finger.getPositionOfJointPip(); // next to last joint
joints[3] = finger.getPositionOfJointMcp(); // joint at the bottom of the finger
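Once the joints are packed into an array like this, you can iterate over them like any other collection. As one illustration, summing the distances between consecutive joints gives an estimated finger length. This is a plain-Java sketch of that idea, with joints represented as hypothetical float[3] triples instead of PVectors, and with made-up helper names (dist3, fingerLength):

```java
// Hypothetical sketch: joints stored tip-to-base as {x, y, z} triples.
static float dist3(float[] a, float[] b) {
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Sum the segment lengths between consecutive joints (tip, dip, pip, mcp).
static float fingerLength(float[][] joints) {
    float len = 0;
    for (int i = 0; i < joints.length - 1; i++) {
        len += dist3(joints[i], joints[i + 1]);
    }
    return len;
}
```

In a Processing sketch you'd do the same thing with PVector.dist() on the stored joint positions.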

So far everything else in the library makes intuitive sense, except for rotations. The Leap returns rotations in degrees, whereas Processing works in radians, so a simple conversion is needed:


hand_roll = radians(hand.getRoll());
hand_pitch = radians(hand.getPitch());
hand_yaw = radians(hand.getYaw());

To actually use these rotations, say for rotating a cube on screen, some additional offsets are needed as well:


rotateX(hand_roll+HALF_PI); // add a 1/4 circle turn
rotateY(hand_pitch); // stays the same
rotateZ(-hand_yaw); // reversed to create a mirror effect on screen
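The two steps above (degrees-to-radians, then the offsets) can be collapsed into one helper. This is a plain-Java sketch under the same assumptions as the snippets above (roll gets a quarter-turn offset, yaw is negated for mirroring); the function name leapToProcessing is made up for illustration:

```java
// Convert Leap hand rotations (degrees) into Processing-ready radians,
// applying the offsets from the snippets above:
//   roll  -> +HALF_PI (quarter-turn offset)
//   pitch -> unchanged
//   yaw   -> negated (mirror effect on screen)
static float[] leapToProcessing(float rollDeg, float pitchDeg, float yawDeg) {
    float HALF_PI = (float) (Math.PI / 2.0);
    float rx = (float) Math.toRadians(rollDeg) + HALF_PI;
    float ry = (float) Math.toRadians(pitchDeg);
    float rz = (float) -Math.toRadians(yawDeg);
    return new float[] { rx, ry, rz };
}
```

In the sketch itself, the three results would feed rotateX(), rotateY(), and rotateZ() in that order.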

Check out the initial tests on GitHub here:

leapbeta_first_test

Making Something Interesting

Once this was all sorted out, it was time to experiment with making a drawing tool with the Leap. Because the Leap data is inherently 3D, I started working in Processing's P3D drawing context right away.

At first I just made a vertex mesh that connected all of the joints. The original sketch is here:

shapeForms

This wasn't super exciting, but it was a good start toward understanding how to use the Leap. To take it a step further, I brought in a class I'd written previously, called Geode, that creates volumetric geometries. Geode essentially constructs a sphere which is then randomly distorted.
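The actual Geode class isn't shown in this post, but the "sphere, randomly distorted" idea can be sketched in a few lines: scatter points uniformly over a sphere and jitter the radius of each one. This is a hypothetical plain-Java reconstruction, not the real class; the name distortedSphere and its parameters are made up:

```java
import java.util.Random;

// Hypothetical Geode-style geometry: n points on a sphere of radius r,
// with each point's radius randomly perturbed by up to +/- jitter.
static float[][] distortedSphere(int n, float r, float jitter, long seed) {
    Random rnd = new Random(seed);
    float[][] pts = new float[n][3];
    for (int i = 0; i < n; i++) {
        double theta = rnd.nextDouble() * 2 * Math.PI;        // longitude
        double phi = Math.acos(2 * rnd.nextDouble() - 1);     // latitude, uniform over the sphere
        double rr = r + (rnd.nextDouble() * 2 - 1) * jitter;  // random radial distortion
        pts[i][0] = (float) (rr * Math.sin(phi) * Math.cos(theta));
        pts[i][1] = (float) (rr * Math.sin(phi) * Math.sin(theta));
        pts[i][2] = (float) (rr * Math.cos(phi));
    }
    return pts;
}
```

In Processing, points like these would be rendered as a vertex shape inside beginShape()/endShape().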

Using pushMatrix(), popMatrix(), and translate(), a Geode is attached to each joint of each finger. Each Geode gets random transparency and colors within set ranges. This is where the aesthetic choices come into play.

A giant sphere() with low opacity surrounds the entire scene. The graphics buffer is never completely cleared, creating a "tracer" effect that builds up over successive frames. The initial version of this code, which has since provided the basis for several variants, is here:

shapeFormsGeode
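The tracer effect works because each frame's translucent overlay attenuates whatever was drawn before, so old content decays geometrically instead of vanishing. A quick plain-Java sketch of the standard alpha-compositing math behind it (the function name and alpha value are just for illustration):

```java
// Standard "over" alpha compositing for one channel:
// the existing pixel value v is covered by color c at opacity a.
static float blend(float v, float c, float a) {
    return v * (1 - a) + c * a;
}
// After n frames under a dim overlay (c = 0), a bright pixel v0
// has faded to v0 * (1 - a)^n: a gradual tracer, not a hard clear.
```

In Processing, the same effect comes from drawing a low-alpha shape over the scene each frame instead of calling background().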

Once this foundation code base was developed (discovered?), the rest was all about tweaking numbers in the code, running it, checking the output, and going back again. Rinse, wash, repeat. These are the versions I'm keeping to myself (at least for now), as they're where the magic is. The variants are now collected under the moniker Crystallographer, and the images and video attached to this post are from the Soft variant. The goal of Crystallographer: Soft was to match a sort of Dutch master palette, though in the end it looks a little more glitch rococo.