Virtual Gesture

Searching for new indexical means to record moments and movements, I have begun to trace motion using VR controllers, staging hugs and other greeting gestures. The audience for these gestures is a set of Oculus Rift sensors. The participants’ self-awareness is heightened beyond that of my previous photographic staging; the controllers and cables make the whole enterprise more cumbersome and theatrical.

Further, because I use an Oculus application (Medium), the tracing occurs only when the Rift headset senses a head, so the participants must take turns wearing the headset and performing blind. Another level of performance and pantomime thus enters the process: wearing the headset, one must perform what one knows a hug to be, rather than performing a hug for one’s partner specifically. The partner must make up for any hesitancy or imprecision by taking the lead. To capture both sides of the hug, the pair must swap controllers and headset. The final model, then, is also an aggregation of multiple discrete hugs.

The resultant models are enigmatic and opaque, but also vaguely bodily, perhaps intestinal.

3D models tracing gestural interactions. Image © Tyler Calkin

They can also be printed as miniature models. These miniature hugs are more delicate than the spherical prints, which communicates something of the sensitivity of the original act. In November 2017 I performed with one of these miniature hugs, handing it to pedestrians on a street corner in Culver City, CA, in a performance called Smallest Gesture. I was giving a small hug and getting it back in return. The exchange was brief and fragile, and often declined. But by injecting this form into the public sphere, I gave the occasional pedestrian a moment to consider the power and value of acknowledging and embracing another body in public.

Smallest Gesture, 2017. © Tyler Calkin