Posted: December 16, 2010
Zeugen is a good example of both the theoretical sensibilities I bring to my work and the technical skills I practice. With Zeugen I combine wood, acrylic, polymer and metal milling practices with machine vision software and creative electronics hardware in a single prototyping process. To build Zeugen I also drew on the fine art practices of body casting, painting and mold making, as well as mechatronics and robotics. I combined this wide variety of techniques, learned over the years, to make thirty-two motion-tracking human cast faces whose moving eyes watch and track patrons’ movements in the gallery space.
Viewers experience a visceral embodiment as they interact with the work.
Zeugen also investigates the ‘sense of being stared at’ and the gaze. Larger questions of surveillance as a mediated experience and the social implications of being watched are challenged by Zeugen. We plainly see that we do not need another human ‘observer’ to feel as though we are being watched. I am still in the process of uncovering the underlying meanings of Zeugen, but even at first glance significant research questions are revealed.
Other Zeugen Projects
- Zeugen V1
- Zeugen V3
Zeugen: Historical Context and Process
Marcel Duchamp’s “Rotary Demisphere (Precision Optics)” (1925) was “an early example of ‘interactive art’ […] the viewer becomes an active participant in the art … required to turn on the optical machine and stand one meter away” (Rush 1999). I do not fully agree with this definition of interactive art, which simply requires participation. Instead, I feel that interaction and interactivity are a result of a reciprocity of actions. My view is best illustrated by a simple example: a ping-pong game. If I simply respond to the ping-pong ball by returning it to the other side of the net, then I am not acting so much as I am reacting to the physical principles of the game. If, however, I place a spin on the ball so as to thwart my opponent’s attempt at volleying the ball back to me, then I am engaging in interactive game play. Therefore, interactivity requires a cycle of action whereby both (or all) participants actively contribute as opposed to merely respond to the experience. Artists now use microcontrollers and sensors when building interactive electronic projects. These technological advances allow new media artists to receive inputs, calculate processes, and control outputs (all of the basic components required for interactivity). The resulting interactive artworks use the principles of electricity that Duchamp experimented with, with an added layer of interaction.
Daniel Rozin’s “Wooden Mirror” (Rozin 1999) is a creative electronic artwork that explores the mind and reflexivity. The “Wooden Mirror” (Rozin 1999) is made up of small wood chip ‘pixels’ and reflects light in such a way as to display a ‘wooden mirror’ image of the observer. Just like a glass mirror, the ‘wooden mirror’ changes with the viewers’ movements, but reflects a pixelated image. In order to successfully reflect a recognizable form that also induces ‘super-conscious’, ‘hyper-real’ self-recognition, the reflection must be generated at a high enough ‘resolution’. There are many similarities between “Zeugen” and the “Wooden Mirror” (Rozin 1999). Both works capture video surveillance images and process video image data, then transmit the data to display an interactive servo-controlled visual experience. However, “Zeugen” both reflects the literal image of the viewer and presents artificial ‘observers’ that, when taken together, create the necessary layers of simultaneity required for the reflexive experience.
Like “Zeugen”, David Rokeby’s “Watch” (Rokeby 1995) also uses surveillance technology. However, it establishes a dialogue surrounding the power- and domination-driven viewing practice of surveillance. The work consists of a surveillance camera filming the exterior public space of the Holly Solomon Gallery in New York City and showing the video feed on a display inside the privileged space of the art gallery. The work is more of a reflection (contemplation or consideration) on surveillance than a reflexive moment replicating ‘the sense of being stared at’ or representing seeing and being seen. The audience members are not aware that they or others are being surveilled until after the fact, and do not directly interact with the work in real time. “Watch” was intended by Rokeby to become an “artificial perception system” using surveillance technology. However, and quite obviously, the ‘loaded’ implications of the work make the discourse of power politics and surveillance unavoidable. Surveillance technology is the vehicle for a strictly power-driven observation, as it is unidirectional and in many cases completely clandestine and subversive. In surveillance technology, “an arms race between agents supporting and opposing unobservable surveillance techniques has been unfolding for at least two decades.” (Parsons 2009) It is within this race that Rokeby makes critical works of art assessing the vehicle and implications of unidirectional surveillance. In contrast, the viewers of “Zeugen” can see and be seen by the work, but the work is always in plain view. There is no element of clandestine observation; viewers on both the tech side and the face side of the work can always see each other, and the technology that drives the work is open to examination.
“Watched and Measured” (Rokeby 2000) is a commission for the Science Museum in London, England. It is a direct example of invasive video surveillance systems: the system identifies human presence and draws an outline around the heads of people walking into the Wellcome Wing of the museum. Whereas Rokeby’s “Watch” (Rokeby 1995) began a critical discourse of surveillance, “Watched and Measured” (Rokeby 2000) expanded the discourse and drew closer to the ‘sense of being stared at’ by directly demonstrating the capacity for machine vision to identify humans in video footage. Still, “Watched and Measured” (Rokeby 2000) does not generate a completely embodied and self-reflexive moment of awareness eliciting the ‘sense of being stared at’. The viewer is only made aware that they were under surveillance after the fact, not in a single, personalized reflexive moment. That is to say, Rokeby’s work deals more directly with surveillance than with reflexivity.
Marie Sester’s “Access” (Sester 2006) is an interactive new media artwork that was installed at the ZKM Center for Art and Media in Karlsruhe, Germany late in 2006. The artwork allows online users to “track anonymous individuals in public places, by pursuing them with a robotic spotlight” (Sester 2006). Sester’s project uses a spotlight to evoke thoughts of prison camps and detention facilities. The audience of the gallery is put under the careful watch of people using the artwork online in remote locations. Questions of remote control and the distant observer are contained in a raw and visceral event (that is, in real time). The artwork is controlled by people, but mediated by technology. The ‘observer’ is not directly looking at the subject in the gallery space, and so the sensation of ‘being seen’ is muted by technological mediation and the unpredictability of the online human observer. In studies of the sensation of being stared at, “most people … being stared at say they have turned around and looked straight at the person staring at them.” (Sheldrake 2005) In the case of Sester’s “Access,” viewers cannot face the ‘observer’ because they would be looking into a blindingly bright light on the ceiling. Once again, a power dynamic is in place, disrupting the pure and embodied experience of ‘being stared at.’ Instead, the works of both Sester and Rokeby focus my attention predominantly on surveillance. However, “Access” does provide me with another key element that is essential to replicating ‘the sense of being stared at’: real-time tactile interaction.
Golan Levin’s “Eyecode” (Levin 2007) is another new media investigation of seeing and being seen. “Eyecode” (Levin 2007) “is an interactive installation whose display is wholly constructed from its own history of being viewed. By means of a hidden camera, the system records and replays brief video clips of its viewers’ eyes. … The unnerving result is a typographic tapestry of recursive observation.” (Levin 2007) Levin uses a matrix of eyes and adds the essential element of interactivity. One difference between “Zeugen” and “Eyecode” (Levin 2007) is that “Zeugen” uses physical objects and “Eyecode” (Levin 2007) mediates the visual experience by presenting the eyes on a computer monitor. Another difference is that the eyes in “Zeugen” track the movements of the viewer, whereas the eyes in “Eyecode” (Levin 2007) replay the movements of the viewers’ eyes. Both works are interactive new media explorations into the gestures of the gaze and both works explore sensations of sight.
Levin expands his exploration to include ‘being seen’ with the work “Opto-isolator” (Levin and Baltus 2007). The “Opto-isolator” (Levin and Baltus 2007) “presents a solitary mechatronic blinking eye, at human scale, which responds to the gaze of visitors with a variety of psychosocial eye-contact behaviors” (Levin and Baltus 2007). This work exhibits most of the parameters that I have discussed for replicating ‘the sense of being stared at’. The “Opto-isolator” has an eye and interactively tracks the motions of a viewer. However, this work lacks two key elements that I feel are necessary for a completely immersive and self-reflexive sensation of ‘being seen’. First, the work does not have a sufficient anthropomorphic signifier (other than the eye) to indicate the signified ‘other’ (person): the eye is installed in a black box rather than a recognizable face. Second, there is only one eye, meaning that the dominant ‘observer’ in this ‘visual exchange’ is the viewer in the gallery space, who is equipped with two eyes. The eye could belong to an animal, but in that case the animal would have to be a mythical one, because “there are no animal species with just one eye.” (McDonnell 2009) “Zeugen” overcomes these challenges by installing the eyes in a very large grid of human cast faces.
“Zeugen” is a contribution to new media movements expressing experiences of seeing and being seen, and it contributes to both the visual arts arena and new media technology developments. I have shown “Zeugen” at the Emily Carr University of Art + Design Concourse Gallery. I have also shown the work at the Surrey Art Gallery opening for the “Interactive Futures 09: Stereo” conference (a conference directed by Maria Lantin and Julie Andreyev). Recently, I have also been invited to submit “Zeugen” and participate in the internationally recognized Prix Ars Electronica competition (2010) under the interactive art category.
Zeugen: Creative process (methodology)
Nikola Tesla is a source of tremendous inspiration for me. He writes: “when I get an idea I start at once building it up in my imagination. I change the construction, make improvements and operate the device in my mind.” (Tesla 2007) Tesla’s creative process is analogous to my own methodology of making, though, unlike Tesla, I cannot “rapidly develop and perfect a conception without touching anything”. (Tesla 2007) I don’t take extensive notes, build scale maquettes or make sketches when planning my work. I simply ponder the ideas one at a time, formulating a complete conception of the work. I then build the individual components one at a time. Of course, I encounter many problems once I begin building, such as availability or physical limitations of materials, damaged or poorly manufactured parts, lack of time, and financial constraints. Nevertheless, I establish a sort of confidence to proceed by fully forming certain ideas in my mind, leaving other parts of the plan open to variation once I begin the process of building. This section deals with the trials and challenges encountered in the development of “Zeugen”: it is a record of my creative journey. I will not be sharing all of the details of my experimentation, which are well beyond the scope of this paper. Instead, I will focus on the anthropomorphic, interactive and technical elements that I was most concerned with when making “Zeugen”.
I began by running a single-eye experiment in order to test my theory of using a spring as a universal joint. The experiment worked, and I simulated an event where the eye tracked my movements. However, a single eye does not offer the anthropomorphic qualities I was seeking (for reasons previously discussed). I then began mounting the eyes in pairs to simulate the movement of a pair of eyes. At this point, I was already convinced that the eyes would have to be placed in a face. Before I cast all thirty-two faces, I experimented with a wide range of casting materials including silicone rubber, fiberglass, latex, and mold-making styrofoam. I was most concerned with casting the faces in such a way as to maintain human elements, but somehow minimize stereotypically-ascribed identifiers of race, gender, and age.
The face casting process was quite challenging. My first consideration was examining the individual facial features. In particular, what facial features (other than the eyes) are important to the anthropomorphosis of the face, yet do not create expectations of interaction? There are expectations of the mouth to speak or to make facial gestures such as smiling. There are lesser, though still present, expectations of the ears to listen. Because of these expectations, I omitted the ears and the mouth from the face casts. The nose, however, seems to evoke the least expectation of interaction in the viewer. The nose is also the visual feature linking the eyes together, as it rests perfectly between them. Taken together, the eyes and nose are a powerful anthropomorphic visual.
The face-casting process involved several steps. The first step was finding and casting many volunteer models. I built a special face-casting contraption that acted like a mask and could easily be used repeatedly. I used an alginate and water mold-making solution that is suited for body casting; this material maintains a high degree of detail and is hypo-allergenic. It should be noted that the working time of alginate is relatively short because the mold dries out quickly: there is a narrow window of about an hour in which to obtain the plaster master-mold once the alginate cast is poured. A considerable amount of fine crafting of the molds was also necessary.
Next, I sealed the plaster master-mold with a thin coat of finish to protect the surface detail and prevent chipping of the plaster surface. The master-molds were then ready for vacuum casting. I used polyethylene sheets as the vacuum casting material in order to produce a ‘backup’ negative of the mold. One limitation of the material I found was that I could not vacuum cast more than a few faces in a row before the plaster positive itself would heat up. Because I had to cast nearly a hundred vacuum forms in order to get the best possible result, this problem became a serious time consideration. As “Zeugen” evolved, I began using polystyrene materials in order to get a stronger surface and visual feel to the work. Conveniently, polystyrene melts at considerably lower temperatures, and this reduced the heat problems enough that the expansion, melting and miscasting effects became negligible.
The thin vacuum-formed faceplates were remarkable. Very minute skin details became visible when a light was shone behind the polyethylene faceplate. With light shining at the front of the faceplate, the result was a milky, shiny surface.
I knew there had to be more than one face or the identity of the singular face would become a central theme of the work. The question was: how many faces? I suspected that two faces would focus the critical discourse around the work on the specific relationship between the two faces. In my mind’s eye, the work filled my entire visual field whenever I thought about it. I therefore decided to cast as many faces as required to occupy most of my visual plane. Now, when I stand at the appropriate distance from the work looking straight at it, my whole visual field is filled with glowing faces looking back at me. I also began using tri-colour (red, green, and blue) frequency-modulated light-emitting diodes (RGB LEDs) to experiment with lighting effects.
The faces were installed into a four-by-eight grid. I used a computer numerically controlled (CNC) milling router to cut the frame, with the assistance of the CNC technician Steven Hall. The CNC machine produced the highly polished, precisely cut acrylic surfaces and the plywood sub-frame, and it also saved time during the construction phase relative to building the frame ‘by hand’.
The frame was designed so that the planks could be assembled into a grid using interlocking joints. In early versions, the wooden frame and acrylic face plate were white. Later, I decided to maintain the natural color of the plywood and a black acrylic face plate for aesthetic reasons.
After assembling and finishing the sub-frame and acrylic surfaces, as well as casting and forming all of the face plates, I began to research methods of face recognition and tracking. I found a desktop webcam that could track human faces. My idea was to use the webcam and its tracking hardware to drive my own custom hardware, and thereby move the servos that control the movements of the eyes. This approach had several limitations. First, with a standard desktop face-tracking webcam, it is only possible to track one face. Second, the hardware found in almost all such devices is ‘closed-box,’ meaning that it cannot be custom-programmed. Thus, it is not possible to program ‘personality’-based robotic eye gestures or movements with these systems.
Despite hardware limitations, I pushed forward to complete the first version of “Zeugen.” I wanted to identify any other limitations that would require reworking. In order to control the thirty-two faces in the first version of “Zeugen,” I had to find a way to control six servos per face: two servos controlling the upward and downward movements of the eyes, two controlling the left-to-right movements of the eyes, and two controlling the eyelids. I also needed to control four RGB LEDs per face. I was familiar with microcontrollers such as Arduino, but I was also aware that my project requirements extended well beyond the power of an Arduino microcontroller. At the time, the most powerful Arduino microcontroller could only control fifty-four output devices, while I needed to control a total of three hundred and twenty devices. In order to develop the project, I collaborated with electrical engineer Randy Glenn to design a new microcontroller called Displayduino (see Appendix 1 for details). An extensive amount of software and hardware programming was required for my first attempt to build thirty-two motion-tracking human cast faces.
I showed the first version of “Zeugen” at “Not That Grad Show” in the Concourse Gallery at Emily Carr University of Art + Design in 2009. I was disappointed with the results for two reasons. First, many of the servos controlling the work failed the night before the show. They simply could not take the stress loads that were required to move the eyes. Second, the motion tracking system was not fully operational and relied on a single webcam.
To address these limitations, I went back to the drawing board. I began by redesigning the mechanical contraption that controlled the movements of the eyes. In the first version I used a universal spring joint. I decided to design a new system utilizing the mechanical tools of LEGO. I was able to dramatically reduce the number of servos required for each face from six to three, and I was able to reduce the torque required of the servos by around ninety percent.
At “Not that grad show”, I noticed that viewers were trying to see the mechatronic elements on the reverse side of the work. This observation led me to redesign the work such that it could be viewed from both sides. The idea of technological transparency became central to my work: it is important for me to share with my viewers both a visual experience and the technology that makes the experience possible.
I programmed my own face recognition and face tracking software to eliminate the limitations inherent in a single face tracking webcam. In order to accomplish this, I had to learn how to program in C++ to take advantage of the Open Frameworks open source libraries.
As a testament to the success of my vision of the work and the form it took, I showed the second version of “Zeugen” at “E-Mixer” in the Surrey Art Gallery (Interactive Futures ’09). The work was well-received by the international attendees of the conference.