Baptism


Baptism is a performance that explores the shamanic practices of ritual, divination and healing. It presents a digital interpretation of the shamanic process of contacting a spiritual world while in an altered state of consciousness. Staged in a spiritual dimension and focused on the practice of exorcism, Baptism represents three stages of the Shaman's journey: ecstasy/trance, travelling and healing.

The first stage depicts the process of entering a trance-like state in order to reach the other world. The Shaman must be careful: if he loses control, he risks losing his soul. The second stage represents travel. While still in a deep trance, the Shaman must journey to meet the "Mother of the Sea" (Angakkoq) and her spirit helpers. They are the only spirits who can approach the Shaman during his ritual. The third stage symbolises the exchange of the soul within the patient, which exorcises the demon.

Wearable technologies assist the performers in embodying the idea of transformation. High-voltage, dimmable LEDs are controlled with IMU data (accelerometer, gyroscope and magnetometer) collected from the performers' movements. Ascendant bodily language is used to dominate the beams of light, and a feedback loop between the physical world and the spiritual world is created using the DMX protocol and a Kinect. Combined, these techniques create an immersive depiction of a new-age ritual.

The aesthetics and scenography of Baptism are outlandish, grotesque and vaporwave, creating a feeling of synaesthesia in performers and spectators.

Baptism from Friendred on Vimeo.


There is no single agreed-upon definition of the word "shamanism" among anthropologists. Some hold that the term refers to anybody who contacts a spirit world while in an altered state of consciousness, or who does so at the behest of others. Others attempt to distinguish Shamans from other magico-religious specialists who are believed to contact spirits, such as mediums, witch doctors, spiritual healers or prophets.

Within this theological, spiritual dimension, I want to represent a shaman contacting a spiritual world while in an altered state of consciousness, or performing an exorcism at the behest of others, in a grotesque and outlandish way.

At first, I was planning to combine the Shaman costumes with automata worn on parts of the body through wearable technology; to depict the exorcism cult with an installation at the centre of the space; and to choreograph how shamans communicate with the external spiritual world using bodily data and IMU data.

Testing: getting EMG and ACC data through the BITalino and OpenSignals

IMG_3156 from Friendred on Vimeo.


I will use Kinect skeleton tracking to get limb positions for multiple people, as well as to feed different projectors and to control the beams of light. I will get movement data, for example ACC and EMG data, through the BITalino and a 9-DOF IMU to manipulate the pitch of the sound and the intensity of the lights. An Arduino will be used to control the robotic wearables.

I was trying the LSM9DS1. There are two different ways to connect it: one is through I2C (Inter-Integrated Circuit), the other is SPI (Serial Peripheral Interface). I2C is much easier to communicate with, as it only needs VCC, GND, SCL (serial clock) and SDA (serial data), and it fits with transferring the data over the serial port at the same time.
The problem is how to construct a setup in which four performers transfer data through their LSM9DS1s to a central Arduino, and machine learning is then used to train the intensity of the 100W LED. At the very least it needs two Arduino boards and a serial port (or maybe one Arduino board receiving and relaying OSC, which would be cool).
The effect I've already implemented is one accelerometer axis manipulating the sound. The next step is to get all nine DOF of data through Wekinator and trained.
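As a rough sketch of this first step, reading the LSM9DS1 over I2C with the SparkFun library and printing space-separated values (which Max will later split apart) might look like the following. The library calls are real, but the baud rate, update rate and attitude maths are my own placeholder choices, not the final performance code.

```cpp
#include <Wire.h>
#include <SparkFunLSM9DS1.h>

LSM9DS1 imu;  // defaults to I2C: accel/gyro at 0x6B, magnetometer at 0x1E

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!imu.begin()) {
    Serial.println("Failed to communicate with LSM9DS1.");
    while (1);
  }
}

void loop() {
  if (imu.accelAvailable()) imu.readAccel();
  if (imu.gyroAvailable())  imu.readGyro();
  if (imu.magAvailable())   imu.readMag();

  float ax = imu.calcAccel(imu.ax);
  float ay = imu.calcAccel(imu.ay);
  float az = imu.calcAccel(imu.az);

  // pitch and roll from the accelerometer, heading from the magnetometer
  float roll    = atan2(ay, az);
  float pitch   = atan2(-ax, sqrt(ay * ay + az * az));
  float heading = atan2(imu.calcMag(imu.my), imu.calcMag(imu.mx));

  // space-separated values, so Max can slice them off the serial port
  Serial.print(degrees(pitch));       Serial.print(' ');
  Serial.print(degrees(roll));        Serial.print(' ');
  Serial.print(degrees(heading));     Serial.print(' ');
  Serial.print(imu.calcGyro(imu.gx)); Serial.print(' ');
  Serial.print(imu.calcGyro(imu.gy)); Serial.print(' ');
  Serial.println(imu.calcGyro(imu.gz));

  delay(20);
}
```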

Test of getting data from the 9-DOF IMU

IMG_3399 from Friendred on Vimeo.


Test of sending data into Max

20170723_174804 from Friendred on Vimeo.


For the machine learning part I got a bit stuck. First of all, to get the data (for example, the pitch, heading and roll values and the three axes of rotation) from the Arduino into Wekinator, I did some research on sending data over OSC from the Arduino to Wekinator and then outputting to openFrameworks, but I couldn't get it working in my project. I eventually ended up sending the messages through Max because, in Max, not only can I receive data from the serial port, I can also split the message with one specific object, 'Zslice'. So the only thing I need to do is print the values to the serial monitor and separate them with spaces. However, when I tried sending the messages on to the host, the data coming from the LSM9DS1 breakout board seemed too sensitive, so I need to add a smoothing filter.
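A minimal sketch of such a smoothing filter, assuming it sits between the attitude calculation and the Serial.print() calls in the sketch above; ALPHA is a made-up constant (lower values smooth more but respond more slowly):

```cpp
// Exponential smoothing for a single jittery IMU value.
const float ALPHA = 0.15f;   // placeholder smoothing factor, tune by feel

float smoothValue(float raw, float previous) {
  return ALPHA * raw + (1.0f - ALPHA) * previous;
}

// usage in loop(), keeping the smoothed state between frames:
// static float pitchSmooth = 0;
// pitchSmooth = smoothValue(degrees(pitch), pitchSmooth);
// Serial.print(pitchSmooth);
```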

Max + Wekinator/OSCulator + Arduino

IMG_3471 from Friendred on Vimeo.

Connection schematic
Time schedule

Choreography schedule


There was a big issue with hooking everything up. After I finished the prototype it worked fine: the LSM9DS1's pitch value controls the dimming of the light, and also drives the NeoPixel light strip. Basically, the light strip mimics the data transfer: the strip changes to pink as you pitch the board. At the same time, I set up another light mode: when the pitch value equals 3, meaning the ball is static, the light strip goes into rainbow-cycle mode.
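A sketch of those two strip modes, assuming an Adafruit NeoPixel strip; the pin, strip length and colours are placeholders, and the pitch value would come from the IMU code above:

```cpp
#include <Adafruit_NeoPixel.h>

#define STRIP_PIN  6
#define NUM_PIXELS 30
Adafruit_NeoPixel strip(NUM_PIXELS, STRIP_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show();  // all pixels off
}

// pink "data transfer" mode: more pixels light up as the board is pitched
void pinkOnPitch(float pitchDeg) {
  int lit = map((int)abs(pitchDeg), 0, 90, 0, NUM_PIXELS);
  for (int i = 0; i < NUM_PIXELS; i++)
    strip.setPixelColor(i, i < lit ? strip.Color(255, 20, 147) : 0);
  strip.show();
}

// rainbow cycle for the static-ball state
void rainbowCycle() {
  uint16_t phase = (uint16_t)(millis() * 40);
  for (int i = 0; i < NUM_PIXELS; i++) {
    uint16_t hue = phase + (uint32_t)i * 65536UL / NUM_PIXELS;
    strip.setPixelColor(i, strip.gamma32(strip.ColorHSV(hue)));
  }
  strip.show();
}

void loop() {
  float pitchDeg = 0;  // placeholder: substitute the smoothed IMU pitch
  if ((int)pitchDeg == 3) rainbowCycle();  // the "static" state
  else pinkOnPitch(pitchDeg);
}
```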

IMG_3579 from Friendred on Vimeo.


I was stuck for two days solving the noise in the rainbow mode; also, when I switched on the power supply, the whole system crashed. After I reconnected all the cables, added more voltage and hooked another MOSFET up to the copper board, it still wouldn't work. The reason was that the wire connecting the negative of the power supply to the MOSFET was too thin and too long, and unfortunately, after some research, the long-wire problem didn't seem easy to solve in a short time. So I had to change the output from PWM to digitalWrite(): when the pitch value goes up, in either direction (as I applied a mapping function earlier), the light switches on; when the pitch value goes down, the LED switches off; and when rainbow mode is triggered, the light flashes.
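Under that constraint, the dimming code reduces to something like this sketch; the pin and threshold are placeholder values:

```cpp
#define LED_PIN   9
#define THRESHOLD 10   // in mapped pitch units, chosen by trial

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  int mappedPitch = 0;  // placeholder: the mapped IMU pitch value
  // PWM dimming misbehaved over the long, thin ground wire, so the mapped
  // pitch simply gates the MOSFET: pitched in either direction = on, flat = off
  digitalWrite(LED_PIN, abs(mappedPitch) > THRESHOLD ? HIGH : LOW);
}
```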
Demo of the calculated pitch value controlling the dimming of the light, and the lighting sequence of the light strip.

IMG_3581 from Friendred on Vimeo.


Test of controlling the two different lights at the same time. When the dancers perform, each of the two will control four different lights, in two colours, via the accelerometer. The dancers have to keep moving in one direction to keep the lights lit. The whole procedure of this part simulates a cult ritual: when the Shaman travels to the spirit world, the audience is often asked to sing him along the way, and therefore plays an active role in inducing the altered state of consciousness.

Solving the power supply problem, extending the sensor cables, and hooking all the LSM9DS1s up to the copper board. There was an obvious power supply problem: the three 50W 380nm LEDs and the three 100W 580nm LEDs have different power requirements, but after testing they all worked well at 36V, so I prepared 36V AC-DC adapters for them. Unfortunately, all five power adapters I got output 47V, so I had to find replacement power supplies in a short time. I then used a 12V 10A power adapter with a step-up converter, so at least I can make sure all these LEDs get a basic input of 30V; they share the amperage when connected in parallel. I also found out that it makes no big difference how many MOSFETs I use; they behave the same as long as they share the same power source.

IMG_3675 from Friendred on Vimeo.


Connecting the pipe that is going to be on the performer. I was trying to get the IMU data from the performer, send it as OSC messages and train on the data using machine learning. But I found an easy way that mimics machine learning: when the performer moves forward along the x axis, the light gets stronger as they speed up; if the dancer changes the direction of movement, e.g. upwards, the two LEDs light alternately; and if the performer moves along the y axis, the NeoPixels switch to the rainbow-cycle mode.
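A sketch of that hand-coded mapping, standing in for the machine-learning step; the thresholds, pins and axis conventions are placeholders, and rainbowCycle() is the routine from the strip sketch above:

```cpp
#define LED_A_PIN 9
#define LED_B_PIN 10

// ax, ay, az are smoothed accelerometer values in g
void mapMovementToLights(float ax, float ay, float az) {
  if (ax > 0.3f) {
    // forward along x: the harder the acceleration, the brighter the light
    analogWrite(LED_A_PIN, constrain((int)(ax * 255.0f), 0, 255));
  } else if (az > 0.3f) {
    // upward movement: the two LEDs light alternately
    bool phase = (millis() / 250) % 2;
    digitalWrite(LED_A_PIN, phase ? HIGH : LOW);
    digitalWrite(LED_B_PIN, phase ? LOW : HIGH);
  } else if (fabs(ay) > 0.3f) {
    // movement along y: switch the NeoPixels to the rainbow cycle
    rainbowCycle();
  }
}
```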

IMG_3682 from Friendred on Vimeo.

Kinect with skeleton tracking for controlling the beams of light


3D printing the masks and joints. For the 3D printing I used the Ultimaker 3; it took a few prototypes, testing different wall thicknesses for the models, to get the most robust mask. I also printed the joint used for connecting the ball and the pipe, and a case to contain the LED and heatsink.


Testing the mask and helmet connected to the main program.

IMG_3717 from Friendred on Vimeo.


Making the mask

IMG_3721.MOV from Friendred on Vimeo.


Using the Kinect V2 with skeleton tracking on a Mac was a pain. I used OpenNI and NiTE as the skeleton-tracking libraries. Skeleton capture runs on the GPU, so it's quicker than on the CPU, although it doesn't support CUDA. Basically, as the control part, I am using skeleton tracking to capture joint data for the DMX, and to create a mask that draws abstract Shaman patterns via OpenNI.
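Stripped of error handling, reading joints straight from NiTE (rather than through an OF wrapper) looks roughly like this; the structure follows the standard NiTE2 examples and the names are mine:

```cpp
#include <NiTE.h>

nite::UserTracker tracker;

void setupTracker() {
  nite::NiTE::initialize();   // also brings up OpenNI underneath
  tracker.create();           // attach to the default device
}

void readJoints() {
  nite::UserTrackerFrameRef frame;
  if (tracker.readFrame(&frame) != nite::STATUS_OK) return;

  const nite::Array<nite::UserData>& users = frame.getUsers();
  for (int i = 0; i < users.getSize(); ++i) {
    const nite::UserData& user = users[i];
    if (user.isNew()) {
      tracker.startSkeletonTracking(user.getId());
    } else if (user.getSkeleton().getState() == nite::SKELETON_TRACKED) {
      const nite::SkeletonJoint& hand =
          user.getSkeleton().getJoint(nite::JOINT_LEFT_HAND);
      if (hand.getPositionConfidence() > 0.5f) {
        // hand.getPosition().x/y/z, in millimetres in the depth camera's
        // coordinate space -- this is what gets mapped to DMX and drawing
      }
    }
  }
}
```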

IMG_3476 from Friendred on Vimeo.

Particle system testing
Testing a scene that uses a particle system and the FboBlur addon to draw the trace of the posture.

IMG_3545 from Friendred on Vimeo.


I was trying to build a simple mapping system to map the texture onto the performer's body. First of all, there was an issue: the content is reversed because of the projector. This is usually the easiest of Kinect problems, but in this program it's slightly different, because I wasn't using the OF wrapper; I was using OpenNI and NiTE directly from their own libraries, so the functions have to be used slightly differently. Actually, I was overthinking it: the only thing I needed to do was mirror the ofPixels. Before that, I had tried the most traditional way to flip the pixels, which is to use the width minus the pixel position; that also worked fine.
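Both approaches, as a sketch in openFrameworks; 'src' stands in for the depth pixels pulled from OpenNI:

```cpp
#include "ofMain.h"

// the one-liner: let ofPixels do the horizontal flip
ofPixels mirrored(const ofPixels& src) {
  ofPixels out = src;
  out.mirror(false, true);  // (vertically, horizontally)
  return out;
}

// the traditional "width minus pixel position" version, which also worked
ofPixels flippedManually(const ofPixels& src) {
  ofPixels out;
  out.allocate(src.getWidth(), src.getHeight(), src.getNumChannels());
  size_t w = src.getWidth(), h = src.getHeight(), ch = src.getNumChannels();
  for (size_t y = 0; y < h; ++y)
    for (size_t x = 0; x < w; ++x)
      for (size_t c = 0; c < ch; ++c)
        out[(y * w + x) * ch + c] = src[(y * w + (w - 1 - x)) * ch + c];
  return out;
}
```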

The other problem is mapping the mask: because I am using a mask to cover the texture, I created a GUI for mapping the mask's position and scale onto the body.
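The GUI can be a small ofxGui panel like the sketch below; the parameter names and ranges are placeholders:

```cpp
#include "ofMain.h"
#include "ofxGui.h"

ofxPanel gui;
ofParameter<float> maskX, maskY, maskScale;

void setupGui() {
  gui.setup("mask mapping");
  gui.add(maskX.set("x", 0, -500, 500));
  gui.add(maskY.set("y", 0, -500, 500));
  gui.add(maskScale.set("scale", 1.0, 0.1, 3.0));
}

// in draw(): translate by (maskX, maskY) and scale by maskScale before
// drawing the mask texture, then call gui.draw() on top
```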

The next step is to figure out how to map the skeleton position onto the performer's body as well.

IMG_3729 from Friendred on Vimeo.

IMG_3734 from Friendred on Vimeo.


On the 12th of August we did the second rehearsal, mainly to test the first scene, to let the performers get used to the ACC/IMU data and to the helmet and mask, and to decide how to translate the idea of shamanic communication with the spiritual world into a digital form. Compared with holding a board or binding it to a stick, I think the best way to symbolise the ritual is to combine it with bodily movement. The really important thing in this process is to interpret the concept of the first stage, trance/ecstasy, in the most understandable way, and to make the audience understand how I use the technology to represent the digital shaman.

IMG_3779 from Friendred on Vimeo.


On the 13th of August, rehearsal with Yerin. There are still a lot of parameters that need to be adjusted to the movements choreographed by the dancer. The central object has three different light modes. In the first, the light inside the ball switches on and off with the pitch value calculated by the 9-DOF board, while a varying number of lights on the strip change colour. The second mode flashes blue and white lights on the strip. The third mode triggers when the performer holds the ball and swings it in an up-and-down orbit; the light then takes over the mode the other lights had. The whole scene becomes the culmination of the first stage. The next step is to arrange all the bits of the first stage according to the scenography, rather than scattering them everywhere.

IMG_3789 from Friendred on Vimeo.

IMG_3829 from Friendred on Vimeo.

IMG_3825 from Friendred on Vimeo.


Testing the second stage with the projector and Kinect, and rehearsing with the music made by John and Joy. I found that for some reason there was some delay, or a low frame rate, at certain points. In the second stage a "small man" was always drawn in the top-left corner. I found the reason: I am using the depth image as a texture mask, and even when I bind the texture onto the planePrimitive, the original depth image still appears in that corner, so no matter how I changed the draw function the stubborn small man stayed there. The solution is to draw the plane into an ofFbo and use the FBO's texture as the mask instead of the depth texture.
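The fix, as a sketch: render the textured plane into an ofFbo once per frame and hand the FBO's texture downstream as the mask, so the raw depth texture itself is never drawn. The names here are placeholders:

```cpp
#include "ofMain.h"

ofFbo maskFbo;
ofPlanePrimitive plane;

void setupMask(int w, int h, ofTexture& depthTex) {
  maskFbo.allocate(w, h, GL_RGBA);
  plane.set(w, h, 2, 2);
  plane.setPosition(w / 2.0f, h / 2.0f, 0);
  plane.mapTexCoordsFromTexture(depthTex);
}

void updateMask(ofTexture& depthTex) {
  maskFbo.begin();
  ofClear(0, 0, 0, 0);
  depthTex.bind();
  plane.draw();
  depthTex.unbind();
  maskFbo.end();
  // downstream, use maskFbo.getTexture() as the mask instead of depthTex
}
```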

IMG_3890 from Friendred on Vimeo.


Setting up the lights in the space. I chose to put the two beams on the floor: in the third stage the main part is the performer interacting with the beams through their skeleton position. Three strobes hang on the wall and, at the end, flash cyclically.
The profiles on the ceiling and at the back are used for the gobo mode, a prism effect and a back-light glow.

Also for testing, I create all the functions of each single lights out of Beam, Strobe, Wash and Profile. And thanks for Terry offering the code of controlling the beams tilt and pan direction of getting skeleton position. the way to let lights know where is people is physically measure the distace between the lights and kinect also passing the position of dancer, and caculate by atan() function.


The floor plan during the performance.

The floor plan when the performance is not running. As a safety measure, the performing area is blocked off between performances so that the audience cannot access it.

Controlling the two beams with the left- and right-hand positions, and separating the skeleton readings so that different performers can control different lights.

IMG_3927 from Friendred on Vimeo.