It’s been quite a few months since I last posted any update on my programming experiments. Lately, I’ve been too busy with my studio work and music. Nevertheless, interactive sound systems are an important chapter of my life and I wouldn’t like to give them up. While I need more time to develop a new idea I like to call “Resonant Theremin”, I think I should at least introduce my previous project here: Mental Palindrome.
“Mental Palindrome” is a system that generates computer music in real time using EEG (electroencephalography) data as input. The idea is to create a machine that can perform improvised music together with a human partner, guided by the human’s biofeedback.
In the Vimeo video below you can get a taste of the first ever real-time demonstration of Mental Palindrome. The guy with the guitar is my friend, neurologist and ambient artist Aristidis Katsanos. While this may sound totally chaotic to you, the live experience of being inside the 4-speaker square space was a completely different story. And keep in mind that the computer and the human would have to spend much more time together to produce a more commercially “friendly” musical result (though that’s not the point).
Mental Palindrome can detect the user’s emotional state by combining facial expressions with other brain-derived measures (engagement, frustration, meditation, excitement, etc.). The sonification algorithm is complex and uses the EEG data to manipulate rhythmic patterns, melodies, tones and synthesizer parameters. The human can also directly control the 4-channel panning of the computer-generated music with head movements (using gyroscope data), and can train a specific thought (like bending a guitar string) so that the system can recognize it and use it in the sonification algorithm; a small sketch of these mappings follows below.
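To make the mapping idea a bit more concrete, here is a minimal sketch in SuperCollider (the environment the system was written in, see below) of the two kinds of mapping: an affective value steering a synth parameter, and head rotation steering the 4-speaker panning. This is not the actual Mental Palindrome code; the synth design, the value ranges and the helper names (~setExcitement, ~setHeading) are my own illustrative assumptions.

```supercollider
// A minimal sketch of the two mappings, not the original patch.
(
SynthDef(\pad, { |freq = 220, cutoff = 1200, azimuth = 0, amp = 0.2|
    var sig = RLPF.ar(Saw.ar(freq), cutoff, 0.3) * amp;
    // PanAz places the signal on a circle of 4 speakers (the square setup)
    Out.ar(0, PanAz.ar(4, sig, azimuth));
}).add;
)

~pad = Synth(\pad);

// excitement in 0..1 opens the filter; head rotation in -1..1 moves the sound
~setExcitement = { |v| ~pad.set(\cutoff, v.linlin(0, 1, 400, 4000)) };
~setHeading    = { |x| ~pad.set(\azimuth, x) };
```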
I used an Emotiv EPOC headset to capture the EEG signals and the MindYourOSCs software to forward the EEG data as OSC messages. The interactions and the sound were programmed in SuperCollider.
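On the input side, a rough sketch of how values forwarded by MindYourOSCs could reach those mappings over OSC. The addresses (‘/EEG/affectiv/excitement’, ‘/EEG/gyro/x’), the port 7400 and the gyroscope range are placeholders for whatever the actual MindYourOSCs configuration exposes, not the project’s real settings.

```supercollider
// Receiving EEG values over OSC: addresses, port and ranges are assumptions.
(
thisProcess.openUDPPort(7400);   // assumed port MindYourOSCs sends to

// affective value (assumed 0..1) -> filter cutoff, via the mapping above
OSCdef(\excitement, { |msg|
    ~setExcitement.value(msg[1].asFloat);
}, '/EEG/affectiv/excitement');

// raw gyroscope value -> panning azimuth; the input range is a guess
OSCdef(\gyroX, { |msg|
    ~setHeading.value(msg[1].asFloat.linlin(-100, 100, -1.0, 1.0));
}, '/EEG/gyro/x');
)
```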
“Mental Palindrome” was a project created with the help of Professor Andreas Floros at the Department of Audio and Visual Arts of the Ionian University (Corfu, Greece) during 2016-2018. The system was demonstrated at the Panhellenic Acoustics Conference 2018 in Patra, Greece.