Face-Expressions Synth
Can we use our face as an audio instrument? Can we translate the biometrics of our face into sound data?
Using PureData, a visual programming environment for sound synthesis, and FaceOSC, a face-tracking application, I built an experimental setup that produces different sounds based on facial expressions. Each part of the face is the input for a different parameter, and the parameters are interconnected. By moving the eyebrows, closing the eyes, or opening the mouth, the software generates different data values that are translated into audio output.
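As a sketch of the mapping stage, the idea can be expressed in a few lines of Python. The OSC addresses (e.g. `/gesture/mouth/height`) follow FaceOSC's typical output, but the value ranges and the parameter assignments below are illustrative assumptions, not the actual PureData patch:

```python
# Sketch: map FaceOSC-style gesture values to synth parameters.
# Addresses mimic FaceOSC's output; ranges and mappings are assumptions.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from one range to another, clamped."""
    value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def map_face_to_synth(gestures):
    """Translate a dict of gesture values into synth parameters."""
    return {
        # mouth height (assumed ~0-10) drives oscillator pitch (110-880 Hz)
        "osc_freq_hz": scale(
            gestures.get("/gesture/mouth/height", 0.0), 0.0, 10.0, 110.0, 880.0
        ),
        # eyebrow raise (assumed ~6-10) opens a low-pass filter (200-4000 Hz)
        "filter_cutoff_hz": scale(
            gestures.get("/gesture/eyebrow/left", 6.0), 6.0, 10.0, 200.0, 4000.0
        ),
        # eye openness (assumed ~2-5) controls amplitude (0-1)
        "amplitude": scale(
            gestures.get("/gesture/eye/left", 2.0), 2.0, 5.0, 0.0, 1.0
        ),
    }

params = map_face_to_synth({
    "/gesture/mouth/height": 5.0,
    "/gesture/eyebrow/left": 8.0,
    "/gesture/eye/left": 5.0,
})
print(params)  # half-open mouth -> mid pitch; raised brow -> brighter filter
```

In the actual setup, these mappings live inside the PureData patch, which receives the OSC messages directly and routes them to oscillators and filters.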
body, biometric data, face recognition, sound
analog, visual