H(n)MI
The Human and Machine Interaction class traces a journey of translating physical information into digital form across different software and hardware that, when combined, become powerful tools for building systems of interaction between humans, electronics, and the new modes of artificial intelligence available today. Through a hands-on approach, we explored new digital tools, learned new ways of coding, and dove deeper into the electronics we have been getting familiar with since the beginning of the course, such as the Barduino.
On the first day of the workshop, we were introduced to the schedule and planned activities while building, in groups, our first soft capacitive sensor using simple materials such as textiles, conductive tape, and wires. Starting from this DIY input device, we progressively added new software to our skill set by connecting the Barduino to Processing and creating simple graphics to visualize the interaction, such as balls that change color, size, and movement, linking sensor, microcontroller, and computer into one piece.
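As a rough illustration of that first setup, here is a minimal sketch of the computer side, written in p5.js (the browser cousin of the Processing we actually used that day) and assuming the Barduino prints one capacitance reading per line over serial at 9600 baud; the port handling uses the Web Serial API available in Chromium-based browsers, and the 0-1023 range is only an assumed scale for the readings.

```javascript
let port, reader, buffer = "";
let reading = 0;                // latest value parsed from the Barduino

function setup() {
  createCanvas(400, 400);
  // Web Serial needs a user gesture, so the port is opened on click
}

async function mousePressed() {
  if (port) return;
  port = await navigator.serial.requestPort();   // pick the Barduino's port
  await port.open({ baudRate: 9600 });
  reader = port.readable.getReader();
  readLoop();
}

async function readLoop() {
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value);
    const lines = buffer.split("\n");
    buffer = lines.pop();                        // keep the incomplete tail
    const last = lines.filter(l => l.trim()).pop();
    if (last) reading = parseFloat(last);
  }
}

function draw() {
  background(20);
  // Map the capacitive reading (assumed 0-1023) to size and color
  const size = map(reading, 0, 1023, 20, 300, true);
  const hue  = map(reading, 0, 1023, 0, 255, true);
  noStroke();
  fill(hue, 120, 255 - hue);
  ellipse(width / 2, height / 2, size, size);
}
```

Clicking the canvas opens the port dialog; the ball then grows and shifts color as the sensor is touched.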
The second day was marked by the introduction of p5.js, a JavaScript library for creating live, interactive graphics on the web and even simple games, driven by an Arduino or by the peripherals available on your computer, such as cameras and microphones. To expand our coding capabilities, we were shown how to merge our ideas with pre-trained AI models from ml5.js, another JavaScript library focused on machine learning that can classify sound patterns, body poses, images, and so on.
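To give a sense of how little code ml5.js asks for, here is a minimal sound-classification sketch, assuming ml5.js version 1 is loaded alongside p5.js (older releases use classify() with a callback instead of classifyStart()); it uses the pre-trained SpeechCommands18w model to display whichever of its known words the microphone hears.

```javascript
let classifier;
let label = "listening...";

function preload() {
  // Pre-trained speech command model shipped with ml5.js
  classifier = ml5.soundClassifier("SpeechCommands18w");
}

function setup() {
  createCanvas(400, 200);
  classifier.classifyStart(gotResult);   // classify the mic stream continuously
}

function draw() {
  background(240);
  textAlign(CENTER, CENTER);
  textSize(32);
  fill(0);
  text(label, width / 2, height / 2);
}

function gotResult(results) {
  // Results arrive sorted by confidence; keep the top word
  label = results[0].label;
}
```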
On the third day of the workshop, we worked in our groups to integrate and assemble these tools into new compositions: an interface in which the signals from the crafted capacitive sensor were translated, through p5.js, into a visual timeline of our interactions with the machine. To close the class, a brief was proposed to develop a project in which our interactions with the machine would be translated into visuals using the software and libraries studied in class.
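The timeline idea itself fits in a few lines of p5.js. In this self-contained sketch the capacitive sensor is stood in for by whether the mouse is pressed; with the real device, the parsed serial reading from the Barduino (as in the earlier sketch) would be pushed into the history instead.

```javascript
let history = [];            // one value per frame, oldest first

function setup() {
  createCanvas(600, 200);
}

function draw() {
  background(20);

  // Stand-in for the capacitive sensor: 1 while touched, 0 otherwise.
  const value = mouseIsPressed ? 1 : 0;
  history.push(value);
  if (history.length > width) history.shift();   // keep one sample per pixel

  // Draw the timeline of touches, left (oldest) to right (newest)
  stroke(255, 120, 80);
  for (let x = 0; x < history.length; x++) {
    const h = history[x] * (height - 20);
    line(x, height - 10, x, height - 10 - h);
  }
}
```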
The Good Boy project revolves around an artistic artifact that stages a surveillance experience: an AI model recognizes posture patterns from camera images and responds through a vibration actuator and a visual sign that conditions the user while they interact with the computer. The idea started from a healthcare angle, with a camera positioned beside a person at work that gives a signal to correct their posture whenever the trained model recognizes a bad one.
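A hypothetical starting point for that posture detection, using the body-pose model we saw in class, could look like the sketch below; it assumes ml5.js version 1's bodyPose API, and the nose-versus-shoulder rule is only a stand-in for a real posture classifier.

```javascript
let video, bodyPose;
let poses = [];

function preload() {
  bodyPose = ml5.bodyPose();               // loads the default pose model
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodyPose.detectStart(video, (results) => { poses = results; });
}

function draw() {
  image(video, 0, 0);
  if (poses.length === 0) return;

  const keypoints = poses[0].keypoints;
  const nose = keypoints.find(k => k.name === "nose");
  const shoulder = keypoints.find(k => k.name === "left_shoulder");

  // Hypothetical "slouching" rule: the nose drops close to the shoulder line.
  const slouching = nose && shoulder && (shoulder.y - nose.y) < 80;
  fill(slouching ? "red" : "green");
  textSize(28);
  text(slouching ? "bad posture" : "good posture", 20, 40);
}
```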
While working on the project with the resident students who joined our journey, the group was split into hardware and software: two people focused on using TouchDesigner to make the visuals, and the other two focused on using the Arduino to implement the actuators needed for the interaction. Early on, we ran into a technical issue connecting the body-gesture AI model to the Arduino, which made us switch to a model trained on images of people using their computers in front of their webcams.
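Loading a custom-trained classifier like that is short in ml5.js. The sketch below is only a hypothetical illustration, assuming the model was exported from a tool such as Teachable Machine and, again, ml5.js version 1; the model URL is a placeholder to be replaced with the group's own.

```javascript
let video, classifier;
let label = "waiting...";

// Placeholder URL: replace with the folder of your own exported model
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/XXXXX/";

function preload() {
  classifier = ml5.imageClassifier(MODEL_URL + "model.json");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  classifier.classifyStart(video, gotResult);   // classify frames continuously
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(28);
  text(label, 20, 40);
}

function gotResult(results) {
  label = results[0].label;   // e.g. "good posture" / "bad posture"
}
```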
This change made the connection with TouchDesigner, proposed by the resident member of our group, easier, but it also limited the possibilities, since the model could not read or classify very well the parts of the body we aimed to correct. In parallel, we ran tests with the actuators to understand how to connect more than one vibration motor and produce vibrations at different locations, which did not work very well given the time available to explore and assemble the circuits.
After many changes and adaptations along the way, we discovered a flaw in the connection between TouchDesigner and the Barduino, which made us use an original Arduino board to get the integration working properly across the devices: generate the visuals, then activate the vibrator to interact physically with the user. In the end, the project changed meaning: bad poses became a safe space, and correct ones activated the vibrator inside a necklace reading "Good Boy", an ironic, sarcastic narrative in which the user is domesticated by the machine, the AI that recognizes their gestures.
During the Human and Machine Interaction class, we had the opportunity to explore new software and how it can be used to translate reality into digital information, which helped me gain new coding skills that were not intimidating thanks to their simplicity and intuitive structure. The class also made me reflect on the possibilities and limitations of sensing our environment: not only converting the physical into the digital, but questioning what we are missing around us and how we could develop new tools and technologies to visualize the invisible, a new reality to be framed by our eyes, skin, and ears, something that could improve our relationship with the external world.
Even though the project we developed in our group connected the machine learning tools presented in class to generate new visuals, within a kinky storytelling related to surveillance and obeying the internet, when the original objective was posture detection to improve health while working, I believe some mental blocks complicated the group's work and limited my vision while producing the project. I have always had a deep interest in understanding the unseen and in what parameters can be extracted from an invisible reality; this could benefit my final project and narrative by relating it to the biological senses of other species, an opportunity to create new diagnostic tools, something that was not explored much here but should be in future projects.