Thoughts and Prayers

May 27, 2022

Make Art with AI: Interactive Experience with Body Movement

Thoughts and Prayers Don't Change Headlines

With recent events in the United States, I wanted to articulate some of the anger I have been feeling around legislative inaction on gun control. Specifically, I saw an opportunity to use machine learning models for pose detection to speak to the futility of the oft-heard sentiment of "thoughts and prayers." Using BlazePose, I am trying to identify whether a person is making a prayer gesture, the idea being that the prayer gesture alone is not going to be enough to change the all-too-common headlines we are seeing in this country about gun violence.

Try it by making a prayer-hands pose here: https://bettyfileti.github.io/thoughtsAndPrayers/

The code can be viewed here: https://github.com/bettyfileti/thoughtsAndPrayers

Process

With a rough idea of what I wanted to make, I started with Moon's fantastic starter file for BlazePose. The first thing I did was run it to see what data points I had access to and which of them would make it easiest to define the gesture. After some short observations, I focused on the shoulders, the wrists, and the pinky fingers.

Just testing out the ML model

Still just testing, but with more focus on relevant points
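For reference, a minimal sketch of how those landmarks can be inspected. This assumes the MediaPipe-style BlazePose API, where each detection arrives as an array of 33 landmarks; the callback name and global are placeholders, not necessarily what the starter file uses.

```javascript
let pose; // most recent BlazePose detection

// BlazePose landmark indices: 11/12 = shoulders, 15/16 = wrists, 17/18 = pinkies
function gotPose(results) {
  pose = results;
  if (pose && pose.poseLandmarks) {
    // each landmark has normalized x, y, z plus a visibility (confidence) score
    console.log('left pinky:', pose.poseLandmarks[17]);
    console.log('right pinky:', pose.poseLandmarks[18]);
  }
}
```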

While I was initially hesitant to rely on the pinky fingers, due to how much they jump around, I found that they were consistent enough for me to at least start exploring. I defined the points I was measuring and then gave myself a little data reader in the upper left corner. Having this information easily glanceable while I explored the range of motions and measurements was super helpful.

Check out my little data reader in the upper left. So helpful.
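The data reader is just a few text calls drawn over the video each frame. Something like this, assuming p5.js in global mode and the `pose` global from the earlier snippet:

```javascript
// Draw live keypoint values in the upper left for quick debugging
function drawDataReader() {
  if (!pose || !pose.poseLandmarks) return;
  const lp = pose.poseLandmarks[17]; // left pinky
  const rp = pose.poseLandmarks[18]; // right pinky
  // landmarks are normalized 0..1, so scale to canvas size for readable numbers
  const d = dist(lp.x * width, lp.y * height, rp.x * width, rp.y * height);
  noStroke();
  fill(255);
  textSize(14);
  text(`pinky distance: ${d.toFixed(1)}`, 10, 20);
  text(`left pinky score: ${lp.visibility.toFixed(2)}`, 10, 40);
  text(`right pinky score: ${rp.visibility.toFixed(2)}`, 10, 60);
}
```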

Next was to actually use those values. I got most of the way there by comparing the measured pinky distance against a threshold based on the values I had been observing. Here you can see the first test of defining PRAYING versus NOT PRAYING.

PRAYING vs NOT PRAYING, take 1.
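In rough code, that first-pass check looks something like this; the threshold value is illustrative, not the exact number from the repo:

```javascript
const PINKY_DISTANCE_THRESHOLD = 40; // pixels; tuned by watching the data reader

// Hands pressed together means the two pinkies sit close to each other
function checkPraying() {
  if (!pose || !pose.poseLandmarks) return false;
  const lp = pose.poseLandmarks[17];
  const rp = pose.poseLandmarks[18];
  const d = dist(lp.x * width, lp.y * height, rp.x * width, rp.y * height);
  return d < PINKY_DISTANCE_THRESHOLD;
}
```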

Even though this was working, the data was pretty noisy; the pinky points jump around a lot. I came across a great resource on smoothing data points by capturing a collection of them and using the average values: Simple Smoothing for PoseNet Keypoints by Lisa Jamhoury. This helped me get a more stable, smooth user interaction. (I also later realized that I hadn't correctly updated all of my variables, so it got even smoother later on!)

Excellent! https://javascript.plainenglish.io/simple-smoothing-for-posenet-keypoints-cd1bc57f5872
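The core idea from that article, adapted as a sketch: keep the last N samples of a keypoint and use their average instead of the raw value. The history length here is a placeholder to tune.

```javascript
const HISTORY = 8;         // frames to average over; longer = smoother but laggier
let leftPinkyHistory = []; // recent {x, y} samples for one keypoint

function smoothPoint(history, point) {
  history.push({ x: point.x, y: point.y });
  if (history.length > HISTORY) history.shift(); // drop the oldest sample
  // averaging the stored samples damps frame-to-frame jitter
  const avg = { x: 0, y: 0 };
  for (const p of history) { avg.x += p.x; avg.y += p.y; }
  avg.x /= history.length;
  avg.y /= history.length;
  return avg;
}
```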

With the primary interaction mapped out, I spent a little time designing some of the surrounding elements for the experience. I observed and collected a number of lower-thirds graphics from news programs reporting on the Tops supermarket shooting in Buffalo, NY and the school shooting in Uvalde, TX. I knew that in order to make this successful, I needed to pick up some of the visual language and cues of these graphics.

Starting to employ some of the visual language of the "news"

With some structural stuff worked out, I went back to the interaction. I started by associating the PRAYING boolean with displaying one of the screenshots I had taken of the lower thirds. I hadn't yet decided whether to simply use the images or to rebuild them with type and p5 elements. After a quick experiment, I ended up deciding to build them with type, due to inconsistencies in the screenshots I took. But I still think the image approach could be strong for this sketch.

Rapid testing the effect by pulling in an image

Starting to build with text
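Building the lower third with type comes down to a couple of rectangles and text calls. A sketch of the idea, with placeholder colors, sizes, and font:

```javascript
// Draw a news-style lower third: red "breaking" band over a white headline band
function drawLowerThird(headline) {
  const barY = height - 120;
  noStroke();
  fill(180, 0, 0);               // red band
  rect(0, barY, width, 40);
  fill(255);
  rect(0, barY + 40, width, 60); // white headline band
  textFont('Helvetica');
  fill(255);
  textSize(20);
  text('BREAKING NEWS', 16, barY + 27);
  fill(0);
  textSize(26);
  text(headline, 16, barY + 80);
}
```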

At this point, I more or less had the core functionality and experience of the sketch in place. However, upon trying it I realized a crucial flaw. Although the headlines weren't changing in the sense that I meant (they were still about mass shootings), the user was literally making the headlines change each time they prayed. The metaphor was sort of there, but the user experience was not landing on the point I wanted to make. A little user testing with Sarah led to a simple solution: by switching the experience to always have a headline displayed, the user is better confronted with the reality of the news and the futility of the prayer gesture, which in a sense "makes things worse."

Adjustments to the User Experience helped to better make the point
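One way that revised flow could look in the draw loop (hedged: the actual sketch's state handling may differ, and the headline strings here are placeholders):

```javascript
let headlines = ['HEADLINE 1', 'HEADLINE 2']; // placeholder strings
let current = 0;

function draw() {
  // ...video and pose drawing as before...
  drawLowerThird(headlines[current]); // a headline is always on screen now
  if (checkPraying()) {
    // the gesture is detected, but the news doesn't go away
  }
}
```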

Finishing Touches

Testing the sketch in different environments and situations was really helpful. I realized there were still things I could do to improve the body-input interaction. Notably, adding some additional parameters to the pinky distance let me be even more explicit about the gesture. Since I'm lazy, I created a little shorthand for some of the values: if I was NOT PRAYING because the pinky confidence score was low, I set the pinky distance to 1000 + leftPinky.score; if I was NOT PRAYING because the vertical distance between the pinkies was too great, the score became 2000. This was a really fast way to get a readout of what was happening.
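That shorthand is essentially overloading the distance readout with sentinel values that encode why the gesture was rejected. A sketch of the idea (cutoffs are placeholders; the post calls visibility the "score"):

```javascript
function pinkyDistanceDebug(lp, rp) {
  const MIN_SCORE = 0.5; // placeholder confidence cutoff
  const MAX_Y_GAP = 30;  // placeholder vertical-offset limit, pixels
  if (lp.visibility < MIN_SCORE || rp.visibility < MIN_SCORE) {
    // 1000-range: rejected for low confidence; the decimals show the score
    return 1000 + lp.visibility;
  }
  const yGap = abs(lp.y - rp.y) * height;
  if (yGap > MAX_Y_GAP) {
    return 2000; // 2000: rejected because the pinkies aren't level
  }
  // otherwise return the real measurement
  return dist(lp.x * width, lp.y * height, rp.x * width, rp.y * height);
}
```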

I also realized that using a fixed distance threshold depended on the user sitting a specific distance from the computer. When I worked in a different position, closer to the camera, the values needed were totally different! To compensate, I replaced the fixed value with a threshold based on the distance between the eyes. While this is still not perfect, it did improve the range of depths a person can sit within.
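Since the inter-eye distance shrinks and grows with the user's distance from the camera, it makes a decent scale reference. A sketch of that comparison, using BlazePose eye landmarks 2 and 5; the scale factor is a placeholder to tune:

```javascript
function isPrayingScaled() {
  if (!pose || !pose.poseLandmarks) return false;
  const lm = pose.poseLandmarks;
  const eyeDist = dist(lm[2].x * width, lm[2].y * height,
                       lm[5].x * width, lm[5].y * height);
  const pinkyDist = dist(lm[17].x * width, lm[17].y * height,
                         lm[18].x * width, lm[18].y * height);
  // praying when the pinkies are closer together than roughly one eye-width
  return pinkyDist < eyeDist * 1.0;
}
```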

Next Steps

  1. Improve the depth perception to better qualify the prayer-hands gesture.
  2. Build out the user experience with onboarding that communicates to the user what they are supposed to do.
  3. Look at performance on mobile.
