Mirror Selfie ML Training

May 27, 2022

Week 2: KNN Classifier

The model correctly predicting that this is a mirror selfie
Using the Model

Mirror Selfie

Try it with data and a bit more fun: https://editor.p5js.org/bethfileti/full/-wxwv8kkC

Try the Classifier: https://editor.p5js.org/bethfileti/sketches/COujUwMHS

^^ Best on mobile!

Training the Model

For this week's sketch, I wanted to try to build a model that could be trained to recognize whether someone was taking a selfie in a mirror. The first thing I wanted to do was get a better sense of the mechanics involved in training a model. Starting with Moon's start file made it easy for me to play with training a model and start to map out what I would need to build for my sketch.

Getting familiar with training a model

Realizing I was typing in my note and training nothing

Wanting to move quickly through this, I started by just training the model using hand gestures depicting the numbers that were already labelled in the start file.

Training for 0
Training for 1

I was surprised by how quickly and easily I was able to collect an impressive quantity of data points.

So much data, so quickly
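For reference, the core of that training flow is quite small. Below is a minimal sketch of how I understand the pieces fitting together, assuming ml5's MobileNet feature extractor feeding a KNNClassifier (the actual start file may wire things up differently, e.g. with hand keypoints, but the add-example loop is the same idea). Adding one labelled example per frame while a key is held is exactly why the counts climb so fast.

```javascript
let video;
let featureExtractor;
let knnClassifier;
let modelIsReady = false;
let collectingLabel = null; // which label we're currently collecting examples for

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();

  // MobileNet turns each frame into a feature vector;
  // the KNN classifier just stores those vectors alongside their labels
  featureExtractor = ml5.featureExtractor('MobileNet', () => { modelIsReady = true; });
  knnClassifier = ml5.KNNClassifier();
}

function draw() {
  image(video, 0, 0);

  // while a label key is held down, add one example per frame --
  // at 30-60 frames per second the data points pile up very quickly
  if (modelIsReady && collectingLabel !== null) {
    const features = featureExtractor.infer(video);
    knnClassifier.addExample(features, collectingLabel);
  }
}

function keyPressed() {
  if (key === '0' || key === '1') collectingLabel = key;
}

function keyReleased() {
  collectingLabel = null;
}
```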

With that experience in mind, I figured the next step was just to get my coding environment set up for working on mobile. I also wasn't sure how awkward it would be to download the data file onto my phone and move it over, and was really hoping I wouldn't have to build a database-type solution. The first environment I set up used Visual Studio Code, connecting to the sketch by plugging in my device and sharing the local server. I'm unsure whether this wasn't working because of bugs in my code or because the browser blocks web camera access without a proper HTTPS certificate. Either way, setting up the right coding environment was a bit of a mess for me.

Serving local on the phone; couldn't access the web camera

Couldn't access camera, but was able to register screen touch
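In hindsight, one way to confirm the HTTPS suspicion would be a quick check in the browser, since getUserMedia is only exposed in secure contexts. This is purely an illustration, not code from the sketch:

```javascript
// Quick diagnostic for "why won't the camera open?" on mobile.
// getUserMedia is only available in a secure context (https:// or localhost).
function checkCameraSupport() {
  if (!window.isSecureContext) {
    return 'Not a secure context: the browser blocks camera access without https.';
  }
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    return 'getUserMedia is not available in this browser.';
  }
  return 'Camera API should be available.';
}
```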

Next, I tried deploying through GitHub Pages. It worked!

Deployment through github working on mobile

I needed some type of console log on mobile, so I added a top div whose contents would be replaced with whatever I needed to see. I was also able to use it to keep track of whether or not the updates had been pushed through to the GitHub site (which proved to be too much of a process).
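The div itself is just a few lines of p5 DOM code, something along these lines (the names here are mine), with the important detail being that the text gets replaced rather than appended:

```javascript
let debugDiv;

function setup() {
  createCanvas(windowWidth, windowHeight);

  // a DOM element pinned to the top of the page that stands in for console.log on mobile
  debugDiv = createDiv('debug output appears here');
  debugDiv.position(0, 0);
  debugDiv.style('background', 'rgba(255, 255, 255, 0.8)');
  debugDiv.style('font-family', 'monospace');
}

// replace the div's contents instead of appending,
// so logging from draw() doesn't grow the page forever
function logToScreen(msg) {
  debugDiv.html(msg);
}
```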

Moving again! This time to the p5 editor

Throughout all of this, I spent too much time fussing over getting the sketch not to scroll around or highlight text when people interacted with it via touch.

Getting p5 camera and touch working well on mobile was a frustrating, buggy process!

Accidentally adding text in a draw loop instead of replacing it.
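For anyone fighting the same thing: the combination that seems to behave is telling the browser not to treat canvas touches as gestures and returning false from p5's touch handlers. This is a sketch of the idea, not necessarily exactly what ended up in my code:

```javascript
function setup() {
  const cnv = createCanvas(windowWidth, windowHeight);

  // stop the browser from treating touches on the canvas as scroll/zoom gestures
  cnv.style('touch-action', 'none');

  // stop long-presses from selecting/highlighting text elsewhere on the page
  document.body.style.userSelect = 'none';
  document.body.style.webkitUserSelect = 'none';
}

// returning false from p5's touch handlers prevents the default page behavior
function touchMoved() {
  return false;
}
```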

Eventually, I was able to register when a user was touching the screen and when they were not. With that working, I quickly built some simple buttons in p5, which could then be wired up in lieu of the text input from the start file.

Finally got the touching mechanism working!
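The touch handling itself reduces to p5's touchStarted/touchEnded callbacks plus a rectangular hit test for each button, roughly like this (the button label, position, and size are placeholders):

```javascript
let isTouching = false;

// placeholder button definition for illustration
const trainButton = { label: 'mirror selfie', x: 20, y: 20, w: 160, h: 60 };

function touchStarted() {
  isTouching = true;

  // p5 mirrors the latest touch position into mouseX/mouseY,
  // so a simple rectangle check works as a button
  if (
    mouseX > trainButton.x && mouseX < trainButton.x + trainButton.w &&
    mouseY > trainButton.y && mouseY < trainButton.y + trainButton.h
  ) {
    // e.g. start collecting examples for this label
  }
  return false; // also suppresses the browser's default touch behavior
}

function touchEnded() {
  isTouching = false;
  return false;
}
```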

Collecting data

With everything working, it was time to do some training. Collecting the data in this way was pretty fun. Doing the collecting myself also made me very quickly aware of how limited a dataset gathered with just myself, my phone, and my mirrors would be. If this thing is going to actually work, I'd need other people, places, and setups to build a model that could accurately predict whether someone was taking a selfie in a mirror. For now, I was happy to get this working! Originally, I was hoping to build some pose detection and interactions for when the model thought you were in front of a mirror, but we'll see if I have time to address that before class.
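If I do recruit other people, places, and phones, ml5's KNNClassifier can at least serialize what has been collected so far, which should make it possible to merge sessions without a real database. Roughly, assuming the knnClassifier and logToScreen from the sketches above (the function and file names here are mine, and I haven't wired this up yet):

```javascript
// download the collected examples as a JSON file
function saveDataset() {
  knnClassifier.save('mirrorSelfieDataset');
}

// reload a previously saved dataset so training can continue across sessions/devices
function loadDataset() {
  knnClassifier.load('./mirrorSelfieDataset.json', () => {
    logToScreen('dataset loaded: ' + knnClassifier.getNumLabels() + ' labels');
  });
}
```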

Using the Model

With all of that built, I could finally start using the model a bit. Again, I tried to set up a workflow and deployment through Visual Studio Code and GitHub, and it was a disaster. Mobile is super difficult to debug, and things kept breaking for reasons that seemed unrelated to the code I was writing. I went back to the p5 editor, where things were working. I also opened up Illustrator and made some simple graphics to use.

no clue why this was so tricky for me
Back in p5 editor, where things are working

I started by just working with type to correctly inform the user of what the model is guessing and/or what the user should do. I also realized that as I was testing this, I should be capturing more information to feed into the model. I still don't have data collected from any users who don't look like me, any phone other than mine, or any mirrors beyond the ones I have access to, so I don't know how this will perform if others try to use it. While this was just a sketch, I still wanted to make some updates to the UI and the UX. Specifically, realizing that I was introducing bugs whenever I didn't trigger the categorization with a button helped inform the UX. (Hence the inclusion of a start button!)
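In code terms, the start button just gates the classify loop so it can't fire before the model has any examples. Something like the sketch below, reusing the featureExtractor, knnClassifier, and logToScreen names from the earlier snippets (the labels and the hit test are placeholders):

```javascript
let isRunning = false;

function touchStarted() {
  // classification only begins once the user taps Start,
  // which avoids classify() firing before any examples exist
  if (!isRunning && overStartButton(mouseX, mouseY)) {
    isRunning = true;
    classifyFrame();
  }
  return false;
}

function classifyFrame() {
  const features = featureExtractor.infer(video);
  knnClassifier.classify(features, gotResults);
}

function gotResults(err, result) {
  if (err) {
    logToScreen(err.message);
    return;
  }
  // result.label holds the model's best guess, e.g. "mirror selfie" vs "not a mirror"
  logToScreen('Guess: ' + result.label);

  if (isRunning) {
    classifyFrame(); // keep classifying frame after frame while running
  }
}

// simple rectangular hit test for a start button drawn near the bottom of the canvas
function overStartButton(x, y) {
  return x > width / 2 - 60 && x < width / 2 + 60 && y > height - 90 && y < height - 30;
}
```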

Next steps for this would include asking others to gather data for me and working on the UI a bit more.

It's still a mirror selfie even if the lights are off
Working on the UI

Working on the UX
Testing

And more testing

Testing...from a distance

Bringing in the stickers!

Final Result

Final Result...from a distance
