Glitching, analyzing in real time (sort of)

Following the video research I did with analysis algorithms and data corruption (https://blog.felixjely.fr/2021/12/19/algorithms-analysis-of-an-city/), I wanted to push it somewhat into "real time". As I may have mentioned, this research will have to be presented in a room someday (in July), and having this effect applied in real time means it could be projected onto a wall, analyzing the people around the space and glitching them. That way, you become part of the piece: you are "analyzed" in real time by the algorithms. It could lead to some interesting results.

So I wrote a small program that makes it happen. It uses two cameras to render a short video. It follows this logic: it captures the video, analyzes it, combines the outputs, then glitches the result and converts it to make sure the final output is readable (by VLC, or any web player). The system shows the last render in a loop while it computes the next one in the background; once the next clip is rendered, it swaps with the previous one. So it's not truly "real time": there is at least the delay needed to glitch and convert the video output.
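Here is a minimal sketch of that two-buffer loop, assuming a hypothetical produce_clip() helper that stands in for the whole capture → analyze → glitch → convert chain, and OpenCV for the on-screen playback (in the installation this would be the projector's player instead):

```python
import threading
import cv2

def produce_clip() -> str:
    """Hypothetical stub for the full pipeline: capture from both cameras,
    analyze, combine, glitch, then re-encode. Returns the path of a
    playable video file."""
    raise NotImplementedError

def play_until_done(path: str, worker: threading.Thread) -> None:
    """Loop the last rendered clip on screen until the worker finishes."""
    while worker.is_alive():
        cap = cv2.VideoCapture(path)
        while worker.is_alive():
            ok, frame = cap.read()
            if not ok:
                break                      # end of clip: restart it
            cv2.imshow("projection", frame)
            cv2.waitKey(30)                # ~33 ms per frame
        cap.release()

current = produce_clip()                   # first render, before the loop
while True:
    result: dict = {}
    worker = threading.Thread(target=lambda: result.update(path=produce_clip()))
    worker.start()                         # render the next clip in background
    play_until_done(current, worker)       # meanwhile, loop the previous one
    worker.join()
    current = result["path"]               # swap buffers and repeat
```

Playing on the main thread and rendering in the background (rather than the other way around) avoids making GUI calls from a worker thread, which OpenCV's HighGUI doesn't handle well on every platform.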

The analysis follows the same logic as the video research: a YOLO model checks whether certain objects are in the scene and puts a bounding box around them, and a MediaPipe Holistic model extracts details about people, such as the biometric shape of the face, the position of the body, hand tracking…
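A rough sketch of that per-frame analysis could look like the following. I don't know which YOLO implementation the original setup uses; the ultralytics package and the yolov8n.pt weights below are purely illustrative, alongside MediaPipe's Holistic solution:

```python
import cv2
import mediapipe as mp
from ultralytics import YOLO   # assumed implementation; any YOLO wrapper works

yolo = YOLO("yolov8n.pt")      # hypothetical model choice
holistic = mp.solutions.holistic.Holistic()
drawer = mp.solutions.drawing_utils

def annotate(frame):
    """Overlay YOLO bounding boxes and MediaPipe Holistic landmarks."""
    # Object detection: draw labelled boxes around detected objects.
    det = yolo(frame, verbose=False)[0]
    out = det.plot()           # copy of the frame with boxes burned in

    # Holistic model: face mesh, body pose and hand landmarks.
    res = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for lm, conn in (
        (res.face_landmarks, mp.solutions.holistic.FACEMESH_TESSELATION),
        (res.pose_landmarks, mp.solutions.holistic.POSE_CONNECTIONS),
        (res.left_hand_landmarks, mp.solutions.holistic.HAND_CONNECTIONS),
        (res.right_hand_landmarks, mp.solutions.holistic.HAND_CONNECTIONS),
    ):
        if lm:
            drawer.draw_landmarks(out, lm, conn)
    return out

cap = cv2.VideoCapture(0)      # one camera here; the real setup combines two
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("analyzed", annotate(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```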