Consentful Interface
As a preface, I'd like to warn that the implementation of the ideas on my flowchart was severely limited by the use of a GLSL shader. I don't regret learning how to use shaders specifically for this project's first iteration, but in a world where I had any degree of foresight, I likely would've stuck to raw p5 code.
As with the first iteration of this sketch, the program only runs in Google Chrome. This requirement in and of itself isn't particularly consentful, and it's one of the many ways shaders proved to be a shortcoming here.
The Sketch:
The goal with this sketch was to allow the user a sense of autonomy when viewing a sketch that does a rather large amount of unprompted, unsolicited work. Within the sketch is a description of the program's general process printed out into the console, along with two toggles that control the implementation of eye-tracking and shader math. I was particularly interested in the notion of consentful technology being reversible, a novel idea I'd never considered that proved to be incredibly effective in implementation. With two checkboxes, the user can control exactly what's accessed by the program and easily toggle between different states of the sketch based upon their level of comfort being seen by their webcam.
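The reversibility idea above can be sketched as a small piece of logic: each checkbox maps directly to a piece of the sketch's behavior, so unchecking a box immediately reverses that behavior on the next frame. This is only an illustrative sketch; the function and field names (`consentState`, `drawEyeTracking`, `applyShaderMath`) are my assumptions, not the project's actual identifiers.

```javascript
// Illustrative model of reversible consent: each frame, the sketch reads
// the two checkboxes and derives what it is allowed to do right now.
function consentState(eyeTrackingChecked, shaderChecked) {
  return {
    // draw eye-tracking output only while the user has opted in
    drawEyeTracking: eyeTrackingChecked,
    // run the shader math only while the user has opted in
    applyShaderMath: shaderChecked,
  };
}

// Toggling a box off reverses that part of the sketch immediately:
const before = consentState(true, true);
const after = consentState(false, true);
// before.drawEyeTracking is true; after.drawEyeTracking is false
```

The point of deriving the state fresh each frame, rather than flipping it once, is that consent stays continuously revocable instead of being a one-time gate.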
Included below is my flowchart, something I initially hoped would serve as a thorough framework for the sketch as a whole. The images are meant to be viewed left to right as in Miro, but since they proved too large to screencap in full, they're stacked here one on top of the other. While I believe I succeeded in creating a prototype that offers roughly the same level of user consent that the flowchart demands, the final product falls quite a bit short:
As I mention in the short preface, GLSL implementation in p5 massively limited the implementation of the DOM in the ways I intended. I would've loved real-time disclaimers and true control over whether the camera feed is live or not; instead, due to strange bugs in p5's use of WebGL, I could only offer parallels between what I intended and what I could execute. While the user is able to toggle the eye-tracking "on or off," what they're actually doing is controlling how the eye-tracking is drawn within the sketch. I really wanted to implement control over the RGB values of the sketch as a whole, but pixel arrays seemingly don't function in WebGL and haven't for some time. The end result is a shadow of what I intended, with my flowchart giving a much better look into my intentions.
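The workaround described above, where the checkbox controls how the eye-tracking is drawn rather than whether the camera runs, can be sketched by passing the checkbox value into the shader as a uniform. `setUniform` is a real p5.Shader method, but the uniform name `uDrawTracking` and the helper function are illustrative assumptions; the stub shader object here stands in for a real p5.Shader so the snippet is self-contained.

```javascript
// Sketch of the workaround: the camera/tracking keeps running, but the
// checkbox value is forwarded into the fragment shader as a uniform
// that decides whether the tracking overlay is drawn at all.
function applyConsentUniform(shader, trackingChecked) {
  // 0.0 hides the eye-tracking overlay inside the shader; 1.0 shows it
  shader.setUniform('uDrawTracking', trackingChecked ? 1.0 : 0.0);
}

// Minimal stub standing in for a p5.Shader, for demonstration only.
const fakeShader = {
  uniforms: {},
  setUniform(name, value) { this.uniforms[name] = value; },
};

applyConsentUniform(fakeShader, false);
// fakeShader.uniforms.uDrawTracking is now 0.0
```

This also makes the limitation concrete: the uniform only gates the drawing, which is exactly why it's a parallel to true camera control rather than the real thing.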
Overall, this project was remarkably informative in terms of consentful technology and, unfortunately, a learning experience in tempering high hopes. If I had the time, I would definitely implement the same sort of manipulation the shader does in raw p5 using some form of pixel array; this, however, proved to be beyond my pay grade, and while the final sketch is more bare-bones than I ever intended, I spent hours attempting, and somewhat succeeding, in integrating the DOM into a shader-based sketch.