a design research practice by catherine griffiths for critical computational enquiry and future aesthetics in the age of algorithm ethics
visualizing algorithms: prototype software that visualizes a machine learning algorithm, a decision tree classifier.
simulated data flows through the algorithm, showing decisions being made in real time.
built procedurally as an interactive tool, so that any classifier of the same type can be loaded and visualized. the ui supports the structural self-organization of the algorithm and aids analysis. the loaded examples present different shapes of classifier with different feature-to-class ratios.
the prototype can visualize mistakes in prediction, where the algorithm misclassifies data. it can also reverse engineer each data point’s path through the algorithm to visualize at which fork an error was made.
the most popular paths taken through the algorithm’s complex network of decisions are also visualized.
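the path-tracing idea described above can be sketched in python. a minimal sketch only: the tree shape, features, and thresholds below are hypothetical stand-ins, not the prototype's loaded classifiers.

```python
# A minimal sketch of tracing a data point's path through a decision tree,
# recording each fork taken so a misclassification can be localized.

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature = feature      # index of the feature tested at this fork
        self.threshold = threshold  # split threshold
        self.left = left            # branch taken when value <= threshold
        self.right = right          # branch taken when value > threshold
        self.label = label          # class label if this node is a leaf

def trace(node, sample):
    """Return the predicted label and the sequence of forks taken."""
    path = []
    while node.label is None:
        went_left = sample[node.feature] <= node.threshold
        path.append((node.feature, node.threshold, "left" if went_left else "right"))
        node = node.left if went_left else node.right
    return node.label, path

# a tiny two-fork tree: tests feature 0, then feature 1
tree = Node(feature=0, threshold=0.5,
            left=Node(label="A"),
            right=Node(feature=1, threshold=2.0,
                       left=Node(label="B"),
                       right=Node(label="C")))

pred, path = trace(tree, [0.9, 3.1])
print(pred)  # "C"
print(path)  # [(0, 0.5, 'right'), (1, 2.0, 'right')]
```

comparing the recorded path of a misclassified point against paths of correctly classified points is one way to pinpoint the fork where an error was introduced.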
can visualizations of algorithms be used as an a-linguistic tool to re-engage with decision-making in prediction systems?
can interaction design, generative design, and critical code studies combine as an effective method to visualize ethical positions in algorithms, including bias, mistakes, and interpretability?
to consider bias augmentation, what can be learnt by temporarily isolating the meaning in data, to focus on the role that structure and process play in the generation of bias?
what does it mean to learn, in machine learning systems, and is anthropomorphism a productive analogy?
1/2018, Rethinking AI: Neural Networks, Biometrics and the New Artificial Intelligence. Peer-reviewed article: Visual Tactics for an Ethical Debugging.
I saw a connection between computer vision, a branch of AI, and the cellular automata algorithm. The work seeks to draw a line visually, connecting a complex social issue, such as surveillance in society, reverse-engineered back to a computational logic.
I used Langton's ant, an autonomous agent operating within a cellular automata system. CAs are grid-based calculations using neighbourhood relations. I was interested in how an autonomous computational entity would navigate and register a data set, in this case a photographic image.
I used a satellite image of the site of The Meeting of the Waters, the confluence of two rivers in the Amazon that, due to their different densities and colours, do not merge but run alongside each other for several kilometres.
The work is an experimentation with showing the algorithm's view of the system, which is a simplified logic, versus the human view, which is a higher-quality but surface-level image.
Connecting to the issue of surveillance, I used OpenCV, the open-source computer vision library used in facial recognition and surveillance cameras, to apply the same CA logic from Automata I. The work shows how image filters algorithmically reduce the complexity of an image to create a different reading of the image to compute on, in this case to read motion, using a filter called background subtraction.
Applied to a video of a chameleon, the analogy being that the stillness of the animal is its camouflage: it reveals itself to the algorithm through its movement.
An interactive branching-narrative experience that explores cinematic aesthetics and decision-making.
A user navigates a narrative with two characters about artificial intelligence systems and emotional connection. The filmed scenes at times connect to and diverge from the narrative, depending on the decisions taken.
Whilst the computation is not foregrounded, it is reflexively alluded to in the narrative; I was thinking about branching path structures and decision-making.
This connects to later work in visualizing algorithms.
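A branching path structure of this kind can be sketched as a scene graph traversed by a sequence of choices. The scene names and choices below are hypothetical placeholders, not the work's actual script.

```python
# A minimal sketch of a branching narrative: scenes are nodes, and each
# choice maps to the next scene. A playthrough is a path through the graph.

scenes = {
    "opening":    {"trust the AI": "connection", "question it": "divergence"},
    "connection": {"continue": "ending_a"},
    "divergence": {"continue": "ending_b"},
    "ending_a":   {},
    "ending_b":   {},
}

def play(start, choices):
    """Follow a sequence of choices through the scene graph; return the path."""
    path = [start]
    scene = start
    for choice in choices:
        scene = scenes[scene][choice]
        path.append(scene)
    return path

print(play("opening", ["trust the AI", "continue"]))
# ['opening', 'connection', 'ending_a']
```

Structurally this is the same object as a decision tree's forks, which is the connection to the later work in visualizing algorithms.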