Formulated as a gedankenexperiment in 2021, with implementation beginning in 2022, Transfigurations of G-code is a project in which two machine learning frameworks replace the slicer as a deterministic algorithm, foregrounding the figural as a generative force in the translation. What seemed to be G-code’s strongest side, its relentlessly rigid translation into the signified, became its weakest spot with the arrival of stochastic parrots in the form of large language models, such as the now ubiquitously known GPT. The representational translation of the slicer can be replaced by grammatical permutations generated by the machine learning framework of GPT, traversed as paths by reinforcement learning agents in their search for expressive modes of translation, for the figural. This would apply the “linguistic turn” to G-code, analysing the relationship between the language, its users, and the environment. Algorithmic agents can explore the vast space of developmental possibilities on a far larger scale than humans could. Thinking back to the dancer translating movement via the motor cortex, it becomes evident that the translation does not simply follow an optimised model with the shortest or most economical movement: the movement embodies more, carrying power and expression with it.
Transfigurations of G-code consists of three main components: reinforcement learning (RL) agents, a G-code-generating large language model (LLM), and a simulated environment. Two open-source frameworks are combined to simulate the environment of 3D printing faithfully. The first, Gazebo, is responsible for the physical simulation of the kinematics and dynamics of the printer bed and the print head. The second, OpenFOAM, uses computational fluid dynamics (CFD) to simulate the extrusion of the plastic, its cooling properties, and its adhesion to the previous layers. A pre-trained LLM (GPT-J-6B), fine-tuned on a curated dataset of G-code, is responsible for generating new G-code based on prompts from the RL agents. The RL agents send the newly generated G-code into the simulated environment as an action and receive a reward (based on print quality, structural integrity, or material efficiency) together with their new state in the environment.
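One step of the loop described above can be sketched in a few lines; the class names, reward weights, and metric values here are illustrative assumptions, not the project’s actual implementation:

```python
class GCodeLLM:
    """Stand-in for the fine-tuned GPT-J-6B G-code generator."""
    def generate(self, prompt: str) -> str:
        # A real model would continue the prompt; here we emit a fixed move.
        return "G1 X10.0 Y10.0 E0.4 F1500"

class PrintSimulator:
    """Stand-in for the Gazebo + OpenFOAM simulated environment."""
    def run(self, gcode: str) -> dict:
        # A real simulator would evaluate kinematics, extrusion, and adhesion.
        return {"quality": 0.8, "integrity": 0.7, "efficiency": 0.9}

def reward(metrics: dict, weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted reward over print quality, structural integrity, efficiency."""
    return (weights[0] * metrics["quality"]
            + weights[1] * metrics["integrity"]
            + weights[2] * metrics["efficiency"])

def agent_step(llm: GCodeLLM, sim: PrintSimulator, state: str):
    """One RL step: prompt the LLM, act in the simulator, collect the reward."""
    action = llm.generate(state)   # new G-code as the agent's action
    metrics = sim.run(action)      # simulated print outcome
    return action, reward(metrics)
```

A real reward would be computed from the simulation’s physical measurements; the weighted sum merely shows how the three criteria named above could be combined into one scalar signal.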
In addition to the inner connections between the RL agents, the LLM generator, and the simulated environment, three feedback loops ensure the constant fine-tuning and improvement of the whole system. The simulated environment delivers feedback to both the RL agents (the first feedback loop) and the LLM generator (the second feedback loop) to improve their performance. The third feedback loop, between the RL agents and the LLM generator, enables iterative improvement and a fine balance between exploration and exploitation of the environment. This feedback loop transforms the LLM from a static G-code generator into a dynamic participant in the 3D printing process, learning and adapting based on real-world outcomes.
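The balance between exploration and exploitation in the third feedback loop might be realised with something as simple as an epsilon-greedy rule; the prompt names, reward values, and epsilon below are hypothetical, chosen only to illustrate the mechanism:

```python
import random

def choose_prompt(prompt_rewards: dict, novel_prompts: list,
                  epsilon: float, rng: random.Random) -> str:
    """With probability epsilon, explore a novel prompt for the LLM generator;
    otherwise exploit the prompt with the highest observed reward so far."""
    if prompt_rewards and rng.random() >= epsilon:
        return max(prompt_rewards, key=prompt_rewards.get)  # exploit
    return rng.choice(novel_prompts)                        # explore

rng = random.Random(0)
history = {"platonic solid": 0.6, "expressive gesture": 0.9}   # observed rewards
candidates = ["luxury car", "random walk"]                     # untried prompts
pick = choose_prompt(history, candidates, epsilon=0.1, rng=rng)
```

A higher epsilon keeps the agents wandering through untried prompts; a lower one consolidates what the simulated prints have already rewarded.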
In the first experiment, GPT-3 was given a G-code sequence and instructed to continue its generation.
Without fine-tuning on a curated set of G-code sequences, and without embodiment in the simulated physical environment, the continuation generated by the LLM went off on a tangent:
The advantage of the curated dataset lies not only in the improvement of the generated G-code, but also in the formation of a style through the choice of the 3D models used to generate the training G-code dataset. In further iterations, this set can be automatically generated based on specific prompts coming from the RL agents (such as Platonic solids, sex toys, luxury cars, and expressive gestures, amongst others).
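A curation step of this kind might, for instance, whitelist well-formed G-code lines and drop everything else; the command list and regular expression below are a crude simplification of real printer dialects, not the project’s actual filter:

```python
import re

# Common motion/temperature commands followed by numeric parameters and an
# optional comment; anything else (prose, hallucinated tokens) is rejected.
GCODE_LINE = re.compile(
    r"^(?:G0|G1|G28|G90|G91|G92|M104|M109|M140|M190)"
    r"(?:\s+[XYZEFSR]-?\d+(?:\.\d+)?)*\s*(?:;.*)?$"
)

def curate(lines: list) -> list:
    """Keep only lines that look like valid G-code or pure comments."""
    kept = []
    for line in lines:
        stripped = line.strip()
        if stripped.startswith(";") or GCODE_LINE.match(stripped):
            kept.append(stripped)
    return kept

sample = [
    "G28            ; home all axes",
    "G1 X10.5 Y20.0 E0.35 F1800",
    "Sure! Here is some G-code for you:",   # LLM chatter, rejected
    "M104 S210",
]
clean = curate(sample)
```

Slicing a chosen set of 3D models and passing the output through such a filter would yield a training corpus whose style is shaped by the models selected, as described above.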
Reinforcement learning agents embodied in the simulated physical environment are free to explore the vast space of GPT’s combinatorial capabilities, not to generate “yet-another” representation, but to be generative of the process. They are free to learn with each translation, to explore this new environment with the freedom of a random walk, yet without needing infinite time to arrive at the same result. The system could produce “ready-mades” that all look alike but result from different generative processes. The constitutive, ana-material difference, situated in Duchamp’s “found” object as the past tense or the always already found, articulates itself here as “founding” in the present participle, in the process of making, and therefore in the ongoing process of becoming. The figural does not interrupt the discourse but generates and regenerates it in the intrinsic nature of its cognition and creativity. Not as serendipity, as in the errors of Synthetic Eros -- Synthetic Errors, but as an intrinsic manifestation of the process.