Formulated as a gedankenexperiment in 2021, with implementation beginning in 2022, Transfigurations of G-code is a project in which two machine learning frameworks replace the slicer's deterministic algorithm, foregrounding the figural as a generative force in the translation. What seemed to be the strongest side of G-code, its relentlessly rigid translation into the signified, became its weakest spot with the arrival of stochastic parrots in the form of large language models, such as the now ubiquitous GPT. The representational translation of the slicer can be replaced by grammatical permutations generated by a GPT-style language model, which become paths traversed by reinforcement learning agents in their search for expressive modes of translation, or the figural. This would apply the “linguistic turn” to G-code, analysing the relationship between the language, its users, and the environment. Algorithmic agents could explore the vast space of possibilities on a far larger scale than humans could. Thinking back to the dancer translating movement via the motor cortex, it becomes evident that the translation does not simply follow an optimised model of the shortest or most economical movement, but that the movement embodies more, carrying power and expression with it.
Transfigurations of G-code consists of three main components: reinforcement learning (RL) agents, a G-code-generating large language model (LLM), and a simulated environment. Two open-source frameworks are combined to simulate the 3D printing environment. The first, Gazebo, is responsible for the physical simulation of the kinematics and dynamics of the printer bed and print head movement. The second, OpenFOAM, uses computational fluid dynamics (CFD) to simulate the extrusion of plastic, its cooling behaviour, and its adhesion to previous layers. A pre-trained LLM (GPT-J-6B), fine-tuned on a curated dataset of G-code, is responsible for generating new G-code based on prompts from the RL agents. The RL agents send the newly generated G-code into the simulated environment as an action and receive a reward (based on print quality, structural integrity, or material efficiency) along with their new state in the environment.
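A minimal sketch of this agent–LLM–environment loop, in Python, might look as follows. The names PrintSimulation, generate_gcode, and reward, the reward weights, and the stubbed simulation are illustrative assumptions rather than the project's implementation; in the actual system, generate_gcode would prompt the fine-tuned GPT-J-6B model and PrintSimulation would wrap the coupled Gazebo and OpenFOAM simulation.

```python
# Sketch of the agent / LLM / environment loop described above.
# All class and function names here are hypothetical placeholders.

from dataclasses import dataclass
import random


@dataclass
class SimResult:
    print_quality: float         # e.g. deviation from the target geometry (0..1)
    structural_integrity: float  # e.g. simulated layer adhesion (0..1)
    material_efficiency: float   # e.g. extruded volume vs. target volume (0..1)


class PrintSimulation:
    """Stand-in for the coupled Gazebo (kinematics/dynamics) and
    OpenFOAM (extrusion CFD) simulation; here it only returns random scores."""

    def run(self, gcode: str) -> SimResult:
        return SimResult(random.random(), random.random(), random.random())


def reward(result: SimResult, weights=(0.5, 0.3, 0.2)) -> float:
    # Weighted combination of the three criteria named in the text;
    # the weights are illustrative assumptions.
    return (weights[0] * result.print_quality
            + weights[1] * result.structural_integrity
            + weights[2] * result.material_efficiency)


def generate_gcode(prompt: str) -> str:
    # In the project, a GPT-J-6B model fine-tuned on curated G-code would be
    # prompted here; this stub returns a fixed toy fragment instead.
    return "G1 X10.0 Y10.0 E0.4 F1800\nG1 X20.0 Y10.0 E0.8 F1800"


def training_episode(env: PrintSimulation, state: str, steps: int = 10):
    for _ in range(steps):
        prompt = f"; state: {state}\n; continue print:\n"
        action = generate_gcode(prompt)   # action = newly generated G-code
        result = env.run(action)          # simulate the print step
        r = reward(result)                # scalar feedback for the agent
        state = f"last reward {r:.3f}"    # simplified state update
        yield action, r


if __name__ == "__main__":
    env = PrintSimulation()
    for action, r in training_episode(env, state="empty bed"):
        print(f"reward={r:.3f} for:\n{action}\n")
```

The design choice the sketch tries to make visible is that the G-code itself is the agent's action: the language model proposes a fragment of the printing language, the simulation judges its physical consequences, and the reward feeds that judgement back into the search for expressive, rather than merely economical, translations.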