Artificial Us is an interactive installation and research project that challenges how generative AI is used in the arts. While many creators use AI to produce finished works that often feel generic or detached, this project takes a different approach: treating AI as raw material rather than the finished product. By using AI critically to tell a story of misrepresentation and extractivism, Artificial Us confronts the biases embedded in mainstream generative AI models and the systems that create them.
The project focuses on Latin America, a region often misrepresented by generative AI. Using photos from Chile and Argentina, custom LoRA (Low-Rank Adaptation) models were trained for Stable Diffusion to generate images that reflect the textures, colors, and essence of "home."
Buenos Aires, Argentina (2024). Ranco Lake, Chile (2024).
To train the model, a curated dataset of photos from Chile and Argentina was collected, ranging from natural landscapes to snapshots of daily life. The only requirement for inclusion was that the images fit the subjective criteria of “feels like home.” Before fine-tuning the model, generative AI outputs for prompts related to daily life in Chile or Argentina were generic and lacked any recognizable elements that could identify the location. After training the LoRA models, the outputs began to reflect attributes unique to these places, capturing details that evoke a sense of familiarity and belonging.
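The curation step described above can be sketched in code. This is a hypothetical illustration, not the project's actual tooling: it pairs each curated photo with a caption in the metadata format commonly used for LoRA fine-tuning datasets (one JSON object per image with `file_name` and `text` fields). The trigger token `hogar_style` and the file names are invented for the example.

```python
import json

# Assumed trigger token that the LoRA would learn to associate with the
# "feels like home" aesthetic; not a name from the actual project.
TRIGGER = "hogar_style"

def build_metadata(captions: dict[str, str]) -> list[dict]:
    """Build metadata rows ({"file_name", "text"}) for a LoRA training set.

    Each caption is prefixed with the trigger token so prompts containing
    that token can later invoke the learned style.
    """
    return [
        {"file_name": name, "text": f"{TRIGGER}, {caption}"}
        for name, caption in captions.items()
    ]

def to_jsonl(rows: list[dict]) -> str:
    """Serialize the rows as JSON Lines, one object per line."""
    return "\n".join(json.dumps(row, ensure_ascii=False) for row in rows)

# Hypothetical entries standing in for the curated Chile/Argentina photos.
captions = {
    "buenos_aires_street.jpg": "a quiet street in Buenos Aires at dusk",
    "ranco_lake.jpg": "Ranco Lake shoreline under soft morning light",
}
rows = build_metadata(captions)
jsonl = to_jsonl(rows)
```

The point of the trigger token is that, after fine-tuning, prompts with and without it can be compared directly, which is how the before/after contrast shown below becomes measurable.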
Images generated with base Stable Diffusion model. Images generated with custom LoRA model.
These AI-generated images are not the final product but instead serve as material to build immersive 3D spaces. Depth maps generated with Depth Anything V2 are used to transform the images into navigable environments, allowing participants to explore AI-generated interpretations of Latin America in real time.
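The conversion from a flat image to a navigable space hinges on unprojecting each pixel using its estimated depth. The sketch below is a minimal illustration under my own assumptions (a simple pinhole camera model, an arbitrary field of view), not the project's actual pipeline; the depth grid stands in for the per-pixel output of Depth Anything V2.

```python
import math

def depth_to_points(depth: list[list[float]], fov_deg: float = 60.0) -> list[tuple]:
    """Unproject a 2D grid of depth values into (x, y, z) camera-space points.

    depth: rows of per-pixel depth values (placeholder for a model's output).
    fov_deg: assumed horizontal field of view of the virtual camera.
    """
    h, w = len(depth), len(depth[0])
    # Focal length in pixels, derived from the horizontal field of view.
    f = (w / 2) / math.tan(math.radians(fov_deg) / 2)
    cx, cy = w / 2, h / 2  # principal point at the image center
    points = []
    for v in range(h):
        for u in range(w):
            z = depth[v][u]
            # Standard pinhole unprojection: pixel offset scaled by depth.
            points.append(((u - cx) * z / f, (v - cy) * z / f, z))
    return points

# Tiny 2x2 depth map: the left column is nearer, the right column farther.
cloud = depth_to_points([[1.0, 2.0], [1.0, 2.0]])
```

In practice the resulting points (or a mesh displaced by the same values) are what the viewer moves through in real time; the depth map decides how far apart the image's surfaces sit.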
AI-generated image. Depth map.

The final piece, running in TouchDesigner with custom Python scripts, incorporates Mediapipe Face Landmarker to track the position of each viewer's face. This interactivity makes the entire scene react dynamically to the viewer's movements, creating a more immersive and personal experience.
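The face-to-scene mapping can be sketched as follows. This is a hedged illustration with assumed names and scale factors, not the installation's actual script: Mediapipe Face Landmarker reports landmark coordinates normalized to [0, 1] across the camera frame, and re-centering that position yields a signed offset a TouchDesigner camera can consume as parallax.

```python
def face_to_parallax(face_x: float, face_y: float, strength: float = 0.5) -> tuple:
    """Map a normalized face position to a signed (x, y) camera offset.

    face_x, face_y: face position in the webcam frame, normalized to [0, 1]
                    (the convention Mediapipe uses for landmarks).
    strength: assumed scale factor for how far the virtual camera shifts.
    """
    # Re-center [0, 1] to [-0.5, 0.5], then scale to [-strength, strength].
    # x is mirrored so the scene shifts opposite the viewer, which reads as
    # looking "around" objects in the 3D space.
    dx = -(face_x - 0.5) * 2 * strength
    dy = (face_y - 0.5) * 2 * strength
    return dx, dy

# A viewer standing to the right of the frame center shifts the camera left.
offset = face_to_parallax(0.75, 0.5)
```

In TouchDesigner this kind of function would typically run per frame in a Script or Execute DAT, writing the offset into the camera's translate parameters.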
Artificial Us is being developed in collaboration with Tiago Aragona (Argentina), as part of our MFA thesis at Parsons School of Design.