Project 03


The glitch becomes a script:
AI · Visual Experiment · Technical Image




This practice-based project investigates how AI-generated images become believable even when they are unstable, inaccurate, or clearly fabricated. I began from a simple observation in my own use of AI systems: hallucinated outputs can still feel like “signs”. They reassure, persuade, and invite interpretation, not because they are true, but because they look coherent and answer a need for meaning.



Space for projection
Credibility without evidence
Validation and reassurance



How do AI image systems produce credibility through coherence and repetition, and how does this credibility invite projection and reliance even when reference is unstable?




My method treats repetition, error, and variation as evidence rather than noise. To generate a more lifelike jellyfish, I keep adjusting the prompt details, testing how image credibility is constructed. The incomplete face is a trace of the system’s uncertainty as it combines my prompt with reference images. This hallucinated trace leaves room for emotional and psychological projection, so meaning is completed in the act of looking. In this sense, these images do not simply depict hallucination; they show how it works at a system level, prompting viewers to ask: am I seeing a faithfully generated image, or an error assembled through systemic drift?


I generate many experimental images because repetition, error, and variation are the evidence. The work studies how credibility and belief are built across a feedback loop, not within a single image.





Feedback Loop Test: Prompt → Image → 3D Model → Image






When I feed the 3D model back into the AI, what I get is not a more accurate copy but a kind of “drifting coherence”. The system rewrites the object through statistical coherence, and each iteration introduces new deviations and artefacts. It reveals that what we call a “credible” image is often built on continuous fabrication rather than a stable source.

Drifting coherence: images remain stylistically and materially plausible while identity and reference shift across iterations.
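The loop can be sketched as a toy numerical simulation. To be clear, this is not the actual generative pipeline: the feature vector, the `regenerate` step, and the drift parameter are invented stand-ins. The sketch only illustrates the idea that each pass can remain “stylistically plausible” (here, unit-norm) while its distance from the original reference accumulates.

```python
import math
import random

def regenerate(features, drift=0.05, seed=None):
    """One pass through the hypothetical loop: re-render the object with
    small statistical deviations, then renormalize so the output stays
    'plausible' (unit norm) even as its content shifts."""
    rng = random.Random(seed)
    noisy = [f + rng.gauss(0, drift) for f in features]
    norm = math.sqrt(sum(v * v for v in noisy))
    return [v / norm for v in noisy]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Stand-in for the first image's features: a fixed point on the unit sphere.
original = [1.0] + [0.0] * 15
current = original
drift_over_time = []
for step in range(30):
    current = regenerate(current, seed=step)
    drift_over_time.append(distance(original, current))

# Every iteration is exactly unit-norm ("coherent"), yet the distance
# from the original reference is never zero and tends to accumulate.
```

The design point of the sketch is the renormalization: nothing anchors each output to the original, only to the previous step, so coherence is maintained locally while identity drifts globally.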



My practice does not aim to prove how viewers feel. Instead, it sets up a viewing condition: when an image stays consistent in overall style but keeps drifting in the details, meaning is more likely to be filled in through looking and comparison. To do this, I work with a series of AI-generated surreal images and videos, and I deliberately keep the failures, repetitions, and glitches. I generate, select, and re-sequence the outputs again and again, so the “errors” are not hidden but become something you can read, like a script.

The project is less about proving why people trust AI and more about showing how a sense of credibility gets produced. The gap between a coherent surface and unstable details creates space for interpretation, where meaning is temporarily stitched together. For me, this becomes a kind of shared dreaming: the machine keeps offering fragments and a surface that looks real, and meaning is pieced together each time we view and compare.














