Internet image generator archive

The beautiful bad old days of AI images

From melted MNIST digits to Nano Banana 2: a curated timeline of models, artifact-heavy outputs, and the most direct places to still try them.

Run links prioritize promptable browser demos. Some historic models are closed, sleeping, unofficial, or replaced by current products; each entry says what kind of link it is.

Digits, bedrooms, fake faces

GAN basement

2015
Early DCGAN bedroom samples with smeared objects and broken room geometry.
Convolutional GAN · Interactive HF Space

DCGAN

Almost a bedroom, if you squint.

Radford, Metz, and Chintala gave GANs a stable convolutional recipe and a recognizable generated-image texture.

Vibe: Symmetric smears, room blobs, glossy training-set averages.

Demo context: Community Space; the interface is community-built, not anything shipped with the original paper.
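To make the "stable convolutional recipe" concrete, here is a minimal DCGAN-style generator in PyTorch. It is a sketch of the general pattern (transposed convolutions, BatchNorm, ReLU, tanh output), not the exact configuration from the paper; the 64x64 output and layer widths are illustrative.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN-style generator: project a noise vector up to a
    64x64 RGB image. Layer sizes are illustrative, not the paper's exact config."""
    def __init__(self, latent_dim=100, feature_maps=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feature_maps*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # -> (feature_maps*4) x 8 x 8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # -> (feature_maps*2) x 16 x 16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # -> feature_maps x 32 x 32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # -> 3 x 64 x 64, tanh maps pixels to [-1, 1]
            nn.ConvTranspose2d(feature_maps, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Untrained weights already give the smeared-texture family; training on
# LSUN bedrooms is what turns the blobs into almost-bedrooms.
z = torch.randn(1, 100, 1, 1)
sample = DCGANGenerator()(z)  # shape: (1, 3, 64, 64)
```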

2018
BigGAN failure sample of a tennis-ball-colored dog-like object.
Large class-conditional GAN · Interactive HF Space

BigGAN

Sharper ImageNet GANs still produced cursed plush-object hybrids.

DeepMind showed that scaling GANs could produce much sharper and more diverse ImageNet samples.

Vibe: Hyperreal animals and objects with suspicious local anatomy.

Demo context: Community Space; class-based prompt/play controls vary.
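If the Space is asleep, the community pytorch-pretrained-biggan port runs the same class-conditional sampling locally. A minimal sketch following that package's published usage; the package name, checkpoint ID, and helper functions are taken from its README and may have drifted.

```python
import torch
from pytorch_pretrained_biggan import (  # pip install pytorch-pretrained-biggan
    BigGAN, one_hot_from_names, truncated_noise_sample)

# Community port of DeepMind's BigGAN; pretrained weights download on first use.
model = BigGAN.from_pretrained("biggan-deep-256")

truncation = 0.4  # lower = safer, blander samples; higher invites the weird anatomy
noise = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))
class_vec = torch.from_numpy(one_hot_from_names(["golden retriever"], batch_size=1))

with torch.no_grad():
    # Class-conditional sampling: one ImageNet class, one truncated noise vector.
    images = model(noise, class_vec, truncation)  # (1, 3, 256, 256), values in [-1, 1]
```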

2019
StyleGAN synthetic faces with crops highlighting earrings, mouths, and fused background details.
Style-based GAN · One-click face generator

StyleGAN

Convincing until you looked at ears, jewelry, hair, and backgrounds.

NVIDIA made latent-space controls, face interpolation, and synthetic portraits central to AI imagery.

Vibe: Almost-photo faces with melting earrings and impossible backgrounds.

Demo context: One-click face refresh; not a prompt interface.
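Those face-interpolation videos are just walks between latent codes. A minimal sketch of spherical interpolation (slerp) over 512-dimensional latents, the dimensionality StyleGAN uses; the generator itself is left out because the official weights run through NVIDIA's own tooling.

```python
import torch

def slerp(z0: torch.Tensor, z1: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical interpolation between two latent vectors; sweeping t from 0 to 1
    is what produces the classic face-morph sequences."""
    z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos((z0n * z1n).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < 1e-6:  # nearly parallel latents: fall back to plain lerp
        return (1.0 - t) * z0 + t * z1
    return (torch.sin((1.0 - t) * omega) / so) * z0 + (torch.sin(t * omega) / so) * z1

# Two random latent codes and 30 in-between frames.
z_a, z_b = torch.randn(512), torch.randn(512)
frames = [slerp(z_a, z_b, float(t)) for t in torch.linspace(0, 1, 30)]

# Feeding each frame to a pretrained StyleGAN generator (not included here)
# renders the morph; the melting earrings live in the in-between frames.
```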

Sketches, cats, zebras

Translation toys

2017
CycleGAN failure cases where domain translation makes minimal or distorted changes.
Unpaired translation · Interactive HF Space

CycleGAN

The failure grid is the point: cats half-become dogs, seasons drift, labels get confused.

Cycle consistency removed the need for paired examples and made domain transfer a core generative pattern.

Vibe: Texture transfer that sometimes forgets object boundaries.

Demo context: Community Space; may be asleep or slower than newer demos.
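The cycle-consistency trick is compact enough to show directly: translate into the other domain and back, then penalize the round trip for changing the image. A minimal PyTorch sketch, with identity functions standing in for the two trained generators:

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_, real_x, real_y, lam=10.0):
    """CycleGAN-style round-trip penalty.
    G maps domain X -> Y (say, horse -> zebra) and F_ maps Y -> X;
    the L1 terms punish translations that cannot be undone."""
    forward_cycle = F.l1_loss(F_(G(real_x)), real_x)   # x -> y -> x
    backward_cycle = F.l1_loss(G(F_(real_y)), real_y)  # y -> x -> y
    return lam * (forward_cycle + backward_cycle)

# Identity "generators" keep the sketch runnable; the real ones are
# image-to-image networks trained jointly with two discriminators.
G = F_ = lambda img: img
x = torch.rand(1, 3, 256, 256)
y = torch.rand(1, 3, 256, 256)
print(cycle_consistency_loss(G, F_, x, y))  # 0.0 for the identity maps
```

The adversarial losses (not shown) push each translation to look like its target domain; cycle consistency only guarantees the trip is reversible, which is why texture sometimes transfers while geometry stays put.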

CLIP, DALL·E, haunted grids

Prompt weirdness

2021
A CLIP-guided generative image with painterly surreal texture.
CLIP-guided generation · Interactive HF Space

VQGAN + CLIP

The AI art Twitter notebook look: ornate, crunchy, symbolic, over-optimized.

Artists combined CLIP with generators and made promptcraft public before diffusion dominated.

Vibe: Maximal texture, repeated motifs, dream-logic composition.

Demo context: Public HF Space; old CLIP-guided workflows can take time.
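Under the notebook aesthetic was a plain optimization loop: decode a latent into an image, score it against the prompt with CLIP, and nudge the latent to raise the score. A stripped-down sketch of that loop; vqgan_decode here is a hypothetical stand-in for a real pretrained VQGAN decoder, and the cutout/augmentation tricks that gave the look its crunch are omitted.

```python
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep weights fp32 so gradients reach the latent cleanly

# Hypothetical stand-in for a pretrained VQGAN decoder: any differentiable map
# from a latent tensor to an RGB image in [0, 1] keeps the sketch runnable.
def vqgan_decode(latent):
    return torch.sigmoid(latent)

prompt = "a cathedral made of stained glass, matte painting"
with torch.no_grad():
    tokens = clip.tokenize([prompt]).to(device)
    text_feat = F.normalize(model.encode_text(tokens).float(), dim=-1)

latent = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    image = vqgan_decode(latent)                           # ViT-B/32 expects 224x224 input
    image_feat = F.normalize(model.encode_image(image).float(), dim=-1)
    loss = -(image_feat * text_feat).sum()                 # maximize CLIP similarity
    opt.zero_grad()
    loss.backward()
    opt.step()

# vqgan_decode(latent) is now the over-optimized, CLIP-pleasing image,
# repeated motifs and all, since nothing stops CLIP from rewarding them.
```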

SD, DALL·E 2, FLUX

Diffusion opens up

2022
DALL·E 2 example of an astronaut riding a horse.
Diffusion + CLIP priors · Replaced by newer OpenAI image models

DALL·E 2

The astronaut-on-a-horse moment where photorealism became the headline.

It pushed text-to-image into a mainstream product race and made inpainting/outpainting feel usable.

Vibe: Clean surreal photography with hand, text, and physics failures around the edges.

Demo context: DALL·E 2 itself is retired from OpenAI's main product flow; ChatGPT is the current OpenAI route.

2023
DALL·E 3 example of an avocado in therapy with legible text.
Prompt-following image model · Runnable in ChatGPT

DALL·E 3

Prompt adherence and readable-ish text started displacing the old glitch aesthetic.

Long natural-language prompts worked better, and image generation moved into chat.

Vibe: Crisp editorial illustration, fewer weird edges, cleaner composition.

Demo context: Current product route; requires access to ChatGPT image generation.
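Chat is the headline route, but the same model has also been reachable through the OpenAI images API. A minimal sketch assuming a recent openai Python SDK, an API key in the environment, and that the dall-e-3 model name is still accepted; all three are product details that change.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY to be set

client = OpenAI()

# One prompt in, one hosted image URL out; size and quality options vary by model.
result = client.images.generate(
    model="dall-e-3",
    prompt="an avocado sitting in a therapist's chair, editorial illustration",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)
```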

2024
FLUX.1 generated sample collage.
Rectified-flow transformer · Interactive HF Space

FLUX.1

Open models caught up on anatomy, typography, and prompt obedience.

Black Forest Labs reset expectations for open models after the first Stable Diffusion wave.

Vibe: Sharper, more commercial, no longer obviously "old SD".

Demo context: Official HF Space; fast open model demo.
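Beyond the Space, the open weights run locally through diffusers. A minimal sketch assuming a recent diffusers release, the FLUX.1 [schnell] checkpoint that Black Forest Labs publishes on the Hugging Face Hub, and enough VRAM (or CPU offload) to hold it.

```python
import torch
from diffusers import FluxPipeline  # pip install diffusers transformers accelerate

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for fitting on smaller GPUs

image = pipe(
    "a hand holding a sign that says OPEN WEIGHTS, studio photo",
    num_inference_steps=4,   # the schnell variant is distilled for very few steps
    guidance_scale=0.0,      # schnell is meant to run without classifier-free guidance
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_sample.png")
```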

Chat, editing, Nano Banana

Product era

2025
OpenAI whiteboard image generation announcement hero.
Native multimodal generation · Available through OpenAI products

GPT-4o image generation

Image models became editing partners with better visual memory and text rendering.

OpenAI moved from a separate DALL·E lineage toward image generation inside a native multimodal model.

Vibe: Useful, polished, less nostalgic, harder to spot at a glance.

Demo context: Current product route; availability depends on account access.

2025
Google Gemini image model announcement artwork.
Fast image editing · Succeeded by Nano Banana 2

Nano Banana / Gemini 2.5 Flash Image

Keep the person, change the scene, iterate fast.

Google turned subject-preserving edits into mainstream Gemini behavior with a memorable nickname.

Vibe: Fast literal edits with less identity drift than earlier tools.

Demo context: Current Gemini product route; model availability changes by region/account.
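The keep-the-person edit pattern maps onto a single call in the google-genai SDK: send the source photo plus an instruction and read back the returned image bytes. A minimal sketch; the model ID below is an assumption (it has shifted between preview and stable names), and an API key is expected in the environment.

```python
from PIL import Image
from google import genai  # pip install google-genai; expects a Gemini API key to be set

client = genai.Client()

source = Image.open("person.jpg")
response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed ID; check the current model list
    contents=[
        source,
        "Keep this person exactly as they are, but place them on a foggy pier at dawn.",
    ],
)

# Responses can mix text and image parts; save whichever parts carry image bytes.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
```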