Research & Writing

Blog

Dispatches from the edge of chaos — on nonlinear dynamics, AI, emergence, and the mathematics of complex systems.

Transformer

ImageGPT: Architecture & How It Works

ImageGPT (iGPT) applies the autoregressive GPT architecture directly to image generation by treating images as sequences of pixels or color clusters, demonstrating that language model approaches transfer to the visual domain without image-specific architectural priors.

2 min read
Generative

VAE (Variational Autoencoder): Architecture & How It Works

The Variational Autoencoder (VAE) is a generative model that learns a continuous latent representation of data by combining an autoencoder architecture with variational Bayesian inference, enabling both faithful reconstruction and sampling of new data from the learned latent space.

2 min read
GAN

StyleGAN: Architecture & How It Works

StyleGAN is a generative architecture that produces photorealistic images by borrowing ideas from neural style transfer, using a mapping network and adaptive instance normalization to control visual attributes at each scale of synthesis.

2 min read
Generative

Neural ODEs: Architecture & How They Work

Neural Ordinary Differential Equations (Neural ODEs) replace discrete layer-by-layer transformations with continuous dynamics defined by neural networks, treating depth as a continuous variable and computing outputs with a black-box ODE solver.

2 min read
Diffusion

Consistency Models: Architecture & How They Work

Consistency Models are a family of generative models that enable high-quality single-step image generation by learning to map any point along a diffusion trajectory directly back to its clean-data origin.

2 min read
Diffusion

Flow Matching: Architecture & How It Works

Flow Matching is a generative modeling framework that learns continuous normalizing flows by regressing onto simple conditional vector fields, providing a simpler and more flexible alternative to simulation-based flow training and score-based diffusion.

2 min read
Vision

VQ-VAE & VQGAN: Architecture & How They Work

VQ-VAE (Vector Quantized Variational Autoencoder) and VQGAN (Vector Quantized GAN) learn discrete codebook representations of images, enabling powerful image generation by converting the continuous pixel space into a compact grid of discrete tokens that an autoregressive prior can model.

2 min read