Research & Writing

Blog

Dispatches from the edge of chaos — on nonlinear dynamics, AI, emergence, and the mathematics of complex systems.

Transformer

GPT-4 & Gemini: Architecture & How They Work

GPT-4 and Gemini represent the frontier of large language models—massive multimodal systems capable of processing text, images, audio, and video while demonstrating near-human performance across…

2 min read
Transformer

ImageBind: Architecture & How It Works

ImageBind is Meta's multimodal AI model that learns a joint embedding space across six modalities—images, text, audio, depth, thermal, and IMU data—using…

2 min read
Transformer

Flamingo, LLaVA & InternVL: Architecture & How They Work

Flamingo, LLaVA, and InternVL represent the evolution of vision-language models that combine pretrained vision encoders with large language models, enabling conversational AI that can see and…

2 min read
Transformer

CLIP: Architecture & How It Works

CLIP (Contrastive Language-Image Pre-training) learns to connect images and text by training on 400 million image-text pairs from the internet, enabling zero-shot visual classification and serving…

2 min read
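The zero-shot classification idea from the CLIP entry above can be sketched in a few lines: encode an image and a set of text prompts into a shared embedding space, then pick the label whose text embedding is most similar to the image embedding. The sketch below uses random NumPy vectors as stand-ins for the real encoder outputs, so only the similarity-and-softmax step is illustrated, not actual CLIP encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # Project embeddings onto the unit sphere so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-in text embeddings for prompts like "a photo of a {cat, dog, car}";
# in CLIP these would come from the text encoder.
text_emb = normalize(rng.standard_normal((3, 4)))
# Stand-in image embedding; in CLIP this would come from the vision encoder.
img_emb = normalize(rng.standard_normal((1, 4)))

# Zero-shot classification: cosine similarity between the image and each label prompt.
logits = 100.0 * (img_emb @ text_emb.T)          # scaled similarities, shape (1, 3)
# Numerically stable softmax over the label axis.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(probs.argmax())                        # index of the best-matching prompt
```

The temperature-like scale factor (100 here) mirrors CLIP's learned logit scale, which sharpens the softmax over candidate labels; the class set is defined purely by the text prompts, which is what makes the classifier "zero-shot".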