Research & Writing

Blog

Dispatches from the edge of chaos — on nonlinear dynamics, AI, emergence, and the mathematics of complex systems.

Transformer

ImageGPT: Architecture & How It Works

ImageGPT (iGPT) applies the autoregressive GPT architecture directly to image generation by treating images as sequences of pixels or color clusters, demonstrating that language-modeling techniques can transfer directly to vision.

2 min read
CNN

MobileNet & EfficientNet: Architecture & How They Work

MobileNet and EfficientNet are efficiency-focused CNN architectures designed for deployment on mobile devices and edge hardware, achieving strong accuracy with dramatically fewer parameters and less computation than standard large CNNs.

2 min read
CNN

Standard CNN: Architecture & How It Works

The Convolutional Neural Network (CNN) is the foundational architecture for computer vision, using learnable spatial filters to automatically extract hierarchical visual features from images, from low-level edges and textures to high-level object parts.

2 min read
Vision

VQ-VAE & VQGAN: Architecture & How They Work

VQ-VAE (Vector Quantized Variational Autoencoder) and VQGAN (Vector Quantized GAN) learn discrete codebook representations of images, enabling powerful image generation by converting the continuous pixel space into sequences of discrete tokens.

2 min read
CNN

U-Net: Architecture & How It Works

U-Net is an encoder-decoder architecture with skip connections designed for biomedical image segmentation, producing pixel-precise segmentation masks by combining high-level semantic features with fine-grained spatial details.

2 min read
CNN

ResNet & DenseNet: Architecture & How They Work

ResNet (Residual Networks) and DenseNet (Densely Connected Networks) are landmark CNN architectures that solved the degradation problem in deep networks through skip connections, enabling the training of networks hundreds of layers deep.

2 min read
Transformer

DINO & DINOv2: Architecture & How They Work

DINO (Self-Distillation with No Labels) and DINOv2 are self-supervised learning methods that train Vision Transformers to learn powerful visual features without any labeled data, producing representations that transfer well to downstream tasks.

2 min read
Transformer

DeiT & MAE: Architecture & How They Work

DeiT (Data-efficient Image Transformers) and MAE (Masked Autoencoders) are two breakthrough approaches to training Vision Transformers effectively: DeiT through advanced training strategies and distillation, MAE through self-supervised reconstruction of masked image patches.

1 min read