2026-03-16 · graphics · mathematics

Tracing Light

Building a ray tracer from scratch — from empty screen to glass, mirrors, and shadows in ~500 lines of Python.

Ray-traced scene with glass sphere, metallic spheres, and checkerboard floor

Every pixel in the image above was computed by simulating the physics of light. No game engine, no GPU shader, no graphics library — just Python, vector math, and the laws of optics.

Ray tracing is one of the most elegant algorithms in computer science. The core idea is simple: for each pixel on screen, shoot a ray from the camera into the scene and figure out what color it hits. From that foundation, you can build up shadows, reflections, and even realistic glass — all by following the math of how light actually behaves.

This article walks through how the ray tracer works, building up from the simplest possible version to the full renderer you see above.

At a glance: ~500 lines of Python · 0 external graphics libraries · max ray depth of 6 · 8 CPU cores in parallel.

1. Casting Rays

The fundamental operation: given a point in 3D space (the camera) and a direction (toward a pixel on the screen), does the ray hit anything?

The ray equation

A ray is defined by an origin O and a direction D. Any point along the ray is:

P(t) = O + t · D

where t ≥ 0 is the distance along the ray. Finding intersections means finding the t where the ray hits a surface.
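The equation maps directly to a tiny class. A minimal sketch, assuming a small Vec3 helper (names here are illustrative, not the article's exact listing):

```python
from dataclasses import dataclass
import math

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def __mul__(self, s): return Vec3(self.x * s, self.y * s, self.z * s)
    def dot(self, o): return self.x * o.x + self.y * o.y + self.z * o.z
    def length(self): return math.sqrt(self.dot(self))
    def normalized(self): return self * (1.0 / self.length())

@dataclass
class Ray:
    origin: Vec3
    direction: Vec3
    def at(self, t):
        # P(t) = O + t * D
        return self.origin + self.direction * t
```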

Ray-sphere intersection

A sphere with center C and radius r is the set of points where |P - C|² = r². Substituting the ray equation:

|O + tD - C|² = r²

Expanding this gives a quadratic in t:

a = D · D
b = 2(O - C) · D
c = |O - C|² - r²
t = (-b ± √(b² - 4ac)) / 2a

If the discriminant b² - 4ac < 0, the ray misses. Otherwise, the smaller positive t gives the nearest intersection.

def intersect(self, ray, t_min=0.001):
    oc = ray.origin - self.center
    a = ray.direction.dot(ray.direction)
    half_b = oc.dot(ray.direction)
    c = oc.dot(oc) - self.radius * self.radius
    disc = half_b * half_b - a * c

    if disc < 0:
        return None  # miss

    sqrt_d = sqrt(disc)
    t = (-half_b - sqrt_d) / a       # near root
    if t < t_min:
        t = (-half_b + sqrt_d) / a   # far root: the ray starts inside the sphere
        if t < t_min:
            return None              # both intersections behind the origin

    point = ray.at(t)
    normal = (point - self.center) / self.radius
    return HitRecord(t, point, normal)

This is the half-b optimization: substituting b = 2h simplifies the solution to t = (-h ± √(h² - ac)) / a, saving a multiplication and a division. Falling back to the far root matters once refracted rays start inside a sphere.

2. Lighting the Surface

Finding an intersection gives us a point and a surface normal. Now we need to figure out what color that point is. The Blinn-Phong shading model breaks this into three components:

Ambient — a constant base illumination so nothing is pure black. Models indirect light bouncing around the environment.

Diffuse — light scattered equally in all directions. Brightness depends on the angle between the surface normal and the light direction: max(0, N · L).

Specular — the bright highlight you see on shiny objects. Uses the halfway vector between the light and camera directions, raised to a power that controls sharpness: (N · H)^shininess.

# For each light in the scene:
to_light = (light.position - hit.point).normalized()
to_camera = -ray.direction.normalized()

# Diffuse: how directly the surface faces the light
diff = max(0, normal.dot(to_light))
color += material.color * diff * light.intensity

# Specular: the bright highlight (Blinn-Phong)
halfway = (to_light + to_camera).normalized()
spec = max(0, normal.dot(halfway)) ** material.shininess
color += light.color * spec * light.intensity

The shininess exponent is key: low values (5–20) give broad, matte-looking highlights; high values (200+) give tight, metallic glints.

3. Shadows

Shadows are surprisingly simple. Before computing the lighting for a surface point, cast another ray from the surface toward each light source. If something blocks the path, the point is in shadow.

def is_shadowed(self, point, light_pos):
    to_light = light_pos - point
    dist = to_light.length()
    shadow_ray = Ray(point, to_light / dist)
    hit = self.closest_hit(shadow_ray, 0.001, dist)
    return hit is not None

The 0.001 offset prevents shadow acne — a surface incorrectly shadowing itself due to floating-point imprecision. We also only check for blockers up to the light's distance, so objects behind the light don't cast false shadows.

4. Mirror Reflections

Real mirrors don't absorb light — they bounce it. The reflected ray direction is computed from the incident direction and the surface normal:

R = D - 2(D · N)N
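In code this is a one-liner (a sketch over plain tuples; the renderer's Vec3 version is equivalent):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # R = D - 2 (D · N) N, with n a unit-length normal
    s = 2.0 * dot(d, n)
    return tuple(di - s * ni for di, ni in zip(d, n))
```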

To render a mirror, we recursively trace a new ray in the reflected direction and blend the result with the surface color:

if material.reflectivity > 0:
    reflect_dir = ray.direction.reflect(normal)
    reflect_color = trace(Ray(hit.point, reflect_dir), depth + 1)
    r = material.reflectivity
    color = color * (1 - r) + reflect_color * r

This recursion is where ray tracing gets its power — and its computational cost. A mirror reflecting a mirror reflecting a mirror creates an exponential tree of rays. We cap the recursion depth (6 in our renderer) to keep it finite.

Three spheres: matte red, glass, and mirror, showing reflections and refraction

Left: diffuse only. Right: nearly perfect mirror. Center: glass with refraction (next section).

5. Glass and Refraction

Transparent objects bend light as it passes through them. This is refraction, governed by Snell's law:

n1 sin(θi) = n2 sin(θt)

where n1 and n2 are the refractive indices of the two media (air ≈ 1.0, glass ≈ 1.52), and θ is the angle from the surface normal.

Computing the refracted ray

Given the incident direction, surface normal, and the ratio η = n1/n2:

def refract(incident, normal, eta):
    cos_i = -incident.dot(normal)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection!
    cos_t = sqrt(1.0 - sin2_t)
    return incident * eta + normal * (eta * cos_i - cos_t)

When sin²(θt) > 1, there is no valid refracted ray — all light is reflected back inside the glass. This is total internal reflection, the same physics that makes fiber optics work.

The Fresnel equations

Real glass doesn't just transmit or reflect — it does both, in proportions that depend on the viewing angle. At shallow angles, glass becomes increasingly reflective (look at a window from the side). The Schlick approximation captures this elegantly:

R0 = ((n1 - n2) / (n1 + n2))²
F(θ) = R0 + (1 - R0)(1 - cos θ)^5

At normal incidence, glass reflects about 4% of light. At grazing angles, it reflects nearly 100%. The 5th-power falloff is an empirical fit that closely matches the exact Fresnel equations.
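The approximation is only a few lines (a sketch; the argument order and the default n1 = 1.0 for air are assumptions):

```python
def fresnel_schlick(cos_theta, ior, n1=1.0):
    # R0: reflectance at normal incidence (about 0.04 for air -> glass)
    r0 = ((n1 - ior) / (n1 + ior)) ** 2
    # Schlick's 5th-power falloff toward full reflection at grazing angles
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```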

# Blend reflection and refraction using Fresnel
fr = fresnel_schlick(cos_theta, ior)

reflect_color = trace(reflect_ray, depth + 1)
if refract_ray is None:            # total internal reflection
    glass_color = reflect_color
else:
    refract_color = trace(refract_ray, depth + 1)
    glass_color = reflect_color * fr + refract_color * (1 - fr)

6. The Material Spectrum

By combining these building blocks — diffuse shading, specular highlights, reflectivity, transparency, and index of refraction — we can simulate a wide range of real-world materials:

Five spheres showing the material spectrum from matte to glass

From left to right: pure matte, glossy plastic, metallic, mirror, and glass.

Matte
diffuse: 0.85
specular: 0
reflect: 0
Glossy
diffuse: 0.6
specular: 0.5
reflect: 0.05
Metal
diffuse: 0.3
specular: 0.9
reflect: 0.5
Mirror
diffuse: 0.05
specular: 0.95
reflect: 0.9
Glass
transparency: 0.97
IOR: 1.52
+ Fresnel

The transition from matte to mirror is essentially about how much of the color comes from direct lighting versus recursive ray tracing. A matte sphere gets all its color from diffuse shading. A mirror gets almost all of it from a reflected ray. Everything in between is a blend.

Glass is qualitatively different — it introduces two recursive rays (reflected and refracted), blended by the Fresnel equation. This is why glass spheres are the most expensive objects to render: each surface hit spawns two rays instead of one.
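The material table above could be captured in a small record type (a sketch; field names and preset values are illustrative, not the article's exact API):

```python
from dataclasses import dataclass

@dataclass
class Material:
    color: tuple
    diffuse: float = 0.8
    specular: float = 0.0
    shininess: float = 32.0
    reflectivity: float = 0.0
    transparency: float = 0.0
    ior: float = 1.0          # index of refraction (1.0 = no bending)

# Presets matching the spectrum above
MATTE  = Material((0.8, 0.2, 0.2), diffuse=0.85)
METAL  = Material((0.9, 0.7, 0.3), diffuse=0.3, specular=0.9, reflectivity=0.5)
MIRROR = Material((1.0, 1.0, 1.0), diffuse=0.05, specular=0.95, reflectivity=0.9)
GLASS  = Material((1.0, 1.0, 1.0), transparency=0.97, ior=1.52)
```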

7. Putting It Together

The camera

The camera defines how screen coordinates map to ray directions. We place the camera in 3D space, point it at a target, and compute a coordinate frame (right, up, forward). For each pixel, we convert its position to a screen-space coordinate and combine the basis vectors:

direction = forward + right * (u * fov * aspect) + up * (v * fov)
ray = Ray(camera.position, direction.normalized())
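One way to build that orthonormal (right, up, forward) frame from a position and a look-at target (a sketch over plain tuples; helper names are illustrative):

```python
import math

def normalized(v):
    l = math.sqrt(sum(x * x for x in v))
    return tuple(x / l for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_frame(position, target, world_up=(0.0, 1.0, 0.0)):
    # forward points from the camera toward the target
    forward = normalized(tuple(t - p for t, p in zip(target, position)))
    right = normalized(cross(forward, world_up))
    up = cross(right, forward)   # unit length already: right is perpendicular to forward
    return right, up, forward
```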

Antialiasing

A single ray per pixel produces jagged edges where objects meet the background. The fix: shoot multiple rays per pixel with slight random offsets (jittered supersampling) and average the colors. With 8 samples per pixel, edges become smooth.
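The sampling loop is short (a sketch; trace_fn and make_ray stand in for the renderer's real tracing and camera functions):

```python
import random

def sample_pixel(trace_fn, make_ray, x, y, samples=8):
    # Jittered supersampling: average several randomly offset rays per pixel
    r = g = b = 0.0
    for _ in range(samples):
        px = x + random.random()   # random offset inside the pixel square
        py = y + random.random()
        cr, cg, cb = trace_fn(make_ray(px, py))
        r += cr; g += cg; b += cb
    return (r / samples, g / samples, b / samples)
```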

Tone mapping and gamma

Ray tracing produces physically linear light values that can exceed 1.0 (bright specular highlights, for instance). Before converting to 8-bit color, we apply Reinhard tone mapping to compress the dynamic range, then gamma correction to match how monitors display light:

# Reinhard tone mapping: compress [0, inf) to [0, 1)
c = c / (1.0 + c)
# Gamma correction: encode linear light for display (gamma ~ 2.2)
c = c ** (1.0 / 2.2)
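Wrapped into a per-channel helper (a sketch; the 255.999 factor folds rounding into the int() truncation):

```python
def to_srgb8(c):
    # Reinhard tone map, then gamma-encode, then quantize to one 8-bit channel
    c = c / (1.0 + c)
    c = c ** (1.0 / 2.2)
    return int(255.999 * c)
```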

Parallelization

Each pixel is completely independent — the color of pixel (100, 200) doesn't depend on pixel (101, 200). This makes ray tracing embarrassingly parallel. We distribute rows across all 8 CPU cores using Python's multiprocessing.Pool, achieving near-linear speedup.
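A minimal sketch of that row-parallel pattern (the per-row arithmetic here is a stand-in for real pixel shading):

```python
from multiprocessing import Pool

WIDTH, HEIGHT = 4, 3

def render_row(y):
    # Stand-in for real shading: each row depends only on its own y
    return [(x + y) % 2 for x in range(WIDTH)]

def render_parallel():
    # One task per row; Pool() defaults to one worker per CPU core
    with Pool() as pool:
        return pool.map(render_row, range(HEIGHT))

if __name__ == "__main__":
    rows = render_parallel()
```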

8. Gallery

Here's the full scene again, along with a random sphere field demonstrating the renderer at scale:

Main showcase scene

The showcase scene: glass sphere with Fresnel reflections, metallic red and gold spheres, chrome sphere, matte blue, green glass marble, and a reflective checkerboard floor.

Many random spheres with varied materials

Random sphere field: 40+ spheres with randomized materials (matte, metallic, glass) demonstrating the renderer at scale. Three hero spheres in the center.

9. What Makes This Interesting

Every feature in this renderer maps directly to the physics of real light. Shadows exist because light travels in straight lines. Reflections follow the law of reflection. Glass bends light because electromagnetic waves change speed in different media. Fresnel effects come from the wave nature of light at boundaries.

The entire renderer is about 500 lines of Python — no numpy, no GPU, no external libraries beyond Pillow for the final PNG output. The rendering is parallelized across 8 CPU cores, with each core independently computing rows of pixels.

What's remarkable is how much visual richness emerges from such simple rules. The reflective checkerboard floor, the caustic-like patterns in the glass sphere, the way the mirror sphere captures the entire scene — none of these are explicitly coded. They emerge naturally from recursively following the light.

This is, in miniature, what makes physics-based rendering so powerful: get the fundamentals right, and complexity takes care of itself.