Made a GPU pathtracer

I made a very basic GPU pathtracer. Check out the code on GitHub if you want to play with it. Below is a devlog of sorts.

21. feb 2015

I implemented most of the features I wanted, so the experiment is done for now.

Feel free to mess around with the application. It's all open source on GitHub!

20. feb 2015

Alright. Got direct lighting now. Sort of.

This means we get crisp shadows.

Not sure if I'm doing it right though!

The lighting computation is now basically:

result = (0, 0, 0)

// sky: one cosine-weighted sample; the cos(theta)/PI factor from the
// rendering equation cancels exactly against the sampling pdf
dir = CosineWeightedHemisphereSample(normal)
if ray traced from surface along dir does not hit anything:
    result += SampleSky(dir)

// sun: sampled directly, so the cos(theta)/PI factor stays
dir = SunDirection()
if ray traced from surface towards sun does not hit anything:
    result += SunColor(dir) * dot(normal, dir) / PI

So basically, I only trace a single bounce. Multiple bounces could be incorporated by iteratively computing the light at the hit point whenever one of the traces above DOES hit something.

The divide-by-PI in the second expression comes from the Lambertian BRDF in the rendering equation, which I now sample directly.
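
For reference, the iterative multi-bounce version would look roughly like this. This is just a sketch of the idea, not code from the project: Trace, Hit, MAX_BOUNCES and the albedo field are made-up names, and the vector types are my homemade ones.

vec3 ComputeLight(vec3 origin, vec3 normal)
{
    vec3 result = vec3(0.0f, 0.0f, 0.0f);
    vec3 throughput = vec3(1.0f, 1.0f, 1.0f); // light surviving the bounces so far
    for (int bounce = 0; bounce < MAX_BOUNCES; bounce++)
    {
        // Direct sun term, same as above, weighted by the throughput
        vec3 sun = SunDirection();
        if (!Trace(origin, sun).hit)
            result += throughput * SunColor(sun) * dot(normal, sun) / PI;

        // Indirect: continue along a cosine-weighted direction
        vec3 dir = CosineWeightedHemisphereSample(normal);
        Hit h = Trace(origin, dir);
        if (!h.hit)
        {
            result += throughput * SampleSky(dir); // escaped to the sky
            break;
        }

        // Hit a surface: attenuate and keep bouncing from the hit point
        throughput *= h.albedo; // cos(theta)/PI cancels against the pdf again
        origin = h.position;
        normal = h.normal;
    }
    return result;
}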

11. feb 2015

Heck yeah! Got the camera working again. Turns out I made an error in deriving the lens direction, which caused the depth of field to not actually follow the camera. Sounds weird? Well it looked weird too.

Also, raymarching everything allows you to do some cool debugging tricks. Like checking whether the hit point lies within some margin of a region, and marking it with a specific color.

In this image I've projected the extent of the depth of field (that is, where objects are in focus) onto the ground, and marked the region in red.

09. feb 2015

Attempting to get depth of field working. Somewhat successful, but the view-matrix transformation is broken now.

I'll try to give a more detailed description of how it works once it uh... actually works...
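
That said, the basic thin-lens idea is simple enough to sketch now. Names like focalDistance, apertureRadius and RandomOnDisk are placeholders, not actual code from the project:

// Generate a depth-of-field ray through a thin lens. Objects at
// focalDistance from the eye stay sharp; apertureRadius sets the blur.
void ThinLensRay(vec3 eye, vec3 pinholeDir, vec3 right, vec3 up,
                 float focalDistance, float apertureRadius,
                 vec3 *origin, vec3 *dir)
{
    // The point this pixel sees through a perfect pinhole,
    // pushed out to the focal plane
    vec3 focalPoint = eye + pinholeDir * focalDistance;

    // Jitter the ray origin across the lens aperture
    vec2 lens = RandomOnDisk() * apertureRadius;
    *origin = eye + right * lens.x + up * lens.y;

    // Every point on the lens converges on the same focal point
    *dir = normalize(focalPoint - *origin);
}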

06. feb 2015

Got anti-aliasing working. I moved the image-coordinate matrix transformation out of the vertex shader and into the fragment shader for easier math. Surprisingly, it didn't really affect performance. Or well, an additional millisecond won't overflow your cup when you're already at 80ms. Heh.

Anti-aliasing is basically free after that. I just add a random offset of up to half the pixel size to the sample coordinate.
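
In code it amounts to something like this (Rand() standing in for the per-frame seeded RNG):

// Offset the sample by up to half a pixel in each direction, so edges
// get averaged over the pixel footprint as the frames accumulate
vec2 JitterSample(vec2 pixelCoord, vec2 pixelSize)
{
    vec2 offset = vec2(Rand() - 0.5f, Rand() - 0.5f) * pixelSize;
    return pixelCoord + offset;
}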

"Ah! Those smooth edges!"

"I call it the spherktal."

Also, I doubled the rendering resolution for these renders. It really does look a lot better with anti-aliasing.

05. feb 2015

Having fun with more advanced raymarching scenes.

I'm wondering if I want to implement anti-aliasing, or just leave it like this. Part of me kinda likes the rough outlines mixed with smooth internal edges...

Also, since I integrate light by additively blending over multiple frames, motion blur is super simple. Just let time advance while integrating!

04. feb 2015

Loading textures. Basic IBL. Got a chance to whip out my trig to do sky-dome sampling.
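
The trig boils down to converting a direction into spherical coordinates and using those as texture coordinates. A sketch, assuming an equirectangular sky image with y as the up axis:

#include <cmath>

// Map a unit direction to UV coordinates in an equirectangular sky texture
vec2 SkyUV(vec3 dir)
{
    float theta = std::acos(dir.y);       // angle from straight up, [0, PI]
    float phi = std::atan2(dir.z, dir.x); // angle around the up axis, [-PI, PI]
    return vec2((phi + PI) / (2.0f * PI), theta / PI);
}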

02. feb 2015

Got sampling and integration somewhat working. The way it works is rather cool:

  1. Create a floating-point framebuffer to store accumulated light

  2. For each frame of refinement, render the scene to the framebuffer with additive blending enabled

  3. Each frame has its RNG seeded differently, so that bouncing rays sample the entire scene.

I've bound the R-key to accumulate light and advance the iteration count. After holding R for a while, an image will start to appear. I'm currently only sampling against an open sky (with plenty of light!), so the render converges pretty quickly.
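
For the record, the host side of step 2 is plain additive blending into a float texture, and the display pass divides the sum by the iteration count. Roughly like this, with illustrative names like accumFbo and drawFullscreenQuad:

// Refinement pass: add this frame's estimate into the accumulation buffer
glBindFramebuffer(GL_FRAMEBUFFER, accumFbo); // has a GL_RGBA32F attachment
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                 // plain additive blending
glUniform1i(seedLocation, frameIndex);       // fresh RNG seed every frame
drawFullscreenQuad();
glDisable(GL_BLEND);
frameIndex++;

// Display pass: show sum / N
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUniform1f(invCountLocation, 1.0f / frameIndex);
drawFullscreenQuad();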

I tried to sample against in-scene light sources, like a hovering sphere. But the result was waaaay too dark. I guess I need to read up on Monte Carlo integration.
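
A note to self for that reading: the Monte Carlo estimate of an integral is (1/N) * sum of f(x_i) / pdf(x_i), so every sample has to be weighted by one over the probability density of picking it. Sampling a specific light source means the directions are no longer cosine-weighted, so that weight has to change accordingly - my guess is that's where it goes wrong.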

01. feb 2015

Implemented a matrix-based camera system. This means I can use the same view-transformation matrix as I do in traditional triangle rendering. Not sure if I want to do that for this project, but I'm used to working with those types of transformations anyway.

19. jan 2015

Got SDL and OpenGL set up. I don't want too many dependencies for this project, so I'd like to implement the matrix math that I need myself. I reckon basic vec2/3/4 types and a mat4 should be enough (does anyone ever use mat2/3?).

Lightweight GUI library for C++

I recently discovered this tiny, awesome graphical-user-interface library by ocornut.

The best part is that it's really easy to integrate into your OpenGL/DirectX/SDL or whatever project. You need to include 3 files, and define a function callback to render the vertices that the library pumps out each frame.

You'll also want user input, like mouse clicks and text input. These are usually handled in your update routine, or wherever you poll for events. For example, here's how I do it in my prototyping framework:

void poll_events(const SDL_Event &event)
{
    ImGuiIO &io = ImGui::GetIO();
    switch (event.type)
    {
    case SDL_TEXTINPUT:
        // Forward typed characters to the GUI (good enough for ASCII;
        // event.text.text is actually a UTF-8 string)
        io.AddInputCharacter(event.text.text[0]);
        break;
    case SDL_KEYUP:
        io.KeysDown[event.key.keysym.scancode] = false;
        io.KeyCtrl = (event.key.keysym.mod & KMOD_CTRL) != 0;
        io.KeyShift = (event.key.keysym.mod & KMOD_SHIFT) != 0;
        break;
    // SDL_KEYDOWN and the mouse events are handled the same way
    }
}

Because the library is a so-called immediate mode GUI, using it is super simple as well:

void render_game(float dt)
{
    clear(0xcc9999ff, 1.0f);
    static float lightColor[4];
    static float attenuation = 1.0f;
    ImGui::NewFrame();
    ImGui::ShowTestWindow();
    ImGui::Begin("Diffuse Shader");
    ImGui::ColorEdit4("lightColor", lightColor); // expects a float[4]
    ImGui::SliderFloat("attenuation", &attenuation, 1.0f, 16.0f);
    ImGui::End();
    ImGui::Render();
}

which produces the following result:

(screenshot: imgui_result)

I needed to make some small changes to the original code to get pixel perfect text with AA enabled. The first thing I did was to change the RenderText function to properly round the text position to integer coordinates, so:

void ImBitmapFont::RenderText(...) const
{
    //...
    // Align to be pixel perfect
    pos.x = (float)(int)pos.x + 0.5f;
    pos.y = (float)(int)pos.y + 0.5f;
    //...
}

becomes

pos.x = (float)(int)(pos.x + 0.5f);
pos.y = (float)(int)(pos.y + 0.5f);

Note that x and y are both positive numbers, so we don't need to handle rounding negative numbers. The second thing I did was to adjust the PixelCenterOffset constant to 0.375 instead of 0.5. A search on Google will give you some interesting stories about this magic number ;)

Realtime 2D wave simulation

Traditionally, water is rendered using a hack known as normalmaps. The normalmap represents small variations of the surface normal due to waves. And this works pretty well! Good normalmaps, combined with some clever scrolling techniques, can give a convincing wave effect. If you want artistic value, this is probably the way to go. For realism? Not so much.

But today we have more computing power in our pants pockets than it took to get to the moon, so we might as well spend it all on simulating the wave equation in realtime!

The wave equation describes, as the name suggests, how waves behave. Of course, it is not limited to water waves - it's fully applicable to any other wibbly wobbly physical phenomenon you can imagine.

The equation is known as a partial differential equation, if that tells you anything. If not, then all you need to know is that they are awesome, and solving them analytically is a mathematician's worst nightmare. Thankfully, we have powerful computers that can solve anything, if we give them enough time.

A method of solving them is described here, taken from the page in the link below. There's also more cool stuff on that page - like how to simulate actual fluids, with density and pressure and stuff.

But basically, you divide your area of interest into an evenly spaced grid. For each grid position you store a height value and a velocity value. Then, for each tick of the simulation, you perform some math on these values and their neighbors to approximate the next height and velocity at every point. (It boils down to solving a Poisson equation.)
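
A minimal CPU version of that update step might look like this (the grid spacing is folded into the wave speed c here; the GPU version described below does the same thing per fragment):

// One explicit timestep of the 2D wave equation on an N x N grid.
// h = height, v = vertical velocity. Boundary cells are skipped here
// and handled separately.
void step(float *h, float *v, int N, float c, float dt)
{
    for (int y = 1; y < N - 1; y++)
    for (int x = 1; x < N - 1; x++)
    {
        int i = y * N + x;
        // Discrete Laplacian: how much the cell differs from its neighbors
        float lap = h[i - 1] + h[i + 1] + h[i - N] + h[i + N] - 4.0f * h[i];
        v[i] += c * c * lap * dt; // acceleration follows the curvature
    }
    for (int i = 0; i < N * N; i++)
        h[i] += v[i] * dt;
}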

You also need to take care of boundary and initial conditions, as you would when solving differential equations on paper. In this case, the initial conditions are the height and velocity at the beginning of the simulation, while the boundary is set to something appropriate.

In the video above I clamp values on the boundary to zero, causing waves to bounce back when they hit the edge.

To render the water I calculate the surface normal for each pixel. Using the normal and the viewing vector, we can calculate a reflected and a refracted vector to use for lookups in a cubemap skybox. To blend between the reflection and the refraction, I use the Fresnel equations. Or rather, an approximation to them, which uses the dot product between the view vector and the normal vector.
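
The approximation is (or is at least very close to) Schlick's formula:

#include <algorithm>

// Schlick's approximation of the Fresnel reflectance. r0 is the
// reflectance looking straight down at the surface (about 0.02 for
// water), and view points from the surface towards the eye.
float fresnel(vec3 view, vec3 normal, float r0)
{
    float c = 1.0f - std::max(dot(view, normal), 0.0f);
    return r0 + (1.0f - r0) * c * c * c * c * c;
}

The final color is then a mix between the refracted and reflected lookups, weighted by that value.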

Finally, to get the whole thing running fast, I perform the simulation on the GPU. I represent the grid of height and velocity values as a separate rendertarget - a texture that can be written to, but not necessarily shown on screen. To move the simulation forward I need two of those, since one is used for input while the other is used for output. I then ping-pong between them. The solving part is done per-fragment in a shader, and written back to the rendertarget.
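
The ping-ponging itself is nothing fancy: each step reads one target and writes the other, then they swap roles. A sketch with illustrative names:

#include <utility>

int src = 0, dst = 1; // indices into two FBO/texture pairs
for (int i = 0; i < stepsPerFrame; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);     // write the next state
    glBindTexture(GL_TEXTURE_2D, stateTexture[src]); // read the current state
    drawFullscreenQuad(); // the fragment shader performs the wave update
    std::swap(src, dst);  // next step reads what was just written
}
// stateTexture[src] now holds the newest state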

As usual, the code for this demo can be found on my GitHub.

OpenGL Things

Exams are coming up, and with no lectures I've had some spare time to mess around with OpenGL. I'd like to work on a larger project of some kind, but for now I've been doing these small tech demo thingies.

I've been using my homemade rendering library named glTerrain. It started out as the name suggests, but over time it morphed into some weird mix of who-knows-what. But it works!

All the demos (there are more than in this post) can be found on my GitHub.

Dynamic terrain rendering

This one loads a heightmap and a normalmap from disk, and whips up the terrain mesh on the fly in the vertex shader. I calculate tangents and bitangents there as well, so that the normalmap works properly with deformed terrain.

Particles

This one uses OpenGL 4.2 and something called transform feedback. Basically, you perform a bunch of computations on the GPU, capture the results in a buffer, and feed them back in as input the next round. Pretty nifty!
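
Stripped to its core, the setup looks something like this (buffer and varying names are illustrative):

// Before linking the update program: ask GL to capture these varyings
const char *varyings[] = { "outPosition", "outVelocity" };
glTransformFeedbackVaryings(program, 2, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

// Each frame: run the update shader with rasterization turned off,
// capturing the outputs into the destination buffer
glUseProgram(program);
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, particleBuffer[dst]);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, particleCount); // source buffer is bound via the VAO
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
std::swap(src, dst); // ping-pong, just like the water simulation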

Arealights

This demo uses deferred rendering and lights with polygonal extrusion in SPACE. The deferred rendering bit makes it simpler to add multiple lights together in the shader - I wrote a forward rendering version as well ;)

Pardon the .gif ruining the color gradients.
