3D ASCII Renderer
Date Written: August 30, 2022; Last Modified: August 30, 2022

I decided to make my own 3D renderer using ncurses, a library that provides a character-by-character interface to the terminal.
The first day, I stayed up until 4 or 5 in the morning getting static triangles drawn on the screen. Then, the next day, I implemented rotations and proceeded to spend the next ten hours squashing bugs.
I’m still populating the rest of the website, so I’ll update this page with better explanations and some graphics some time in the distant future.
3D rendering really just boils down to drawing triangles on a screen. I like to imagine that the screen, or terminal in this case, is floating off in 3D space somewhere. The computer then renders from the perspective of some camera pointing normal to the screen, and “rays” are shot through each pixel of the screen to see what gets drawn.
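The camera-and-rays picture above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: it assumes a pinhole camera at the origin looking down the +z axis, with a hypothetical `rayThroughCell` function mapping a terminal cell to a normalised ray direction.

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { double x, y, z; };

// Map a terminal cell (col, row) to a unit ray direction from a camera at
// the origin looking down +z. 'fov' is the field of view in radians.
// (Illustrative only; a real renderer would also correct for the fact that
// terminal cells are roughly twice as tall as they are wide.)
Vec3 rayThroughCell(int col, int row, int width, int height, double fov) {
    double scale = std::tan(fov / 2.0);
    // Center of the cell in [-1, 1] coordinates; flip y so row 0 is the top.
    double x = (2.0 * (col + 0.5) / width - 1.0) * scale;
    double y = (1.0 - 2.0 * (row + 0.5) / height) * scale;
    double len = std::sqrt(x * x + y * y + 1.0);
    return {x / len, y / len, 1.0 / len};
}
```

A ray through the middle of the screen points almost straight down the camera axis, and rays near the edges fan outward according to the field of view.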
Initially, I rendered each pixel one-by-one. I iterated over each triangle, checked to see if the ray through that pixel hit the triangle, computed the distance, and moved on. However, this is grossly inefficient when there are hundreds or thousands of triangles, as in the gif above. Each frame would have to undergo millions of computations! Even when parallelised, this would usually cap out at 1 or 2 fps for high-resolution surfaces.
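The per-pixel test described above can be done with the standard Möller–Trumbore ray-triangle intersection algorithm. This is a sketch of that algorithm, not necessarily the exact code the project used; all names here are hypothetical.

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Möller–Trumbore: does the ray (orig, dir) hit triangle (a, b, c)?
// On a hit, writes the distance along the ray into 'dist'.
bool rayHitsTriangle(Vec3 orig, Vec3 dir, Vec3 a, Vec3 b, Vec3 c, double& dist) {
    const double eps = 1e-9;
    Vec3 e1 = sub(b, a), e2 = sub(c, a);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to the triangle
    double inv = 1.0 / det;
    Vec3 s = sub(orig, a);
    double u = dot(s, p) * inv;
    if (u < 0 || u > 1) return false;         // outside first barycentric bound
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0 || u + v > 1) return false;     // outside the triangle
    dist = dot(e2, q) * inv;
    return dist > eps;                        // hit must be in front of the ray
}
```

Running this once per pixel per triangle is exactly the width × height × triangles blow-up described above.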
An optimisation was due. Instead of iterating over pixels, I iterate over triangles: each triangle's vertices are projected onto the screen, a bounding box is drawn around the projected triangle, and each pixel in the box is checked to see whether it lies inside the triangle. This works particularly well for triangles that only occupy one or two pixels, since we skip the computations for the vast majority of pixels the triangle can't possibly cover. When parallelised, this runs really well.
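The bounding-box step can be sketched as follows. This is an illustration of the idea under assumed names, not the project's code: given the three projected vertices, find the smallest pixel rectangle containing them, clipped to the screen.

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

struct Vec2 { double x, y; };
struct Box { int x0, y0, x1, y1; };  // inclusive pixel bounds

// Smallest screen-clipped pixel rectangle containing the projected
// triangle (a, b, c). Only pixels inside this box need the
// point-in-triangle test.
Box boundingBox(Vec2 a, Vec2 b, Vec2 c, int width, int height) {
    int x0 = (int)std::floor(std::min({a.x, b.x, c.x}));
    int y0 = (int)std::floor(std::min({a.y, b.y, c.y}));
    int x1 = (int)std::ceil(std::max({a.x, b.x, c.x}));
    int y1 = (int)std::ceil(std::max({a.y, b.y, c.y}));
    return {std::max(x0, 0), std::max(y0, 0),
            std::min(x1, width - 1), std::min(y1, height - 1)};
}
```

A triangle that projects to a couple of cells gets a box of a couple of cells, so the work per triangle scales with its on-screen size rather than with the whole terminal.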
I later found out that this is quite similar to the barycentric algorithm for rendering triangles. There are other algorithms for doing this, but since triangles tend to be small (as far as pixels go), and since pixels cover a relatively large area on the terminal, I think this algorithm is a very suitable choice, if not the best.
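The barycentric inside test mentioned above is short enough to show in full. This is the textbook version with hypothetical names, not code lifted from the repo: each `edge` value is a signed area that is positive when the point sits to the left of that edge, and the point is inside when all three share a sign.

```cpp
#include <cassert>

struct Vec2 { double x, y; };

// Signed (doubled) area of triangle (a, b, p): positive when p is to the
// left of the directed edge a -> b.
double edge(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A point is inside the triangle when all three edge tests agree in sign;
// accepting either sign handles both vertex windings.
bool insideTriangle(Vec2 a, Vec2 b, Vec2 c, Vec2 p) {
    double w0 = edge(b, c, p), w1 = edge(c, a, p), w2 = edge(a, b, p);
    return (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
           (w0 <= 0 && w1 <= 0 && w2 <= 0);
}
```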
If you’re looking at the poorly organised code, you may notice that after computing the bounding box for a triangle, the projected coordinates are no longer used to shade the pixels. This is because some triangles may overlap on the screen or poke through one another. Depth information is lost in the projection, so in order to know which triangle should be drawn in front, we need to use the original unprojected triangle.
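One common way to resolve that overlap, sketched here as an assumption rather than as the project's actual mechanism, is a per-cell depth buffer: each cell remembers the distance of the nearest triangle drawn so far, and a new hit only lands if it is closer. The distance has to come from the original unprojected triangle, since the projection has already flattened it away.

```cpp
#include <vector>
#include <limits>
#include <cassert>

// A z-buffer over the terminal grid (hypothetical names).
struct DepthBuffer {
    int width;
    std::vector<double> depth;  // nearest hit distance per cell
    std::vector<char> shade;    // character drawn at each cell
    DepthBuffer(int w, int h)
        : width(w),
          depth(w * h, std::numeric_limits<double>::infinity()),
          shade(w * h, ' ') {}
    // Shade the cell only when this hit is nearer than anything drawn so
    // far; returns true when the cell was actually updated.
    bool writePixel(int x, int y, double dist, char ch) {
        int i = y * width + x;
        if (dist >= depth[i]) return false;
        depth[i] = dist;
        shade[i] = ch;
        return true;
    }
};
```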
Currently, the shade of each pixel is computed based on the angle the corresponding triangle makes with a light source. It’s quite simple. Having many light sources would be really cool, but that would involve overhauling the way scenes are rendered.
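The angle-based shading can be sketched like this. It is a guess at the general shape, not the actual code: the triangle's normal comes from the cross product of two edges, the cosine of its angle with the light direction comes from a dot product, and that value indexes into a hypothetical dark-to-bright character ramp.

```cpp
#include <cmath>
#include <cstring>
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Pick a character for triangle (a, b, c) from a brightness ramp, based on
// the angle its normal makes with the light direction.
char shadeTriangle(Vec3 a, Vec3 b, Vec3 c, Vec3 lightDir) {
    static const char ramp[] = ".:-=+*#@";   // dark to bright (hypothetical)
    Vec3 n = normalize(cross({b.x - a.x, b.y - a.y, b.z - a.z},
                             {c.x - a.x, c.y - a.y, c.z - a.z}));
    // fabs() treats front- and back-facing triangles the same.
    double t = std::fabs(dot(n, normalize(lightDir)));
    int i = (int)(t * (std::strlen(ramp) - 1));
    return ramp[i];
}
```

A triangle facing the light head-on gets the brightest character; one edge-on to the light gets the dimmest.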
Triangles are currently stored as raw text (terrible format!). Each triangle gets its own line, which holds nine numbers: the x, y, z coordinates of each of its three vertices.
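A reader for that format is only a few lines. This is a sketch with a hypothetical function name, not the project's loader: one triangle per line, nine whitespace-separated numbers, and lines that don't parse cleanly are skipped.

```cpp
#include <sstream>
#include <istream>
#include <string>
#include <vector>
#include <cassert>

struct Triangle { double v[9]; };  // x1 y1 z1 x2 y2 z2 x3 y3 z3

// Parse one triangle per line; skip any line without nine numbers.
std::vector<Triangle> loadTriangles(std::istream& in) {
    std::vector<Triangle> tris;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        Triangle t;
        int i = 0;
        while (i < 9 && (ls >> t.v[i])) ++i;
        if (i == 9) tris.push_back(t);
    }
    return tris;
}
```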
For now, I can only triangulate parametric surfaces whose domains are rectangles. Small triangles in the domain are roughly mapped to triangles in 3D space, so this gives a very simple way to triangulate surfaces. Wikipedia has some good figures that show what this looks like.
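The rectangular-domain triangulation can be sketched as follows, again with hypothetical names: split the domain into a grid, evaluate the surface at the grid corners, and turn each grid cell into two triangles.

```cpp
#include <vector>
#include <functional>
#include <cassert>

struct Vec3 { double x, y, z; };
struct Tri { Vec3 a, b, c; };

// Triangulate the parametric surface f(u, v) over the rectangle
// [u0, u1] x [v0, v1], using an nu-by-nv grid of cells. Each cell is
// split along a diagonal into two triangles, giving 2 * nu * nv total.
std::vector<Tri> triangulate(std::function<Vec3(double, double)> f,
                             double u0, double u1, double v0, double v1,
                             int nu, int nv) {
    std::vector<Tri> tris;
    for (int i = 0; i < nu; ++i) {
        for (int j = 0; j < nv; ++j) {
            double ua = u0 + (u1 - u0) * i / nu;
            double ub = u0 + (u1 - u0) * (i + 1) / nu;
            double va = v0 + (v1 - v0) * j / nv;
            double vb = v0 + (v1 - v0) * (j + 1) / nv;
            Vec3 p00 = f(ua, va), p10 = f(ub, va);
            Vec3 p01 = f(ua, vb), p11 = f(ub, vb);
            tris.push_back({p00, p10, p11});
            tris.push_back({p00, p11, p01});
        }
    }
    return tris;
}
```

For example, a sphere can be triangulated by passing in the usual spherical-coordinate parametrisation over [0, 2π] × [0, π].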
Please check the GitHub page if you’re interested in seeing what I have planned for it next!