I’m in the middle of relaunching my series of articles on Games That Pushed the Limits of game console hardware, and I wanted a place to explain technical software concepts without cluttering each console’s guide.
Once consoles like the Sega Saturn and the Sony PlayStation arrived and 3D games became the norm, developers had to think harder about converting their 3D models into pixels and about efficiently managing what is visible to the player. A process known as “rasterization” converts a 3D scene into 2D pixels/dots based on a point of view (the camera/your monitor).
But what happens when you have, for example, one object behind another?
From the 2D perspective, both objects project to the same (X, Y) coordinates, so both would be drawn to the same pixels. But that is wrong: the object behind shouldn’t be visible from that point of view.
There are algorithms to fix that. The most well-known is the Z-buffer.
Basically, the Z-buffer stores, for each pixel, the Z coordinate (depth) of the closest point drawn so far. A new point is drawn only if it is closer to the camera than what is already stored; anything farther away is discarded.
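To make the idea concrete, here is a minimal sketch of per-pixel depth testing in C. The buffer size, names, and color representation are all hypothetical, not from any real console or engine:

```c
/* Minimal Z-buffer sketch over a tiny hypothetical 4x4 framebuffer.
   Each pixel keeps the depth of the closest fragment drawn so far;
   a new fragment is written only if it is nearer than that. */
#include <float.h>

#define W 4
#define H 4

static float zbuf[W * H];            /* per-pixel depth */
static unsigned char colorbuf[W * H]; /* per-pixel color */

void clear_buffers(void) {
    for (int i = 0; i < W * H; i++) {
        zbuf[i] = FLT_MAX;  /* start "infinitely far away" */
        colorbuf[i] = 0;    /* background color */
    }
}

/* Depth test: write the fragment only if it is closer (smaller z). */
void plot(int x, int y, float z, unsigned char color) {
    int i = y * W + x;
    if (z < zbuf[i]) {
        zbuf[i] = z;
        colorbuf[i] = color;
    }
}
```

Note that the result is the same no matter which order the objects are drawn in; that order-independence is exactly what makes the technique attractive.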
Without Z-buffering in place, polygons had to be sorted “manually” to determine the drawing order of objects at different depths.
This was very common through the mid-1990s: software renderers simply didn’t have the raw throughput to get away with Z-buffering, so they used sorting instead. Z-buffering was actually perceived as a bit of a brute-force approach.
This was fine in simple cases like the early portal-based FPS engines, but as games increased in complexity, so did the sorting.
Eventually Z-buffers won out thanks to their algorithmic simplicity on complex scenes and their suitability for hardware acceleration.
If you have further insights into this process and would like to add to the discussion, please let me know in the comments below.