Why would I look at this before having a fully functional processor?
I need to know which operations I will be doing most often, and perhaps even write the rendering software, before I design the instruction set. By understanding the program I'll be running, I can optimize the processor for that program.
There are three main ways I want to be able to render:
- Rasterizing my 3D primitives
- Ray tracing 3D primitives
- Rendering 2D textures/graphics
Rasterization
Here is my idea (a rough C sketch follows the list):
- Create a depth buffer somewhere, storing distance from the camera
- For each triangle, project the three vertices
- For each pixel in that triangle compare with the depth buffer, overwrite if closer to camera
- Also compute fragment color and write to color buffer if closer to camera
- Swap the video generator's address with that of the color buffer
- Old color buffer is new image, old image is new color buffer
- This achieves double buffering
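To see what the inner loop would actually be doing, here is a minimal software-rasterizer sketch in C that follows the list above. The resolution, the vec3/tri types, the flat per-triangle color, and the video_set_base_address() hook are all made-up placeholders, and the projection step and real fragment shading are left out.

```c
#include <stdint.h>
#include <string.h>
#include <float.h>
#include <math.h>

#define W 320
#define H 240

typedef struct { float x, y, z; } vec3;              /* screen x, y plus distance from camera */
typedef struct { vec3 v[3]; uint16_t color; } tri;   /* already-projected triangle            */

static float    depth[2][W * H];                     /* depth buffer per plane                */
static uint16_t color[2][W * H];                     /* double-buffered color planes          */
static int      back = 0;                            /* plane currently being drawn into      */

extern void video_set_base_address(const uint16_t *base);   /* hypothetical MMIO hook */

static int clampi(int v, int lo, int hi) { return v < lo ? lo : v > hi ? hi : v; }

/* Twice the signed area of triangle (a, b, p); used as a barycentric weight. */
static float edge(vec3 a, vec3 b, vec3 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void draw_triangle(const tri *t) {
    vec3 a = t->v[0], b = t->v[1], c = t->v[2];
    float area = edge(a, b, c);
    if (area == 0.0f) return;                        /* degenerate triangle, nothing to draw */

    int minx = clampi((int)fminf(fminf(a.x, b.x), c.x), 0, W - 1);
    int maxx = clampi((int)fmaxf(fmaxf(a.x, b.x), c.x), 0, W - 1);
    int miny = clampi((int)fminf(fminf(a.y, b.y), c.y), 0, H - 1);
    int maxy = clampi((int)fmaxf(fmaxf(a.y, b.y), c.y), 0, H - 1);

    for (int y = miny; y <= maxy; y++) {
        for (int x = minx; x <= maxx; x++) {
            vec3 p = { x + 0.5f, y + 0.5f, 0.0f };
            float w0 = edge(b, c, p) / area;         /* barycentric weights of this pixel */
            float w1 = edge(c, a, p) / area;
            float w2 = edge(a, b, p) / area;
            if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) continue;   /* outside the triangle */

            float z = w0 * a.z + w1 * b.z + w2 * c.z;            /* interpolated depth   */
            int i = y * W + x;
            if (z < depth[back][i]) {                /* closer to the camera: overwrite   */
                depth[back][i] = z;
                color[back][i] = t->color;           /* flat "fragment color" for now     */
            }
        }
    }
}

/* Reset the plane we are about to draw into; also call once before the first frame. */
static void clear_back(void) {
    for (int i = 0; i < W * H; i++) depth[back][i] = FLT_MAX;
    memset(color[back], 0, sizeof color[back]);
}

/* The last three bullets: point the video generator at the finished color buffer,
 * then reuse the old image as the new back buffer. */
void present(void) {
    video_set_base_address(color[back]);
    back ^= 1;
    clear_back();
}
```

Per frame the flow would be: clear_back() once at startup, draw_triangle() for every projected triangle, then present() to swap buffers.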
Ray Tracing
I'm not sure I'll ever really want to program this, but I want the option; a rough sketch follows the list.
- For each pixel
- Find the nearest triangle (if any) that would render on that pixel
- Compute the color of that pixel using the triangle (or lack thereof)
- Write the color to the pixel
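Here is roughly what that loop could look like in C, using the standard Möller-Trumbore ray/triangle intersection test. The pinhole camera, the flat per-triangle colors, and the zero background value are placeholders; proper shading "using the triangle (or lack thereof)" would replace the simple color pick.

```c
#include <stdint.h>
#include <float.h>
#include <math.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 v0, v1, v2; uint16_t color; } tri;

static vec3  sub(vec3 a, vec3 b)  { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(vec3 a, vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  cross(vec3 a, vec3 b) {
    return (vec3){ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

/* Möller-Trumbore ray/triangle test: returns the hit distance, or -1 on a miss. */
static float intersect(vec3 orig, vec3 dir, const tri *t) {
    vec3 e1 = sub(t->v1, t->v0), e2 = sub(t->v2, t->v0);
    vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < 1e-6f) return -1.0f;            /* ray parallel to the triangle */
    float inv = 1.0f / det;
    vec3 s = sub(orig, t->v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float d = dot(e2, q) * inv;
    return d > 0.0f ? d : -1.0f;
}

/* For each pixel, find the nearest triangle (if any) and write its color. */
void trace(const tri *scene, int ntris, vec3 cam, uint16_t *image, int w, int h) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            /* very rough pinhole camera: one ray through each pixel */
            vec3 dir = { (x - w / 2) / (float)w, (y - h / 2) / (float)h, 1.0f };
            float best = FLT_MAX;
            uint16_t px = 0;                         /* background color when nothing is hit */
            for (int i = 0; i < ntris; i++) {
                float d = intersect(cam, dir, &scene[i]);
                if (d > 0.0f && d < best) { best = d; px = scene[i].color; }
            }
            image[y * w + x] = px;                   /* write the color to the pixel */
        }
    }
}
```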
2D Images
Here I need the option of rotating images and computing depth (think 2D games like Starbound), plus possibly smooth lighting or other cool graphical effects. A rough sketch follows the list.
- Create a depth buffer and color buffer
- For each sprite instance
- For each pixel in the sprite compute a rotated position for it
- Compare with the depth buffer, and if closer, shade and write the color to the color buffer
- Swap the video generator's address with that of the color buffer
- Old color buffer is new image, old image is new color buffer
- This achieves double buffering
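A rough C sketch of that sprite pass, following the list literally: every sprite texel is rotated into screen space, depth-tested, and written. The sprite struct, buffer names, and resolution are made up for illustration; a real version would more likely walk destination pixels and rotate backwards so rotated sprites don't show gaps between texels.

```c
#include <stdint.h>
#include <float.h>
#include <math.h>

#define W 320
#define H 240

typedef struct {
    const uint16_t *texels;   /* w*h source pixels                    */
    int   w, h;
    float x, y;               /* screen position of the sprite centre */
    float angle;              /* rotation in radians                  */
    float depth;              /* layer depth, smaller = nearer        */
} sprite;

static float    zbuf[W * H];  /* depth buffer from the first bullet   */
static uint16_t cbuf[W * H];  /* color buffer                         */

/* Call once per frame before drawing any sprites. */
void clear_buffers(void) {
    for (int i = 0; i < W * H; i++) { zbuf[i] = FLT_MAX; cbuf[i] = 0; }
}

void draw_sprite(const sprite *s) {
    float c = cosf(s->angle), sn = sinf(s->angle);
    for (int ty = 0; ty < s->h; ty++) {
        for (int tx = 0; tx < s->w; tx++) {
            /* rotate this texel's offset from the sprite centre into screen space */
            float ox = tx - s->w * 0.5f, oy = ty - s->h * 0.5f;
            int px = (int)(s->x + ox * c - oy * sn);
            int py = (int)(s->y + ox * sn + oy * c);
            if (px < 0 || px >= W || py < 0 || py >= H) continue;

            int i = py * W + px;
            if (s->depth < zbuf[i]) {                /* nearer than what is already there */
                zbuf[i] = s->depth;
                cbuf[i] = s->texels[ty * s->w + tx]; /* "shading" here is just a copy     */
            }
        }
    }
}
```

The final buffer swap would be the same present() step as in the rasterizer sketch, so I have left it out here.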
This pipeline is surprisingly similar to the rasterization one... Am I doing something wrong? Please let me know if I am; I'm somewhat new to this :P