Framebuffers, Offscreen Rendering, and Post-Processing
There are many instances in real-time graphics (and, thus, games) where you want some rendering operations to be able to read the pixel values written by previous rendering operations. For example:
- A security camera monitor that shows a view of the scene rendered from a different camera. (Render the scene from the view of the security camera, save to a texture, and use this texture on the monitor.)
- A shiny steel marble that reflects its surroundings. (Render the scene in a cube of views surrounding the marble, use these views as a cube-map texture to look up reflection information.)
- Simulating the "glow" that appears around bright lights due to scattering in the eye. (Render the scene to a texture, then add back a blurred version of that texture in a shader.)
- Simulating depth-of-field blur. (Render the scene colors and depths to textures, produce the final image by blurring the colors depending on the depth values and the current focal distance of the camera.)
- Creating dynamic shadows. (Render the depth of the scene from the point of view of a light, then use those depth values in order to do light/shadow testing when rendering the scene.)
- Supersampling. (Render the scene to a buffer larger than the screen, then blur/downsample to produce a screen image with smoother edges.)
All of these techniques involve setting up the GPU to render to memory that isn't going to be (directly) shown on the screen ("offscreen rendering"). In OpenGL, the area the GPU renders to is called a framebuffer; offscreen rendering is accomplished by creating a new framebuffer object to describe where the GPU should send its results, and binding this framebuffer to the pipeline.
The remainder of this lesson will describe the mechanics of framebuffer setup, and give you a chance to play with one of the more straightforward (but, nonetheless, powerful) uses of offscreen rendering: post-processing (generally: effects that modify the output pixels of the whole scene in order to perform blurs, color correction, and other fancy effects).
Framebuffer Objects
In OpenGL, information about the current target for rendering operations is held in a framebuffer object.
These objects are managed just like any other OpenGL object: they are named with GLuints (with 0 being a special name for the default framebuffer object), there are functions to allocate and delete names, and there are functions to set the currently bound framebuffer object.
//Allocate and bind a framebuffer:
glGenFramebuffers(1, &hdr_fb);
glBindFramebuffer(GL_FRAMEBUFFER, hdr_fb);
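When your code is finished with a framebuffer, the name is deleted just like any other object name (note that this does not delete any attached textures or renderbuffers):
//delete a framebuffer name when finished with it:
glDeleteFramebuffers(1, &hdr_fb);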
A framebuffer object keeps track of attachments -- areas of memory where the GPU should read and write during rendering operations. Framebuffers have various attachment points with specific purposes:
- GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, ... -- color outputs written by the fragment shader (the number available is implementation-dependent; query GL_MAX_COLOR_ATTACHMENTS).
- GL_DEPTH_ATTACHMENT -- the depth buffer used for depth testing.
- GL_STENCIL_ATTACHMENT -- the stencil buffer used for stencil testing.
- GL_DEPTH_STENCIL_ATTACHMENT -- attaches a single combined depth-and-stencil buffer to both of the previous points at once.
Note that these attachment points are references to GPU-allocated memory -- your code needs to allocate memory before it can point a framebuffer object to it. Your code can allocate memory for a framebuffer object to render to in two ways: by allocating a texture or by allocating a renderbuffer.
Textures are, well, textures. You can render to them using an attached framebuffer, read from them with all sorts of different sampling and wrapping modes in a shader program, and so on.
//Allocate a texture and attach it to the framebuffer's first color attachment:
//allocate texture name:
glGenTextures(1, &hdr_color_tex);
glBindTexture(GL_TEXTURE_2D, hdr_color_tex);
//allocate texture memory:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, size.x, size.y, 0, GL_RGB, GL_FLOAT, nullptr);
//set sampling parameters for texture:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
//attach texture to framebuffer as the first color buffer:
glBindFramebuffer(GL_FRAMEBUFFER, hdr_fb);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, hdr_color_tex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
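Once the scene has been drawn into it, such a texture is read just like any other texture in a later pass. A minimal sketch (post_program is a hypothetical post-processing shader program; the actual draw call is elided):
//read the rendered image back in a later (post-processing) pass:
glUseProgram(post_program); //hypothetical post-processing program
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, hdr_color_tex);
//...draw a screen-covering triangle here...
glBindTexture(GL_TEXTURE_2D, 0);
glUseProgram(0);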
Renderbuffers are "as simple as possible" framebuffer-attachable memory that cannot be accessed outside of their function as part of a framebuffer. (This makes sense for things that have formats that don't clearly map to a texture -- like a multisample color buffer, or a combined depth-and-stencil buffer. This also makes sense when you just don't want to bother with texture state because you won't be reading the values from a shader.)
//Allocate a renderbuffer and attach it to the framebuffer's depth attachment:
//allocate renderbuffer name:
glGenRenderbuffers(1, &hdr_depth_rb);
glBindRenderbuffer(GL_RENDERBUFFER, hdr_depth_rb);
//allocate renderbuffer memory:
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, size.x, size.y);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
//attach renderbuffer to framebuffer as the depth buffer:
glBindFramebuffer(GL_FRAMEBUFFER, hdr_fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, hdr_depth_rb);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
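The same pattern works for the other formats mentioned above. A sketch, assuming a combined depth-and-stencil buffer is wanted (ds_rb and some_fb are hypothetical names):
//allocate a combined depth+stencil renderbuffer:
glGenRenderbuffers(1, &ds_rb);
glBindRenderbuffer(GL_RENDERBUFFER, ds_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, size.x, size.y);
//(a multisample buffer would, instead, use glRenderbufferStorageMultisample with a sample count)
glBindRenderbuffer(GL_RENDERBUFFER, 0);
//attach it to both the depth and stencil attachment points at once:
glBindFramebuffer(GL_FRAMEBUFFER, some_fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, ds_rb);
glBindFramebuffer(GL_FRAMEBUFFER, 0);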
Finally, a framebuffer won't work unless it is complete -- a specification-defined term that basically means the framebuffer has a set of attachments that match in dimension and have formats appropriate for rendering. It is very frustrating to discover that the reason you weren't seeing any output on-screen is that your framebuffers were not complete. The framebuffer example code has a convenient helper function (in gl_check_fb.hpp) which you should probably get in the habit of using.
#include "gl_check_fb.hpp"
//Check for framebuffer completeness:
glBindFramebuffer(GL_FRAMEBUFFER, hdr_fb);
gl_check_fb();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
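If you'd rather not use the helper, it boils down to (something like) the underlying status query:
//minimal completeness check (assumes #include <iostream>):
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
	std::cerr << "Framebuffer is not complete (status: 0x" << std::hex << status << std::dec << ")!" << std::endl;
}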
Small Notes
Different framebuffer objects can reference the same attachments! This can be useful in order to share memory (e.g., for screen-sized temporary storage locations) between rendering steps.
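For example, two framebuffers could share one depth renderbuffer (first_fb, second_fb, and shared_depth_rb are hypothetical names):
//attach the same renderbuffer as the depth buffer of two framebuffers:
glBindFramebuffer(GL_FRAMEBUFFER, first_fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, shared_depth_rb);
glBindFramebuffer(GL_FRAMEBUFFER, second_fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, shared_depth_rb);
glBindFramebuffer(GL_FRAMEBUFFER, 0);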
Drawing into a framebuffer that is also being used as a source texture during the drawing has undefined behavior! (To see why this is hard to support, notice that it could -- e.g. -- make the order of fragment processing observable, something the specification otherwise takes pains to avoid.) This means that when your code is processing a texture it needs to output the result into a different texture, "ping-pong-ing" between buffers for each processing step, as sketched below.
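A sketch of the ping-pong pattern (blur_fbs[i] is a hypothetical framebuffer whose color attachment is the texture blur_texs[i]):
//run several processing passes, alternating which buffer is read and which is written:
for (uint32_t pass = 0; pass < blur_passes; ++pass) {
	glBindFramebuffer(GL_FRAMEBUFFER, blur_fbs[(pass + 1) % 2]); //write one buffer...
	glBindTexture(GL_TEXTURE_2D, blur_texs[pass % 2]); //...while reading the other
	//...draw a screen-covering triangle with the processing program...
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);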
An Example: HDR Rendering
I have prepared an example of using off-screen rendering to do some simple HDR (high-dynamic-range) rendering with a "glow" or "bloom" effect. This code is available at https://github.com/15-466/15-466-f20-framebuffer.
I encourage you to explore the code and play with the shaders to generate different effects.
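In outline, a frame in that style of renderer does something like the following (a sketch, not the example's exact code):
//render the scene into the offscreen HDR framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, hdr_fb);
glViewport(0, 0, size.x, size.y);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//...draw scene geometry with (possibly brighter-than-1.0) lighting...

//then draw to the actual screen, reading from the offscreen texture:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, hdr_color_tex);
//...draw a screen-covering triangle that tone-maps and adds the blurred "glow"...
glBindTexture(GL_TEXTURE_2D, 0);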
Final Remarks
The use of offscreen rendering is pervasive in modern real-time graphics because it allows the use of rasterization and shading hardware on GPUs to compute all sorts of rendering effects that aren't directly rendered as part of the main scene. These can be as simple as rendering separate viewpoints and as complex as running physics on particle systems. (And, importantly, using all this compute without requiring communication back to system memory.)
Recent GPUs go even further than offscreen rendering by offering "compute shaders", which can directly read and write memory buffers. These are convenient because -- among other capabilities -- they enable scatter-style memory access rather than just the gather-style access offered by fragment shaders.