Context Clues: Explaining the Structure Behind Rendering Contexts

I've been leaving this blog to rot lately while slaving away on my thesis, but I feel it's about time to talk a little about what I've been working on.


Today's discussion is about rendering contexts, which are essential for using the rendering pipeline on any platform. My confusion with this topic began when I started porting Vingine to multiple consoles and phones. Each new platform had its own strange method of setting up a rendering context. Even worse, each platform had a different name for that setup process and its parameters. Thus, this post is meant to elucidate what these contexts are FOR ALL TIME. Or something like that, anyway.

It Can't Be That Complicated, Right?

Just in case you haven't come across this problem yourself, here is the current list of context-initializing calls I've seen in my time working on each platform:

  • OpenGL (Win):
    HGLRC WINAPI wglCreateContext(
    HDC hdc);
  • DX11 (Win): 
    HRESULT D3D11CreateDeviceAndSwapChain(
    _In_   IDXGIAdapter *pAdapter,
    _In_   D3D_DRIVER_TYPE DriverType,
    _In_   HMODULE Software,
    _In_   UINT Flags,
    _In_   const D3D_FEATURE_LEVEL *pFeatureLevels,
    _In_   UINT FeatureLevels,
    _In_   UINT SDKVersion,
    _In_   const DXGI_SWAP_CHAIN_DESC *pSwapChainDesc,
    _Out_  IDXGISwapChain **ppSwapChain,
    _Out_  ID3D11Device **ppDevice,
    _Out_  D3D_FEATURE_LEVEL *pFeatureLevel,
    _Out_  ID3D11DeviceContext **ppImmediateContext);
  • EGL (Android):
    EGLContext eglCreateContext(
    EGLDisplay display,
    EGLConfig config,
    EGLContext share_context,
    EGLint const * attrib_list);

...And so on. As mentioned previously, each platform has a different name for this initialization, uses different arguments, and abstracts the interface at a different level. Finally, after doing this setup without any abstraction on some handheld consoles, I feel like I have an understanding of what each of these functions is actually doing behind the scenes.

Demystifying Contexts

As I currently understand it, a rendering context is essentially the CPU-side memory space set aside as the rendering library's workspace. Each library stores different data here, but the primary occupant of this space is what is known as the render target. The render target is a data structure that holds one or more render buffers in a display queue. The buffers in this queue are where the GPU deposits the completed image each frame so that it can be displayed.

A programmer art representation of the rendering context

Each of the functions above does the job of setting up the outer context that holds this memory.

However, what I didn't show is that there are generally a lot of options to fill out before those functions are called. Those options determine the makeup of the overall context memory, the render target inside it, and even the display queue and its buffers, and we can make use of them for our own ends.
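To make this mental model concrete, here is a rough sketch in C++ of how I picture the layout. To be clear, the type names here (RenderBuffer, RenderTarget, RenderingContext) are my own invention for illustration; real APIs hide all of this behind opaque handles like HGLRC or EGLContext.

```cpp
#include <cstddef>
#include <vector>

// One completed image the GPU writes into and the display scans out.
struct RenderBuffer {
    int width;
    int height;
    int bitsPerPixel;
    std::size_t sizeInBytes() const {
        return static_cast<std::size_t>(width) * height * (bitsPerPixel / 8);
    }
};

// The render target owns the display queue of render buffers.
struct RenderTarget {
    std::vector<RenderBuffer> displayQueue;  // e.g. 2 entries for double buffering
};

// The outer context: the library's workspace, with the target inside it.
struct RenderingContext {
    RenderTarget target;
    // ...plus whatever bookkeeping the library keeps (state, caches, etc.)
};
```

Under this picture, a double-buffered 720p context holds two 32-bit color buffers of roughly 3.5 MB each, plus the library's own bookkeeping.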

Taking Advantage of our Newfound Knowledge

So, we've learned a little more about how the backbone of the rendering engine works, but how can this help us in game development? I am still experimenting with the options on each platform, but here are two tricks you can leverage with this knowledge:

  1. Save on Render Buffer Size. Are you making a 2D game? If so, you are probably drawing using the Painter's Algorithm and have no need for a depth buffer. Most platforms let you set the render buffer's attachments much like you would a framebuffer's. By not attaching a depth buffer, you can save a lot of memory.
  2. Change the Size of the Display Queue. You probably already know about double or triple buffering, but on many platforms you can make the queue even larger than that. Why would you want to do this? If the GPU gets far enough ahead of the display, all of the render buffers inside of the display queue can get filled before they can be displayed. When this happens, the graphics pipeline will be stalled so that the display can catch up. Adding more render buffers to the queue can prevent this.


...And that's it. It's a little short, but those are the basics of what I've learned about the rendering context in my short time working with it. In the future, I want to write an abstraction for this just for convenience, but I am going to spend a little more time experimenting with each platform's settings before I do so.
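For what it's worth, here is the rough shape I imagine that abstraction taking. This is purely a sketch with names of my own invention (ContextOptions, IRenderContext, createContext), not any real API; a finished version would have one backend per platform wrapping the native call (wglCreateContext, D3D11CreateDeviceAndSwapChain, eglCreateContext, and so on).

```cpp
#include <memory>
#include <string>

// Hypothetical cross-platform options; the fields mirror the knobs
// discussed above (queue length, depth buffer attachment).
struct ContextOptions {
    int width = 1280;
    int height = 720;
    int displayQueueLength = 2;   // double buffering by default
    bool wantDepthBuffer = true;  // trick 1: set false for 2D games
};

// Common interface every platform backend would implement.
class IRenderContext {
public:
    virtual ~IRenderContext() = default;
    virtual std::string backendName() const = 0;
    virtual void present() = 0;  // push the finished buffer into the queue
};

// Stand-in backend so the sketch is self-contained; a real backend
// would hold the native handle (HGLRC, EGLContext, swap chain...).
class StubContext : public IRenderContext {
public:
    explicit StubContext(const ContextOptions& opts) : opts_(opts) {}
    std::string backendName() const override { return "stub"; }
    void present() override { ++framesPresented_; }
    int framesPresented() const { return framesPresented_; }
private:
    ContextOptions opts_;
    int framesPresented_ = 0;
};

// Factory; a real implementation would branch per platform here.
std::unique_ptr<IRenderContext> createContext(const ContextOptions& opts) {
    return std::make_unique<StubContext>(opts);
}
```

The appeal of funneling every platform through one options struct is that tricks like dropping the depth buffer or lengthening the display queue become one-line changes instead of per-platform surgery.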

Please let me know in the comments or contact me if this was helpful to you. Alternatively, contact me if I am totally wrong and misinforming everyone on the internet about how graphics actually works. Either way, I look forward to your input.

Quick Tip: Menu Mouse Lock with Scaleform + UDK

Lately, our team has been putting work into polishing the menus for our upcoming game, Super Slash n'Grab.

One of the outstanding issues with our current implementation was that users could accidentally throw button focus off by clicking the mouse, even though the mouse was not enabled for input in either UnrealScript or AS3.

After about an hour of searching through Scaleform's documentation, I found an AS3 function that, in combination with regular AS3 functionality, fixed the issue for us.

If you're in a similar situation, try using this code:

Source file

import scaleform.clik.core.CLIK;

CLIK.disableNullFocusMoves = true;
CLIK.stage.mouseEnabled = false;
CLIK.stage.mouseChildren = false;

How does this code work?

CLIK.stage is where Scaleform stores its reference to the main Flash stage.

By setting mouseEnabled and mouseChildren to false, the mouse position in Flash can no longer be changed by mouse inputs. I am not sure where the default location is, but I know that it is offscreen.

CLIK.disableNullFocusMoves tells the Scaleform focus handler to ignore requests to set focus to null. Since the pointer is offscreen, it's pointing at a null object, and thus the focus events triggered by its clicks are ignored.

Let me know in the comments how this trick works out for you.