Architecture of Lugdunum

The purpose of this section is to introduce you to the internal operation of our 3D engine. We will first present the architecture of the renderer. Then we will describe the sequencing of the engine's graphics loop, and how each component of the Render::Target interacts with the Render::Window, which is composed of different Render::Views. Finally, we will discuss the GPU-side and CPU-side operations and explain how each buffer is loaded and used by our engine.

Renderer Architecture

We decided to make the engine as API-independent as possible, i.e. we do not want to depend too heavily on Vulkan itself. This is why we created abstract classes for each type, with their Vulkan equivalents in a separate, API-specific directory. This is especially visible here. Hypothetically speaking, this makes us much less dependent on this technology and could one day allow us to derive an implementation for another low-level API, such as D3D12.

The main object of the renderer is the Render::Target. A Render::Target is any surface on which we can render, e.g. a window or an offscreen image.

A Render::Target can have multiple Render::Views, each representing a fraction of the Render::Target and defined by a Render::View::Viewport and a Render::View::Scissor, as follows:

class Viewport {
public:
    struct {
        float x;
        float y;
    } offset;

    struct {
        float width;
        float height;
    } extent;

    float minDepth;
    float maxDepth;

    inline float getRatio() const;
};

struct Scissor {
    struct {
        float x;
        float y;
    } offset;

    struct {
        float width;
        float height;
    } extent;
};

Each component of Render::View::Viewport and Render::View::Scissor is defined as a percentage value (i.e. a float between 0.0 and 1.0), so a view keeps the same relative appearance whatever the size of the Render::Target.
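
For illustration, here is a minimal sketch of how such relative values can be converted to absolute pixel coordinates when filling a VkViewport (the helper name and the target-size parameters are assumptions, not part of the engine's API):

#include <cstdint>
#include <vulkan/vulkan.h>

// Hypothetical helper: converts a relative Viewport (values in [0.0, 1.0], as
// defined above) to an absolute VkViewport for a target of a given pixel size.
VkViewport toVkViewport(const Viewport& viewport, uint32_t targetWidth, uint32_t targetHeight) {
    VkViewport vkViewport{};
    vkViewport.x = viewport.offset.x * static_cast<float>(targetWidth);
    vkViewport.y = viewport.offset.y * static_cast<float>(targetHeight);
    vkViewport.width = viewport.extent.width * static_cast<float>(targetWidth);
    vkViewport.height = viewport.extent.height * static_cast<float>(targetHeight);
    vkViewport.minDepth = viewport.minDepth;
    vkViewport.maxDepth = viewport.maxDepth;
    return vkViewport;
}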

A Render::Camera can only be attached to a single Render::View, i.e. the same Render::Camera cannot be attached to two different Render::Views.

Each Render::Camera contains a Render::Queue and holds a pointer to a Scene::Scene, which is created by the user and can be attached to multiple cameras.

Every frame, the Render::Queue is cleared, then filled by the Scene::Scene with the objects visible within the Render::Camera's frustum.

The Render::Queue is finally sent to Vulkan::Render::Technique::Technique::render().
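
As an illustration of how the Render::Queue is filled from the camera's frustum, here is a minimal sketch of the general idea, assuming a frustum stored as six planes and objects with bounding spheres (all types and names below are hypothetical, not the engine's actual implementation):

#include <vector>
#include <glm/glm.hpp>

// Hypothetical types: a frustum stored as six planes (xyz = normal, w = distance)
// and objects described by a bounding sphere.
struct Frustum { glm::vec4 planes[6]; };
struct Object { glm::vec3 center; float radius; };

// Returns true if the bounding sphere is at least partially inside the frustum.
bool isVisible(const Frustum& frustum, const Object& object) {
    for (const glm::vec4& plane : frustum.planes) {
        if (glm::dot(glm::vec3(plane), object.center) + plane.w < -object.radius) {
            return false; // Entirely behind one plane: culled
        }
    }
    return true;
}

// Conceptual equivalent of the per-frame queue update described above.
void fillQueue(std::vector<const Object*>& queue, const Frustum& frustum, const std::vector<Object>& scene) {
    queue.clear(); // The queue is cleared every frame
    for (const Object& object : scene) {
        if (isVisible(frustum, object)) {
            queue.push_back(&object);
        }
    }
}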

Main classes of the renderer

In the diagram here, we represent the main classes of the renderer and their dependencies.

The diagram here shows an example of how classes interact with each other:

Example of a possible usage of the render views

Sequence diagrams

This section presents the rendering of a single frame with the help of two sequence diagrams, here and here. The second is a subset of the first; they have been separated to ease readability.

Rendering of a frame (part. 1)

Let us describe this sequence diagram, step by step:

First, UserApplication is the user-defined class that inherits from lug::Core::Application and defines the methods onEvent and onFrame. Application::run() must be called by the user, as in this example:

#include <cstdlib> // EXIT_SUCCESS / EXIT_FAILURE

int main(int argc, char* argv[]) {
    UserApplication app;
    
    if (!app.init(argc, argv)) {
        return EXIT_FAILURE;
    }
    
    if (!app.run()) {
        return EXIT_FAILURE;
    }
    
    return EXIT_SUCCESS;
}

The method Core::Application::run() is the main loop of the engine, which polls the events from the window and renders each frame. As expected, we can see that Core::Application polls all the events from the Render::Window and sends them to the UserApplication through the method UserApplication::onEvent(const lug::Window::Event& event).

Then, Core::Application calls the method Renderer::beginFrame(), which itself calls the method Render::Window::beginFrame() to notify the Render::Window that we are starting a new frame.

Finally, the user can update the logic of their application in the method UserApplication::onFrame(const lug::System::Time& elapsedTime).

At the end of the frame, the method Renderer::endFrame() is called: it calls the method Render::Target::render() on each Render::Target to draw it, then finishes the frame by calling the method Render::Window::endFrame() to notify the Render::Window that we are ending this frame.
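
Put together, Core::Application::run() conceptually boils down to a loop like the following sketch (the members _closed, _window, _renderer, _elapsedTime and the pollEvent() call are assumptions for illustration, not the engine's exact API):

// Conceptual sketch of lug::Core::Application::run(); member names are assumptions.
bool Application::run() {
    while (!_closed) {
        // Poll every pending event from the Render::Window and forward it to the user
        lug::Window::Event event;
        while (_window->pollEvent(event)) {
            onEvent(event);
        }

        // Notify the Render::Window that a new frame starts
        if (!_renderer->beginFrame()) {
            return false;
        }

        // Let the user update their application logic
        onFrame(_elapsedTime);

        // Render every Render::Target, then end the frame
        if (!_renderer->endFrame()) {
            return false;
        }
    }

    return true;
}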

Rendering of a frame (part. 2)

In the method Render::Target::render(), the Render::Target calls the method Render::View::render() for each enabled Render::View.

To be rendered, a Render::View needs to update its Render::Camera, which fetches all the elements for its Render::Queue from the scene with Scene::Scene::fetchVisibleObjects().

The Render::Queue then contains every element needed to render the Scene::Scene: meshes, models, lights, etc.

Then the Render::View can call the render technique to draw the elements in the Render::Queue (e.g. for Vulkan, a class inheriting from Vulkan::Render::Technique::Technique).
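
The flow of this second diagram can be summarized by the following sketch (member and method names are illustrative assumptions, except for the ones quoted above):

// Conceptual sketch of Render::Target::render(); names are assumptions.
bool RenderTarget::render() {
    for (auto& renderView : _renderViews) {
        if (!renderView->isEnabled()) {
            continue;
        }

        // Render::View::render() updates its Render::Camera, which clears its
        // Render::Queue and refills it via Scene::Scene::fetchVisibleObjects(),
        // then hands the queue over to the render technique.
        if (!renderView->render()) {
            return false;
        }
    }

    return true;
}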

Vulkan Rendering

Global

GPU Side

The Vulkan::Render::Window and the Vulkan::Render::Views of Lugdunum are pretty straightforward. For simplicity's sake, we have split the rendering of a frame into five steps:

Swapchain image acquisition and synchronization

Each arrow represents a Vulkan semaphore used for synchronization purposes.

  1. We get an available image from the swapchain.
  2. We change the layout of this image to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL.
  3. We render each Vulkan::Render::View in parallel.
  4. We change the layout of this image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR.
  5. We add the image to the presentation queue of the swapchain.

For steps 2 and 4 we use one Vulkan command buffer per image in the swapchain. Each of these command buffers is built beforehand, so we do not need to rebuild them each frame. Step 3 depends on the render technique used.

CPU Side

Since our semaphores are stored in a pool, we let each method (beginFrame(), endFrame(), …) select its own semaphore(s) to use.
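
Such a pool can be pictured as a simple container of pre-created semaphores handed out on demand; here is a minimal sketch (not the engine's actual class):

#include <vector>
#include <vulkan/vulkan.h>

// Minimal sketch of a semaphore pool: semaphores are created lazily, handed
// out on demand, and recycled once they are guaranteed not to be in use anymore.
class SemaphorePool {
public:
    explicit SemaphorePool(VkDevice device) : _device(device) {}

    VkSemaphore acquire() {
        if (_nextFree == _semaphores.size()) {
            VkSemaphoreCreateInfo createInfo{};
            createInfo.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;

            VkSemaphore semaphore = VK_NULL_HANDLE;
            vkCreateSemaphore(_device, &createInfo, nullptr, &semaphore);
            _semaphores.push_back(semaphore);
        }
        return _semaphores[_nextFree++];
    }

    // Called once the previously handed-out semaphores are no longer in use.
    void reset() {
        _nextFree = 0;
    }

private:
    VkDevice _device;
    std::vector<VkSemaphore> _semaphores;
    size_t _nextFree{0};
};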

Steps 1 & 2

The method Vulkan::Render::Window::beginFrame() accomplishes steps 1 and 2. It chooses one semaphore to be notified when the next image is available, and N semaphores to notify each Vulkan::Render::View when the image has changed layout (N being the number of Vulkan::Render::Views in the Vulkan::Render::Window).
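
A minimal sketch of what steps 1 and 2 boil down to with the raw Vulkan API, with error handling omitted and placeholder handle names:

#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// Sketch of steps 1 and 2 (handle names and the queue choice are placeholders).
uint32_t beginFrameSketch(VkDevice device, VkSwapchainKHR swapchain, VkQueue graphicsQueue,
                          VkSemaphore imageAvailable,
                          const std::vector<VkCommandBuffer>& toColorAttachmentCmdBuffers,
                          const std::vector<VkSemaphore>& viewLayoutSemaphores) {
    // Step 1: acquire the next available swapchain image, signaling `imageAvailable`
    uint32_t imageIndex = 0;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX, imageAvailable, VK_NULL_HANDLE, &imageIndex);

    // Step 2: submit the pre-built command buffer that transitions this image to
    // VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, waiting on the acquisition and
    // signaling one semaphore per Vulkan::Render::View (N semaphores)
    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

    VkSubmitInfo submitInfo{};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.waitSemaphoreCount = 1;
    submitInfo.pWaitSemaphores = &imageAvailable;
    submitInfo.pWaitDstStageMask = &waitStage;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &toColorAttachmentCmdBuffers[imageIndex];
    submitInfo.signalSemaphoreCount = static_cast<uint32_t>(viewLayoutSemaphores.size());
    submitInfo.pSignalSemaphores = viewLayoutSemaphores.data();
    vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE);

    return imageIndex;
}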

Step 3

The method Vulkan::Render::Window::render() is used to accomplish step 3. This method uses the N previous semaphores, one for each call to Vulkan::Render::View::render(). Each Vulkan::Render::View has a semaphore which is signaled when the view has finished rendering. We will explain how the render technique works in the next part.

Steps 4 & 5

The method Vulkan::Render::Window::endFrame() is used to accomplish steps 4 and 5. This method retrieves all the semaphores from the Vulkan::Render::View and chooses one semaphore to be notified when the image has changed layout.
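
Symmetrically, here is a sketch of steps 4 and 5 with the raw Vulkan API (again with placeholder handle names and no error handling):

#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// Sketch of steps 4 and 5 (handle names are placeholders).
void endFrameSketch(VkQueue graphicsQueue, VkQueue presentQueue, VkSwapchainKHR swapchain,
                    uint32_t imageIndex,
                    const std::vector<VkSemaphore>& viewRenderSemaphores,
                    const std::vector<VkCommandBuffer>& toPresentSrcCmdBuffers,
                    VkSemaphore readyToPresent) {
    // Step 4: transition the image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, waiting on the
    // semaphores signaled by every Vulkan::Render::View and signaling `readyToPresent`
    std::vector<VkPipelineStageFlags> waitStages(viewRenderSemaphores.size(),
                                                 VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT);

    VkSubmitInfo submitInfo{};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.waitSemaphoreCount = static_cast<uint32_t>(viewRenderSemaphores.size());
    submitInfo.pWaitSemaphores = viewRenderSemaphores.data();
    submitInfo.pWaitDstStageMask = waitStages.data();
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &toPresentSrcCmdBuffers[imageIndex];
    submitInfo.signalSemaphoreCount = 1;
    submitInfo.pSignalSemaphores = &readyToPresent;
    vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE);

    // Step 5: add the image to the presentation queue of the swapchain
    VkPresentInfoKHR presentInfo{};
    presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    presentInfo.waitSemaphoreCount = 1;
    presentInfo.pWaitSemaphores = &readyToPresent;
    presentInfo.swapchainCount = 1;
    presentInfo.pSwapchains = &swapchain;
    presentInfo.pImageIndices = &imageIndex;
    vkQueuePresentKHR(presentQueue, &presentInfo);
}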

Forward render technique

GPU Side

Forward technique

The Vulkan::Render::Technique::Forward uses two different Vulkan::Render::Queues: one transfer queue and one graphics queue.

The transfer Render::Queue is responsible for updating the data of the Render::Camera and the Light::Lights, each of which is contained in a uniform buffer (a Vulkan::API::Buffer) updated through different Vulkan::API::CommandBuffers (i.e. “Command buffer A” and “Command buffer B” in the schema above). These Vulkan::API::CommandBuffers are then submitted to the transfer Render::Queue.

Here is the structure of the uniform buffers for the camera and the lights:

// Camera
layout(set = 0, binding = 0) uniform cameraUniform {
    mat4 view;
    mat4 proj;
};

// Directional light
layout(set = 1, binding = 0) uniform lightUniform {
    vec3 lightAmbient;
    vec3 lightDiffuse;
    vec3 lightSpecular;
    vec3 lightDirection;
};

// Point light
layout(set = 1, binding = 0) uniform lightUniform {
    vec3 lightAmbient;
    float lightConstant;
    vec3 lightDiffuse;
    float lightLinear;
    vec3 lightSpecular;
    float lightQuadric;
    vec3 lightPos;
};

// Spot light
layout(set = 1, binding = 0) uniform lightUniform {
    vec3 lightAmbient;
    vec3 lightDiffuse;
    vec3 lightSpecular;
    float lightAngle;
    vec3 lightPosition;
    float lightOuterAngle;
    vec3 lightDirection;
};

Each type of light has a different pipeline using a different fragment shader, which is why all the light uniforms use the same binding point in the code sample above.
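
Note also the interleaving of vec3 and float members in the point-light block: under the std140 layout rules a vec3 is aligned on 16 bytes, so each float slots into the otherwise wasted fourth component. A possible CPU-side mirror of this block, assuming GLM (this struct is illustrative, not the engine's actual type), could be:

#include <glm/glm.hpp>

// Illustrative CPU-side mirror of the point-light uniform block.
// Each vec3 + float pair occupies 16 bytes, matching the std140 offsets.
struct PointLightUbo {
    glm::vec3 ambient;   float constant;   // bytes  0-15
    glm::vec3 diffuse;   float linear;     // bytes 16-31
    glm::vec3 specular;  float quadric;    // bytes 32-47
    glm::vec3 position;  float _padding;   // bytes 48-63
};
static_assert(sizeof(PointLightUbo) == 64, "unexpected padding in PointLightUbo");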

To pass the transformation matrix of the objects, we use a push constant:

layout (push_constant) uniform blockPushConstants {
    mat4 modelTransform;
} pushConstants;
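
On the CPU side, such a push constant is written with vkCmdPushConstants; here is a hedged sketch, assuming the pipeline layout declares a mat4 push constant range for the vertex stage:

#include <glm/glm.hpp>
#include <vulkan/vulkan.h>

// Push the model transformation matrix of an object before drawing it.
// `pipelineLayout` is assumed to declare a 64-byte push constant range
// visible to the vertex stage.
void pushModelTransform(VkCommandBuffer commandBuffer,
                        VkPipelineLayout pipelineLayout,
                        const glm::mat4& modelTransform) {
    vkCmdPushConstants(commandBuffer,
                       pipelineLayout,
                       VK_SHADER_STAGE_VERTEX_BIT,
                       0,                 // offset
                       sizeof(glm::mat4), // 64 bytes
                       &modelTransform);
}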

The graphics Render::Queue is responsible for all the rendering.

“Command buffer C”, used for drawing, depends on the two transfer command buffers by means of semaphores waited on at different stages of the pipeline: VK_PIPELINE_STAGE_VERTEX_INPUT_BIT for the camera and VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT for the lights.
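
In Vulkan terms, this kind of dependency is expressed through the pWaitDstStageMask of the submission; here is a minimal sketch with placeholder handle names:

#include <vulkan/vulkan.h>

// Submission of the drawing command buffer: it waits on the two transfer
// command buffers at the first pipeline stage that reads the transferred data.
void submitDrawSketch(VkQueue graphicsQueue, VkCommandBuffer drawCommandBuffer,
                      VkSemaphore cameraTransferSemaphore, VkSemaphore lightsTransferSemaphore) {
    VkSemaphore waitSemaphores[] = { cameraTransferSemaphore, lightsTransferSemaphore };
    VkPipelineStageFlags waitStages[] = {
        VK_PIPELINE_STAGE_VERTEX_INPUT_BIT,    // camera data
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT  // light data
    };

    VkSubmitInfo submitInfo{};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.waitSemaphoreCount = 2;
    submitInfo.pWaitSemaphores = waitSemaphores;
    submitInfo.pWaitDstStageMask = waitStages;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &drawCommandBuffer;
    vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE);
}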

CPU Side

Buffer Pool

The allocation of the uniform buffers is managed by Vulkan::Render::BufferPools, one for the camera and one for the lights.

As we do not want to perform many small allocations, the pool allocates a relatively large chunk of memory on the GPU, which itself contains many Vulkan::Render::BufferPool::SubBuffers.

A Vulkan::Render::BufferPool::SubBuffer is a portion of a bigger Vulkan::API::Buffer that can be allocated and freed from the pool, and bound to a command buffer without worrying about the rest of the Vulkan::API::Buffer.
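
Conceptually, the pool therefore hands out offsets inside one large buffer; the following simplified sketch (not the engine's actual class) shows the idea of this sub-allocation scheme:

#include <cstdint>
#include <vector>

// Simplified sketch of a buffer pool handing out sub-buffers, i.e. fixed-size
// slices (offset + size) of one large GPU buffer allocated up front.
struct SubBuffer {
    uint32_t offset; // Offset inside the large buffer
    uint32_t size;
};

class BufferPool {
public:
    BufferPool(uint32_t chunkSize, uint32_t subBufferSize)
        : _chunkSize(chunkSize), _subBufferSize(subBufferSize) {}

    // Returns a fresh slice, or reuses one previously released
    SubBuffer allocate() {
        if (!_freeList.empty()) {
            SubBuffer subBuffer = _freeList.back();
            _freeList.pop_back();
            return subBuffer;
        }
        SubBuffer subBuffer{_nextOffset, _subBufferSize};
        _nextOffset += _subBufferSize; // A real implementation would grow or fail past _chunkSize
        return subBuffer;
    }

    void free(const SubBuffer& subBuffer) {
        _freeList.push_back(subBuffer);
    }

private:
    uint32_t _chunkSize;
    uint32_t _subBufferSize;
    uint32_t _nextOffset{0};
    std::vector<SubBuffer> _freeList;
};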

Triple buffering

Because we are using triple buffering, we need a way to store data for a specific image. For that we have Vulkan::Render::Technique::Forward::FrameData, which contains everything needed to render one specific frame (command buffers, depth buffer, etc.). To avoid reusing a command buffer that is still in use, we synchronize access to it with a fence.
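
This is the classic pattern of waiting on a per-frame fence before touching that frame's resources again; here is a minimal sketch with the raw Vulkan calls (the fence parameter stands for the one stored in the corresponding FrameData):

#include <cstdint>
#include <vulkan/vulkan.h>

// Before reusing the command buffers of a given FrameData, wait until the GPU
// has finished the previous submission that used them.
void waitFrameSketch(VkDevice device, VkFence frameFence) {
    vkWaitForFences(device, 1, &frameFence, VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &frameFence);

    // The command buffers of this frame can now be safely rebuilt; the same fence
    // is handed back to vkQueueSubmit() so it is signaled when the GPU is done again.
}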

To share a Vulkan::Render::BufferPool::SubBuffer across frames, e.g. if the camera does not move, we have a way to reuse the same Vulkan::Render::BufferPool::SubBuffer: we associate the Vulkan::Render::BufferPool::SubBuffer with the object (camera or light) and test at the beginning of the frame whether we can reuse the previous one (i.e. whether the object has not changed since that Vulkan::Render::BufferPool::SubBuffer was last updated).

If it is not possible to reuse a previously allocated buffer, we allocate a new one from the Vulkan::Render::BufferPool.

Drawing Command Buffer

Here is the pseudo code that we use to build the drawing command buffer:

BeginCommandBuffer

# The viewport and scissor are provided by the render view
SetViewport
SetScissor

BeginRenderPass

# We can bind the uniform buffer of the camera early
# It is the same everywhere
BindDescriptorSet(Camera)

# All the lights influencing the rendering (visible to the screen)
Foreach Light
    # Each type of Light has a different pipeline
    BindPipeline(Light)
    
    # We can bind the uniform buffer of the light
    BindDescriptorSet(Light)
    
    # All the objects influenced by the light
    Foreach Object
        # Push the transformation matrix of the Object
        PushConstant(Object)
        
        # We use indexed draw, so we need to bind
        # the index and the vertex buffer of the object
        BindVertexBuffer(Object)
        BindIndexBuffer(Object)
        
        DrawIndexed(Object)
    EndForeach
EndForeach

EndRenderPass

EndCommandBuffer
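
For reference, here is a rough translation of the body of this pseudo code into raw Vulkan calls (a hedged sketch: all handles, descriptor sets and counts are placeholders, and the loops over lights and objects are flattened to a single iteration):

#include <glm/glm.hpp>
#include <vulkan/vulkan.h>

// Rough Vulkan equivalent of one iteration of the loops above.
void drawOneObjectSketch(VkCommandBuffer cmd, VkPipelineLayout pipelineLayout,
                         VkPipeline lightPipeline,
                         VkDescriptorSet cameraDescriptorSet, VkDescriptorSet lightDescriptorSet,
                         VkBuffer objectVertexBuffer, VkBuffer objectIndexBuffer,
                         uint32_t objectIndexCount, const glm::mat4& objectTransform) {
    // Camera uniform buffer (set = 0), identical for every draw
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0, 1, &cameraDescriptorSet, 0, nullptr);

    // Each type of light has its own pipeline; its uniform buffer lives in set = 1
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, lightPipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            1, 1, &lightDescriptorSet, 0, nullptr);

    // Transformation matrix of the object, passed as a push constant
    vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT,
                       0, sizeof(glm::mat4), &objectTransform);

    // Indexed draw: bind the vertex and index buffers of the object, then draw
    VkDeviceSize vertexOffset = 0;
    vkCmdBindVertexBuffers(cmd, 0, 1, &objectVertexBuffer, &vertexOffset);
    vkCmdBindIndexBuffer(cmd, objectIndexBuffer, 0, VK_INDEX_TYPE_UINT32);
    vkCmdDrawIndexed(cmd, objectIndexCount, 1, 0, 0, 0);
}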
