Introduction

This is a book (well, "book") about learning how to use OpenGL with Rust.

It's based on LearnOpenGL.com by Joey de Vries, which is for C++ OpenGL. I'm not associated with Joey at all; I just think that they made a cool thing, and I want to spread the knowledge to Rust folks as well.

OpenGL (OGL) is one particular flavor of GL within the larger GL family. There's also OpenGL ES (GLES), which is for embedded systems like phones and the Raspberry Pi, and there's WebGL, which is for GL in the browser.

OpenGL lets you draw things. 3D things. Actually if you orient a 3D thing properly it'll look 2D, so you can also draw 2D things if you want.

This book is a work in progress! Even the lessons that are "written" are probably written kinda badly. At the moment I'm more concerned with getting lessons out so that you can see the working code, and then kinda explaining in text what's going on after the fact.

  • Current Goal: The current goal is to get example programs for the "basics" lessons from LearnOpenGL.com into Rust versions, along with lesson explanation text for all example programs. At that point, development will probably take a break, but readers will have seen enough that they can begin adapting other OpenGL books and blogs to continue their education on their own.

Please file an issue in the repo if there's something you don't get and I'll try to improve that part of the book.

Also, you can maybe read LearnOpenGL.com to understand the thing, while you wait for me to get back to you.

You can get the code on GitHub!

Basics

TODO: maybe some of this can go into the Introduction? whatever.

We'll be using OpenGL 3.3. The latest version is 4.6, but we'll still be using 3.3. The main reason is that if we take a quick look at Mac's supported OpenGL versions, we can see that they support 3.3 on older hardware and 4.1 on newer hardware. Macs don't get OpenGL 4.6 like Windows and Linux do. Oh well. Feel free to use this book and then learn the stuff that got added after 3.3 if you don't care about supporting old Macs.

OpenGL is a specification. Not a specific implementation, just a specification. Each graphics card has its own driver, which has its own implementation of OpenGL. This means that you can run into bugs on one card that don't show up on other cards. Fun!

OpenGL is specified in terms of a series of C functions that you can call. They all affect a "Context". A GL context has all sorts of state inside. There's GL calls to draw things, but there's also a lot of calls to carefully set the state before the drawing happens. Both types of call are equally important to getting a picture on the screen.

So we'll be doing a lot of FFI calls. FFI calls are naturally unsafe, because the Rust compiler can't see what's going on over there.
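
To make that concrete, here's a minimal sketch (my own illustration, not the actual ogl33 internals) of what a GL call amounts to under the hood: the driver hands us a raw C function pointer at runtime, we transmute it to the right signature, and every call through it is unsafe because the compiler can't check any of it.

type GLbitfield = u32;
type GlClearFn = unsafe extern "system" fn(mask: GLbitfield);

// stand-in for SDL_GL_GetProcAddress / eglGetProcAddress / etc.
fn lookup(_name: &str) -> *const core::ffi::c_void {
  core::ptr::null()
}

fn main() {
  let p = lookup("glClear");
  if !p.is_null() {
    // past this point the compiler just has to trust us.
    let gl_clear: GlClearFn = unsafe { core::mem::transmute(p) };
    unsafe { gl_clear(0x0000_4000) }; // GL_COLOR_BUFFER_BIT
  }
}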

If you don't want to have to call unsafe code you can try luminance, or glium, or wgpu, or something like that. You don't have to call unsafe code to get a picture on the screen.

But if you want to know how people built those other libraries that let you do those cool things, you gotta learn this direct usage stuff.

Prior Knowledge

You should generally be familiar with all the topics covered in The Rust Programming Language, but you don't need to have them memorized. You can look things up again if you need to.

I usually tell folks that they should read The Rustonomicon before doing a lot of unsafe code. However, with GL you're not really doing a lot of hackery within Rust that could go wrong. It's just that the driver could explode in your face if you look at it funny. Or even if you don't, because drivers are just buggy sometimes. Oh well, that's life.

Libraries Used

As I start this project, this is what my Cargo.toml looks like.

[dependencies]
bytemuck = "1"
ogl33 = { version = "0.2.0", features = ["debug_error_checks"]}

[dev-dependencies]
beryllium = "0.2.0-alpha.4"
imagine = "0.0.5"

So the library itself, where we'll put our useful GL helpers, will depend on

  • ogl33, which gives us bindings to OpenGL.
    • It's similar to the gl crate (which loads OpenGL 4.6), but all functions and constants use their real names exactly as you'd see in C code. It makes it a lot easier to read books and blogs about OpenGL that are written for C (which is essentially all of them), and then quickly translate it to Rust.
  • bytemuck, which is a handy crate for casting around plain data types.

And then if you're not familiar with "dev-dependencies", that's bonus dependencies that tests and examples can use (but not bins!). Since our example programs will be examples in the examples/ directory, they'll be able to use "dev-dependencies" without that affecting the lib itself. That way if someone else wants to use the lib they can use just the lib in their own program, without having to also build the stuff we're using for our examples.

  • beryllium is an SDL2 wrapper. It dynamically links by default, so you'll need SDL2.dll in your path to run a program. You can swap this to static linking; I describe that at the end of the first lesson.
  • imagine is a PNG parser (not used right away, but soon enough).
  • ultraviolet is a graphics linear algebra crate (it's not in the Cargo.toml above yet; we'll add it once the math starts).
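
If it helps to picture how the lib and the examples fit together, the project layout we're assuming looks roughly like this (the example file name is hypothetical; each lesson program runs with something like cargo run --example triangle):

learn_opengl/
├── Cargo.toml
├── src/
│   └── lib.rs       <- the lib: our GL helpers (uses the normal dependencies)
└── examples/
    └── triangle.rs  <- lesson programs (these can also use the dev-dependencies)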

Full disclosure: I wrote almost all of the crates on the list. Other than ultraviolet, which was done by Fusha, because I'm a dummy who can't do math.

However, I'm writing the book, so I get to use my own crates while I do it. I think this is fair, and I'm also providing alternative suggestions for each one, so I don't feel bad about it.

Creating A Window

This part of the tutorial is very library specific, so I won't focus on it too much. Basically, we have to open a window, and we also need a GL context to go with that window. The details for this depend on what OS and windowing system you're using. In my case, beryllium is based on SDL2, so we have a nice cross-platform abstraction going for us.

Pre-Window Setup

On most platforms, you have to specify that you'll be using GL before you create the window, so that the window itself can be created with the correct settings to support GL once it's made.

First we turn on SDL itself:

use beryllium::*;

fn main() {
  let sdl = Sdl::init(init::InitFlags::EVERYTHING);

Then we set some attributes for the OpenGL Context that we want to use:

  sdl.set_gl_context_major_version(3).unwrap();
  sdl.set_gl_context_minor_version(3).unwrap();
  sdl.set_gl_profile(video::GlProfile::Core).unwrap();
  #[cfg(target_os = "macos")]
  {
    sdl
      .set_gl_context_flags(video::GlContextFlags::FORWARD_COMPATIBLE)
      .unwrap();
  }
  • The Core profile is a subset of the full features that the spec allows. An implementation must provide the Core profile, but it can also provide a Compatibility profile, which is the current spec version's features plus all the old stuff from previous versions.
  • The Forward Compatible flag means that all functions that a particular version considers to be "deprecated but available" are instead immediately unavailable. It's needed on Mac if you want a Core profile. On other systems you can have it or not and it doesn't make a big difference. The Khronos wiki suggests only setting it on Mac, so that's what I did.

Make The Window

Finally, once GL is all set, we can make our window.

In some libs you might make the window and then make the GL Context as a separate step (technically SDL2 lets you do this), but with beryllium it just sticks the window and the GL Context together as a single thing (glutin also works this way, I don't know about glfw).

  let win_args = video::CreateWinArgs {
        title: WINDOW_TITLE,
        width: 800,
        height: 600,
        allow_high_dpi: true,
        borderless: false,
        resizable: false,
  };

  let _win = sdl
    .create_gl_window(win_args)
    .expect("couldn't make a window and context");

Processing Events

Once we have a window, we can poll for events. In fact, if we don't poll for events promptly, the OS will usually decide that our application has stalled and tell the user they should kill the program. So we want to always be polling for those events.

Right now we just wait for a quit event (user clicked the X on the window, pressed Alt+F4, etc) and then quit when that happens.

  'main_loop: loop {
    // handle events this frame
    while let Some(event) = sdl.poll_events() {
        match event {
            (events::Event::Quit, _) => break 'main_loop,
            _ => (),
        }
    }
    // now the events are clear

    // here's where we could change the world state and draw.
  }

Done!

That's all there is to it for this lesson. Just a milk run.

Extras

I'm developing mostly on Windows, and Windows is where most of your users will end up being, so here are some bonus Windows tips:

Windows Subsystem

I'm going to put the following attribute at the top of the file:

#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]

This makes it so that a "release" build (with the --release flag) will use the "windows" subsystem on Windows, instead of the "console" subsystem. This makes the process not have a console by default, which prevents a little terminal window from running in the background when the program runs on its own. However, we only want that in release mode, because we want the ability to print debug messages in debug mode.

Static Linking SDL2

Finally, instead of dynamic linking with SDL2 we could static link with it.

All we have to do to static link SDL2 instead is change our Cargo.toml file so that instead of saying

beryllium = "0.2.0-alpha.4"

it says

beryllium = { version = "0.2.0-alpha.4", default-features = false, features = ["link_static"] }

However, when we do this we have to build the SDL2 static lib ourselves, which takes longer (about +30 seconds). So I leave it as dynamic linking during development, because it makes builds and CI go faster.

Drawing A Triangle

In this lesson, we'll do a lot of setup just to be able to draw a single triangle.

Don't worry, once you do the first batch of setup, drawing that second triangle is easy.

Load The OpenGL Functions

Unlike most libraries that you can use in a program, OpenGL cannot be statically linked to. Well, you can static link to very old versions, but any sort of newer OpenGL library is installed on the user's system as a dynamic library that you load at runtime. This way the user can get their video driver updates, and your program just loads in the new driver file the next time it runs.

The details aren't too important to the rest of what we want to do, so I won't discuss it here. Perhaps an appendix page or something at some point in the future. The ogl33 crate handles it for us. As a reminder, you could also use the gl or glow crates.

After we open the window, we just say that we want to load up every OpenGL function.

unsafe {
  load_gl_with(|f_name| win.get_proc_address(f_name));
}

Set The "Clear" Screen Color

When we clear the previous image's data at the start of our drawing, by default it clears to black. Since we'll only have one thing at a time to draw for a little bit, let's use a slightly softer sort of color.

We just need to call glClearColor with the red, green, blue, and alpha intensities that we want to use.

unsafe {
  glClearColor(0.2, 0.3, 0.3, 1.0);
}

This is a blue-green sort of color that's only a little bit away from being gray. You can kinda tell that even before we open the window. The channel values are all close (which is gray), but there's a little less red, so it tilts towards being a blue-green.

The alpha value isn't important for now because our window itself isn't transparent (so you can't see pixels behind it) and we're not doing any color blending yet (so the alpha of the clear color compared to some other color doesn't come into play). Eventually it might matter, so we'll just leave it on "fully opaque" for now.

Send A Triangle

At this point there's two main actions we need to take before we're ready for our triangle to be drawn.

  • We need to get some triangle data to the video card in a way it understands.
  • We need to get a program to the video card so that it can make use of the data.

Neither task depends on the other, so we'll send our triangle data first and then send our program.

Generate A Vertex Array Object

A Vertex Array Object (VAO) is an object that collects together a few different bits of stuff. Basically, at any given moment there either is a Vertex Array Object "bound", meaning it's the active one, or there is not one bound, which makes basically all commands that relate to buffering data and describing data invalid. Since we want to buffer some data and describe it, we need to have a VAO bound.

You make a vertex array object with glGenVertexArrays. It takes the length of an array to fill, and a pointer to the start of that array. Then it fills the array with the names of a bunch of new VAOs. You're allowed to make a lot of vertex arrays at once if you want, but we just need one for now. Luckily, a pointer to just one thing is the same as a pointer to an array of length 1.

Also, glGenVertexArrays shouldn't ever return 0, but if some sort of bug happened it could, so we'll throw in a little assert just to check that.

unsafe {
  let mut vao = 0;
  glGenVertexArrays(1, &mut vao);
  assert_ne!(vao, 0);
}

Once we have a VAO we can "bind" it with glBindVertexArray to make it the active VAO. This is a context wide effect, so now all GL functions in our GL context will do whatever they do with this VAO as the VAO to work with.

As a note: you can also bind the value 0 at any time, which clears the vertex array binding. This might sound a little silly, but it can help spot bugs in some situations. If you have no VAO bound when you try to call VAO affected functions it'll generate an error, which usually means that you forgot to bind the VAO that you really did want to affect.
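
In code, clearing the binding is just binding the 0 value:

unsafe {
  glBindVertexArray(0);
}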

Generate A Vertex Buffer Object

To actually get some bytes of data to the video card we need a Vertex Buffer Object (VBO) to go with our Vertex Array Object. You might get sick of the words "vertex" and "object" by the time you've read this whole book.

This time things are a little different than with the VAO. Instead of calling a function to make and bind specifically a vertex buffer object, there's just a common function to make and bind buffers of all sorts. It's called glGenBuffers, and it works mostly the same as glGenVertexArrays did: you pass a length and a pointer, and it fills an array.

unsafe {
  let mut vbo = 0;
  glGenBuffers(1, &mut vbo);
  assert_ne!(vbo, 0);
}

Now that we have a buffer, we can bind it to the binding target that we want. glBindBuffer takes a target name and a buffer. As you can see on that page, there's a whole lot of options, but for now we just want to use the GL_ARRAY_BUFFER target.

unsafe {
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
}

And, similar to the VAO's binding process, now that our vertex buffer object is bound to the GL_ARRAY_BUFFER target, all commands using that target will operate on the buffer that we just made.

(Is this whole binding thing a dumb way to design an API? Yeah, it is. Oh well.)

Now that we have a buffer bound as the GL_ARRAY_BUFFER, we can finally use glBufferData to actually send over some data bytes. We have to specify the binding target we want to buffer to, the number of bytes we want to buffer (as an isize), the const pointer to the start of the data we're buffering, and the usage hint.

Most of that is self explanatory, except the usage hint. Basically there's memory that's faster or slower for the GPU to use or the CPU to use. If we hint to the GPU how we intend to use the data and how often we intend to update it then it has a chance to make a smarter choice of where to put the data. You can see all the options on the glBufferData spec page. For our first demo we want GL_STATIC_DRAW, since we'll just be sending the data once, and then GL will draw with it many times.

But what data do we send?

Demo Vertex Data

We're going to be sending this data:

type Vertex = [f32; 3];
const VERTICES: [Vertex; 3] =
  [[-0.5, -0.5, 0.0], [0.5, -0.5, 0.0], [0.0, 0.5, 0.0]];

It describes a triangle in Normalized Device Coordinates (NDC). Each vertex is an [X, Y, Z] triple, and we have three vertices.

We can also use size_of_val to get the byte count, and as_ptr followed by cast to get a pointer of the right type. In this case, GL wants a "void pointer", which isn't a type that exists in Rust, but it's what C calls a "pointer to anything". Since the buffer function needs to be able to accept anything you want to buffer, it takes a void pointer.

unsafe {
  glBufferData(
    GL_ARRAY_BUFFER,
    size_of_val(&VERTICES) as isize,
    VERTICES.as_ptr().cast(),
    GL_STATIC_DRAW,
  );
}

Good to go!

Enable A Vertex Attribute

How will the GPU know the correct way to use the bytes we just sent it? Good question. We describe the "vertex attributes" and then it'll be able to interpret the bytes correctly.

For each vertex attribute we want to describe we call glVertexAttribPointer. There's just one attribute for now (the position of the vertex), so we'll make just one call.

  • The index is the attribute we're describing. Your selection here has to match with the shader program that we make later on. We'll just use 0.
  • The size is the number of components in the attribute. Since each position is a 3D XYZ position, we put 3.
  • The type is the type of data for the attribute. Since we're using f32 we pass GL_FLOAT.
  • The normalized setting has to do with fixed-point data values. That's not related to us right now, so we just leave it as GL_FALSE.
  • The stride is the number of bytes from the start of this attribute in one vertex to the start of the same attribute in the next vertex. Since we have only one attribute right now, that's just size_of::<f32>() * 3. Alternately, we can use size_of::<Vertex>() and when we edit our type alias at the top later on this vertex attribute value will automatically be updated for us.
  • The pointer value is, a little confusingly, not a pointer to anywhere in our memory space. Instead, it's a pointer to the start of this vertex attribute within the buffer as if the buffer itself were starting at memory location 0. Little strange, but whatever. Since this attribute is at the start of the vertex, we use 0. When we have more attributes later, all the attributes will usually end up with the same stride but different pointer values. I'll be sure to review this point again later, because it's a little weird (there's a short preview sketch after the code below).

Once we've described the vertex attribute pointer, we also need to enable it with glEnableVertexAttribArray. It just takes the name of the index to enable, so we pass 0.

Also, when we provide the stride it wants a GLsizei (an i32), and Rust always uses usize for sizes, so we have to convert there. In this case we'll use the TryInto::try_into trait method, along with an unwrap. It should work, but if somehow it would have overflowed, it's better to explode in a controlled manner now than cause the GPU to read memory way out of bounds at some unknown point later.

Also also, we have to convert the pointer location using usize values and then cast to a const pointer once we have our usize. We do not want to make a null pointer and then offset it with the offset method. That's gonna generate an out of bounds pointer, which is UB. We could try to remember to use the wrapping_offset method, or we could just do all the math in usize and then cast at the end. I sure know which one I prefer.

unsafe {
  glVertexAttribPointer(
    0,
    3,
    GL_FLOAT,
    GL_FALSE,
    size_of::<Vertex>().try_into().unwrap(),
    0 as *const _,
  );
  glEnableVertexAttribArray(0);
}
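
As a preview of where this goes (a hypothetical layout, not code we need yet): if each vertex later grows to hold a position and an RGB color, both attributes share the vertex's stride, and only the pointer value changes.

type Vertex = [f32; 6]; // x, y, z, r, g, b (hypothetical)

unsafe {
  let stride = size_of::<Vertex>().try_into().unwrap();
  // attribute 0: position, starting at byte 0 of each vertex.
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, 0 as *const _);
  glEnableVertexAttribArray(0);
  // attribute 1: color, starting after the three position floats.
  let color_offset = size_of::<f32>() * 3;
  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, color_offset as *const _);
  glEnableVertexAttribArray(1);
}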

Send A Program

Okay, we have some bytes sent to the GPU, and the GPU knows that it's a series of vertexes which are each three f32 values. How does it know what to do from there? Again with the good questions.

When your GPU draws a picture, that's called the "graphics pipeline". Some parts of the pipeline are totally fixed, or you can pick from one of a few options. The rest is done by a "shader program".

We need to make a Program Object, compile and attach some shader stages to it, link the stages together, and then use that program.

Of course, to attach those compiled shader stages we need to make some Shader Objects too. It's objects all the way down!

Create A Vertex Shader

First we want a Vertex Shader.

This time we're not calling a "gen" style method with an array to fill and getting a huge array of new shaders. GL assumes that you'll use sufficiently few shaders that you can make them one at a time, so we call glCreateShader with a shader type and we get just one shader back. Or 0 if there was an error.

If you look at the spec page there (and you should naturally have at least a quick look at all of the spec pages I'm linking for you!), then you'll see that there's a lot of types of shader! We only actually need two of them to get our program going. Actually most GL programs will just use the Vertex and Fragment shader. Even like complete products that aren't just demos. Vertex and Fragment are essential, the others are optional and specialized.

One vertex shader please.

unsafe {
  let vertex_shader = glCreateShader(GL_VERTEX_SHADER);
  assert_ne!(vertex_shader, 0);
}

Thank you.

Now we need to upload some source code for this shader. The source code needs to be written in a language called GLSL. Let's go with a vertex shader that's about as simple as you can possibly get with a vertex shader:

const VERT_SHADER: &str = r#"#version 330 core
  layout (location = 0) in vec3 pos;
  void main() {
    gl_Position = vec4(pos.x, pos.y, pos.z, 1.0);
  }
"#;

That's one long string literal with a lot of stuff inside it.

Inspecting The Vertex Source

The first line of the vertex shader is #version 330 core. This has to be the very first line; it identifies the version of the GLSL language that your program is written for. In the same way that each version of OpenGL adds a little more stuff you can do, each version of GLSL has a little more you can do too. Version 330 is the version that goes with OpenGL 3.3, and we're using the core profile.

Now we get to the actual interesting bits. The job of the vertex shader is to read in the vertex attribute values from the buffer, do whatever, and then write to gl_Position with the position that this vertex should end up at.

layout (location = 0) in vec3 pos;

This specifies that at attribute index 0 within the buffer (remember how we set vertex attribute 0 before?) there's an in variable, of type vec3, which we're going to call pos.

void main() {
  gl_Position = vec4(pos.x, pos.y, pos.z, 1.0);
}

Like with Rust and C, GLSL programs start at main. Our main function reads the x, y, and z of the vertex position, and then sticks a 1.0 on the end, and writes that vec4 into the gl_Position variable. It just copies over the data, no math or anything. Not the most exciting. We'll have plenty of math later, don't worry.

Upload The Vertex Shader Source, and Compile

Now that we've got some source, we need to send it over. For this we use glShaderSource, which is a little tricky to get right the first time. The first argument is the name of the shader to set the source for. Next we have to describe the string data sorta like with glBufferData, but the format is a little wonky. It wants a count, and then two arrays of that length: the first array is full of pointers to string data, while the second array is full of the lengths of each string. This is supposed to allow you to... I dunno. It's some sort of C nonsense.

What we do in Rust is this:

unsafe {
  glShaderSource(
    vertex_shader,
    1,
    &(VERT_SHADER.as_bytes().as_ptr().cast()),
    &(VERT_SHADER.len().try_into().unwrap()),
  );
}

Ah, look a little weird? Yeah, it's still a little weird. So what's happening is that first we're saying that our array of strings and our array of string lengths will both have length 1. Like with glGenBuffers.

Then we're passing a pointer to the pointer of the start of the string. So we write &(expr), with the parentheses making it obvious at a glance that the & applies to the result of the whole expression rather than to VERT_SHADER itself (method calls already bind tighter than &, but with this much going on it's worth being explicit).

Then, for the length we do basically the same thing. We take a pointer to the length after getting the string length as an i32 value.

Once that string data is uploaded we call glCompileShader to tell GL to compile it, and we're home free.

unsafe {
  glCompileShader(vertex_shader);
}

Check For An Error

I lied just now, we're not home free.

Obviously, the one thing I'm very sure that you know about programming, is that sometimes when you compile a program there's an error. Maybe you spelled a word wrong, maybe a type didn't match, whatever. Anything could go wrong, so we have to check for that.

The checking process is actually more annoying than the compilation!

First we use glGetShaderiv. The iv part means "int" "vector": instead of returning the value, it writes an integer out through a pointer we provide. We have to pass the name of the shader we want info on, the GL_COMPILE_STATUS specifier to get the compile status, and a pointer that they can write to so we can get a value back. Side note: out-parameter pointers are terrible, please never design your API this way.

unsafe {
  let mut success = 0;
  glGetShaderiv(vertex_shader, GL_COMPILE_STATUS, &mut success);
}

So this success value is bool-style: 1 for yes and 0 for no. You could also compare against GL_TRUE and GL_FALSE, but the types won't match up, and unlike C, Rust doesn't do automatic conversion, so we'll just check for 0 (no success).

If there was not a success, then the real fun begins. That means we have to get a message out of the shader log.

We could check the info log length with GL_INFO_LOG_LENGTH, then allocate a perfectly sized buffer and have them write to the buffer. However, that gives us a Vec<u8> (or Vec<c_char> if you want), and then we convert that to String. I like to use String::from_utf8_lossy when I've got unknown bytes, which allocates its own buffer anyway, so we'll just allocate 1k of Vec and assume that the log length is 1024 or less.

So we call glGetShaderInfoLog, with the shader we want the info log for, the maximum capacity of our buffer, a pointer to the spot where it will store the number of bytes written, and the pointer to the buffer of course. Then we set the length of the Vec, convert to String, and panic! (at the disco) with that error message.

unsafe {
  if success == 0 {
    let mut v: Vec<u8> = Vec::with_capacity(1024);
    let mut log_len = 0_i32;
    glGetShaderInfoLog(
      vertex_shader,
      1024,
      &mut log_len,
      v.as_mut_ptr().cast(),
    );
    v.set_len(log_len.try_into().unwrap());
    panic!("Vertex Compile Error: {}", String::from_utf8_lossy(&v));
  }
}

Create A Fragment Shader

Making a Fragment Shader is nearly identical to making a vertex shader, except we pass a different shader type. Also, we have some different source code of course.

unsafe {
  let fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
  assert_ne!(fragment_shader, 0);
}

And the fragment source looks like this:

const FRAG_SHADER: &str = r#"#version 330 core
  out vec4 final_color;

  void main() {
    final_color = vec4(1.0, 0.5, 0.2, 1.0);
  }
"#;

Inspecting The Fragment Source

Again we have a version line, always nice to have versions.

out vec4 final_color;

This says that we're going to output a vec4, and we'll call it final_color. The gl_Position variable in the vertex shader is just assumed to be there, since every vertex shader needs to write a position out. With fragment shaders, the system will instead assume that whatever vec4 your fragment shader puts out, whatever its name, is the output color.

void main() {
  final_color = vec4(1.0, 0.5, 0.2, 1.0);
}

Here, the color is a kind of orange color, and it's the same everywhere. Anywhere we have a fragment, we'll have an orange pixel.

I assure you that both vertex and fragment shaders will become more complex as we go, but if you just want to draw anything it's this simple.

Upload The Fragment Shader Source

And we upload and compile like before:

unsafe {
  glShaderSource(
    fragment_shader,
    1,
    &(FRAG_SHADER.as_bytes().as_ptr().cast()),
    &(FRAG_SHADER.len().try_into().unwrap()),
  );
  glCompileShader(fragment_shader);
}

Check For An Error, Again

And we check for an error like before:

unsafe {
  let mut success = 0;
  glGetShaderiv(fragment_shader, GL_COMPILE_STATUS, &mut success);
  if success == 0 {
    let mut v: Vec<u8> = Vec::with_capacity(1024);
    let mut log_len = 0_i32;
    glGetShaderInfoLog(
      fragment_shader,
      1024,
      &mut log_len,
      v.as_mut_ptr().cast(),
    );
    v.set_len(log_len.try_into().unwrap());
    panic!("Fragment Compile Error: {}", String::from_utf8_lossy(&v));
  }
}

This is all a very good candidate for wrapping into an easier to use function, but we'll get to that after we can at least see a triangle.

Create A Program

A program combines several shader "stages" such as vertex and fragment, and lets you have a completed graphics pipeline.

We use glCreateProgram to create one, and then we use glAttachShader to connect both shaders we have so far. Finally we call glLinkProgram to connect the shader stages into a single, usable whole.

unsafe {
  let shader_program = glCreateProgram();
  glAttachShader(shader_program, vertex_shader);
  glAttachShader(shader_program, fragment_shader);
  glLinkProgram(shader_program);
}

And we have to check the GL_LINK_STATUS with glGetProgramiv, and grab the link error log if there was a link error.

unsafe {
  let mut success = 0;
  glGetProgramiv(shader_program, GL_LINK_STATUS, &mut success);
  if success == 0 {
    let mut v: Vec<u8> = Vec::with_capacity(1024);
    let mut log_len = 0_i32;
    glGetProgramInfoLog(
      shader_program,
      1024,
      &mut log_len,
      v.as_mut_ptr().cast(),
    );
    v.set_len(log_len.try_into().unwrap());
    panic!("Program Link Error: {}", String::from_utf8_lossy(&v));
  }
}

Finally, and this part is a little weird, we can mark the shaders to be deleted with glDeleteShader. They won't actually get deleted until they're unattached from the program we have, but we can call delete now and worry about one less thing later on.

unsafe {
  glDeleteShader(vertex_shader);
  glDeleteShader(fragment_shader);
}

Finally, after all that, we can call glUseProgram to set our program as the one to use during drawing.
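
That call is mercifully simple, no error-check dance required:

unsafe {
  glUseProgram(shader_program);
}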

Vsync

Last thing before we move on to the main loop: let's turn on vsync, which will make our swap_window call block the program until the image has actually been presented to the user. This makes the whole program run no faster than the screen's refresh rate, usually at least 60fps (sometimes more these days). This is usually a good thing. We can't show the user images faster than the screen will present them anyway, so we can let the CPU cool down a bit, and maybe even save some battery if they're on a laptop.

// this goes any time after window creation.
win.set_swap_interval(SwapInterval::Vsync);

Clear The Screen

In the main loop, after we process our events, we start our drawing with a call to glClear. In this case we specify the GL_COLOR_BUFFER_BIT, since we want to clear the color values. You could clear the other bits too, but since we're not using them right now we'll just clear the colors.

unsafe {
  glClear(GL_COLOR_BUFFER_BIT);
}

Draw The Triangle

To actually draw our triangle we call glDrawArrays.

  • The mode is how to connect the vertexes together. We use GL_TRIANGLES which makes it process the vertexes in batches of 3 units each into however many triangles that gets you.
  • The first value is the first vertex index to use within our vertex buffer data. Since we want to draw all three of our vertexes, we start at index 0.
  • The count value is the number of indices to be drawn. Since we want to draw all three of our vertexes, we use 3.

unsafe {
  glDrawArrays(GL_TRIANGLES, 0, 3);
}

Be extra careful with this call. If you tell it to draw too many triangles the GPU will run right off the end of the array and segfault the program.

Swap The Window Buffers

Once the drawing is done, we have to swap the window's draw buffer and display buffer, with swap_window. This will make the picture we just drew actually be displayed to the user. With vsync on it'll also block until the image is actually displayed.

win.swap_window();

Done!

Triangle Cleanup

Now that we can see the basics of what's going on we're going to do a bit of clean up. This won't change what we're drawing, it'll just help us sort out the easy stuff (which we can mark safe and then worry about a lot less) from the unsafe stuff (which we will always have to pay attention to).

From here on, the examples will all have

use learn_opengl as learn;

We'll use our helpers via learn::func_name(). You could of course import the functions and then leave off the prefix, but in tutorial code you always want to aim for a little more clarity than is strictly necessary.

First, A Note On Using glGetError

The ogl33 crate will automatically call glGetError after each GL call if the debug_error_checks feature is enabled along with debug_assertions. This means that we don't have to call glGetError ourselves to see any errors get reported when we're testing the program. However, if we wanted to check errors without debug_assertions on, then we'd have to call glGetError manually. Or if you were using a crate to load and call GL other than ogl33, I guess.

The way that glGetError works is pretty simple: you call it, and you get a value back. If there are no pending errors you get GL_NO_ERROR; if there's a pending error you get some other value. However, depending on the driver, there might be more than one error pending at once. So you should call glGetError until you finally get GL_NO_ERROR.
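
A manual drain would look something like this (a little sketch of our own, not an ogl33 helper):

unsafe {
  loop {
    let err = glGetError();
    if err == GL_NO_ERROR {
      break;
    }
    println!("GL error pending: 0x{:X}", err);
  }
}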

Setting The Clear Color

Making glClearColor safe is easy, there's nothing that can go wrong:

/// Sets the color to clear to when clearing the screen.
pub fn clear_color(r: f32, g: f32, b: f32, a: f32) {
  unsafe { glClearColor(r, g, b, a) }
}

and then in the example we'd call it like this:

learn::clear_color(0.2, 0.3, 0.3, 1.0);

Vertex Array Objects

With the Vertex Array Object stuff, we're just wrapping the name in our own type and then giving methods for the operations that go with it. However, we don't yet know all of the functions that we might need to use, so we'll keep the inner value public and we can just pull that out at any time if we need to.

We'll want a way to make them, and to bind them.

/// Basic wrapper for a [Vertex Array
/// Object](https://www.khronos.org/opengl/wiki/Vertex_Specification#Vertex_Array_Object).
pub struct VertexArray(pub GLuint);
impl VertexArray {
  /// Creates a new vertex array object
  pub fn new() -> Option<Self> {
    let mut vao = 0;
    unsafe { glGenVertexArrays(1, &mut vao) };
    if vao != 0 {
      Some(Self(vao))
    } else {
      None
    }
  }

  /// Bind this vertex array as the current vertex array object
  pub fn bind(&self) {
    unsafe { glBindVertexArray(self.0) }
  }

  /// Clear the current vertex array object binding.
  pub fn clear_binding() {
    unsafe { glBindVertexArray(0) }
  }
}

Then we use it like this:

let vao = VertexArray::new().expect("Couldn't make a VAO");
vao.bind();

Buffers

For buffers it's a little more tricky, because we have to make sure that we don't design too heavily around just vertex buffers and block ourselves from easily using other types of buffers. In fact, since we'll want to use ElementArray buffers in the next lesson, we can add that to a BufferType enum now.

/// The types of buffer object that you can have.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum BufferType {
  /// Array Buffers holds arrays of vertex data for drawing.
  Array = GL_ARRAY_BUFFER as isize,
  /// Element Array Buffers hold indexes of what vertexes to use for drawing.
  ElementArray = GL_ELEMENT_ARRAY_BUFFER as isize,
}

Then the buffers themselves will accept a BufferType argument when we bind.

/// Basic wrapper for a [Buffer
/// Object](https://www.khronos.org/opengl/wiki/Buffer_Object).
pub struct Buffer(pub GLuint);
impl Buffer {
  /// Makes a new vertex buffer
  pub fn new() -> Option<Self> {
    let mut vbo = 0;
    unsafe {
      glGenBuffers(1, &mut vbo);
    }
    if vbo != 0 {
      Some(Self(vbo))
    } else {
      None
    }
  }

  /// Bind this vertex buffer for the given type
  pub fn bind(&self, ty: BufferType) {
    unsafe { glBindBuffer(ty as GLenum, self.0) }
  }

  /// Clear the current vertex buffer binding for the given type.
  pub fn clear_binding(ty: BufferType) {
    unsafe { glBindBuffer(ty as GLenum, 0) }
  }
}

Finally, to buffer some data, we'll leave that as a free function. It'll take a buffer type and a slice of bytes, and a usage. I don't think we really need to make a special enum for usage values, so we'll just keep using GLenum for the usage argument.

/// Places a slice of data into a previously-bound buffer.
pub fn buffer_data(ty: BufferType, data: &[u8], usage: GLenum) {
  unsafe {
    glBufferData(
      ty as GLenum,
      data.len().try_into().unwrap(),
      data.as_ptr().cast(),
      usage,
    );
  }
}

And the usage code looks like this:

let vbo = Buffer::new().expect("Couldn't make a VBO");
vbo.bind(BufferType::Array);
learn::buffer_data(
  BufferType::Array,
  bytemuck::cast_slice(&VERTICES),
  GL_STATIC_DRAW,
);

The bytemuck crate is a handy crate for safe casting operations. In this case, it's letting us cast our &[[f32;3]] into &[u8].
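
As a tiny self-contained illustration of that cast (illustrative values, not our real triangle data):

let verts: [[f32; 3]; 1] = [[0.5, -0.5, 0.0]];
let bytes: &[u8] = bytemuck::cast_slice(&verts);
assert_eq!(bytes.len(), 3 * 4); // 3 floats, 4 bytes each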

Vertex Attribute Pointers

This stuff is wild!

It's actually really hard to come up with a general vertex attribute pointer system that works with arbitrary rust data type inputs and also always lines up with the shaders you're using... so I'm not even going to bother.

It's okay to have a few unsafe parts where you just always pay attention to what you're doing.

Shaders and Programs

So obviously we want a shader type enum:

/// The types of shader object.
pub enum ShaderType {
  /// Vertex shaders determine the position of geometry within the screen.
  Vertex = GL_VERTEX_SHADER as isize,
  /// Fragment shaders determine the color output of geometry.
  ///
  /// Also other values, but mostly color.
  Fragment = GL_FRAGMENT_SHADER as isize,
}

And then... well what we really want is to say to our library: "I have this string and it's a shader of this type, just make it happen".

/// A handle to a [Shader
/// Object](https://www.khronos.org/opengl/wiki/GLSL_Object#Shader_objects)
pub struct Shader(pub GLuint);
impl Shader {
  pub fn from_source(ty: ShaderType, source: &str) -> Result<Self, String> {
    unimplemented!()
  }
}

Like that's the final interface we want to have, right? But to support that operation we probably want to make each individual operation a little easier to use. That way we can think about the bigger operation in terms of easy to use smaller operations. Sometimes having too many middle layers can hide a detail that you don't want hidden, but this is just a little extra in the middle so it's fine.

I'm just gonna throw it all down because you've seen it before and there's not much new to comment on.

impl Shader {
  /// Makes a new shader.
  ///
  /// Prefer the [`Shader::from_source`](Shader::from_source) method.
  ///
  /// Possibly skip the direct creation of the shader object and use
  /// [`ShaderProgram::from_vert_frag`](ShaderProgram::from_vert_frag).
  pub fn new(ty: ShaderType) -> Option<Self> {
    let shader = unsafe { glCreateShader(ty as GLenum) };
    if shader != 0 {
      Some(Self(shader))
    } else {
      None
    }
  }

  /// Assigns a source string to the shader.
  ///
  /// Replaces any previously assigned source.
  pub fn set_source(&self, src: &str) {
    unsafe {
      glShaderSource(
        self.0,
        1,
        &(src.as_bytes().as_ptr().cast()),
        &(src.len().try_into().unwrap()),
      );
    }
  }

  /// Compiles the shader based on the current source.
  pub fn compile(&self) {
    unsafe { glCompileShader(self.0) };
  }

  /// Checks if the last compile was successful or not.
  pub fn compile_success(&self) -> bool {
    let mut compiled = 0;
    unsafe { glGetShaderiv(self.0, GL_COMPILE_STATUS, &mut compiled) };
    compiled == i32::from(GL_TRUE)
  }

  /// Gets the info log for the shader.
  ///
  /// Usually you use this to get the compilation log when a compile failed.
  pub fn info_log(&self) -> String {
    let mut needed_len = 0;
    unsafe { glGetShaderiv(self.0, GL_INFO_LOG_LENGTH, &mut needed_len) };
    let mut v: Vec<u8> = Vec::with_capacity(needed_len.try_into().unwrap());
    let mut len_written = 0_i32;
    unsafe {
      glGetShaderInfoLog(
        self.0,
        v.capacity().try_into().unwrap(),
        &mut len_written,
        v.as_mut_ptr().cast(),
      );
      v.set_len(len_written.try_into().unwrap());
    }
    String::from_utf8_lossy(&v).into_owned()
  }

  /// Marks a shader for deletion.
  ///
  /// Note: This _does not_ immediately delete the shader. It only marks it for
  /// deletion. If the shader has been previously attached to a program then the
  /// shader will stay allocated until it's unattached from that program.
  pub fn delete(self) {
    unsafe { glDeleteShader(self.0) };
  }

  /// Takes a shader type and source string and produces either the compiled
  /// shader or an error message.
  ///
  /// Prefer [`ShaderProgram::from_vert_frag`](ShaderProgram::from_vert_frag),
  /// it makes a complete program from the vertex and fragment sources all at
  /// once.
  pub fn from_source(ty: ShaderType, source: &str) -> Result<Self, String> {
    let id = Self::new(ty)
      .ok_or_else(|| "Couldn't allocate new shader".to_string())?;
    id.set_source(source);
    id.compile();
    if id.compile_success() {
      Ok(id)
    } else {
      let out = id.info_log();
      id.delete();
      Err(out)
    }
  }
}

So with the Program, again we want to have some sort of thing where we just hand over two source strings and it makes it and we don't worry about all the middle steps.

pub struct ShaderProgram(pub GLuint);
impl ShaderProgram {
  pub fn from_vert_frag(vert: &str, frag: &str) -> Result<Self, String> {
    unimplemented!()
  }
}

But to do that we need to support all the middle steps:

/// A handle to a [Program
/// Object](https://www.khronos.org/opengl/wiki/GLSL_Object#Program_objects)
pub struct ShaderProgram(pub GLuint);
impl ShaderProgram {
  /// Allocates a new program object.
  ///
  /// Prefer [`ShaderProgram::from_vert_frag`](ShaderProgram::from_vert_frag),
  /// it makes a complete program from the vertex and fragment sources all at
  /// once.
  pub fn new() -> Option<Self> {
    let prog = unsafe { glCreateProgram() };
    if prog != 0 {
      Some(Self(prog))
    } else {
      None
    }
  }

  /// Attaches a shader object to this program object.
  pub fn attach_shader(&self, shader: &Shader) {
    unsafe { glAttachShader(self.0, shader.0) };
  }

  /// Links the various attached, compiled shader objects into a usable program.
  pub fn link_program(&self) {
    unsafe { glLinkProgram(self.0) };
  }

  /// Checks if the last linking operation was successful.
  pub fn link_success(&self) -> bool {
    let mut success = 0;
    unsafe { glGetProgramiv(self.0, GL_LINK_STATUS, &mut success) };
    success == i32::from(GL_TRUE)
  }

  /// Gets the log data for this program.
  ///
  /// This is usually used to check the message when a program failed to link.
  pub fn info_log(&self) -> String {
    let mut needed_len = 0;
    unsafe { glGetProgramiv(self.0, GL_INFO_LOG_LENGTH, &mut needed_len) };
    let mut v: Vec<u8> = Vec::with_capacity(needed_len.try_into().unwrap());
    let mut len_written = 0_i32;
    unsafe {
      glGetProgramInfoLog(
        self.0,
        v.capacity().try_into().unwrap(),
        &mut len_written,
        v.as_mut_ptr().cast(),
      );
      v.set_len(len_written.try_into().unwrap());
    }
    String::from_utf8_lossy(&v).into_owned()
  }

  /// Sets the program as the program to use when drawing.
  pub fn use_program(&self) {
    unsafe { glUseProgram(self.0) };
  }

  /// Marks the program for deletion.
  ///
  /// Note: This _does not_ immediately delete the program. If the program is
  /// currently in use it won't be deleted until it's not the active program.
  /// When a program is finally deleted and attached shaders are unattached.
  pub fn delete(self) {
    unsafe { glDeleteProgram(self.0) };
  }

  /// Takes a vertex shader source string and a fragment shader source string
  /// and either gets you a working program object or gets you an error message.
  ///
  /// This is the preferred way to create a simple shader program in the common
  /// case. It's just less error prone than doing all the steps yourself.
  pub fn from_vert_frag(vert: &str, frag: &str) -> Result<Self, String> {
    let p =
      Self::new().ok_or_else(|| "Couldn't allocate a program".to_string())?;
    let v = Shader::from_source(ShaderType::Vertex, vert)
      .map_err(|e| format!("Vertex Compile Error: {}", e))?;
    let f = Shader::from_source(ShaderType::Fragment, frag)
      .map_err(|e| format!("Fragment Compile Error: {}", e))?;
    p.attach_shader(&v);
    p.attach_shader(&f);
    p.link_program();
    v.delete();
    f.delete();
    if p.link_success() {
      Ok(p)
    } else {
      let out = format!("Program Link Error: {}", p.info_log());
      p.delete();
      Err(out)
    }
  }
}

Our final usage becomes:

let shader_program =
  ShaderProgram::from_vert_frag(VERT_SHADER, FRAG_SHADER).unwrap();
shader_program.use_program();

That's so much smaller! Very nice.

Clearing And Drawing Arrays?

We could also wrap the clearing function, but it's small and has to go with other unsafe calls, so we'll skip it for now. We could always add it later.

We can't easily make glDrawArrays safe, because we'd have to carefully monitor the size of the buffer in the actively bound array buffer in the actively bound vertex array to make sure that the call didn't make the GPU go out of bounds. Or we could make it something like "draw these arrays", and you pass a slice and it buffers the slice and draws it immediately. I don't really care for either of those, so we'll just let that be unsafe too.

Done!

Rectangle Elements

Naturally we don't want just one triangle. When you're playing The Witcher 3, there's at least two triangles on the screen (maybe more!).

Let's move on to drawing a rectangle. For this we need a second triangle.

We could just add three more vertex entries and call it a day. If we wanted two triangles that were each on their own that's what we might do. However, since these two triangles making up our rectangle are going to be directly touching, that means we'd have six vertexes making up only four "real" points. That's 50% more space used than we want! It may seem small now, but a complete model for a tree or a person or something like that can easily end up being thousands of triangles. Making that be 50% more space used is a bad time.
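
To put rough numbers on it (just illustrative arithmetic): with only a 12-byte position per vertex, six duplicated vertexes cost 6 × 12 = 72 bytes, while four unique vertexes plus six u32 indexes cost 4 × 12 + 6 × 4 = 72 bytes, so it's a wash. But give each vertex, say, 32 bytes of attributes and it's 6 × 32 = 192 bytes against 4 × 32 + 24 = 152 bytes, and the gap keeps widening as vertexes get fatter and more triangles share each vertex.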

Of course this problem of duplicate vertices is a fairly easy problem to solve, and GL has it covered. What we do is specify an Index Buffer. It holds the indexes of the vertex buffer entries we want to use to form each geometry element (in this case triangles). Then the vertex buffer doesn't need to have any duplicates, we just have more than one triangle index the same vertex.

Note: What we'll be drawing is usually called a "quad", because the important part is that it has four outside edges. It's not really important that the edges are in two pairs of parallel lines at right angles with each other like a true rectangle has.

Data

So we've got some new data. We're going to have 4 vertex entries that describe the points we want to use, and an index buffer with 2 entries, where each entry describes a triangle using those points.

type Vertex = [f32; 3];
type TriIndexes = [u32; 3];

const VERTICES: [Vertex; 4] =
  [[0.5, 0.5, 0.0], [0.5, -0.5, 0.0], [-0.5, -0.5, 0.0], [-0.5, 0.5, 0.0]];

const INDICES: [TriIndexes; 2] = [[0, 1, 3], [1, 2, 3]];

Element Buffer Object

Our indexes go into a separate kind of buffer. This is that ElementArray buffer type that I snuck into the cleanup lesson.

After we make and bind our vertex data we also bind a buffer for the element data and upload it, the code looks nearly identical:

let ebo = Buffer::new().expect("Couldn't make the element buffer.");
ebo.bind(BufferType::ElementArray);
learn::buffer_data(
  BufferType::ElementArray,
  bytemuck::cast_slice(&INDICES),
  GL_STATIC_DRAW,
);

Draw It!

Finally, instead of calling glDrawArrays, we use a separate function called glDrawElements.

  • mode: The style of drawing. We're still drawing triangles so we keep that from before.
  • count: The number of index elements to draw. We want two triangles to form our quad, and there's three indexes per triangle, so we put 6.
  • type: This is the type of the index data. The u32 type is specified with GL_UNSIGNED_INT. I used u32 out of habit, we could have made our indexes be u16 or u8 as well.
  • indices: A pointer to the position within the index buffer to start the drawing with. Similar to the attribute specification, you pretend the index buffer starts at address 0, decide the offset you want, and then cast that to a *const pointer.

So the usage looks like this:

// and then draw!
unsafe {
  glClear(GL_COLOR_BUFFER_BIT);
  glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0 as *const _);
}
win.swap_window();
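
And if you ever do want the smaller index types mentioned above, the change is symmetrical on both ends (a hypothetical u16 variant; the lessons stick with u32):

type TriIndexes = [u16; 3];
const INDICES: [TriIndexes; 2] = [[0, 1, 3], [1, 2, 3]];

// and the type argument of the draw call changes to match:
// glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0 as *const _);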

Bonus: Wireframe Mode

Since this lesson is really short let's look at one extra ability we can use.

You often see 3D models drawn with just the outlines of each triangle. "Wireframe mode", it's sometimes called. We can easily do that with glPolygonMode.

  • We can specify the face, but in the Core profile the only valid value is GL_FRONT_AND_BACK (in Compatibility profile you can also use GL_FRONT or GL_BACK).
  • We also specify the mode. The default is GL_FILL, but with GL_LINE we get the wireframe effect. GL_POINT is also allowed, but it makes it pretty hard to see what's going on.

All this can go in our lib.rs file:

/// The polygon display modes you can set.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum PolygonMode {
  /// Just show the points.
  Point = GL_POINT as isize,
  /// Just show the lines.
  Line = GL_LINE as isize,
  /// Fill in the polygons.
  Fill = GL_FILL as isize,
}

/// Sets the front and back polygon mode to the mode given.
pub fn polygon_mode(mode: PolygonMode) {
  unsafe { glPolygonMode(GL_FRONT_AND_BACK, mode as GLenum) };
}

And then before our main loop we can turn it on:

learn::polygon_mode(PolygonMode::Line);

Now we get a wireframe quad! And it looks like two triangles, just like it should!

Done!

Appendix: Math

Vectors

Matrices

Transforms