Qt5 OpenGL Part 3b: Camera Control


As we learned in Part 2, there is something known as the Transformation Pipeline which all of our points move through. This pipeline allows us to translate points around to give geometry the sense that it’s moving. In a similar fashion, we can perform the same kind of math to approximate a camera. The idea is the same: we’re going to move points around so that they are expressed with respect to another vector space. The only real difference is that the final worldToCamera matrix is calculated a little differently than what our custom Transform3D class does.

This will probably be a pretty big tutorial, because we still have to cover the Input class, whose implementation I alluded to in Part 3a. It will contain a lot of dense sections of code, so get comfy; this is going to take a while.

The Camera Matrix

So we saw in Part 2 that the Transformation Matrix for any given object can be calculated as Identity * Translation * Rotation * Scale, and that the order is important because matrix multiplication is not commutative. We didn’t really discuss the math behind these transformations, but for the average user this should be enough to understand that we are combining transformation information. What this does is translate things so that they are with respect to the world. That is why this is known as the modelToWorld matrix.

Cameras are a little different. Think about the modelToWorld transformation for a second; there are two things a point can be expressed with respect to in this transformation: the model, and the world. We are moving information from the model to the world. Now imagine the camera as an object. We want to put the world with respect to the camera, but we are not really moving an object from its position in the world to the position of the camera. What we realize is that this is not quite the same transformation we would compute for modelToWorld. Simply put: we can’t just have another Transform3D which represents the camera and use that. It won’t put the world with respect to the camera; it would put the camera with respect to the world.

However, if we know a little Linear Algebra, we can infer the following:

modelToWorld⁻¹ = worldToModel

Thus the simplest camera we can implement at this time is a Transform3D, which in the end, instead of just passing the matrix to the shader, does the following:
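(A minimal sketch of that idea; m_program, m_camera, and u_worldToCamera stand in for the tutorial’s member names.)

    // Treat the camera as a plain Transform3D, and upload the inverse of
    // its matrix instead of the matrix itself.
    m_program->setUniformValue(u_worldToCamera, m_camera.toMatrix().inverted());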

You can try it if you like, but in the end it’s better to opt for a custom Camera3D class. Building a matrix and then inverting it, as sketched above for Transform3D, involves needless computation. Instead we should simply build the transformation matrix directly as worldToCamera, rather than forming cameraToWorld and inverting it.

Right-Handed / Left-Handed Coordinate Systems

The next important thing to consider is how we’re going to move this camera around. When dealing with a 3D environment, it’s all about how easy and convenient you’ve made your Transform3D class. Ours is handy, yet incomplete. We forgot some of the most important bits of information that a Transform3D implicitly holds: the forward, up, and right vectors.

The forward, up, and right vectors define which directions correspond to those concepts for an object. Locally, what direction is “forward” generically for any object? And if we then want to find what direction forward is in terms of the world, we only need to transform that vector by the object’s rotation (since scale and translation don’t matter for directions). As you can imagine, this is very useful information. We can say handy things like “rotate about your up vector”, and no matter how the object is oriented, it will rotate properly.
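As a one-line sketch, assuming a QQuaternion member m_rotation holding the object’s orientation:

    // World-space forward = local forward rotated by the object's orientation.
    QVector3D worldForward = m_rotation.rotatedVector(QVector3D(0.0f, 0.0f, 1.0f));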

But I’m getting ahead of myself; first we need to define our coordinate system. The common decision is between one of two coordinate systems: Right-Handed and Left-Handed.

[Figure: the Right-Handed and Left-Handed coordinate systems. Arrows point towards positive values.]

The reason for such names is because we can form the coordinate systems physically using our fingers. Take the above picture as an example, let’s practice by forming the Right-Handed coordinate system.

  • First, make a fist and hold your hand up with your palm facing towards your face.
  • Then extend your thumb so that it points out to the right. At this point you should be holding a sideways thumbs-up.
  • After that, extend your index finger so that it is pointing as straight as possible upwards. Now your fingers should be forming an L shape.
  • Finally, point your middle finger towards yourself.

The middle finger is actually the most important bit of information. I always remember it because I find it funny that I’m “flipping myself off”. Your middle finger represents the positive direction along the Z-axis (our forward vector). Given that information, it’s usually pretty easy to determine the rest.

Another important trick for making sense of our 3D world is using our hands to model quaternion rotation. If we treat the right thumb as the axis of rotation and extend our fingers as in the right-handed coordinate system above, the way our index and middle fingers “wind” shows us what direction a positive rotation will go. So if you want to rotate an object about its up vector: make a right-handed coordinate system, point your thumb upwards, and the curl of your index and middle fingers shows which direction the rotation will go.

Note: quaternions don’t really have “handedness”, so regardless of what coordinate system your object is in, it’s best to use our right-handed trick to see what direction rotating about a vector will yield (simply because the rotation follows the winding of your fingers).

A Fly-Through Camera

Given the information above, it’s not too difficult to imagine how we would implement a fly-through camera. For an example of how a fly-through camera works, load up a program which supports one (say, Unity; hold the right mouse button to enable fly-through mode, and use WASDQE to move), and try moving around in the world. Which vectors do you think the camera is rotating about? What translations do you think are happening when you use WASDQE? In reality it’s pretty simple (this is what we’ll implement).

Hint: our camera uses a Left-Handed coordinate system for translation. Rotations do not have “handedness”, so for testing rotations we can use our right hand. In Qt, the cursor position is mapped from <0,0> at the top-left of the screen to <width, height> at the bottom-right. We can record deltas of mouse movement by recording the current and previous cursor positions.

Expand the spoiler below to find the answer:

Fly-Through Camera Logic

There are actually two rotations occurring based on the mouse movement.

  • The first rotation is about a constant upwards vector, and uses the mouse’s -x-delta.
  • The second rotation is about the object’s right vector, and uses the mouse’s -y-delta.
  • W and S keys use the object’s forward vector to move in the direction the object is facing.
  • A and D keys use the object’s right vector to strafe in the direction of the object’s side.
  • Q and E keys use the object’s up vector to raise and lower itself in the direction of the object’s up.

With this information, it should be trivial to implement a camera controller!


Camera Control

1. Create the Input Manager

So the first thing we need is an input manager. Start by creating a class named Input, and edit input.h to the following:
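Here is a sketch of what input.h might look like, reconstructed from how the class is used throughout this part (the exact member list is my reconstruction):

    #ifndef INPUT_H
    #define INPUT_H

    #include <Qt>
    #include <QPoint>

    class Input
    {
    public:
      // Possible states a key or mouse button can be in.
      enum InputState
      {
        InputInvalid,
        InputRegistered,   // marked for update by a press event
        InputUnregistered, // marked for update by a release event
        InputTriggered,    // pressed this frame
        InputPressed,      // held down
        InputReleased      // released this frame
      };

      // State checking
      static InputState keyState(Qt::Key key);
      static bool keyTriggered(Qt::Key key);
      static bool keyPressed(Qt::Key key);
      static bool keyReleased(Qt::Key key);
      static InputState buttonState(Qt::MouseButton button);
      static bool buttonTriggered(Qt::MouseButton button);
      static bool buttonPressed(Qt::MouseButton button);
      static bool buttonReleased(Qt::MouseButton button);
      static QPoint mousePosition();
      static QPoint mouseDelta();

    private:
      // State updating (driven by the window that owns the event loop)
      static void update();
      static void registerKeyPress(int key);
      static void registerKeyRelease(int key);
      static void registerMousePress(Qt::MouseButton button);
      static void registerMouseRelease(Qt::MouseButton button);
      static void reset();
      friend class Window;
    };

    #endif // INPUT_H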

Nothing really important here, but you can see that there is a lot of different static state information. Most of the state itself will live in the source file. As you can tell, this is basically an extension of the Input Manager we discussed creating in Part 3a, but with a few more functions for checking state information.

But the header isn’t too interesting. Let’s look at input.cpp:

Since input.cpp is a little involved, we’re going to build it in steps. Everything I’m about to show you is in order; comments denote where the code continues after I discuss the content of a section.

First we’ll have just our regular includes. As we saw in Part 3a, QVector and QList are significantly less efficient, so we’re going to opt for std::vector to keep querying and accessing efficient. You will often have code that iterates over several key states checking whether they’re pressed, released, or whatever, so we want this to be fast.
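Something along these lines, assuming the header layout sketched above:

    // input.cpp
    #include "input.h"
    #include <algorithm> // std::find, std::remove_if, std::for_each
    #include <vector>    // std::vector for key/button state storage
    #include <utility>   // std::pair, the basis of our InputInstance
    #include <QCursor>   // QCursor::pos() for the global mouse position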

Next we’re going to define some static helper types and data.
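A sketch of those helper types and globals, matching the discussion that follows (names like sg_keyInstances are my placeholders):

    // A pairing of some input value (key or button) with its current state.
    template <typename T>
    struct InputInstance : std::pair<T, Input::InputState>
    {
      typedef std::pair<T, Input::InputState> base_class;
      inline InputInstance(T value)
        : base_class(value, Input::InputInvalid) {}
      inline InputInstance(T value, Input::InputState state)
        : base_class(value, state) {}

      // Compare by the input value only, so std::find can locate an
      // instance given just a Qt::Key or Qt::MouseButton.
      inline bool operator==(const InputInstance &rhs) const
      {
        return this->first == rhs.first;
      }
    };

    // Instance types
    typedef InputInstance<Qt::Key> KeyInstance;
    typedef InputInstance<Qt::MouseButton> ButtonInstance;

    // Container types
    typedef std::vector<KeyInstance> KeyContainer;
    typedef std::vector<ButtonInstance> ButtonContainer;

    // Globals (the singleton-ish state discussed below)
    static KeyContainer sg_keyInstances;
    static ButtonContainer sg_buttonInstances;
    static QPoint sg_mouseCurrPosition;
    static QPoint sg_mousePrevPosition;
    static QPoint sg_mouseDelta;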

Whenever we work with the STL, it becomes important to typedef and consolidate code as much as possible. If we don’t we’ll have pretty ugly function calls with multiple complex declarations. Not to mention what if we find out that we need to use a different container type? I don’t want to change a bunch of lines of code just because I need to change a single type. All of these problems can be avoided if we just typedef.

Another kind of interesting C++-ism here is the template InputInstance. The reason we’re providing a template is because the functionality between checking if a key or a mouse button is down is pretty much the same. Instead of duplicating our efforts manually, let’s just instruct the compiler to do this with templates.

In the end, we won’t even have to write a for loop to look at our data, because we can just use std::find from the <algorithm> header – which does pretty much exactly what you’d expect it to do. Hence the operator== definition.

As for the globals: yes, they’re not good. We’ve basically got a singleton pattern going on here. In most cases you want to avoid singletons, as they tend to create complexities in the code that are hard to maintain. There are ways we could contextually make this an instance that acts like a singleton – but that just redefines the same problem. Ultimately, I’ve never seen a need to think really hard about the Input Manager not being a singleton. You need to access it from many different locations, so we’ll just have to code carefully. In most games this won’t pose a problem, because our uses are simple.

Next up, some helper functions that will make our lives easier.
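A sketch of these helpers:

    // Find the instance for a given key / button (or end() if absent).
    static inline KeyContainer::iterator FindKey(Qt::Key value)
    {
      return std::find(sg_keyInstances.begin(), sg_keyInstances.end(), value);
    }

    static inline ButtonContainer::iterator FindButton(Qt::MouseButton value)
    {
      return std::find(sg_buttonInstances.begin(), sg_buttonInstances.end(), value);
    }

    // Predicate for std::remove_if: released instances should be dropped.
    template <typename TPair>
    static inline bool CheckReleased(const TPair &instance)
    {
      return instance.second == Input::InputReleased;
    }

    // Advance a single instance to its next state.
    template <typename TPair>
    static inline void UpdateStates(TPair &instance)
    {
      switch (instance.second)
      {
      case Input::InputRegistered:
        instance.second = Input::InputTriggered;
        break;
      case Input::InputTriggered:
        instance.second = Input::InputPressed;
        break;
      case Input::InputUnregistered:
        instance.second = Input::InputReleased;
        break;
      default:
        break;
      }
    }

    // Erase-Remove Idiom: drop released instances, then advance the rest.
    template <typename Container>
    static inline void Update(Container &container)
    {
      typedef typename Container::value_type TPair;
      container.erase(
        std::remove_if(container.begin(), container.end(), &CheckReleased<TPair>),
        container.end());
      std::for_each(container.begin(), container.end(), &UpdateStates<TPair>);
    }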

First we’re going to leverage the <algorithm> header for manipulating our vectors. All we provide are a few simple functions so that we don’t have to supply begin() and end() to the algorithms on every call. UpdateStates and CheckReleased are both intended for use with STL algorithms: UpdateStates advances the current state of a key, and CheckReleased is a predicate to see whether a key should be removed or not.

Finally, in Update we use these helpers to modify a templated container: we remove first – which is important – and then we update the remaining data. STL newbies might be a bit confused as to why we call std::remove_if followed by std::vector::erase. This technique is known as the Erase-Remove Idiom, and it is worth reading up on if it’s new to you.

The rest is pretty straightforward if you’re used to working with the STL. The trickiest thing to understand about Input is that it has two modes – listening and updating – sketched after the list below.

  • Listening state is when we are receiving input KeyEvents. The key needs to be marked as “should update”, but not immediately update. The reason for this is because our update pass will move states along as well – so we don’t want a key to skip InputTriggered and go directly to InputPressed in one pass.
  • Updating state happens just before any user logic occurs. This is when we actually take the keys which were marked for update and actually update them. This allows a key to go from InputRegistered -> InputTriggered, or from InputUnregistered -> InputReleased. This step is required and cannot be consolidated into the Listening state, because next loop the key needs to move from InputTriggered -> InputPressed.
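To make the two modes concrete, here is a sketch of how the listening and updating halves might fit together (reconstructed; the mouse-button versions are analogous):

    // Updating mode: called once per frame, before any user logic runs.
    void Input::update()
    {
      // Update mouse deltas from the current and previous cursor positions.
      sg_mousePrevPosition = sg_mouseCurrPosition;
      sg_mouseCurrPosition = QCursor::pos();
      sg_mouseDelta = sg_mouseCurrPosition - sg_mousePrevPosition;

      // Advance key/button states (Registered -> Triggered -> Pressed, ...).
      Update(sg_buttonInstances);
      Update(sg_keyInstances);
    }

    // Listening mode: only mark the key; the next update() advances it.
    void Input::registerKeyPress(int key)
    {
      KeyContainer::iterator it = FindKey((Qt::Key)key);
      if (it == sg_keyInstances.end())
      {
        sg_keyInstances.push_back(KeyInstance((Qt::Key)key, InputRegistered));
      }
    }

    void Input::registerKeyRelease(int key)
    {
      KeyContainer::iterator it = FindKey((Qt::Key)key);
      if (it != sg_keyInstances.end())
      {
        it->second = InputUnregistered;
      }
    }

    // Querying is a simple find-and-compare.
    bool Input::keyPressed(Qt::Key key)
    {
      KeyContainer::iterator it = FindKey(key);
      return (it != sg_keyInstances.end() && it->second == InputPressed);
    }

    QPoint Input::mousePosition()
    {
      return QCursor::pos();
    }

    QPoint Input::mouseDelta()
    {
      return sg_mouseDelta;
    }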

That’s it! Using Input is as simple as calling Input::keyPressed(Qt::Key_<keyname>). There’s a little more setup involved (we have to register to receive key events, and we have to ask Input to update), but we will see that in a bit.

2. Create the Camera3D class

Next we need a Camera3D class. It’s very similar to Transform3D, but simpler. Recall that I mentioned the need for Camera3D at the beginning of this tutorial: it’s not that we strictly need such a class, but it’s nice to avoid extra, unneeded calculations such as inverting our matrix. The interface will be very similar to Transform3D, except we will not support scale (it’s not a common camera operation).

camera3d.h:
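A sketch of the header (the exact member list is my reconstruction of what is described below):

    #ifndef CAMERA3D_H
    #define CAMERA3D_H

    #include <QVector3D>
    #include <QQuaternion>
    #include <QMatrix4x4>

    class Camera3D
    {
    public:
      // Local axis constants; note the negated forward (discussed below).
      static const QVector3D LocalForward;
      static const QVector3D LocalUp;
      static const QVector3D LocalRight;

      Camera3D();

      // Transform By (Add/Scale)
      void translate(const QVector3D &dt);
      void rotate(const QQuaternion &dr);
      void rotate(float angle, const QVector3D &axis); // via QQuaternion::fromAxisAndAngle

      // Setters
      void setTranslation(const QVector3D &t);
      void setRotation(const QQuaternion &r);

      // Accessors
      const QVector3D& translation() const;
      const QQuaternion& rotation() const;
      const QMatrix4x4& toMatrix();

      // Queries
      QVector3D forward() const;
      QVector3D up() const;
      QVector3D right() const;

    private:
      bool m_dirty;
      QVector3D m_translation;
      QQuaternion m_rotation;
      QMatrix4x4 m_world; // cached worldToCamera matrix
    };

    #endif // CAMERA3D_H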

Lots of stuff! But like I mentioned, nothing super interesting. It’s very similar to Transform3D. The only real difference is the presence of forward(), up(), and right(), as well as LocalForward, LocalUp, and LocalRight. We will discuss these more in the source file.

camera3d.cpp:
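A sketch of the interesting parts of the source (trivial setters and dirty-flag plumbing are omitted; QQuaternion::conjugated() assumes Qt 5.5+, older versions use conjugate()):

    const QVector3D Camera3D::LocalForward(0.0f, 0.0f, -1.0f);
    const QVector3D Camera3D::LocalUp(0.0f, 1.0f, 0.0f);
    const QVector3D Camera3D::LocalRight(1.0f, 0.0f, 0.0f);

    const QMatrix4x4& Camera3D::toMatrix()
    {
      if (m_dirty)
      {
        m_dirty = false;
        m_world.setToIdentity();
        // Build worldToCamera directly: apply the inverse (conjugated)
        // rotation first, then the negated translation.
        m_world.rotate(m_rotation.conjugated());
        m_world.translate(-m_translation);
      }
      return m_world;
    }

    QVector3D Camera3D::forward() const
    {
      return m_rotation.rotatedVector(LocalForward);
    }

    QVector3D Camera3D::up() const
    {
      return m_rotation.rotatedVector(LocalUp);
    }

    QVector3D Camera3D::right() const
    {
      return m_rotation.rotatedVector(LocalRight);
    }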

Really there are only two parts to talk about.

The first is Camera3D::toMatrix(). As you can tell, the multiplication order and the values we’re passing are different. Recall that we’re building an inverse matrix: a matrix which moves us from being with respect to the world to being with respect to the camera. We could form a matrix and then invert it; or we could just build it inverted!

The next thing is the addition of forward(), right(), and up(). As we discussed at the beginning of this tutorial, these have great significance for us, and they’re fairly simple to form: you take the axis vectors of the object’s local coordinate system and rotate them by our QQuaternion. If we wanted to add this same functionality to Transform3D, we would only need to change Transform3D::LocalForward to be <0.0f, 0.0f, 1.0f> instead of the <0.0f, 0.0f, -1.0f> defined for Camera3D. Recall that the camera’s coordinate system is Left-Handed, while our objects’ is Right-Handed. This is what describes our coordinate system to outside objects.

I’ve added these same functions to Transform3D, but the changes mirror what I’ve explained above. Expand the spoiler below to view them:

Transform3D Changes

transform3d.h:

transform3d.cpp:
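The header gains the same three Local* constants and forward()/up()/right() declarations shown for Camera3D; the source additions amount to something like this sketch:

    // transform3d.cpp -- note the positive Z forward, unlike Camera3D.
    const QVector3D Transform3D::LocalForward(0.0f, 0.0f, 1.0f);
    const QVector3D Transform3D::LocalUp(0.0f, 1.0f, 0.0f);
    const QVector3D Transform3D::LocalRight(1.0f, 0.0f, 0.0f);

    QVector3D Transform3D::forward() const
    {
      return m_rotation.rotatedVector(LocalForward);
    }

    QVector3D Transform3D::up() const
    {
      return m_rotation.rotatedVector(LocalUp);
    }

    QVector3D Transform3D::right() const
    {
      return m_rotation.rotatedVector(LocalRight);
    }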

3. Add Camera3D and Input to Window

Next we need to actually update our Input Manager, as well as construct a camera which we will pass to the shader. We will be renaming some of our matrices as well, because we no longer just have a worldToView matrix; instead we will have a worldToCamera and a cameraToView matrix. The reason we called the previous matrix worldToView is that we were assuming the camera sits unchanged at the origin. In that case the camera’s transform forms the identity matrix, which changes nothing about our cameraToView matrix. So if we assume the camera never moves and is always at the origin, our cameraToView matrix is our worldToView matrix.

However, now that is not true, so we will have to split it into two parts.

window.h:
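A sketch of the relevant changes, assuming the Window class from the earlier parts; only new or renamed members are shown:

    // window.h (sketch of the changed parts)
    #include "camera3d.h"

    class Window : public QOpenGLWindow, protected QOpenGLFunctions
    {
      // ...
    protected:
      void keyPressEvent(QKeyEvent *event);
      void keyReleaseEvent(QKeyEvent *event);
      void mousePressEvent(QMouseEvent *event);
      void mouseReleaseEvent(QMouseEvent *event);

    private:
      // Cached shader uniform locations
      int u_modelToWorld;
      int u_worldToCamera; // new: the camera transform
      int u_cameraToView;  // renamed from u_worldToView

      QMatrix4x4 m_projection;
      Camera3D m_camera;   // new
      Transform3D m_transform;
      // ...
    };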

After that, we need to add some changes to the source.

window.cpp:

First you’ll have to include Input so we can access the input manager.

Next we need to cache the worldToCamera uniform location, and we’ll have to change worldToView to cameraToView. Again, we can probably make this caching much nicer later on, but for now let’s keep things simple.

After that we will upload the new worldToCamera and cameraToView matrices to the GPU.
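Putting those three steps together, a sketch:

    #include "input.h"

    // In initializeGL(), after linking the program: cache the uniforms.
    u_modelToWorld  = m_program->uniformLocation("modelToWorld");
    u_worldToCamera = m_program->uniformLocation("worldToCamera");
    u_cameraToView  = m_program->uniformLocation("cameraToView");

    // In paintGL(): upload the camera and projection matrices.
    m_program->bind();
    m_program->setUniformValue(u_worldToCamera, m_camera.toMatrix());
    m_program->setUniformValue(u_cameraToView, m_projection);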

Update has the most changes to pre-existing code. This is where we implement our fly-through camera. As mentioned in the Fly-Through Camera spoiler at the top of the page, we only need to apply two rotations and six translations.

For rotation, we must apply each rotation successively. We cannot combine the two axes to form one single rotation axis which applies both rotations properly. So we must perform one rotation about one axis, and then a second rotation about a second axis.

Translation is a bit simpler. We can aggregate the translation values into one final vector, and then perform the whole translation in one go at the very end. This is handy because we will only have to multiply by the speed of the translation once instead of multiple times.

For now, the speeds of these actions are simply hard-coded at the top of the function. The units are fairly arbitrary; we simply state that we move 0.5f units per frame. Per-frame movement is usually frowned upon, so later on we will make this rely on a delta time that the frame provides. At that point we’ll pick more appropriate values for these speeds.
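A sketch of the resulting update function, matching the fly-through logic described in the spoiler (names match the earlier sketches; camera movement only applies while the right mouse button is held):

    void Window::update()
    {
      // Update input state first, so the queries below see this frame's data.
      Input::update();

      // Speeds hard-coded per frame for now.
      static const float transSpeed = 0.5f;
      static const float rotSpeed   = 0.5f;

      if (Input::buttonPressed(Qt::RightButton))
      {
        // Handle rotations: one about the constant up vector, one about
        // the camera's current right vector (they cannot be combined).
        m_camera.rotate(-rotSpeed * Input::mouseDelta().x(), Camera3D::LocalUp);
        m_camera.rotate(-rotSpeed * Input::mouseDelta().y(), m_camera.right());

        // Handle translations: aggregate, then apply the speed once.
        QVector3D translation;
        if (Input::keyPressed(Qt::Key_W)) translation += m_camera.forward();
        if (Input::keyPressed(Qt::Key_S)) translation -= m_camera.forward();
        if (Input::keyPressed(Qt::Key_D)) translation += m_camera.right();
        if (Input::keyPressed(Qt::Key_A)) translation -= m_camera.right();
        if (Input::keyPressed(Qt::Key_E)) translation += m_camera.up();
        if (Input::keyPressed(Qt::Key_Q)) translation -= m_camera.up();
        m_camera.translate(transSpeed * translation);
      }

      // Keep spinning the cube as before.
      m_transform.rotate(1.0f, QVector3D(0.4f, 0.3f, 0.3f));

      // Schedule a redraw.
      QOpenGLWindow::update();
    }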

One question you may have is about the if (event->isAutoRepeat()) check. Essentially, if a key is auto-repeating, it’s because you are holding it down. Think about opening a text document and holding down the “a” key: notice how it follows the pattern <A>, one long pause, then <A><A><A>… Those repeated keys are what we’re trying to ignore. Everything else is pretty simple; we just pass information along to Input so that we can have dynamic state information.
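For completeness, a sketch of the event handlers in question:

    void Window::keyPressEvent(QKeyEvent *event)
    {
      if (event->isAutoRepeat())
      {
        event->ignore(); // ignore held-key repeats; Input tracks held state itself
      }
      else
      {
        Input::registerKeyPress(event->key());
      }
    }

    void Window::keyReleaseEvent(QKeyEvent *event)
    {
      if (event->isAutoRepeat())
      {
        event->ignore();
      }
      else
      {
        Input::registerKeyRelease(event->key());
      }
    }

    void Window::mousePressEvent(QMouseEvent *event)
    {
      Input::registerMousePress(event->button());
    }

    void Window::mouseReleaseEvent(QMouseEvent *event)
    {
      Input::registerMouseRelease(event->button());
    }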

4. Update shader code

The only other change we need to make is to the Vertex Shader.
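Assuming the Part 2 vertex shader, the change amounts to splitting the old worldToView uniform in two; roughly (GLSL):

    #version 330
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec3 color;
    out vec4 vColor;

    uniform mat4 modelToWorld;
    uniform mat4 worldToCamera; // new
    uniform mat4 cameraToView;  // renamed from worldToView

    void main()
    {
      gl_Position = cameraToView * worldToCamera * modelToWorld * vec4(position, 1.0);
      vColor = vec4(color, 1.0);
    }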

That’s it! Your final executable should allow you to use WASDQE to move and mouse movement to rotate (while holding the right mouse button, Qt::RightButton). Nothing is visually different, but this is enough to let us look at our object from many different angles. This will come in handy when we start loading assets in (next time)!

Not too different looking, but we can move around!

Summary

In this tutorial, we learned about the following topics.

  • What the camera matrix represents, and how to calculate it.
  • What a Fly-Through Camera is, and the math behind it.
  • How to create a rudimentary Input Manager for Qt, and how to register it.
  • How to create our Camera Matrix using the information/classes we already have.

View Code on GitHub

Cheers!



23 thoughts on “Qt5 OpenGL Part 3b: Camera Control”

  • George

    Wow, very nice post! I especially liked the clever use of the STL in the Input Controller. Is there a difference in using the QMatrix4x4::lookAt() method instead of creating the camera matrix this way?

    • Trent Post author

      Hey George!

      Been a tad busy with other things (hence the lack of updates on my website), sorry this took so long. For this question, you really have to ask yourself what information you’re trying to show. A lookAt() may be great if you really have something you want to look at, and you have the necessary information handy (for example, a camera that is locked on to some target). Maybe in that case it might be better since there’s less math involved to produce the rotation matrix. But then again, a lookAt() also involves some normalization (sqrt calls). I’d really have to test both methods together to see if one would be better over the other, but ultimately my guess would be that a quaternion would win for two reasons:

      1. The algorithm to produce a rotation matrix (though it looks complicated) may be less computationally expensive than keeping valid up/right/forward vectors for the lookAt().
      2. Many of the pros of using quaternions apply (no gimbal lock https://en.wikipedia.org/wiki/Gimbal_lock) and less memory overhead as well.

      Honestly, after you learn quaternions, you’ll really miss having them. A lookAt() is good for getting yourself oriented, I think – but when you’re rotating around some arbitrary axis, you can’t really beat quaternions. I’ve never really cared to keep up/right/forward vectors because it’s always just simpler to slap a quaternion in there.

      I guess I’m saying, this is just a quick assessment. Nothing truly beats testing it yourself. :)
      I hope to revisit graphics when Vulkan is released, and I’ll keep this in mind for something to experiment with when the time comes.

    • Trent Post author

      Hey Marcel,

      Yup! I entirely did this on purpose.

      Generally when you create a tool for manipulating objects in 3D space, you want some key or button to signify the user intends to move the camera. The reason for this is because otherwise they could accidentally move the camera just through normal usage. (Most common with mouse movement)

      Of course, it’s up to you if you want to move logic around, change this configuration, etc. This is just what I’ve found to be the most comfortable when I was working on this project.

  • Ryan

    Trent,

    This tutorial is amazing. I am trying to make a flight simulator, like “Pilot Wings”, in Qt and this has been extremely helpful. I am trying to take the camera around using yaw, roll, and pitch as well, but I want to make sure my idea is along the right path. So, would I use QQuaternion::fromEulerAngles(p,y,r), which returns the quat, and then use that to change the camera view, and then translate and rotate according to WASDQE (or in my case: lift, speed, cross-wind)?

    • Trent Post author

      Hey Ryan,

      Glad you found this useful! :)
      You can apply quaternions to alter rotations, if that’s what you mean. That function will return a quaternion which – when applied to another quaternion, will apply it’s rotation on an existing rotation. I think that’s what you’re asking, but I’m not totally sure – hopefully that helps. Sorry for the insanely late response!

  • Jiaxin Liu

    Thanks a lot for the tutorial! I’ve got a problem: when I run the GitHub code, nothing happens when I press the WASD keys. I wonder what’s happening here. I use Qt 5.8 and Visual Studio 2015. Thank you! :-)

  • Nicolas Jullien

    Thanks again for this very nice tutorial!
    Please note that I’m using a QOpenGLWidget; to get key events, I had to add this line in the widget’s constructor:
    setFocusPolicy(Qt::FocusPolicy::StrongFocus);

    Otherwise, only the mouse works.

  • Achim

    I just stumbled over your tutorials, and they are great. I am rewriting every single line to hopefully get a better understanding rather than just copy-pasting everything. Very nice and elegant, and it works pretty fine.

    I just made some changes to your movement controls, so I only need to use the mouse. It’s not meant as a correction, but as an alternative / additional way to use your great interfaces without adapting too much.
    All I did was add the wheel event to my OpenGlWidget and save the angle in a variable I appended to your Input class.

    void GlWidget::wheelEvent(QWheelEvent *event)
    {
      QPoint wheelAngleDegrees = event->angleDelta() / 8;
      GlInput::registerWheelTurned(wheelAngleDegrees);
      event->accept();
    }

    In the widget’s update function I appended a few lines to your code. (Here I replaced your code with my changes to keep the code small.)

    void GlWidget::update()
    {
      GlInput::update();

      static const float rotSpeed   = 0.5f;
      static const float transSpeed = 0.5f * 0.1f;

      // Rotation
      if (GlInput::buttonPressed(Qt::RightButton))
      {
        m_camera.rotate(-rotSpeed * GlInput::mouseDelta().x(), GlCamera::LocalUp);
        m_camera.rotate(-rotSpeed * GlInput::mouseDelta().y(), m_camera.right());
      }

      // Movement
      QVector3D translation;
      translation += m_camera.forward() * GlInput::mouseWheelDegrees();
      GlInput::resetWheelAngle();

      if (GlInput::buttonPressed(Qt::LeftButton))
      {
        QPoint mousePositionDelta = GlInput::mouseDelta();
        m_camera.translate(m_camera.right() * mousePositionDelta.x() * transSpeed);
        m_camera.translate(m_camera.up() * mousePositionDelta.y() * transSpeed);
      }
      m_camera.translate(transSpeed * translation);

      m_transform.rotate(1.0f, 0.4f, 0.3f, 0.3f);
      QOpenGLWidget::update();
    }

  • tarotgirl

    Hi Trent,
    I learned a lot from your tutorials, thank you very much. However, I have a problem with the mouse event: when I move the mouse, the cube doesn’t rotate but translates in the opposite direction. Do you know any solutions?

        • tarotgirl

          Hi Trent,
          Thanks for your reply. I am learning Qt5 and OpenGL on Linux for my graduation thesis, and I have learned a lot from your tutorials. But I don’t know how to set up lighting or how to draw complex shapes; it seems the official documentation can’t help me. I don’t know how to set up paintGL if I want to draw two shapes (such as cylinders). This has confused me for several days. I have to confess that I am a fool. Do you know any solutions?

          • Trent Post author

            Hey tarotgirl,

            This is a difficult question to answer, because there are many ways to draw multiple objects – it all depends on your use case. If you have one object drawing, the simplest next step is to simply draw another object with a different transform.

            You can read more about the transformation pipeline here: http://www.trentreed.net/blog/qt5-opengl-part-2-3d-rendering/
            Think about it this way, if we have a buffer of vertexes, and we have a shader program which manipulates them based on uniform transform data, then to draw multiple objects we must change the transform data, and draw repeatedly. This is the simplest way to draw several objects – there is also instancing and batching, among other techniques.

            Another thing you might want to look into is a more advanced form of rendering. Deferred rendering is very popular, and almost all graphics whitepapers assume you have a geometry buffer (which is a byproduct of the deferred pipeline).

            You can read more about deferred rendering here: http://www.trentreed.net/blog/deferred-rendering-pipeline/

            If these don’t help, please let me know what you are specifically struggling with and I can try to find better resources.

        • tarotgirl

          Hi Trent, thanks for your reply, and I’m sorry for replying so late. I have luckily graduated. Actually, I never solved the problem; my research direction changed for my postgraduate stage. Thank you anyway.

  • Rob Nio

    Hey Trent,

    nice tutorial on Qt and OpenGL. I have been reading and trying it out so far, and I really like the encapsulation of classes like Transform3D and its interoperability with Qt’s OpenGL functions. I have two questions though :)

    1.)
    I cannot really get the hang of the listening mode. I don’t think I really got the reasoning behind the listening mode –

    “The key needs to be marked as “should update”, but not immediately update. The reason for this is because our update pass will move states along as well – so we don’t want a key to skip InputTriggered and go directly to InputPressed in one pass.”

    Is that because we call Window::update and then update the Key’s InputState to go over to their next state?
    Why do we need that intermediate “Triggered” state, anyway?
    I know that a registered Key is a key that is registered by the “void Window::keyPressEvent(QKeyEvent *event)”-method.
    Why do we need to defer/delay the state and not immediately move the camera into some direction with “translation += m_camera.right();”?

    2.)
    Also, inside the Window::update method: doesn’t a call to Input::keyPressed(Key) trigger a loop through the container of your InputManager?
    I imagined having more inputs, with panning around the camera and zooming into some point, and probably other commands like saving and so on (it can be extended with more ifs). That would mean that for each of these use-cases (or ifs) I would have a loop over the container vector inside my InputManager. I thought about returning just a vector of the pressed keys and checking them.

    But wouldn’t it be better to have something like in Unreal where you register some callback? Hence you have some Key triggering a callback:

    /**
    * Binds a delegate function to an Action defined in the project settings.
    * Returned reference is only guaranteed to be valid until another action is bound.
    */
    FInputActionBinding& BindAction(const FName ActionName, const EInputEvent KeyEvent, FMethodPtr Callback);

    and in the user-code:
    InputComponent->BindAction("ZoomIn", IE_Pressed, this, &AMyProjectCharacter::CameraZoomIn);

    I hope it’s alright to ask those questions.
    Sincerely
    Rob

    • Trent Post author

      Hey Rob,

      No worries on the questions – I will try to answer them as best as possible! :)

      There are really 4 states of a button being pressed that we care about:

      1. The button is not being pressed.
      2. The button was just pressed this frame.
      3. The button is being held down.
      4. The button was just released this frame.

      From this, you can see there are some interesting combinations of states. My design in this article is lacking in that it is missing one important function on the Input:: class – keyHeld(). And it also mistakenly defines keyPressed() as just the 3rd state (InputPressed).

      I would say a more accurate system would look like this:

      keyTriggered() := (InputTriggered)
      keyPressed()   := (InputPressed || InputTriggered)
      keyHeld()      := (InputPressed)
      keyReleased()  := (InputReleased)

      Why are these functions useful? Well, consider a platformer for instance. If I am holding the right button, I want to be moving right (keyPressed). If I want to jump, I should only jump on the frame the button is first pressed (keyTriggered). If I want to select a menu item, sometimes people hold a button down before selecting, and the actual selection of a menu item happens on release (keyReleased). keyHeld is kind of just a special state for a button which is neither triggered nor released, and it can be used when something must be mutually exclusive between keyTriggered and keyHeld (e.g. the execution cannot overlap, for some reason).

      Now let’s talk about input; there are really multiple ways to handle input in a game.

      A: Bitfield Input States

      The most common kind from the 80s and 90s was with bitfields – on the NES for example, you would read controller state into a variable, and then AND with the appropriate bits to see if a button was pressed or not. The way this would work on the NES is handy, because there are 8 buttons per controller, so per-player you only needed a minimum of 8 bits to hold state, and the NES’s CPU was 8-bit!

      Now, depending on the type of input you wanted to support, you may need 16-bits per controller: one byte for the immediate frame’s action, and another for the last frame’s action. This is because it only takes 2 bits of data to represent the four states above (00 = not pressed, 01 = triggered, 11 = pressed, 10 = released). If a user is quick enough, they might skip the pressed state altogether (00, 01, 10). So, imagine two bytes that you AND or OR together with constants to find the state of your controller; P1A and P1B (UDLRABST being a potential bitfield – Up, Down, Left, Right, A, B, Select, sTart).

      B: Trivial Input State (Above)

      For computers, it’s a little more difficult than a bitfield. You have many more keys, and keeping them all in a bitfield isn’t possible. This is why I did the efficiency testing in the last chapter to see how many buttons need to be held down before forward-searching through a vector proves to be less efficient than using a map or something. It turns out, for the number of buttons held down on average on a keyboard, a vector is good enough for our job.

      Usually, for various reasons, you want to manage your input states manually. Traditionally this is because windowing systems only tell you when something is PRESSED or RELEASED (why would they tell you that it was HELD? You should keep that state). Now, you may want to care about special events, like when a key is TRIGGERED or RELEASED, or maybe you don’t! It’s up to you.

      The trivial input state is good if you want something quick and dirty, and don’t want to build an event system to go along with your code. I found that for my purposes, this was good enough.

      C: Event-Based Input

      Event-based input is more efficient in ways, but it is also harder to reason about with complex gameplay code. It’s sometimes very nice to be able to write an update function which you can see where we check for input in the logic and in what order and how. However, event-based is the way to go for efficiency and scalability.

      I can say that all of my games I’ve ever made have used the trivial input manager (something similar to what I have written here). The only differences are that I would add other abstraction layers, like a map of actions which corresponds to buttons, so that changing inputs later on in the projects is easier (e.g. instead of Input::keyPressed(KEY_A), more like Input::triggered(MENU_UP), and then we can re-assign MENU_UP at runtime to be different things depending on what the player wants).

      So in short, here are my answers;

      1.) You sometimes care about specific states of buttons, so you need to know TRIGGERED, PRESSED, RELEASED, UNPRESSED minimally for most games.

      2.) Yeah, an event system would be more efficient, but since it wasn’t a requirement for this project, I just made a trivial input system.