I’ve stopped acknowledging the term AAA because I’m increasingly convinced nobody knows what they mean when they use it beyond “games that look expensive and I don’t like”.
I also don’t think there’s that many developers that don’t give “story, mechanics and game difficulty balance” an equal amount of consideration, mostly because those things are typically handled by entirely different people in any production that is bigger than a skeleton crew. It’s not like designers in big studios are just twiddling their thumbs waiting for the rendering engineers to finish the peach fuzz on people’s cheeks.
The way people perceive opportunity cost in collaborative media is always weird to me.
For me, I shy away from AAA games in general because the bigger the studio and higher the budget, the greater chance that there’s MBAs involved that will push design decisions that favour making more money over making a good game.
I think some people correlate that with graphics, maybe because the diminishing returns on effort put into graphics means those amazing graphics could have come at the cost of time spent on the gameplay elements, though I don’t personally think a great game and great graphics are mutually exclusive.
This view on things where a guy in a suit is telling a bunch of passionate artists how to do their day to day jobs is entirely separate from reality.
Don’t get me wrong, there are plenty of big, mercenary operations out there, but this is a) not how those play out, and b) very easy to suss out without needing a roll call of the studio.
Case in point: Baldur’s Gate 3, which people keep weirdly excluding from “AAA”, was made by a large team that ballooned into a huge one during the course of development. That never seemed like a budget, size or graphics problem to me. Or about what degree people in the studio happen to have, for that matter.
If you don’t want to play hyper-compulsive live services built around a monetization strategy, that is perfectly legitimate. Gaming is very broad and not everybody needs to like everything. It’s just the categorizations people use to reference things they supposedly dislike that seem weird to me.
It’s not like designers in big studios are just twiddling their thumbs waiting for the rendering engineers to finish the peach fuzz on people’s cheeks.
This is true to an extent, but the visual fidelity of a game also determines how much work it is to author assets and (more importantly) the interactions between those assets.
If I want to make a new enemy in DOOM I have to make a series of 128x128 sprites that show 2-3 frames of walking animation from a few different angles, then add some simple programming for its abilities and AI and I’m done.
If I want to do the same thing in a game with high visual fidelity I have to make a 3D model, rig it and make animations, worry about inverse kinematics, and make a bunch of textures and shaders to go on the model. And for anything extra I want the AI to do, or any extra gameplay element I want it to interact with, I have to worry about most of that stuff all over again.
For example, let’s say I want to add a gun that freezes enemies to both types of games. In the case of the DOOM-like game I can make a semitransparent ice texture and overlay it on all the sprites I’ve already made to make new textures, although I could probably get away with just tinting them blue. Then I have to change the enemy code to make their AI and animations freeze when hit with the freeze gun and swap their sprites out for the frozen textures.
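To make that concrete, here’s roughly what the sprite-swap version looks like as code. This is a toy Python sketch; the `Enemy` class, the sprite format and `tint_blue` are all made up for illustration, not taken from any real engine:

```python
def tint_blue(sprite):
    """Return a copy of a sprite (a 2D grid of RGB tuples) tinted blue:
    darken red/green, boost blue. Stands in for the 'frozen' texture."""
    return [[(r // 2, g // 2, min(255, b + 80)) for (r, g, b) in row]
            for row in sprite]

class Enemy:
    def __init__(self, sprites):
        self.sprites = sprites                            # normal animation frames
        self.frozen_sprites = [tint_blue(s) for s in sprites]
        self.frame = 0
        self.frozen = False

    def hit_by_freeze_gun(self):
        self.frozen = True                                # one flag does it all

    def update(self):
        if self.frozen:
            return                                        # AI and animation just stop
        self.frame = (self.frame + 1) % len(self.sprites)
        # ...normal AI would run here...

    def current_sprite(self):
        # Frozen enemies show the tinted copy of whatever frame they were on.
        table = self.frozen_sprites if self.frozen else self.sprites
        return table[self.frame]
```

The whole feature is one boolean, an early return, and a pre-tinted copy of art that already exists, which is the point being made above.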
If I want to do that in a high visual fidelity game I have to think of how I’m going to cover the character in ice in the first place if I want the ice to be a 3D model. Sure I can freeze their animations pretty easily, but if they can be in any pose (including a pose generated by inverse kinematics) when they get frozen I’m going to have to write a system to dynamically cover the model in ice crystals. I’m also going to have to author materials and shaders for the ice, and worry about what that looks like in combination with the existing materials in different lighting conditions, not only for that enemy but for all the ones that can be frozen.
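For the curious, the “dynamically cover the model in ice crystals” part typically boils down to picking spawn points on the posed mesh surface and placing crystal instances there. A toy sketch of the sampling step in plain Python (the function name and mesh format are invented; a real version would operate on the skinned vertex data inside the engine):

```python
import math
import random

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def tri_area(a, b, c):
    n = cross(sub(b, a), sub(c, a))
    return 0.5 * math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)

def sample_crystal_points(triangles, count, rng=random):
    """Sample `count` points on a posed mesh, weighted by triangle area,
    as spawn locations for ice crystal instances. `triangles` is a list
    of ((x,y,z), (x,y,z), (x,y,z)) tuples in the enemy's current pose."""
    areas = [tri_area(*t) for t in triangles]
    points = []
    for _ in range(count):
        a, b, c = rng.choices(triangles, weights=areas)[0]
        # Uniform barycentric sample inside the chosen triangle.
        u, v = rng.random(), rng.random()
        if u + v > 1.0:
            u, v = 1.0 - u, 1.0 - v
        w = 1.0 - u - v
        points.append(tuple(w*a[i] + u*b[i] + v*c[i] for i in range(3)))
    return points
```

And note this only handles placement. Orienting crystals to surface normals, keeping them attached if anything still deforms, and making the materials read correctly in every lighting setup is where the real authoring cost lands.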
This sort of stuff is mitigated somewhat by modern tooling, and mitigated even more by the production pipelines that large studios have, but it’s these same production pipelines that make impossible the sort of drive-by development and flexibility that you saw in the creation of a lot of earlier games like DOOM, Thief, and Half Life 1. (And when lots of changes are made late in development you usually end up with horrific crunch and a bad game at the end.)
Ultimately there’s a reason that low-budget indie games gravitate towards pixel art or low-poly art styles. Sometimes an indie game will come out that is very high fidelity, but these will generally be walking simulators with no visually present human characters (so, no interactions or characters to animate). And it’s for the same reason that games with the most complicated gameplay interactions (Dwarf Fortress, NetHack, etc) or the most highly branching storylines (Fallen London) tend to be text based.
EDIT: A couple of real world examples:
This is a retrospective on the development of Anthem. While obviously a lot of bad things were going on there, including a lack of leadership or clear vision, I can’t help but think of that as an example of old style drive-by-development running up against its limits. Every time they’d change the traversal mechanic it would invalidate the world design another team already did, and how many other mechanics are tied to level design? Any time one part of the team couldn’t decide on what they wanted to do, other parts also had to redo their own work, and as a result they spun their wheels for years.
My other example is this developer. He’s clearly passionate about his work and wants it to look as good as possible, but I worry that he doesn’t realize what a can of worms he’s opened by setting that standard for animations now. It’s not that he spent months learning how inverse kinematics works, it’s that he’s now committed to making sure every single enemy looks good when interacting with anything it can encounter, so that nothing stands out by looking jank or unfinished compared to its peers. A problem that’s made worse by the fact that it sounds like you can take control of any enemy and take it anywhere, meaning anything can interact with anything in the entire game. He has already run into a problem with his manticore enemy not being able to fit through doorways and has started talking about making an IK-driven animation where it tucks its wings in as the player approaches the door. What’s gonna happen if the player presses the fly or jump button when that’s happening? Now multiply that question for every gameplay mechanic for every enemy in the game and every weird situation that might require a custom animation.
This somewhat wall-of-text-y post is most interesting for going back to my original point about how this argument has been in place for decades with little change.
I mean, seeing Half-Life 1 pop up as an example of drive-by game development is interesting, given that it’s THE game that debuted highly scripted interactive narrative sequences. I wasn’t there, but I can only imagine the sort of planning it must have taken to figure out the complex sequences where AI entities do things in tandem with the player. I do know for a fact that it took a long time to rig any sort of map for HL1 (because they did provide the dev tools for modding and I did mess around with them). HL1 was an early example of a game whose enemy AI required level designers to manually set up navigation nodes on the geometry (because those flippy ninja enemies later in the game needed to know where they could jump and take cover). I remember it being so cool, but noticeably harder to do in your spare time than, say, the old Quake stuff.
Anyway, nobody is saying there aren’t costs to complexity, but the focus on visual fidelity as a linear relationship to gameplay complexity is at best reductive. The problem is that end users can see graphics. So if something looks nice but just hasn’t figured out core aspects to gameplay it’s easier to assume one thing drove the other rather than, say, not having spent enough time prototyping, or having trend-chasing leadership change their minds multiple times about fundamental aspects of the game because something new got big last month or whatever other behind the scenes stuff may have sent development off the rails.
I don’t have time to go over the other guy’s stuff, but I will say that I respect anybody who tries to do development with their whole ass out in public, especially if they’re learning. I get anxious just thinking about it.
I don’t mean to completely disagree with you. I do think graphics get somewhat of a bad rap; I just think the tradeoff is real, even if it doesn’t scale linearly.
I mean, seeing Half-Life 1 pop up as an example of drive-by game development is interesting
It’s true that Half Life 1 marked the turning point from systems-focused games to content-focused games with its scripted sequences and setpieces. It’s also where Valve created the “cabal” development process, which was supposed to be more organized than the development for games like Quake.
I included it mainly because in the making of Half Life 1 documentary the texture artist mentioned that whenever she would make a new texture oftentimes a level designer would just grab it and use it simply because it was new and exciting. The problem was that if every level used every texture then they started to look same-y, so she actually had to start labeling the files as groups and tell people to try to avoid mixing them together. And this was supposed to be more organized than earlier games lol, you can imagine how thrown together they must’ve been.
I’m reminded of a similar story from the development of Deus Ex 1. There’s one level where you walk around an abandoned mansion searching for clues. Unlike the rest of the game, which is mainly stealth / fps, in that level you have to explore and solve a puzzle while listening to an NPC talk about her childhood growing up there. Apparently the guy making the level had to yell at his fellow level designers to stop adding enemies to the rooms of the mansion.
Anyway, sorry if my posts are really long and rambling, I just like talking about games.
I mean, that all depends on the game I suppose, but you’re assuming that sort of anecdotal interaction doesn’t happen now, which I would argue may have more to do with games not doing as much behind the scenes content right away as other media.
And if something killed it (big if), it was producers getting good, rather than graphics.
Look, games have changed a lot, nobody is denying that. The spaces for flexibility and creativity have moved around in some places, but not necessarily disappeared. It’s also true that games are more diverse now than they used to be. There are also more of them, by a lot. I have a hard time making uniform blanket statements about these things. Which of course also makes it hard to push back against inaccurate but simple, consumable statements like “the pores on characters’ skins killed fun gameplay” or whatever.
But there are tradeoffs in lots of directions here. People are talking a lot this month about Expedition 33 getting to those visuals with a small team by effectively using Epic’s third party tech very directly. That’s not wrong. You may have more moving pieces today, but you also don’t have to build a whole engine to render them or code each of them directly instead of having tools to set them up.
I wish the general public was a bit more savvy about this, though, because there are plenty of modern development practices and attitudes that deserve more scrutiny. It loops back around to the behind the scenes access, though. Nobody has time or interest in sitting down and arguing about prototyping, or why the modern games industry sucks at having any concept of pre-production or whatever. Gamers have always been quick to anger driven by barely informed takes in ways disproportionate to their interest in how games are actually made, and that part of the industry hasn’t changed.
You may have more moving pieces today, but you also don’t have to build a whole engine to render them or code each of them directly instead of having tools to set them up.
That’s definitely true. Even with my ice gun example there’s actually a system in UE5 that does exactly what I was talking about with the 3D ice crystals (though, whether it works for animated objects with deforming meshes I don’t know).
Hey, love your take on this. Can you tell us more on what is “opportunity cost in collaborative media” and how it relates to games? I don’t understand these words, and I’d rather get the explanation from a human being :3
Right. Sorry about the overly nerdy way of saying that.
Basically, people tend to think that any time or effort spent doing one thing means time and effort not spent doing another thing. So good graphics means they took time away from designing the gameplay or whatever.
But that’s not how it works. In big projects everybody tends to be a highly specialized expert. So gameplay people are doing gameplay all the way through, modellers are modelling all the way through, rendering programmers are working on rendering features all the way through and so on.
When you have big teams like that it can actually become harder to make sure everything is coordinated and organized so that nobody has to sit and wait for anybody else. Complexity does grow and changing things does become much harder, just… not in the ways people often think. It’s not like there are freely interchangeable “game units” that you can choose to spend in either gameplay or graphics but not both, you know?
It’s not like designers in big studios are just twiddling their thumbs waiting for the rendering engineers to finish the peach fuzz on people’s cheeks.
Okay! I don’t think that either. I think they’re underpaid and overworked like virtually everyone in the games industry and unable to put out quality products because of arbitrary deadlines. That kind of thing is much more common with AAA games (which studios don’t seem to know how to define either, given that now they’re going on about AAAA and AAAAA games) than it is with indie games.
I’m gonna need some sourcing for that assertion, because man, there are no developers in the gaming industry more underpaid and overworked than indies living in a friend’s garage, working two jobs and pulling coding all-nighters on a passion project.
Crunch horror stories are real, but big “AAA” devs are more likely to have some type of overtime policy they can adhere to and a decent compensation package.
I’d argue about arbitrary deadlines, too, but it’s a case by case basis there. In any case, both indies and larger devs are often working to the same type of deadline, that being “we’re running out of money”.
Do you know people working in gaming or are you working in it yourself? Because “just ask for overtime and you’ll get it without any repercussions” absolutely doesn’t match the experience of anyone I know. Especially since people tend to jump from big studio to indie, not the other way around, for quality of life reasons.
People tend to jump from big studio to indie by way of either getting laid off or having a game they want to do that won’t get greenlit in a big studio (mostly because very few people get to even bring up projects to a greenlight process in the first place).
Working for ages with next to no financial security on the off chance that you pull off a minor miracle and get an investor backing you or your own startup money back is hardly what I call “quality of life”. Best case scenario you have the investment already lined up on your way out of a big dev, but that is getting harder these days.
On the other question, if I wanted to share my resume I would not post under a pseudonym, so apply your best judgement there.
I’ll say this, though, if that counts for something. I am NOT in the US.
I wasn’t asking for personal information beyond whether or not you’re in or adjacent to the industry, or anything I hadn’t already shared myself, peace ✌️
Oh, I take no offense in asking, I just don’t like disclosing even trivial stuff. Even stuff you can sort of reverse engineer from my post history. It’s more habit than anything else at this point.