Okay, dude, time to put your money where your gigantic mouth is. Mute this video (no cheating by listening to it first; you have to act like a REAL designer here, and real gameplay is silent when it gets handed to us), then use one of the many freely available, easily Googlable generative audio AI models to produce some sound design that works for the visuals at the timestamp. Should be simple to do better than the sound designers over at Riot for an expert like yourself, armed with free AI that makes their expertise and human, artistic perspective irrelevant and will totally steal their jobs. I mean, you clearly know more than me, so you should easily be able to do something that's requested in the first stages of any entry-level sound designer job interview.
Oh, what’s that? It sounds awful and doesn’t represent the character, let alone what’s happening on-screen at all? Hmm…nah, I must still be wrong somehow. I’ve got “cognitive dissonance” and “survivorship bias”, after all. I definitely don’t understand the strengths, efficiency-boosting potential and limitations of the technology we’re discussing better than a guy who thinks that because you can generate more textures with a trained diffusion model, more textures can and will be used on nonexistent parts of a game (because you have to apply textures to, you know, THINGS THAT EXIST IN THE GAME). And if you’re able to put more things in a game, you definitely should, because the suits/customers will DEFINITELY demand it. It’s not like QA testers or the market research department will tell you in no uncertain terms that paying customers are gonna hate a bloated game or anything. That’s definitely how that works; no designer ever has to cut content to focus an experience and make the game feel good enough to have a fighting chance against its competition; that never happens. Ask any dev or artist; there has definitely, for sure, never been a single ounce of cut content in any development cycle since generative AI tools came on the scene and started getting incorporated into the commercial development of art, let alone games. This is because games for sure are not works greater than the sum of their parts, and extra ancillary features added solely because “suit want big game, use AI, give moar now” never ruin the balance of everything else in the game, leading to losses in sales that are easily predicted by market research requested by said suit. You’re clearly the expert here, not me. Please continue to school me with your 400 IQ takes, Stephen Hawking.
God damn, gamers are sooooooooooooo fucking dumb and recalcitrant. Seriously, y’all will rudely and ignorantly argue to the death with actual developers for DAYS rather than admit you don’t have an iota of an idea of what you’re fucking talking about, with egg on your face the whole way. Ugh.