by Chris » Fri Jul 11, 2025 5:02 am
The thing is, the sound environment IDs just specify effect parameter presets, not effect processors. Simply attaching an environment ID to a sound when it's played introduces ambiguity and can create unexpected costs depending on which environments are used (e.g. playing 10 sounds with 10 unique environments costs 10x as much as playing those same 10 sounds with a single shared environment). Dynamically managing which environment presets are active also creates additional costs, since an environment shouldn't stop the moment sounds stop using it (there can be an audible decaying reverb tail that lasts for several seconds after the sound itself is done). And there's the issue that mods should be able to dynamically alter the effect parameters. For example, something like LiveReverb, which continuously probes the room around the player to set the active environment, can currently only select from the set of presets based on the average distance to the walls, floor, and ceiling (with some checks for the sky). It could be realized much better by precisely controlling the early and late reflection delays and gains, and even their general direction (e.g. panning the early reflections toward the closest surfaces), along with other reverb properties. Scripts probably shouldn't be able to modify the presets themselves, but they should be able to modify the parameters actually in use.
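To illustrate the cost model, here's a minimal sketch (all names invented, not GZDoom code) of why processors should track unique *active* environments rather than individual sounds: ten sounds sharing one environment need one processor, ten unique environments need ten, and a processor is held for an assumed tail period after its last user stops so the decaying reverb isn't cut off:

```cpp
#include <map>

// Hypothetical sketch: one effect processor per unique active environment
// ID, with a hold period after the last user stops so the reverb tail can
// finish decaying before the processor is reclaimed.
struct EffectPool {
    struct Slot { int users = 0; double tailEndsAt = 0.0; };
    std::map<int, Slot> slots;   // environment ID -> processor slot
    double tailSeconds = 4.0;    // assumed decay-tail hold time

    void soundStarted(int envId) { ++slots[envId].users; }

    void soundStopped(int envId, double now) {
        Slot &s = slots.at(envId);
        if (--s.users == 0)
            s.tailEndsAt = now + tailSeconds;  // keep alive for the tail
    }

    // Number of processors currently running (and costing DSP time).
    int activeProcessors(double now) const {
        int n = 0;
        for (const auto &kv : slots)
            if (kv.second.users > 0 || kv.second.tailEndsAt > now) ++n;
        return n;
    }
};
```

The point of the sketch is that the cost is driven by the set of distinct environments in use at once, plus lingering tails, not by the number of sounds.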
A more useful interface might instead give scripts the ability to allocate effect processors and set the effect properties on them, while also specifying which sounds apply to which effects. This would let scripts control multiple simultaneous environments. For instance, if the player is in some large cave with a nearby opening to a sewer pipe, there can be one primary effect for the cave environment the player is in, and a secondary effect for the sewer pipe panned toward the opening, with each sound controlling how much it contributes to each effect (a sound in the cave with the player will sound like it's in the cave, with very little echoing into and back out of the pipe, while a sound in the pipe will sound like it's in the pipe, with only some of the cave reverberation). This would also allow scripts to create and manage special individualized effects, such as a distortion effect applied only to a specific radio sound or something.
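A rough sketch of what that kind of interface could look like (every name here is invented for illustration, not an existing or proposed GZDoom API): effects are opaque handles with settable parameters, and each sound carries a per-effect send gain that scripts can adjust at any time, which directly models the cave/pipe example:

```cpp
#include <map>
#include <utility>

// Hypothetical script-facing interface: scripts allocate effect handles,
// set parameters on them, and route each sound to each effect with an
// adjustable send gain (0 = not routed).
class AudioEffects {
public:
    int allocEffect() { effects[nextId] = {}; return nextId++; }
    void freeEffect(int fx) { effects.erase(fx); }

    // Small invented subset of reverb parameters a script might control,
    // including a pan direction for the early reflections.
    struct Params {
        float decayTime = 1.49f;
        float lateGain  = 1.0f;
        float panX = 0.0f, panY = 0.0f;
    };
    void setParams(int fx, const Params &p) { effects.at(fx) = p; }

    // How strongly a given sound feeds a given effect.
    void setSend(int sound, int fx, float gain) { sends[{sound, fx}] = gain; }
    float getSend(int sound, int fx) const {
        auto it = sends.find({sound, fx});
        return it == sends.end() ? 0.0f : it->second;
    }

private:
    std::map<int, Params> effects;
    std::map<std::pair<int, int>, float> sends;
    int nextId = 1;
};
```

With something like this, a cave sound would send mostly to the cave effect with a small bleed into the pipe effect, a pipe sound the reverse, and a one-off distortion effect could be routed to just the radio sound.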
But how exactly to design an interface for this is what I don't know. I'm not sure it's a good idea for scripts to directly allocate, and subsequently be responsible for deallocating, effect processors, considering each one can have a notable cost (honestly, 64 is quite generous; I wouldn't be surprised if even modern systems struggled well before that limit). Sounds should also be able to adjust how much they apply to each effect as they move (a sound that moves out of or into the aforementioned pipe after it starts, for example), and currently I don't think there's a way for scripts to control individual sounds after they start. I also don't know how this should interact with the sound environment things that control the existing implicit effect processor, or with the underwater effect that's automatically applied when underwater. It might be that some of this is too low-level for scripts to be responsible for, and dynamic multi-effect stuff should be controlled at a higher level that lets the audio engine work out the details automatically. Or maybe not; I really don't know what would be best for GZDoom.
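One possible answer to the deallocation question (purely an assumption on my part, not anything the engine does) is shared-ownership handles: a processor is released automatically once neither the script nor any playing sound still references it, so a script can't leak processors by forgetting to free them:

```cpp
#include <memory>

// Sketch of shared-ownership effect lifetimes. An EffectProcessor stands in
// for a real (costly) DSP processor; liveCount tracks how many exist.
struct EffectProcessor {
    static int liveCount;
    EffectProcessor()  { ++liveCount; }
    ~EffectProcessor() { --liveCount; }
};
int EffectProcessor::liveCount = 0;

using EffectHandle = std::shared_ptr<EffectProcessor>;

// A playing sound keeps its effect alive for as long as it's routed to it,
// even after the script drops its own handle.
struct PlayingSound {
    EffectHandle routedTo;
};
```

This sidesteps explicit deallocation, though it doesn't by itself solve the tail problem (the processor would still need to linger briefly after its last reference goes away) or the question of how it meshes with the implicit environment/underwater processor.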