The sounds of Tetrobot: from XACT to Unity
This article explains how we handled the sound and music in Tetrobot and Co. using Unity and some duct tape!
But before we start, let me tell you that all the music and sound effects were crafted by our dear Yann “Morusque” van der Cruyssen!
While reading this, you can listen to the Tetrobot and Co. soundtrack on Morusque’s Bandcamp.
XACT as a source of inspiration
Before building Tetrobot and Co. with Unity, our previous game (Blocks That Matter) was powered by the XNA Framework. This framework comes with a tool called XACT: an audio authoring tool that allows sound designers to import their raw audio files and build the sound effects that will be integrated into the game.
The effects can be more or less complex. For example, the “jump” sound effect in Blocks That Matter was made of 4 different sounds playing at the same time: 2 of them were grabbed randomly from a list of 36 .wav files, and a random pitch variation was applied to the other 2 to hide repetitions when jumping over and over again.
Using XACT was great because Morusque was able to work autonomously, building the audio database and adding simple but useful effects. On the game side, we were able to integrate this database very quickly: it was just a matter of creating some built-in XNA objects, loading the audio database, and calling soundBank.PlayCue("name of the effect") in the code!
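For reference, the XNA side looked roughly like this. This is a hypothetical sketch: the file paths and cue name are placeholders, but the AudioEngine/WaveBank/SoundBank objects and PlayCue call are the standard XACT runtime API.

```csharp
using Microsoft.Xna.Framework.Audio;

public class GameAudio
{
    AudioEngine audioEngine;
    WaveBank waveBank;
    SoundBank soundBank;

    public void Load()
    {
        // These three files are produced by the XACT authoring tool.
        audioEngine = new AudioEngine("Content/Audio/Sounds.xgs");
        waveBank = new WaveBank(audioEngine, "Content/Audio/Waves.xwb");
        soundBank = new SoundBank(audioEngine, "Content/Audio/Cues.xsb");
    }

    public void PlayJump()
    {
        // The cue itself encapsulates the random clip selection and
        // pitch variations authored by the sound designer in XACT.
        soundBank.PlayCue("jump");
    }

    public void Update()
    {
        audioEngine.Update(); // must be pumped every frame
    }
}
```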
We really wanted to have a similar tool in Unity. The following paragraphs describe how we managed to do that.
Scriptable Objects to the rescue
Benoit has already talked about the benefits of Scriptable Objects for our Photoshop integration in Unity. I’ll take an extra pass on Scriptable Objects here, since they are at the core of this XACT-like sound integration!
We decided that the only things Morusque would have to do in Unity were to drag and drop the raw audio files (waves) into the project and to create some Scriptable Objects called “Audio Nodes” to reference those waves!
An Audio Nodes asset represents a list of nodes. A node can be a start, loop, or end node, and each node can reference several waves (AudioClips). All this data is represented by… a Scriptable Object!
Here is what the code of such a Scriptable Object looks like:
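A minimal sketch of such a Scriptable Object, assuming hypothetical class and field names (the original code was only shown as a screenshot):

```csharp
// Hypothetical reconstruction: the class, enum, and field names were not
// shown in the article, so everything here is an assumption.
using System.Collections.Generic;
using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
#endif

public enum AudioNodeType { Start, Loop, End }

[System.Serializable]
public class AudioNode
{
    public AudioNodeType type;
    public List<AudioClip> clips = new List<AudioClip>(); // the "playlist"
    public float minPitch = 1f;
    public float maxPitch = 1f;
}

public class AudioNodes : ScriptableObject
{
    public List<AudioNode> nodes = new List<AudioNode>();

#if UNITY_EDITOR
    // The [MenuItem] attribute adds a "Create" entry to the editor menus,
    // so the asset can be created from a context menu in the Project view.
    [MenuItem("Assets/Create/Audio Nodes")]
    public static void Create()
    {
        AudioNodes asset = ScriptableObject.CreateInstance<AudioNodes>();
        AssetDatabase.CreateAsset(asset, "Assets/NewAudioNodes.asset");
        AssetDatabase.SaveAssets();
        Selection.activeObject = asset;
    }
#endif
}
```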
The “Create” method tagged with [MenuItem(…)] is just a simplified version of what we used to create the object from the Project view thanks to a simple context menu:
Morusque then made massive use of this super sexy menu to create super sexy assets in the Project view:
When selecting an Audio Nodes asset in the Project view, we displayed a custom inspector like this one:
You can see the 3 existing types of nodes:
- Start Nodes (the green ones) represent the sounds played when the audio effect starts playing!
- Loop Nodes (the yellow ones) represent the sounds that will loop until something in the code triggers a stop. By default, those Loop Nodes start after all the Start Nodes are done.
- End Nodes (the orange ones) represent the sounds triggered after all the Loop Nodes are done. The End Nodes can also be triggered when the Start Nodes end if no Loop Nodes are present.
The number of nodes is not limited: you could have several nodes of each kind, or only one node in the simplest case.
Each node can be expanded to tweak the details:
There are a lot of parameters in there, but the most important ones are:
- Each node can select the sound to be played from a “playlist” of AudioClips, with various selection modes (random, sequential, random with no immediate repeat)
- The pitch can be randomly selected between 2 values to bring more variety even if the same sound is played several times in a row. A pitch curve is also available to precisely tweak the pitch over time
- Fade in/out effects can be applied to each kind of node
- Delays and offsets allow us to tweak the start time of a given node (for example, if you don’t want to wait for all the Start Nodes to end before launching the Loops)
- You can use attenuation! (even if it’s a fake one, since we were not using 3D sounds)
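As an illustration, the “random with no immediate repeat” playlist mode can be sketched like this. This is a simplified version with invented names, not our actual code; it picks an index into the node’s list of AudioClips.

```csharp
using System;

public enum SelectionMode { Random, Sequential, RandomNoImmediateRepeat }

// Picks which AudioClip of a node's playlist should play next.
public class ClipSelector
{
    readonly Random random = new Random();
    readonly SelectionMode mode;
    readonly int clipCount;
    int lastIndex = -1;

    public ClipSelector(SelectionMode mode, int clipCount)
    {
        this.mode = mode;
        this.clipCount = clipCount;
    }

    public int NextIndex()
    {
        switch (mode)
        {
            case SelectionMode.Sequential:
                lastIndex = (lastIndex + 1) % clipCount;
                break;
            case SelectionMode.Random:
                lastIndex = random.Next(clipCount);
                break;
            case SelectionMode.RandomNoImmediateRepeat:
                if (clipCount == 1 || lastIndex < 0)
                {
                    lastIndex = random.Next(clipCount);
                }
                else
                {
                    // Draw among the other clips, so the same clip is
                    // never played twice in a row.
                    int index = random.Next(clipCount - 1);
                    if (index >= lastIndex) index++;
                    lastIndex = index;
                }
                break;
        }
        return lastIndex;
    }
}
```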
This inspector describes the electric beam sound made by those 3 little guys in this example game screen:
To illustrate the influence of the different kinds of nodes and to show you the Audio Nodes manipulation in motion, here is a little video where I’m messing around with the electric beam sound effect: http://www.youtube.com/watch?v=5HKj3N3og4g
All the editing can (and must) be done while the editor is playing. This way we can directly test and create the sounds under the same constraints, and with the same results, as in the standalone executable.
Going further with Feedbacks
We used another layer of Scriptable Objects to create an asset type that we called a “Feedback”.
A Feedback is a Scriptable Object referencing:
- An Audio Nodes asset (the thing we described in the previous section)
- A Particle Oneshot (which references a Shuriken particle prefab and plays it)
- A Camera Effect Oneshot (which gives some shakes to the camera)
To illustrate this, here is another little video where we are experimenting with the “Spit Feedback” triggered when the Psychobot spits some blocks in the game.
The video also illustrates one way of easily finding the Feedback we need to tweak: triggering it during the game and using the Console to ping it in the Project view:
Banks and bindings
Once some Audio Nodes have been created and referenced in Feedbacks, we still need to reference those Feedbacks somewhere in order to trigger them from the code whenever we want.
This is done… by using a last Scriptable Object: the FeedbackBank!
This is a really simple table that allows us to create an entry and bind it in the code, even if the Feedback hasn’t been created yet.
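The original snippet was only shown as a screenshot; here is a hypothetical sketch of that kind of binding code, with invented Feedback, FeedbackBank, and FeedbackManager types standing in for the real ones:

```csharp
using UnityEngine;

// All type and member names below are assumptions for illustration.
public class Feedback : ScriptableObject { }

public class FeedbackBank : ScriptableObject
{
    public static FeedbackBank block; // the bank asset, assigned at load time
    public Feedback switchOn;
    public Feedback switchOff;
}

public static class FeedbackManager
{
    // Checks whether the feedback may play on the current screen,
    // then spawns it at the given position (body omitted in this sketch).
    public static void Play(Feedback feedback, Vector3 position) { }
}

public class Switch : MonoBehaviour
{
    bool activated;

    public void SetActivated(bool value)
    {
        if (value == activated) return;
        activated = value;

        // Pick the right Feedback from the bank; the binding exists in
        // code even if the Feedback asset hasn't been created yet.
        Feedback feedback = activated
            ? FeedbackBank.block.switchOn
            : FeedbackBank.block.switchOff;

        FeedbackManager.Play(feedback, transform.position);
    }
}
```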
This code is executed when a switch changes its activation state. A Feedback is selected depending on whether the “switch on” or “switch off” feedback needs to be played. “FeedbackBank.block” references the FeedbackBank Scriptable Object used to bind a Feedback to some variables (here, switchOn and switchOff).
In the end, we just ask a manager to check whether it’s possible to play this feedback on the current screen. If it is, the feedback is spawned at a given position.
In the game, all feedbacks outside of the current screen are not triggered, except Audio Nodes of type Loop, which are muted and will be unmuted when the player enters the screen they belong to.
All the animations in the game are made with the Animation editor window integrated into Unity. So when we need to play a Feedback during an animation, we use animation events. To connect an animation event to a Feedback, we create a MonoBehaviour that we called an AnimationBridge.
Here is an example with the spit animation of the Psychobot:
To get this nice “Feedback( Oneshot )” animation event with a comprehensible “spitBlock” parameter combo box, we added a PsychobotAnimationBridge MonoBehaviour to the animated GameObject hierarchy.
This AnimationBridge looks like this:
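The original code was shown as a screenshot; here is a hypothetical reconstruction of such a generated bridge. The “swallowBlock” entry is invented for illustration, and FeedbackBank/FeedbackManager stand for the types described above.

```csharp
using UnityEngine;

// Hypothetical reconstruction of a generated AnimationBridge.
public class PsychobotAnimationBridge : MonoBehaviour
{
    // One entry per Feedback variable found in the FeedbackBank.
    public enum Oneshot
    {
        spitBlock,
        swallowBlock, // invented entry, for illustration only
    }

    public FeedbackBank bank;

    // Called by an animation event shown as "Feedback( Oneshot )" in the
    // Animation editor, with a combo box listing the enum entries.
    public void Feedback(Oneshot oneshot)
    {
        switch (oneshot)
        {
            case Oneshot.spitBlock:
                FeedbackManager.Play(bank.spitBlock, transform.position);
                break;
            case Oneshot.swallowBlock:
                FeedbackManager.Play(bank.swallowBlock, transform.position);
                break;
        }
    }
}
```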
This code is auto-generated, because no one wants to maintain enums and switch cases like this by hand! The “Oneshot” enum entry names are generated from the names of the variables in the FeedbackBank (for example, you can see the match between “spitBlock” and “bank.spitBlock”).
To generate the code, we use the Text Template Transformation Toolkit (T4) in Visual Studio. You use T4 to generate C# (or C++, or whatever) the exact same way you would use PHP to generate HTML (see this simple example).
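As a taste of what this looks like, here is a toy T4 template that emits an enum. In the real generator the names would come from inspecting the FeedbackBank fields rather than from a hard-coded array; this snippet is only meant to show the PHP-like mix of literal output and C# control blocks.

```
<#@ template language="C#" #>
<#@ output extension=".cs" #>
// Auto-generated file -- do not edit by hand.
public enum Oneshot
{
<# foreach (string name in new[] { "spitBlock", "switchOn", "switchOff" }) { #>
    <#= name #>,
<# } #>
}
```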
Since we are not using 3D sounds, and since a single Audio Feedback can play several sounds at a time, we felt it would be a lot easier to centralize the AudioSource components instead of scattering them across all the GameObjects that needed to play sounds.
As a consequence, AudioSource components are only created on a single “manager” GameObject.
We limited the number of AudioSource components to 16 during the development of the game, to make sure we never needed more than 16 sounds playing at the same time. We doubled that amount when releasing the game, so now 32 sounds can play simultaneously (this is never needed in the official levels, but who knows, it could occur in community levels).
Those 32 AudioSource components are in a pool and are reused when we need to play a new sound.
When a Loop Audio Node wants to play and the exact same Loop is already playing, a reference count in the instance prevents several identical loops from playing at the same time. A similar thing happens with Start/End Audio Nodes: we discard a play request when a similar one is already playing, to avoid cacophony.
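The pooling and loop reference counting described above can be sketched like this (the names are ours, not the shipped code, and real loop handling would also cover the fades and End Nodes):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the centralized AudioSource pool: a fixed number of sources
// live on one manager GameObject and are reused for each play request.
public class AudioSourcePool : MonoBehaviour
{
    const int PoolSize = 32;
    readonly List<AudioSource> sources = new List<AudioSource>();

    // Reference counts per looping clip, so the same loop is never
    // started twice at the same time.
    readonly Dictionary<AudioClip, int> loopRefCounts =
        new Dictionary<AudioClip, int>();

    void Awake()
    {
        for (int i = 0; i < PoolSize; i++)
            sources.Add(gameObject.AddComponent<AudioSource>());
    }

    AudioSource GetFreeSource()
    {
        foreach (AudioSource source in sources)
            if (!source.isPlaying) return source;
        return null; // all sources are busy: the request is dropped
    }

    public void PlayLoop(AudioClip clip)
    {
        int count;
        loopRefCounts.TryGetValue(clip, out count);
        loopRefCounts[clip] = count + 1;
        if (count > 0) return; // identical loop already playing

        AudioSource source = GetFreeSource();
        if (source == null) return;
        source.clip = clip;
        source.loop = true;
        source.Play();
    }

    public void StopLoop(AudioClip clip)
    {
        int count;
        if (!loopRefCounts.TryGetValue(clip, out count) || count == 0) return;
        loopRefCounts[clip] = count - 1;
        if (count > 1) return; // other users of this loop remain

        foreach (AudioSource source in sources)
            if (source.loop && source.clip == clip) source.Stop();
    }
}
```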
Player and referenced stuff
To help Morusque manage all the data he was creating, we provided him with some tools built as custom editor windows.
The referencing tool listed all the Audio Nodes and Feedbacks that were not referenced by any FeedbackBank.
And the other crucial tool was a general Audio Nodes player:
This player listed all the Audio Nodes in the project, allowing us to play them instantly rather than having to hunt for the Play button in the Inspector of each asset.
It also highlighted in green the Audio Nodes currently playing in the game, giving a quick snapshot of all playing nodes.
Scriptable Objects FTW!
Scriptable Objects are a super cool Unity feature when it comes to storing data. They’re really helpful for achieving a data-driven scheme in your game.
All the tweaking can be done in play mode, with instant repercussions in the game, and without having to use plugins to keep your changes from being lost when the editor exits play mode.
The only con is that you have to be careful when tweaking the data, because there is no prefab: everything is saved directly in the project! You can still rely on source control (and force text mode for the assets) or on the power of Ctrl+Z!
Semi integrated sounds and effects
The cool thing with this system is that the sounds and feedbacks are nearly integrated as soon as they are created. Moreover, by using basic Unity functionality (Scriptable Objects, AudioClips and AudioSources) rather than an external tool (like FMOD Designer), we make sure we will have no compatibility surprises from platform to platform.
That’s it for this time!
We hope you’ve enjoyed this look behind the scenes for the audio part of Tetrobot and Co.
Could you give a little more detail on how you implemented the audio manager? Are you planning to release an example, or the source of the implementation above?
For now we have no idea whether we will release the source (it would need some cleaning first), but who knows!
For the moment, the only source we have released is a little tool called “Favority”, a Scene and Project bookmarking window (bit.ly/favority).
If you have any questions about the implementation, you can ask them here, and we will be glad to share a sample program or piece of code. But the full implementation, as I said earlier, would need a little cleaning before working on any project!