Till we meet again...

It is with a heavy heart that we announce that AtomJack will be closing its doors. It has been a wonderful experience and the team has built some amazing things that we regret we were not able to share with you. Much love to you all and thank you for your support these past two years.

-Allen & the rest of the AtomJack crew

Making Tools with Unity: WorldShape Part 2

Written by Max Anderson


In Part 1, I gave a quick look at what the WorldShape tool can do, but how does it actually work? How might you create your own similar tools? I won’t be able to explain every aspect of Unity, obviously, but I’ll try to point you in the right direction.

The bulk of the code is split between the WorldShape Component and the WorldShapeInspector “custom editor.” Let’s dive right in!


WorldShape Component

As I mentioned before, Components are at the heart of Unity’s extensibility, both at runtime and at edit time. A Component is created by deriving from the MonoBehaviour class, which comes with a few benefits right out of the box:

  • It lets you attach instances of that Component to any GameObject in the Scene.

  • It automatically lets you see and change public properties of the Component in the Inspector view. In the visual aid above, you see these properties, like “Has Collider” and “Debug Mesh,” which I will talk about later.

  • There are several functions you can implement on a Component that allow it to hook into Unity’s runtime, like Start and Update.

Before we can make an editor, we need some data to actually edit. A 2D shape is really just a series of points in space connected by lines, so a WorldShape’s primary purpose is to be a list of points. We can do this a number of ways, but I chose to use Unity’s hierarchy to our advantage. A WorldShape Component looks at its direct children and treats them as vertices. Their order as children and position in space determine where the shape’s vertices exist. Even without the WorldShape editing tool shown above, we can select individual vertices and move them around using Unity’s built-in tools to change the shape.
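To make that concrete, here is a minimal sketch of what such a component’s data model might look like. The field and method names are illustrative, not the actual WorldShape source:

using UnityEngine;

public class WorldShape : MonoBehaviour
{
    public bool hasCollider;   // appears in the Inspector as "Has Collider"
    public bool debugMesh;     // appears in the Inspector as "Debug Mesh"

    // The shape's vertices are simply its direct children, in order.
    public Vector2[] GetVertexPositions()
    {
        Vector2[] points = new Vector2[transform.childCount];
        for (int i = 0; i < transform.childCount; i++)
            points[i] = transform.GetChild(i).localPosition;
        return points;
    }
}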

But wait, where did those child vertices come from? All we did was add a WorldShape Component...

Normally, when you implement functions on a Component like Start or Update, these functions are only called when the game is running, not when editing. Fortunately, there is an attribute called ExecuteInEditMode that can be added to a Component class definition to force these functions to be called even when the game is not running.

WorldShape uses ExecuteInEditMode to make sure the Start function gets called when a new WorldShape is created. Start is then used to get the vertices on this shape. If it finds zero vertices, it helpfully creates a starter set of four vertices laid out as a square.
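A sketch of that bootstrapping, continuing the illustrative class above (the 4x4 default matches the square from Part 1):

[ExecuteInEditMode]
public class WorldShape : MonoBehaviour
{
    // ...fields and GetVertexPositions() from the sketch above...

    void Start()
    {
        // Seed a brand-new shape with a default 4x4 square of vertices.
        if (transform.childCount == 0)
        {
            CreateVertex(new Vector2(-2, -2));
            CreateVertex(new Vector2( 2, -2));
            CreateVertex(new Vector2( 2,  2));
            CreateVertex(new Vector2(-2,  2));
        }
    }

    void CreateVertex(Vector2 localPosition)
    {
        GameObject vertex = new GameObject("Vertex");
        vertex.transform.parent = transform;
        vertex.transform.localPosition = localPosition;
    }
}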

ExecuteInEditMode also causes the Update function to be called every time the editor redraws itself. Update can then respond to values that may have been changed by the user, such as “Has Collider” and “Debug Mesh” being set to true or false. In this case, changing either of those values means we want to add, remove, or update additional data and components on the WorldShape, as seen here:

Most of what’s happening here is a result of Update being called as the screen redraws. A PolygonCollider2D Component is added, updated, and removed as boxes are checked and vertices are changed. If this shape is used to represent a trigger volume later on, being able to automatically keep a collider in sync with the shape is critical.
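A hedged sketch of that bookkeeping, reusing the illustrative names from the earlier component sketch:

void Update()
{
    // Keep the PolygonCollider2D in sync with the "Has Collider" checkbox and the vertices.
    PolygonCollider2D poly = GetComponent<PolygonCollider2D>();
    if (hasCollider)
    {
        if (poly == null)
            poly = gameObject.AddComponent<PolygonCollider2D>();
        poly.points = GetVertexPositions();
    }
    else if (poly != null)
    {
        DestroyImmediate(poly);   // Destroy() is deferred and not allowed in edit mode
    }
}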

The Debug Mesh properties determine if and how a mesh will be created and triangulated to match the shape. Each change to the shape or the debug mesh properties can trigger a re-triangulation, or a re-creation of the mesh data, such as its vertex colors or UV mappings for the checkerboard material. This is all useful for a level designer creating geometry without art. Being able to actually see the shapes in game is important (so they tell me). The checkerboard texture helps the designer understand how large the shape is at a glance, which is necessary when designing for jump distances and clearance. The color tints allow for a quick assignment of meaning to an otherwise bland shape: Red means danger, green means healing, purple means puzzle trigger, etc.
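Building the debug mesh itself might look roughly like the following. Triangulate() stands in for a polygon triangulator (Unity does not ship one for arbitrary 2D shapes), and the UV scale and color are illustrative:

void RebuildDebugMesh()
{
    Vector2[] points = GetVertexPositions();
    Mesh mesh = new Mesh();
    mesh.vertices = System.Array.ConvertAll(points, p => (Vector3)p);
    mesh.triangles = Triangulate(points);                             // hypothetical triangulation helper
    mesh.uv = System.Array.ConvertAll(points, p => p * 0.5f);         // tiles the checkerboard texture
    mesh.colors = System.Array.ConvertAll(points, p => Color.red);    // color tint, e.g. red = danger
    mesh.RecalculateNormals();
    GetComponent<MeshFilter>().sharedMesh = mesh;
}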

We even see the shape being changed when one of its child vertices is selected and moved instead. This is because the Update function is run no matter what is selected, so it gives us a chance to re-evaluate any changes to the shape that may have occurred externally.

One thing to be very careful about when using ExecuteInEditMode is to make sure your editor-only code is truly editor-only. You have two ways of accomplishing this, both shown in the sketch after this list:

  • If you DO need Update called when the game is running to do something different, you should test against the Application.isPlaying boolean. This boolean is accessible from anywhere and will be FALSE when you are in editor mode.

  • If you DON’T need Update called when the game is running, you can surround it with an #if UNITY_EDITOR / #endif preprocessor block. This makes sure the code simply doesn’t exist in the published version, so there will be no useless code in your build.
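Side by side, the two guards might look like this (the helper names are illustrative):

// Option 1: Update still runs in builds, but branches on play mode.
void Update()
{
    if (Application.isPlaying)
        RunGameplayLogic();      // illustrative helper
    else
        SyncShapeInEditor();     // illustrative helper
}

// Option 2: the edit-time work is compiled out of published builds entirely.
void Update()
{
#if UNITY_EDITOR
    SyncShapeInEditor();         // illustrative helper
#endif
}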

Now, not counting the wireframe shape editor, we already have a ton of power inside a single Component. What if we add more Components?

WorldShape is just that -- a shape in the world. It doesn’t do anything on its own. Yes, we can add colliders and meshes, but even the collider doesn’t act the way we want out of the box. So what do we do when we want to change the meaning of a shape? We add more components!

Here’s a WorldShapeSolid component, which makes the shape act like blocking terrain:

Here’s a WorldShapeTrigger component, which registers the shape with the world as a trigger volume that can be passed through:

And finally, here’s a WorldShapeClone that can reference another WorldShape in order to duplicate its vertex set in a new position:

Each of these components relies on an existing WorldShape. WorldShapeSolid and WorldShapeTrigger use a different technique to update after their host shape has changed. Whenever a WorldShape changes, it uses GameObject.SendMessage to dispatch a function call to other components on the same GameObject.

GameObject.SendMessage is normally used at runtime to allow components to communicate with each other without knowing if the message will be received. Interestingly for us, it also works at edit time if ExecuteInEditMode is present on both the sending and receiving components. With GameObject.SendMessage, we can have the WorldShape update itself completely before then letting other components know if something changed, so they can respond.
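A sketch of that pattern follows; the message name “OnShapeChanged” is illustrative rather than the actual one WorldShape uses:

// Inside WorldShape (the sender), after it has fully updated its own vertices, collider, and mesh:
void NotifyShapeChanged()
{
    gameObject.SendMessage("OnShapeChanged", SendMessageOptions.DontRequireReceiver);
}

// Inside a sibling component such as WorldShapeTrigger (the receiver, also marked [ExecuteInEditMode]):
void OnShapeChanged()
{
    // For example, force "Has Collider" back on whenever the host shape changes.
    GetComponent<WorldShape>().hasCollider = true;
}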

WorldShapeSolid uses this message to “repair the edges” of a shape by adding and configuring WorldShapeEdge and EdgeCollider2D components.

WorldShapeTrigger uses this message to force “Has Collider” to be true, as well as configuring certain properties of the PolygonCollider2D so it works correctly with our characters.

WorldShapeClone is a bit different. Since it exists on a separate GameObject, it uses the familiar Update function to grab information from its source shape.
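A minimal sketch of that arrangement (the field and helper names are illustrative):

[ExecuteInEditMode]
public class WorldShapeClone : MonoBehaviour
{
    public WorldShape source;   // the shape whose vertices we mirror

    void Update()
    {
        if (source == null)
            return;
        // Rebuild our own child vertices from the source's layout, at this GameObject's position.
        CopyVerticesFrom(source);   // hypothetical helper
    }
}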



So far, we’ve been able to see some pretty neat features using JUST components. What happens when we go a step further and actually extend the editor?

In Unity, a “custom editor” can mean a few things. In this case, it means a class that extends the Editor base class. This is a bit of a misnomer because an Editor sub-class is normally used to create the block of GUI that shows up in the Inspector for a particular Component, so I tend to call them custom inspectors. A WorldShapeInspector is a custom inspector for WorldShape Components, obviously! Much more information about the basics of creating custom editors can be found in the Unity Manual and around the web.

Most custom inspectors will implement the Editor.OnInspectorGUI function, which lets you augment or replace the default inspector GUI in the Inspector View. I ended up not using this function at all, opting instead to implement the Editor.OnSceneGUI function, which allows you to render into the Scene View whenever a GameObject with a WorldShape component is selected. This is the basis for our wireframe shape editor.
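The skeleton of such a custom inspector is small; everything interesting happens inside OnSceneGUI, and the body here is just a placeholder:

using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(WorldShape))]
public class WorldShapeInspector : Editor
{
    void OnSceneGUI()
    {
        WorldShape shape = (WorldShape)target;   // the currently selected WorldShape

        // All of the wireframe drawing and handle logic described below lives here,
        // driven by the current event (Event.current).
    }
}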

Here is the OFFICIAL documentation for Editor.OnSceneGUI: “Lets the Editor handle an event in the scene view. In the Editor.OnSceneGUI you can do eg. mesh editing, terrain painting or advanced gizmos If call Event.current.Use(), the event will be "eaten" by the editor and not be used by the scene view itself.”

This is, frankly, a useless amount of information if you’re just getting started. What you really need to know is how Events work. The manual has this gem of information: “For each event OnGUI is called in the scripts; so OnGUI is potentially called multiple times per frame. Event.current corresponds to "current" event inside OnGUI call.”

So from this, we discover that Editor.OnSceneGUI gets called many times based on different events, and we can access the event via the static Event.current variable and check its type and other properties.

Okay, but how do we make that wireframe editor thingy?

Well, Unity has a few built-in tricks called Handles. In short, Handles is a class with a bunch of static functions that process different types of events to create interactive visual elements in the Scene View. Handles are “easy to use” but are far more complicated to actually understand, which you need to do if you hope to master them and create your own. It boils down to knowing how and when events are delivered, and what to do with each one. Understanding the event system is really the first key to understanding powerful editor extension.

I used Handles extensively for editing WorldShapes, as they are the most obvious and direct way to manipulate the shapes. There are five different types of handles used in the WorldShape editor:

  • Handles.DrawLine is used to draw the actual edges of the shape. This built-in function just draws a line with no interaction.

  • Handles.Label is used to draw the MxN size label at the shape’s pivot. Similar to Handles.DrawLine, this just draws some text into the Scene View with no interaction. Both are shown in the short sketch after this list.

  • A custom handle was created for moving parts:

    • Solid circular handles are shape vertices.

    • Solid square handles are shape edges.

    • The semi-filled square handle that starts in the center is the shape’s pivot.
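The two non-interactive handles really are one-liners. A hedged sketch, continuing the inspector skeleton above (GetWorldVertices is an illustrative helper):

Vector3[] verts = GetWorldVertices(shape);                        // illustrative helper
for (int i = 0; i < verts.Length; i++)
    Handles.DrawLine(verts[i], verts[(i + 1) % verts.Length]);    // the shape's edges

Handles.Label(shape.transform.position, "4 x 4");                 // e.g. the MxN size label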

The Handles API comes with a very useful Handles.FreeMoveHandle function that I ended up duplicating for our custom moving handle. I did this for a few reasons, but the two most important were (1) I needed snapping to work differently, and (2) I wanted to know how to do it.

Unfortunately, explaining everything about Handles in detail here would be very dry, but if you’re interested in using Handles, I would HIGHLY recommend grabbing ILSpy to take a peek inside a good portion of the Unity Editor source code. It helps me tremendously every time I want to know how something is done “officially.”

At a high level, making a custom Handle involves understanding Events, Graphics, and “control IDs.”

Events will be “sent” to your function automatically via the static Event.current variable. Each event object has an EventType stored in the Event.type variable. The two most important event types you should familiarize yourself with are:

  • EventType.Layout -- This is the first event sent during a repaint. You use it to determine sizes and positioning of GUI elements, or creation of control IDs, in order for future events to make sense. For example, how do you know if a mouse is inside a button if you don’t know where the button is?

  • EventType.Repaint -- This is the last event sent during a repaint. By this point, all input events should have been handled, and you can draw the current state of your GUI based on all of the previous events. Handles.DrawLine only responds to this event, for example. There are many ways to draw to the screen during this event, but the Graphics API gives you the most control.

All other event types happen between these two, most of which are related to mouse or keyboard interactions. You can accomplish a lot with just this information, but in order to deal with multiple interactive objects from a single Editor.OnSceneGUI call, you need to use “control IDs.”

Control IDs sound scary, but they’re really just numbers. Using them correctly is currently not well documented, so I’ll do my best to explain the process:

  • Before every event, Unity clears its set of Control IDs.

  • Each event handling function (Editor.OnInspectorGUI, Editor.OnSceneGUI, etc.) must request a Control ID for each interactable “control” in the GUI that can respond to mouse positions or keyboard focus. This is done using the GUIUtility.GetControlID function. Because this must be done for every event, the order of calls to GUIUtility.GetControlID must be the same during every frame. In other words, if you get a control ID during the Layout event, you MUST get that same ID for every other event until the next Layout event.

  • During the EventType.Layout event inside Editor.OnSceneGUI, you can use the HandleUtility.AddControl function to tell Unity where each Handle is relative to the current mouse position. This part is where the “magic” of mapping mouse focus, clicks, and drags happens.

  • During every event, use the Event.GetTypeForControl function to determine the event type for a particular control, instead of globally. For example, a mouse drag on a single control might still look like a mouse move to all other controls.

I promise it’s easier than it sounds. As proof, here’s code that demonstrates how to properly register a handle control within Editor.OnSceneGUI:

int controlID = GUIUtility.GetControlID(FocusType.Passive);
Vector3 screenPosition = Handles.matrix.MultiplyPoint(handlePosition);

switch (Event.current.GetTypeForControl(controlID))
{
    case EventType.Layout:
        // Register this handle and its distance from the mouse so Unity can track focus.
        HandleUtility.AddControl(controlID, HandleUtility.DistanceToCircle(screenPosition, 1.0f));
        break;
}

Not so bad, right? The worst part of that is the HandleUtility.DistanceToCircle call, which takes the screen position of the handle and a radius, and determines the distance from the current mouse position to the (circular) handle. With this, you have a Handle in the Scene View that can recognize mouse gestures, but it doesn’t do anything yet.

To make it do something, we can add code for the appropriate mouse events to the switch statement:

case EventType.MouseDown:
    if (HandleUtility.nearestControl == controlID)
    {
        // Respond to a press on this handle. Drag starts automatically.
        GUIUtility.hotControl = controlID;
        Event.current.Use();
    }
    break;
case EventType.MouseUp:
    if (GUIUtility.hotControl == controlID)
    {
        // Respond to a release on this handle. Drag stops automatically.
        GUIUtility.hotControl = 0;
        Event.current.Use();
    }
    break;
case EventType.MouseDrag:
    if (GUIUtility.hotControl == controlID)
    {
        // Do whatever with mouse deltas here.
        GUI.changed = true;
        Event.current.Use();
    }
    break;

You’ll see a few new things in that snippet:

  • HandleUtility.nearestControl [undocumented] -- This is set automatically by Unity to the control ID closest to the mouse. It knows this from our previous HandleUtility.AddControl call.

  • GUIUtility.hotControl -- This is a shared static variable that we can read and write to determine which control ID is “hot,” or in use. When we start and stop a drag operation, we manually set and clear GUIUtility.hotControl for two reasons:

    • So we can determine if our handle’s control ID is hot during other events.

    • So every other control can know that it is NOT hot, and should not respond to mouse events.

  • Event.Use -- This function is called when you are “consuming” an event, which sets its type to “Used.” All other GUI code should then ignore this event.

  • I mention that “drag starts / stops automatically” in the comments, which means EventType.MouseDrag events will automatically be sent instead of EventType.MouseMove when a button is held down. You are still responsible for checking hot control IDs in order to know what handle is being dragged.

I didn’t put any useful code in there because at this point, the world is your oyster. Need to draw something? Add a case EventType.Repaint: block and draw whatever you like. Want to repaint your handle every time the mouse moves? Add a case EventType.MouseMove: block and call SceneView.RepaintAll [undocumented], which will force the Scene View to… repaint all!
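For example, drawing the handle and repainting on hover might look roughly like this, inside the same switch statement (the radius and colors are illustrative):

case EventType.Repaint:
    // Draw the handle; highlight it when it is hot.
    Handles.color = (GUIUtility.hotControl == controlID) ? Color.yellow : Color.white;
    Handles.DrawWireDisc(handlePosition, Vector3.forward, 0.25f);
    break;
case EventType.MouseMove:
    // Keep hover feedback responsive by forcing the Scene View to redraw.
    SceneView.RepaintAll();
    break;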

With the Handles business out of the way, the rest of my Editor.OnSceneGUI function is about 200 lines long. In that 200 lines, I do the following:

  • Draw lines between every vertex using the Handles.DrawLine function.

  • Draw handles for every vertex that respond to mouse drags, snapping the resulting position and setting it back into the vertex position.

  • Draw handles for every edge (each adjacent pair of vertices) that respond to mouse drags, snapping the resulting position and applying the change in position to both vertices.

  • Respond to keyboard events near any of those handles to split edges or delete edges and vertices.

  • Draw a handle at the shape’s pivot position, that can be dragged to move the shape or shift-dragged to move the pivot without moving the vertices.

  • Calculate the shape’s bounding box and display it using the Handles.Label function.

  • Lastly, I use Unity’s built-in Undo functionality to make sure every one of those operations can actually be rolled back.

...and that is really it. There were a few stumbling blocks with Undo and some creating / deleting operations, but everything was smooth sailing once the handles were in place. Normally you wouldn’t have to write your own handles. Hopefully this has given you some insight into mastering the system, though, so you can make your own tools.
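For reference, the Undo wiring mostly comes down to calling the right API before each kind of change. A hedged sketch with illustrative variable names:

// Before moving a vertex during a drag:
Undo.RecordObject(vertexTransform, "Move WorldShape Vertex");
vertexTransform.position = snappedPosition;

// Creating and deleting vertices need the dedicated calls instead:
Undo.RegisterCreatedObjectUndo(newVertexObject, "Add WorldShape Vertex");
Undo.DestroyObjectImmediate(oldVertexObject);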

If you’ve made it this far, please enjoy this demonstration of one of our newer tools, the World Editor. This tool lets us fluidly create, change, and move between different “room” scenes that compose our streamed world using many other editor extension tricks. Perhaps a topic for another day?

Making Tools with Unity: WorldShape Part 1

Written by Max Anderson

It’s been said before how important tools are for game development. Good tools allow you to build solutions to problems in a more natural way. Imagine making images on a computer without image editing tools like Photoshop -- it’s possible, but not at all how you want to get things done.

Of course, building tools (especially good ones) takes time and resources just like any other part of a game. Additionally, tools are usually only ever appreciated by the team creating the game, so it is difficult to justify finding time to build and support tools when that time could be spent on “actually making the game.” Anything we can do to make tools development “cheaper” helps justify more and better tools.

One of the primary reasons we chose Unity to build Wayward was its reputation for extensibility and tool creation. Today I’ll be sharing some of the ways we’ve extended Unity to create our world-building tools. If you’re the technical type, I’ll also walk you through some neat pictures in Part 2 and explain which parts of Unity made these tools possible.

First, what are we even building? Wayward is a 2D exploration platformer, so it needs some 2D platforms… and walls, and ramps, and ceilings, and so on. So, it needs shapes! Shapes will define the solid parts of the world. Unity comes with some 3D primitive shapes like cubes and spheres. You can move and resize them, but that’s about it. We can do better.

The first tool created for Wayward was the WorldShape “tool.” It looks like this:

Amazing! Okay, so this looks INCREDIBLY boring, but there’s actually a lot going on here. What we’re looking at is the editor for a simple 4x4 box that gets created when you add a WorldShape Component to an empty GameObject. Components are at the heart of Unity’s flexible game engine design, and they come with a lot of extension points at runtime and in the editor. WorldShape takes advantage of many of these features.

Here’s a better view of what’s actually happening in Unity:

We have three views here: Hierarchy, Scene, and Inspector. The Hierarchy shows a tree of all of the objects in our Scene. The Scene view shows the visual representations of our objects, as they might be seen in game. The Inspector shows modifiable properties of the Components on our selected object. This is all standard Unity stuff.

What’s interesting about these views, though, is how we got here and what we can do next. Take a look at this video, which shows me starting from scratch and then using the features of the WorldShape tool:

Here you can see how I create a new WorldShape, and how it is reflected in the three views. I create a new GameObject and attach a WorldShape Component. I also expand the Shape object in the Hierarchy to show the newly-created vertex objects. I can then perform a number of operations on the shape: adding, moving, and removing vertices and edges, moving the entire shape, and changing the pivot position of the shape. I can also undo all of these operations.

If I’ve done my job right, none of that looks impressive. With a simple set of keyboard keys and mouse gestures, a level designer should be able to make the shapes they want instead of being restricted to spheres and cubes.

This tool forms the backbone for all of our level editing. Here are some ways it is used:

  • Creating solid shapes in the world for characters to collide with.
  • Creating pass-through “trigger volumes” that detect character presence, which can be used to create puzzles and script conditions.
  • Creating shapes that can be moved along “motion paths,” which are edited with a similar tool.
  • Creating “effect volumes,” like spike traps or lava, that affect characters within the shape.

The WorldShape editor allows me to prototype level designs in seconds. It’s easy to quickly construct a rough layout and adjust verts as necessary, while keeping everything aligned to our grid.
— Dan Rosas, AtomJack Level Designer

Was it easy to do? Yes and no. The first pass of this tool took a week or so as I was getting used to Unity, but now that I’ve learned my way around the engine, it would be pretty easy to reproduce. Regardless, it was well worth the effort, and has served as a baseline of knowledge for many subsequent tools.

If you’d like to know more about how this was actually made, see Part 2 for a detailed walk-through.



For the Love of Making Games: Ludum Dare 32

Written by ESandra Hollman

Our studio was founded by friends who enjoy making games together. Naturally as an independent studio, we believe in supporting the shared passion of making games. A simple way we can do that is by opening our doors to provide a collaborative workspace for a game jam. And thus, we were happy to do so for Ludum Dare 32!

Game jam headquarters for the weekend.

Seattle has a thriving indie community which promptly expressed interest when our Character Artist, Kieran Lampert, threw out the idea of AtomJack hosting during conversations at local meetups. Just as we anticipated, we had a great turnout. The reserved spots filled up quickly, resulting in the need for a wait list. Although space was limited, we were still able to accommodate everyone for the Show & Tell event after the jam. It was very rewarding to see everyone’s ideas come to life after all their hard work over the weekend!

The theme is announced: unconventional weapon. Let the brainstorming begin!

Kieran joined a team that he found via the Seattle Games Cooperative Meetup group. Being the talented artist that he is, he created all the art and UI assets for their game, Backfire.

Kieran meets his team and they hit the ground running with design discussions.

This was my second Ludum Dare and, in my opinion, another success. One of the things I love about game jams is the opportunity to set everything else aside and give yourself a small period of time devoted to exploring any ideas you have, without concern for commercial viability or conventions of normalcy. I think my team had a really interesting, unique idea, and it was a lot of fun pursuing it.
— Kieran Lampert, team member that made 'Backfire'
From concept to completion in 72 hours. Kieran's sketches and art mock-up for 'Backfire'.

As for the others, there were a few teams and many compo participants. Here are the thoughts of some of the participants following the conclusion of LD32:

It was a great experience and exceeded my expectations! The folks who attended were very helpful, enthusiastic and highly skilled in many areas, and it was definitely an eye opener in terms of finding out what can be accomplished in a short timeframe! Particularly when leveraging the powerful advantages offered by the Unity framework.
— Michael Smith, maker of 'Aeterno: Warrior of Light'
I’ve been to several game jams, but this was my first ever Ludum Dare. Someone, somewhere, offered teams a chance to work with an Oculus Rift, and my team jumped on that opportunity. We wanted to make a highly visual, environment-focused experience that explored the topic of personal anxiety, and to that effect we made our virtual world a dark, glistening expanse with pockets of light for the player to explore. I finally learned how to implement my music tracks in FMOD using Unreal Engine 4, and personally had great fun implementing adaptive music into the scene.
— Evan Witt, team member that made 'Laughter'

(Special thanks to Oculus for providing development kits to use!)

This was my first game jam, and I can’t think of a better way it could have happened. I went looking to meet other game developers and maybe learn a little about Unity. Both missions accomplished! With so many other devs around, I could get answers for all my Unity questions immediately. I was able to get the gameplay working on Saturday, so I could spend Sunday polishing. I’ll definitely want to be back if you guys decide you want to host another jam.
— Peter Hufnagel, team member that made 'Pacifist Piper'
This was my first solo Ludum Dare and I had a total blast. My favorite part was seeing everyone else’s games at the end of the jam and hearing their trials, tribulations, and lessons learnt. A huge thank you to AtomJack and Kieran for giving us a venue in which to collaborate!
— Constance Chen, maker of 'Investigator and the Case of the Unconventional Weapon'

See the Seattle Indies website for links to the games they made with us that weekend.

Pizza! It fuels game developers. (Also coffee. Lots and lots of coffee.)

Collaborating, learning, experimenting and having fun - that’s what it’s all about! We hope to be able to host again in the future. Until then, keep making games, indies!

Animation Blending: Getting Off On The Right Foot

Written by Floyd Bishop

I’ve animated on all kinds of projects, including film, television, web series, and games. Of all the ways animation is used to entertain today, I enjoy animating for games the most! There is a level of empathy that a player has with a game character that cannot be matched in film or on a television show. In a game, YOU are controlling what the character is doing. While you may watch your favorite film three or four times, adding up to maybe 6 or 8 hours for a 2 hour movie, players will be spending many more hours with a game character. As an animator, I want to do the best work possible and make sure the characters are entertaining and I’m not wasting the players’ time.

When animating for a game, aside from the needs of the character, you also have to consider the needs of the game. The first thing I do is talk with Design to find out what kinds of things the player will need to be doing with a specific animation or movement. What is the character all about? What will they be doing? What kind of personality do they have? Are they confident, scared, silly, etc?

The Robot needs to look adventurous and active, but also needs to walk and run at a very specific pace. He has some very mechanical features, and should look heavy yet athletic when he moves around. He's strong, but can hustle if he needs to. With all that in mind, I can start to flesh out our Robot character.


I usually start animating characters by beginning with a stand animation. Robot_loc_stand was the first game animation created for this game. The file name starts with the character's name, then the type of animation (a locomotion in this case), then the name of the action. File naming is important for all kinds of reasons, most importantly for the game's code to know what animations to call for what actions. If a file is named wrong or placed in the wrong directory, the game cannot see it and therefore cannot use it. Keeping consistent filenames and directory structures across all characters helps to keep development going smoothly.

A stand animation is perhaps the most iconic animation a character has. The player will see it a lot, so it needs to look great. For the Robot's stand, I wanted him to look powerful and ready to move. Even though he's just standing there, he's got to keep moving. I have his eye looking around a bit, and added a blink to help keep him alive.


Design wants the Robot to run at a rate of 9 units per second. Now that I know how far the Robot should be able to travel in one second, I can start animating a run. I start my animation by keying the Robot at the origin on frame 0. I then slide the Robot forward on frame 30 to a distance of 9 units (at 30 frames per second, frame 30 marks exactly one second). When I play, I see the Robot moving forward at the speed requested by the designers. Now I need to animate the character running at that pace.

All animation in the game happens at the character's origin. The root of the character is then moved through the game world by code. As a result, the animations where the character moves through space have to be counter animated so that the character doesn't actually move. Think of a person running on a treadmill. They are running at a specific speed, but aren't really moving through space. This is what we need to make in order for the engine to have what it needs.
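In code terms, the engine side of the treadmill amounts to something like this sketch (the variable name is illustrative; 9 units per second is the designed run speed):

// The run animation loops in place; code like this slides the character through the world.
float runSpeed = 9f;   // units per second, per the design spec
transform.position += Vector3.right * runSpeed * Time.deltaTime;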


The Robot walks at a rate of 6 units per second. The walk was animated after the run. Yes, we literally ran before we walked on this project. The walk is important because sometimes the player may want to approach something slowly. Maybe they are inching forward on a ledge, or creeping up on an enemy? A run wouldn't really be what the player wants in these situations. It also helps make a nicer blend between not moving and the speed of a run.

There is more to it than that though. The animations have to blend together in a way that makes sense.

In Unity, we have complete control over how long it takes for one animation to transition into another animation. We can also control what part of one animation blends into what part of another. We need to make sure that we don’t get any stutter steps or awkward movement when we go from a walk cycle to a run cycle. The way we do this is to get the legs to match up. If the left leg is moving forward in the walk cycle, we want the blend to happen when the left leg is coming forward in the run.
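One scripted way to express that idea is Unity’s Animator.CrossFade, though a project may just as well configure these blends in the Animator Controller instead. The state name and numbers here are illustrative, not the project’s actual settings:

Animator animator = GetComponent<Animator>();
// Blend into the run over a quarter of its cycle, starting the run partway through
// so the left leg is swinging forward in both clips.
animator.CrossFade("Robot_loc_run", 0.25f, 0, 0.25f);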

Hopefully you’ve learned a little bit about how some basic game animations are used, and understand how one animation is blended into another in order to make a seamless experience for the player.

One of the hardest parts of working on a game is not being able to talk about all the cool and exciting stuff you get to see on a daily basis. I look forward to being able to show off more animation as more of the game gets shown to the public. I’m anxious to meet you at upcoming events and of course, when you get a chance to play the finished game.

Thanks for reading!