Each of these components relies on an existing WorldShape. WorldShapeSolid and WorldShapeTrigger share one technique for updating after their host shape has changed. Whenever a WorldShape changes, it uses GameObject.SendMessage to dispatch a function call to other components on the same GameObject.
GameObject.SendMessage is normally used at runtime to allow components to communicate with each other without knowing if the message will be received. Interestingly for us, it also works at edit time if ExecuteInEditMode is present on both the sending and receiving components. With GameObject.SendMessage, we can have the WorldShape update itself completely before then letting other components know if something changed, so they can respond.
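A minimal sketch of this pattern, assuming a message named "OnShapeChanged" (the actual message name and method bodies in the WorldShape source are not shown in this article, so treat these identifiers as illustrative):

```csharp
using UnityEngine;

[ExecuteInEditMode]
public class WorldShape : MonoBehaviour
{
    void OnValidate()
    {
        // Finish updating our own state first...
        RebuildVertices();

        // ...then notify sibling components. This works at edit time
        // because both sender and receivers have [ExecuteInEditMode].
        SendMessage("OnShapeChanged", SendMessageOptions.DontRequireReceiver);
    }

    void RebuildVertices() { /* recompute vertex data here */ }
}

[ExecuteInEditMode]
public class WorldShapeSolid : MonoBehaviour
{
    // Invoked via SendMessage whenever the host shape changes.
    void OnShapeChanged() { /* repair edges here */ }
}
```

SendMessageOptions.DontRequireReceiver keeps Unity from logging an error when no sibling component implements the message, which is exactly the "without knowing if the message will be received" behavior described above.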
WorldShapeSolid uses this message to “repair the edges” of a shape by adding and configuring WorldShapeEdge and EdgeCollider2D components.
WorldShapeTrigger uses this message to force “Has Collider” to be true, as well as configuring certain properties of the PolygonCollider2D so it works correctly with our characters.
WorldShapeClone is a bit different. Since it exists on a separate GameObject, it uses the familiar Update function to grab information from its source shape.
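A sketch of that polling approach, assuming a `source` field and a `CopyFrom` helper (both hypothetical names):

```csharp
using UnityEngine;

[ExecuteInEditMode]
public class WorldShapeClone : MonoBehaviour
{
    // The shape on a separate GameObject that this clone mirrors.
    public WorldShape source;

    void Update()
    {
        if (source == null) return;

        // Update runs at edit time too, thanks to [ExecuteInEditMode],
        // so the clone tracks its source shape while you edit.
        CopyFrom(source);
    }

    void CopyFrom(WorldShape shape) { /* copy vertices, settings, etc. */ }
}
```

Polling in Update is the simpler choice here because SendMessage only reaches components on the same GameObject, and the clone lives elsewhere in the hierarchy.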
So far, we’ve been able to see some pretty neat features using JUST components. What happens when we go a step further and actually extend the editor?
In Unity, a “custom editor” can mean a few things. In this case, it means a class that extends the Editor base class. This is a bit of a misnomer because an Editor sub-class is normally used to create the block of GUI that shows up in the Inspector for a particular Component, so I tend to call them custom inspectors. A WorldShapeInspector is a custom inspector for WorldShape Components, obviously! Much more information about the basics of creating custom editors can be found in the Unity Manual and around the web.
Most custom inspectors will implement the Editor.OnInspectorGUI function, which lets you augment or replace the default inspector GUI in the Inspector View. I ended up not using this function at all, opting instead to implement the Editor.OnSceneGUI function, which allows you to render into the Scene View whenever a GameObject with a WorldShape component is selected. This is the basis for our wireframe shape editor.
Here is the OFFICIAL documentation for Editor.OnSceneGUI: “Lets the Editor handle an event in the scene view. In the Editor.OnSceneGUI you can do eg. mesh editing, terrain painting or advanced gizmos If call Event.current.Use(), the event will be "eaten" by the editor and not be used by the scene view itself.”
This is, frankly, a useless amount of information if you’re just getting started. What you really need to know is how Events work. The manual has this gem of information: “For each event OnGUI is called in the scripts; so OnGUI is potentially called multiple times per frame. Event.current corresponds to "current" event inside OnGUI call.”
So from this, we discover that Editor.OnSceneGUI gets called many times based on different events, and we can access the event via the static Event.current variable and check its type and other properties.
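In practice, that means OnSceneGUI usually starts by looking at Event.current and branching on its type. A bare-bones sketch (the event cases are placeholders, not the actual WorldShape editor logic):

```csharp
using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(WorldShape))]
public class WorldShapeInspector : Editor
{
    void OnSceneGUI()
    {
        // OnSceneGUI is called once per event, many times per frame.
        Event e = Event.current;

        switch (e.type)
        {
            case EventType.Layout:
                // Register controls and their positions here.
                break;
            case EventType.MouseDown:
                // Begin an interaction here.
                break;
            case EventType.Repaint:
                // Draw the current state here.
                break;
        }
    }
}
```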
Okay, but how do we make that wireframe editor thingy?
Well, Unity has a few built-in tricks called Handles. In short, Handles is a class with a bunch of static functions that process different types of events to create interactive visual elements in the Scene View. Handles are “easy to use” but are far more complicated to actually understand, which you need to do if you hope to master them and create your own. It boils down to knowing how and when events are delivered, and what to do with each one. Understanding the event system is really the first key to understanding powerful editor extension.
I used Handles extensively for editing WorldShapes, as they are the most obvious and direct way to manipulate the shapes. There are five different types of handles used in the WorldShape editor:
Handles.DrawLine is used to draw the actual edges of the shape. This built-in function just draws a line with no interaction.
Handles.Label is used to draw the MxN size label at the shape’s pivot. Similar to Handles.DrawLine, this just draws some text into the Scene View with no interaction.
A custom handle was created for moving parts:
Solid circular handles are shape vertices.
Solid square handles are shape edges.
The semi-filled square handle that starts in the center is the shape’s pivot.
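The two non-interactive handles, Handles.DrawLine and Handles.Label, are the easy part. A sketch of drawing a shape's edges and its size label (the vertex array and label text are assumed inputs, not the article's actual code):

```csharp
using UnityEditor;
using UnityEngine;

public static class ShapeDrawing
{
    // Handles checks the event type internally, so these calls are
    // safe to make during any event; they only draw during Repaint.
    public static void DrawShape(Vector3[] verts, Vector3 pivot, string sizeLabel)
    {
        Handles.color = Color.green;

        // Connect each vertex to the next, wrapping around at the end.
        for (int i = 0; i < verts.Length; i++)
            Handles.DrawLine(verts[i], verts[(i + 1) % verts.Length]);

        // The MxN size label at the shape's pivot.
        Handles.Label(pivot, sizeLabel);
    }
}
```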
The Handles API comes with a very useful Handles.FreeMoveHandle function that I ended up duplicating for our custom moving handle. I did this for a few reasons, but the two most important were (1) I needed snapping to work differently, and (2) I wanted to know how to do it.
Unfortunately, explaining everything about Handles in detail here would be very dry, but if you’re interested in using Handles, I would HIGHLY recommend grabbing ILSpy to take a peek inside a good portion of the Unity Editor source code. It helps me tremendously every time I want to know how something is done “officially.”
At a high level, making a custom Handle involves understanding Events, Graphics, and “control IDs.”
Events will be “sent” to your function automatically via the static Event.current variable. Each event object has an EventType stored in the Event.type variable. The two most important event types you should familiarize yourself with are:
EventType.Layout -- This is the first event sent during a repaint. You use it to determine sizes and positioning of GUI elements, or creation of control IDs, in order for future events to make sense. For example, how do you know if a mouse is inside a button if you don’t know where the button is?
EventType.Repaint -- This is the last event sent during a repaint. By this point, all input events should have been handled, and you can draw the current state of your GUI based on all of the previous events. Handles.DrawLine only responds to this event, for example. There are many ways to draw to the screen during this event, but the Graphics API gives you the most control.
All other event types happen between these two, most of which are related to mouse or keyboard interactions. You can accomplish a lot with just this information, but in order to deal with multiple interactive objects from a single Editor.OnSceneGUI call, you need to use “control IDs.”
Control IDs sound scary, but they’re really just numbers. Using them correctly is currently not well documented, so I’ll do my best to explain the process:
Before every event, Unity clears its set of Control IDs.
Each event handling function (Editor.OnInspectorGUI, Editor.OnSceneGUI, etc.) must request a Control ID for each interactive “control” in the GUI that can respond to mouse positions or keyboard focus. This is done using the GUIUtility.GetControlID function. Because this must be done for every event, the order of calls to GUIUtility.GetControlID must be the same for every event. In other words, if you get a control ID during the Layout event, you MUST get that same ID for every other event until the next Layout event.
During the EventType.Layout event inside Editor.OnSceneGUI, you can use the HandleUtility.AddControl function to tell Unity where each Handle is relative to the current mouse position. This part is where the “magic” of mapping mouse focus, clicks, and drags happens.
During every event, use the Event.GetTypeForControl function to determine the event type for a particular control, instead of globally. For example, a mouse drag on a single control might still look like a mouse move to all other controls.
I promise it’s easier than it sounds. As proof, here’s code that demonstrates how to properly register a handle control within Editor.OnSceneGUI:
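This is a minimal sketch, not the actual WorldShape editor source; the handle position, radius, and drag logic are placeholder assumptions, but the event and control-ID flow is the part that matters:

```csharp
using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(WorldShape))]
public class WorldShapeInspector : Editor
{
    void OnSceneGUI()
    {
        // Request the control ID in the same order for EVERY event.
        int controlID = GUIUtility.GetControlID(FocusType.Passive);

        // Placeholder: a real editor would use a vertex position here.
        Vector3 position = ((WorldShape)target).transform.position;

        // Ask for this control's view of the event, not the global one.
        switch (Event.current.GetTypeForControl(controlID))
        {
            case EventType.Layout:
                // Tell Unity how close the mouse is to this control so
                // it can decide which control has focus.
                HandleUtility.AddControl(controlID,
                    HandleUtility.DistanceToCircle(position, 0.5f));
                break;

            case EventType.MouseDown:
                if (HandleUtility.nearestControl == controlID &&
                    Event.current.button == 0)
                {
                    GUIUtility.hotControl = controlID; // capture the drag
                    Event.current.Use();               // "eat" the event
                }
                break;

            case EventType.MouseDrag:
                if (GUIUtility.hotControl == controlID)
                {
                    // Move the vertex here, then consume the event.
                    Event.current.Use();
                }
                break;

            case EventType.MouseUp:
                if (GUIUtility.hotControl == controlID)
                {
                    GUIUtility.hotControl = 0; // release the capture
                    Event.current.Use();
                }
                break;

            case EventType.Repaint:
                // Draw the handle's current state.
                Handles.DotHandleCap(controlID, position,
                    Quaternion.identity, 0.1f, EventType.Repaint);
                break;
        }
    }
}
```

The GUIUtility.hotControl dance is what makes a drag "stick" to one handle: once a control captures the hot control, all subsequent mouse events route to it until it releases.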