Modeling a Stylized Character

One of the many reasons I enjoy making character models is that the process is a bit different each time. Like people in our everyday lives, each character has its own quirks, presenting its own set of challenges and puzzles to solve. So I wanted to share with you an overview of the process by which I brought our Robot from 2D concept to 3D asset.

GETTING STARTED

This particular model started with a pretty clear-cut concept, and even before that, a well-established outline of what we wanted the character to be. Depending on who you work with, sometimes your art director gives you a fully realized concept, as I had for our Robot. Other times you may just be given a rough sketch, and your teammates are looking to you to pour your own influences into filling in the details. I’m lucky enough to be working with some pretty talented folks here, so the concept I was given by our art director was already imbued with a ton of appeal.

Here at AtomJack, we’re a pretty small, tight-knit group, which makes it much easier to throw ideas around and come to a consensus. Upon receiving the concept, I sat down with our art director, animator, and lead designer to get an overview of the character and the specific needs for the model. How do his leg joints connect? What is his range of motion? What material is he made of? The discussion continues until we all feel we’re on the same page, at which point it’s time to dive into production.

FORM

The exaggerated nature of our game’s art style was used in the Robot’s design to push the traits of his character. For example, the Robot’s large torso, shoulder pauldrons, and hands exude strength, but his color scheme and wide, green eye belie a nature that is more curious than menacing. The design David came up with already provided so much definition that my task became ensuring as much of that character as possible was maintained in the transition from 2D to 3D.

When it comes to actually constructing the model, I tend to make two general passes. The first pass is centered entirely around trying to nail down the feel of the character. At this stage I’m not worried about any technical limitations, but instead focus on major design elements: silhouette, exaggeration of important features, overall aesthetic appeal, and so on. My goal is to make sure the 3D version looks and feels as much like the 2D concept as possible, while still maintaining the physicality of an object in three dimensions. Since I’m not concerned about restrictions, I tend to go back and forth between ZBrush and Photoshop, sculpting in high resolution and comparing to the concept as I go.

An early, high-poly pass on the Robot model in ZBrush.

Once I’m happy with the feel of the character, it’s time to switch gears. As I mentioned before, each character tends to be unique in its creation process, but one aspect they all share is the need to find a balance between form and function. Games, though constantly evolving, still come with a fair number of restrictions. For example, most modern games strive to maintain a frame rate of 60 frames per second. For those who may not have experience with the inner workings of game development, this means the game engine is gathering all information from the player (inputs from your gamepad or mouse/keyboard), as well as anything else happening at the same time (physics, graphics, animation, etc.), calculating all these interactions, and rendering the results on your screen, 60 times every second. If you’re not mindful of the way you’re handling your systems and art assets, these calculations can quickly get out of hand and interfere with the player’s experience, whether it be lag, a lowered frame rate, game crashes, or any number of other problems. This is why balance is so important. I can make the characters as beautiful as humanly possible, but if the game doesn’t run well, it’s all for naught.
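
To put that in concrete terms, 60 frames per second leaves roughly 1000 ms ÷ 60 ≈ 16.7 milliseconds per frame for the engine to do all of that work; every extra polygon, texture lookup, and script competes for a slice of that budget.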

FUNCTION

My second pass, then, becomes one of feasibility. One of the questions answered in the initial discussion between the art director, our designer, and me is the estimated polycount budget for the character. Polycount refers to the number of polygons - usually triangles - used to construct the model. Each polygon adds to the calculations that need to be made when the model is rendered in the game, so, generally speaking, the lower the polycount, the lower the stress on the engine. In games where there are potentially a lot of characters on screen, it is important to be economical in the way you distribute polygons amongst your characters.

I use TopoGun to create a low-poly model from the high-poly ZBrush sculpt. There are many programs that allow you to do this, but TopoGun is easy to use and produces great results.

I usually do a straightforward retopology of the model in TopoGun, trying to be efficient and economical in my distribution of polygons. Sometimes, however, I still end up over the polygon budget and need to go back and strip out polygons. The tricky part is deciding how many polygons to cut, and where. To make this decision, I usually stick to three criteria:

  • Silhouette - Are the polygons in question helping to strengthen the silhouette of the model?

  • Deformation - Are the polygons in question required to allow for smooth deformation when the model is rigged, skinned, and animated?

  • Ease of Texturing - Will these polygons make it significantly easier for me to hide texture seams or differentiate between parts in the texture?

If the answer to all of these questions is “No,” the polygons are removed. I continue to cut polygons until the model falls roughly within the budgeted polycount. Since the Robot is a main character - and therefore spends a lot of time on screen - and is quite large, we allowed him a much higher polycount than we’d give other character models.

After modeling, the polygons are divided into parts, laid flat on a 2D plane, and painted over. The resulting 2D image, known as a “texture map,” is then applied to the model.

In addition to polycount, I need to consider how much texture space to allot for the character. Depending on your game engine and the rest of your tech, textures can often be a larger tax on memory and processing power than the number of polygons in your scene. To determine the size of the texture I’m going to use on a character, I again weigh the costs and benefits of form versus function. I can free up a lot of resources by making the texture maps smaller, but how large an impact will this have on the fidelity and appeal of the character model (and ultimately the overall aesthetic appeal of the game)? Again, with the Robot, his size on screen and role as a main character give him priority. This simply means that if resources are tight, corners will have to be cut in other areas. Not necessarily a problem, but something to be mindful of all the same.
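
As a rough illustration of the trade-off: an uncompressed 32-bit 2048×2048 texture occupies 2048 × 2048 × 4 bytes = 16 MB, while stepping down to 1024×1024 cuts that to 4 MB. The real numbers depend on compression and mipmaps, but that quadratic relationship between resolution and memory is why texture sizes are worth scrutinizing.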

TEXTURING

Again, given the stylized nature of the game, some distinct choices were made in terms of the textures and materials on the characters. The Robot, like our other characters, uses an unlit surface shader, which means it derives its color and lighting information solely from its texture and is not influenced by lights in the scene. This comes with its own set of challenges, but what is gained is the ability to completely control the hue, saturation, and value of the textures to help them more closely match the concept (and, more importantly, the surrounding environment in-game).

The diffuse (unlit) texture being painted in Photoshop. The concept art is kept open for quick reference.

Lately I’ve been using a program called 3D-Coat to paint our character textures - along with a slew of other features, it allows you to paint directly on the model - but the Robot was one of our first fully textured characters and was painted in Photoshop. During the painting process I’m frequently comparing the concept art with how the textures are looking on the model. In addition, I’m trying to be mindful of how the Robot is looking in the in-game environment, even if it means simply overlaying a render of the Robot on a piece of environment concept art. It’s incredibly easy to become too focused on how a model looks in isolation, only to find out it sticks out like a sore thumb when placed in the engine.

The Robot model, before and after the painted texture map is applied.

FINAL CHECKS

I should mention that, throughout this entire process, I’m constantly checking in with the art director and making minor adjustments based on feedback. This ensures that the model is staying on course and remaining true to the initial concept. At the end of the process, it’s good to gather everyone and give the model another final once-over, just to make sure everything is as it should be before the model is handed off to the animator.

The model is complete!


Having stayed on course and suffered no major catastrophes, the Robot is finally modeled, textured, and ready for animation!

Dev Blog: Art Tools

As a game dev topic, tools development might not be as exciting as gameplay or graphics programming. For a small studio, though, specialized tools are essential to making the best use of our limited resources. And as a side note, I find that they can provide some really interesting problems to solve early in the development cycle … while also offering an opportunity to build strong relationships with the art and design teams.

In this blog post I’ll talk a little about the process for creating development tools, followed by a few examples of the art tools we’ve created over the last few weeks for our current project.

What is a Dev Tool?

As game developers, we all rely on a variety of software tools (think Maya, Photoshop, Perforce, Unity3D, Google Docs, and Visual Studio). When the software doesn’t do everything we want it to (or when a particular task is repetitive), a programmer can author additional software on top of, or in between, the off-the-shelf applications.

As an example, when I arrived at AtomJack, the first tool requested by our Animation Lead was one to automate the conversion of a dozen robot animations into a format compatible with our game engine. Done manually, the process was taking him more than four hours to complete.

So, working with Floyd to understand his workflow, I used a combination of MEL, Python, and C# to build a robust tool that would ensure any future Maya files (animated or not) could easily be converted, renamed, and transferred into our game engine.

Although it may have taken a few days of programmer time, it is expected that over the course of this project the tool will save weeks’ worth of the animator’s time (and, perhaps more importantly, limit the animator’s frustration).

Development Environments:

Most software used by content creators already anticipates the need to extend or automate its functionality and offers an Application Programming Interface (API) or language extension for doing just that. Autodesk’s Maya lets you run commands in its own Maya Embedded Language (MEL) and, better yet, through its Python API. Both are even available in “headless” mode … that is, without the graphical interface … improving performance. Adobe’s Photoshop offers developers a robust scripting API in a variety of languages: AppleScript, JavaScript, and Visual Basic.
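
To give a flavor of what that looks like in practice, here is a minimal, hypothetical sketch of a headless batch export using Maya’s Python API. It is not the robot conversion tool described above; the folder paths and naming are placeholders, it assumes the script is run with Maya’s bundled interpreter (mayapy), and it assumes the FBX plugin is installed.

```python
# Run with Maya's bundled interpreter (mayapy), not a regular Python install.
# Paths below are placeholders for illustration only.
import os
import maya.standalone

maya.standalone.initialize(name='python')   # start Maya without the GUI

import maya.cmds as cmds
cmds.loadPlugin('fbxmaya')                  # make the FBX exporter available

SOURCE_DIR = r'D:\anims\source'             # hypothetical folder of .ma scene files
OUTPUT_DIR = r'D:\anims\export'             # hypothetical folder for exported .fbx files
os.makedirs(OUTPUT_DIR, exist_ok=True)

for name in os.listdir(SOURCE_DIR):
    if not name.lower().endswith('.ma'):
        continue
    # Open each scene, then export everything in it as FBX.
    cmds.file(os.path.join(SOURCE_DIR, name), open=True, force=True)
    out_path = os.path.join(OUTPUT_DIR, os.path.splitext(name)[0] + '.fbx')
    cmds.file(out_path, force=True, exportAll=True, type='FBX export')
```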

In other cases, it can be a matter of exporting data from one application (using its proprietary file format), processing the exported file, then importing the processed data into the other program. For these tools, I believe it is important to make the experience as seamless as possible. While technical artists are more common than they used to be, my experience is that most artists prefer simplicity over extensive features when getting data from one application to another.

However, if you are making a feature-rich application, you’ll frustrate your team if you haven’t included the functionality expected of modern applications (copy/paste, undo, file selection, etc.). In these cases, I highly recommend using a language with a robust and simple windowing implementation (like C#/.NET).

Unique Challenges:

A significant challenge when building tools that bridge off-the-shelf applications is that those programs are in constant flux. New releases may change the definition of proprietary file types, API libraries, or even folder paths.

As a simple example, Maya stores FBX export presets in the user’s folder:

\Documents\maya\FBX\Presets\2015.0\export\

If the artist automatically updates to the new version of Maya, the directory changes:

\Documents\maya\FBX\Presets\2015.1\export\

So, even in a highly controlled studio environment, hard-coded paths will cause problems.
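
One way to soften that particular problem (a sketch of the general idea, not necessarily how our tools do it) is to discover the versioned folder at runtime instead of hard-coding it:

```python
import glob
import os

def find_fbx_preset_dirs(documents_root):
    """Return every FBX export preset folder found under the user's Maya
    documents directory, regardless of which Maya version created it."""
    pattern = os.path.join(documents_root, 'maya', 'FBX', 'Presets', '*', 'export')
    return sorted(glob.glob(pattern))  # sorted so the newest version comes last
```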

Whenever I’m working with directory paths, I try to automatically save the user’s last-used directory in a configuration file, then make it the default the next time the tool loads. This ensures users are not required to repeatedly search through an extended folder hierarchy (a common result of combining large projects with version-control software). Like good audio design, you know your tools are successful when the users don’t even realize they are there.
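
The mechanics are simple enough. A minimal sketch, assuming a small JSON file in the user’s home directory (the file name here is made up), might look like this:

```python
import json
import os

CONFIG_PATH = os.path.expanduser('~/.exporter_settings.json')  # hypothetical config file

def load_last_dir(default='.'):
    """Return the directory the user picked last time, or a default."""
    try:
        with open(CONFIG_PATH) as f:
            return json.load(f).get('last_dir', default)
    except (OSError, ValueError):
        return default

def save_last_dir(path):
    """Remember the user's choice for the next run of the tool."""
    with open(CONFIG_PATH, 'w') as f:
        json.dump({'last_dir': path}, f)
```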

Another challenge for a small studio is knowing when to build a tool … and when not to. You may hear the statement, “At my last company we had a tool that could do XYZ,” but engineering resources vary significantly from studio to studio. There is an old XKCD comic that does a great job of illustrating the trade-off between the time saved by automating a routine task and the time it takes to automate it.

In the end, scope, performance, reusability, available engineering resources, and the possibility of releasing tools publicly (through something like Unity’s Asset Store) should all factor into the decision of whether or not to develop a particular tool.

More Sample Tools

Export and Import Level Data:

For our current project, we want the artists to be able to hand-paint the environmental art. However, the levels are built by our design team inside the game engine. So I wrote a set of tools to transfer that level information from Unity3D to Photoshop. The tool takes a variety of context-rich level data and encodes it into individual art layers in a single Photoshop file. The information is then available for reference when painting environmental art.

Both of the animated gifs shown here demonstrate the scripts running in real time.

Slice, Save, and Reassemble

With the high pixel density of modern consoles, it is important that the artists are able to paint at an extremely high resolution. And when the artists are painting an entire room at a time (stretching across multiple screens as the game camera pans around), this can add up to a lot of pixels. Unfortunately, even on high-end development PCs, Photoshop isn’t very responsive when working on images that can be up to 100,000 pixels wide. To solve this, I’ve written a set of scripts that slice the PSD files into smaller pieces and then reassemble them when opened for edit. The script automatically finds neighboring images and assembles them into a single PSD. In this example, each square represents 4096×4096 pixels, for a total 3×3 authoring size of 12,288×12,288 pixels.
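
Our scripts do this through Photoshop’s scripting API so that layers survive the round trip, but the underlying tiling arithmetic is straightforward. Here is a rough Python sketch of the same idea using Pillow on flattened images, with the 4096-pixel tile size from the example above:

```python
from PIL import Image

TILE = 4096  # slice size in pixels, matching the 4096x4096 squares above

def slice_image(path, out_prefix):
    """Cut a large flattened image into TILE x TILE pieces named by grid position."""
    # For images in the 100,000-pixel range you may need to raise
    # Image.MAX_IMAGE_PIXELS before opening.
    img = Image.open(path)
    for ty in range(0, img.height, TILE):
        for tx in range(0, img.width, TILE):
            box = (tx, ty, min(tx + TILE, img.width), min(ty + TILE, img.height))
            img.crop(box).save(f'{out_prefix}_{tx // TILE}_{ty // TILE}.png')

def reassemble(tile_paths, grid_w, grid_h, out_path):
    """Paste tiles (listed row by row) back into a single image for editing."""
    canvas = Image.new('RGBA', (grid_w * TILE, grid_h * TILE))
    for i, tile_path in enumerate(tile_paths):
        canvas.paste(Image.open(tile_path), ((i % grid_w) * TILE, (i // grid_w) * TILE))
    canvas.save(out_path)
```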

Export into Engine

As a final step in the environmental art authoring tool chain, the various Photoshop files must be exported into the size and format required by the game engine. In this case, I’ve created scripts that automatically subdivide a single 4096×4096 PSD, along with its associated layers, into 112 individual 1024×1024 PNGs spread across both orthographic and parallaxed backgrounds, as well as another set of lower-resolution thumbnails. For performance reasons, any texture patches that contain only transparent pixels are automatically removed.
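
The transparency check itself is cheap. In a Pillow-based sketch (again, the production version lives in Photoshop scripts, so this is just an illustration), it amounts to examining the extremes of each tile’s alpha channel:

```python
from PIL import Image

def is_fully_transparent(tile_path):
    """True if every pixel's alpha is zero, meaning the tile can be dropped."""
    tile = Image.open(tile_path).convert('RGBA')
    _, alpha_max = tile.split()[-1].getextrema()  # (min, max) of the alpha band
    return alpha_max == 0
```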

More details on our orthographic and parallaxing system will be provided in a later post.

Unexpected Advantages

Tools development is not only an important task during pre-production; it can also be a lot of fun. While you’re solving interesting problems that will benefit the entire game, you also have the advantage of knowing that the users of the software are your colleagues. If you’re open to the possibility, you can iterate quickly on immediate feedback, potentially resulting in some really useful tools. You’ll get to learn about the needs and work of the other disciplines, and by building strong relationships with those art and design teams, you’re more likely to anticipate their needs and provide tools they may never have imagined possible.

At least, that has been my philosophy.

Read more about tips and tricks for Scripting in Photoshop on my blog.


The A-Team

It seems like only a month ago we were writing about how the team at AtomJack is growing. Since then we’ve been heads down, nose to the grindstone, working on systems, fleshing out ideas, and putting controllers in hands to test our theories and find the fun. We will have more to share on the game itself in the coming year, so we thank you for your patience.


Since our last update we’ve added not one, not two, but three more fantastic people to the team, and they’re filling out our new space quite nicely. So without further ado, we’d like to introduce you to the new team members.

Dan Rosas

Dan’s entire life has led up to designing games; he just didn’t know it. Throughout college, he studied a wide array of seemingly disconnected majors – psychology, audio, and business management, to name a few. After accidentally taking a computer programming course, he stumbled upon the missing ingredient that tied all of his passions together. Dan attended DigiPen Institute of Technology, where he worked with friends to create the award-winning music shoot ‘em up Solace. After graduating in 2012 with a BS in “Real Time Interactive Simulation” (which is just a fancy term for “Computer Science”), he began his professional game design career at 5th Cell, making puzzles for the Scribblenauts franchise. When Dan isn’t making games at work, he’s making games at home, constantly posting new projects on his website, CoffeeShopGameDev.

You may know Dan from such games as Interactive Cave Shooting Simulator


Kari Toyama

Originally from O’ahu, Hawaii, Kari settled in the Pacific Northwest 12 years ago. She’s been playing games since she could reach a keyboard and spent a lot of time with her brother playing anything they could get their hands on. While studying Psychology in college, she started to play online multiplayer games and fell in love with Halo 2. Determined to have a career doing something she loved, Kari started her new path as a tester on Halo 3 and spent six years working at Microsoft Studios on several AAA titles. Kari comes to AtomJack from PopCap Games where she most recently worked as a QA Project Manager on Peggle 2. When Kari isn’t playing games, she’s usually working on a crochet project or traveling.

Kari at HaloFest


Last, but not least, we are very excited to be working with C Andrew Rohrmann, also known as scntfc, as our Audio Design Lead, to bring our game to life through your ears and hearts.

SCNTFC

C Andrew Rohrmann – aka scntfc – has a long and diverse list of sound design and music projects on his resume. Scoring zombie-filled genre flicks? Check. Large scale multi-channel sound installations? Check. The sounds coming from your Xbox Dashboard? He made some of those. Has he made music for ads for cars (Volkswagen), shoes (Adidas), and candy bars (Cadbury)? Yup, those too. He’s done a lot of stuff. He’s also only the second person in history to legitimately remix the music of Bob Dylan (for the soundtrack to Crackdown 2). He’s worked with such esteemed game creators as Superbrothers, DeNA, Funktronic Labs, 17-Bit, PopCap… and now AtomJack. That’s some good company!

He really does look like this.


Interested in being included in a future team update? Then come work with us! We are still hiring and you can find our open positions on the jobs page. We look forward to hearing from you!


Happy Halloween!

We hope you’ve enjoyed our week of Halloween shenanigans! Here is the entire collection of Halloween themed art featuring our Robot.

First, we made it festive in our studio with an AtomJack-o-Lantern.

Our Art Director, DRP, kicked things off with a quick stylized sketch.

Then our Animator, Floyd Bishop, did a nod to Scooby Doo with the Robot’s run cycle.

Finally, our Character Artist, Kieran Lampert, took the Robot trick or treating.

Thanks to our Art Team, the Robot celebrated his first Halloween!