Making Progress!

Written by Allen Murray


Hey everyone, it’s been a few months since we’ve given an update on our progress. If you’ve been following the blog, you’ve seen updates on our robot designs from art direction to modeling, an article on calculating jumps, and a look at the process of building tools to support the team. It’s safe to say that we’ve only shown you a tiny bit of the characters you will meet (and play) along the way - and we have so much more to share over the months ahead.

The team recently finished an early milestone in preproduction, and over the next couple of months we’re heads down creating a playable prototype that showcases our gameplay systems and brings all of the beautiful art, animation, and cinematics into the game engine. Even though everyone has been working hard these past few months, the work has largely been on concepts, tools, engine, and gameplay systems - everything under the hood and not much on screen. Only recently have we been able to see the fruits of this labor as all of these pieces come together, and it is amazing. This is that dark tunnel of game development where you have ideas and plans and you start building, layering on all of this work being done in parallel. It’s messy, and oftentimes things are horrendously broken. But you work together, grinding it out, and as you progress you start to see the light at the end of the tunnel - or in this case, characters and environments coming alive on your screen.


It’s still very early in our development cycle, but it’s moments like these that make us proud: we’re proving that our core gameplay works and is *fun*, and we’re becoming more and more confident in our plans for production.

And one of the reasons we’ve been able to make such progress is the addition of our new Gameplay Programmer, Greg Brown, who joined us this winter. Here is a little bit about Greg:

Greg Brown

Greg moved to Seattle from Salt Lake 11 years ago with a BA in Sociology and French and no clue how one might use those professionally. He landed a game testing job at WildTangent and began working his ass off to expand his limited programming knowledge into something useful. While there, he met a number of people he would go on to work with at Flagship Studios’ small Seattle office on a project called Mythos. After the collapse of that studio and the loss of their project, that team stuck together and formed Runic Games in 2008. While at Runic, Greg worked on Torchlight, Torchlight II, and the publicly available content authoring and modding tools for those games. Recently, Greg was seeking something new and different (and not a hack-and-slash RPG) to work on and found it at AtomJack. Outside of game development, Greg loves language study, traveling, showing his age through his love of ’90s indie rock, his two cats, and learning how to dad.


Make a game with us

We recently started hiring for a new position and are looking for a very talented painter to join our team and help paint all of the lush 2D environments and backgrounds for our game, as well as assist with additional concepts. If you or someone you know is interested, please check out the position here: Environment / Concept Artist.

It’s an honor to share these updates with you and we’re excited to show you more over the next year! Thanks again for taking the time to keep up with us at AtomJack and check back soon for more updates.


Modeling a Stylized Character

One of the many reasons I enjoy making character models is because the process is a bit different each time. Similar to people in our everyday lives, each character has its own unique quirks, presenting its own set of challenges and puzzles to solve. So I wanted to share with you all an overview of the process by which I brought our Robot from 2D concept to 3D asset.


This particular model started with a pretty clear-cut concept, and even before that a well-established outline of what we wanted the character to be. Depending on who you work with, sometimes your art director gives you a fully completed concept, like I had for our Robot. Other times, though, you may just be given a rough sketch, and your teammates are looking to you to pour your own influences into filling in the details. I’m lucky enough to be working with some pretty talented folks here, so the concept I was given by our art director was already imbued with a ton of appeal.

Here at AtomJack, we’re a pretty small, tight-knit group, which makes it much easier to throw ideas around and come to a consensus on things. Upon receiving the concept, I sat down with our art director, animator, and lead designer to get an overview of the character and specific needs for the model. How do his leg joints connect? What are the extents of his movement range? What material is this made out of? The discussion continues until we all feel like we’re on the same page, at which point it’s time to dive into production.


The exaggerated nature of our game’s art style was used in the Robot’s design to push the traits of his character. For example, the Robot’s large torso, shoulder pauldrons, and hands exude strength, but his color scheme and wide, green eye belie a nature that is more curious than menacing. The design David came up with already provided so much definition, so my task became ensuring that as much of the character as possible was preserved in the transition from 2D to 3D.

When it comes to actually constructing the model, I tend to make two general passes. The first pass is centered entirely around trying to nail down the feel of the character. At this stage I’m not worried about any technical limitations, but instead focus on major design elements: silhouette, exaggeration of important features, overall aesthetic appeal, and so on. My goal is to make sure the 3D version looks and feels as much like the 2D concept as possible, while still maintaining the physicality of an object in three dimensions. Since I’m not concerned about restrictions, I tend to go back and forth between ZBrush and Photoshop, sculpting in high resolution and comparing to the concept as I go.

An early, high-poly pass on the Robot model in ZBrush.

Once I’m happy with the feel of the character, it’s time to switch gears. As I mentioned before, each character tends to be unique in its creation process, but one aspect they all share is the need to find a balance between form and function. Games, though constantly evolving, still come with a fair number of restrictions. For example, most modern games strive to maintain a frame rate of 60 frames per second. For those who may not have experience with the inner workings of game development, this means the game engine is gathering all information from the player (inputs to your gamepad or mouse/keyboard), as well as anything else happening at the same time (physics, graphics, animation, etc.), calculating all of these interactions, and rendering the results on your screen, 60 times every second. If you’re not mindful of the way you’re handling your systems and art assets, these calculations can quickly get out of hand and interfere with the player’s experience, whether through lag, lowered frame rate, game crashes, or any number of other problems. This is why balance is so important. I can make the characters as beautiful as humanly possible, but if the game doesn’t run, it’s all for naught.
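To put that 60-frames-per-second constraint in concrete terms, here’s a quick back-of-envelope sketch in Python. The per-system costs are invented numbers for illustration, not measurements from our engine:

```python
# Back-of-envelope frame budget at 60 frames per second.
# All per-system costs below are illustrative, not real profiling data.

TARGET_FPS = 60
frame_budget_ms = 1000.0 / TARGET_FPS  # ~16.67 ms to do *everything*

# Hypothetical per-frame costs, in milliseconds.
costs_ms = {
    "input": 0.5,
    "gameplay": 3.0,
    "physics": 4.0,
    "animation": 2.5,
    "rendering": 6.0,
}

total = sum(costs_ms.values())
headroom = frame_budget_ms - total
print(f"budget: {frame_budget_ms:.2f} ms, spent: {total:.2f} ms, headroom: {headroom:.2f} ms")
```

With these made-up numbers the frame squeaks in under budget; one overly heavy character model or effect can eat that headroom and push you past 16.67 ms, which is exactly the balance being described above.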


My second pass, then, becomes one of feasibility. One of the questions answered in the initial discussion between myself, the art director, and our designer is the estimated size of our polycount budget for the character. Polycount refers to the number of polygons - usually triangles - that will be used to construct the model. Each polygon adds to the calculations that need to be made on the model when it’s being rendered in the game. So, generally speaking, the lower the polycount, the lower the stress on the engine. In games where there are potentially a lot of characters on the screen, it is important to be economical in the way you distribute polygons amongst your characters.

I use TopoGun to create a low-poly model from the high-poly ZBrush sculpt. There are many programs that allow you to do this, but TopoGun is easy to use and produces great results.

I usually do a straightforward retopology pass on the model in TopoGun, trying to be efficient and economical in my distribution of polygons. Sometimes, however, I still end up over the polygon budget and need to go back and strip out polygons. The tricky part is deciding how many polygons to cut, and where. To make this decision, I usually stick to three criteria:

  • Silhouette - Are the polygons in question helping to strengthen the silhouette of the model?

  • Deformation - Are the polygons in question required to allow for smooth deformation when the model is rigged, skinned, and animated?

  • Ease of Texturing - Will these polygons make it significantly easier for me to hide texture seams or differentiate between different parts in the texture?

If the answer to all of these questions is “No,” then the polygons are removed. I continue to cut polygons until the model falls roughly within the budgeted polycount. Since the Robot is a main character - and therefore spends a lot of time on screen - and is quite large, we allowed him a much higher polycount than we’d give other character models.
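The three-question test above can be sketched as a simple filter. To be clear, this is a hypothetical illustration - the `Polygon` class and its flags are invented, and in practice this judgment happens in the artist’s head, not in a script:

```python
# Hypothetical sketch of the keep-or-cut checklist for polygons.
from dataclasses import dataclass

@dataclass
class Polygon:
    helps_silhouette: bool       # strengthens the model's silhouette?
    needed_for_deformation: bool # required for smooth rig/skin deformation?
    eases_texturing: bool        # helps hide seams or split texture parts?

def keep(poly: Polygon) -> bool:
    """A polygon stays only if it earns its cost on at least one criterion."""
    return (poly.helps_silhouette
            or poly.needed_for_deformation
            or poly.eases_texturing)

mesh = [Polygon(True, False, False), Polygon(False, False, False)]
trimmed = [p for p in mesh if keep(p)]  # the second polygon is cut
```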

After modeling, the polygons are divided into parts, laid flat on a 2D plane, and painted over. The resulting 2D image, known as a “texture map,” is then applied to the model.

In addition to polycount, I need to consider how much texture space to allow for the character. Depending on your game engine and the rest of your tech, textures can often be a larger tax on processing power than the number of polygons in your scene. To determine the size of the texture I’m going to use on a character, I again weigh the costs and benefits of form versus function. I can free up a lot of processing power by making the texture maps smaller, but how large an impact will this have on the fidelity and appeal of the character model (and ultimately the overall aesthetic of the game)? Again, with the Robot, his size on screen and role as a main character give him priority. This simply means that if resources are tight, corners will have to be cut in other areas. Not necessarily a problem, but something to be mindful of all the same.
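A rough sense of why texture size matters so much: memory cost grows with the square of the resolution. The numbers below assume uncompressed RGBA at 4 bytes per pixel - real engines use compressed formats and mipmaps, so treat these as order-of-magnitude upper bounds:

```python
# Rough memory cost of an uncompressed RGBA texture (4 bytes per pixel).
def texture_bytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel

for size in (512, 1024, 2048, 4096):
    mb = texture_bytes(size, size) / (1024 * 1024)
    print(f"{size}x{size}: {mb:.0f} MB uncompressed")
```

Each step up in resolution quadruples the cost, which is why halving a texture map frees up so much headroom.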


Again, given the stylized nature of the game, some distinct choices were made in terms of the textures and materials on the characters. The Robot, like our other characters, uses an unlit surface shader, which means it derives its color and lighting information solely from the texture and is not influenced by lights in the scene. This comes with its own set of challenges, but what is gained is the ability to completely control the hue, saturation, and value of the textures to help them more closely match the concept (and, more importantly, the surrounding environment in-game).

The diffuse (unlit) texture being painted in Photoshop. The concept art is kept open for quick reference.

Lately I’ve been using a program called 3D-Coat to paint our character textures - along with a slew of other features, it allows you to paint directly on the model - but the Robot was one of our first fully textured characters and was painted in Photoshop. During the painting process I’m frequently comparing the concept art with how the textures are looking on the model. In addition, I’m trying to be mindful of how the Robot is looking in the in-game environment, even if it means simply overlaying a render of the Robot on a piece of environment concept art. It’s incredibly easy to become too focused on how a model looks in isolation, only to find out it sticks out like a sore thumb when placed in the engine.

The Robot model, before and after the painted texture map is applied.


I should mention that, throughout this entire process, I’m constantly checking in with the art director and making minor adjustments based on feedback. This ensures that the model is staying on course and remaining true to the initial concept. At the end of the process, it’s good to gather everyone and give the model another final once-over, just to make sure everything is as it should be before the model is handed off to the animator.

The model is complete!

Having maintained the course, and suffering no major catastrophes, the Robot is finally all modeled, textured, and ready for animation!

Dev Blog: Art Tools

As a game dev topic, tools development might not be as exciting as gameplay or graphics programming.  However, as a small studio, specialized tools are essential for ensuring we make the best use of our limited resources.  And as a side note, I find that they can provide some really interesting problems to solve early in the development cycle, while also offering an opportunity to build strong relationships with the art and design teams.

In this blog post I’ll talk a little about the process for creating development tools, followed by a few examples of the art tools we’ve created over the last few weeks for our current project.

What is a Dev Tool?

As game developers, we all rely on a variety of software tools (think Maya, Photoshop, Perforce, Unity3D, Google Docs, and Visual Studio).  When the software doesn’t do everything we want (or when a particular task is repetitive), a programmer can author additional software on top of, or between, the off-the-shelf applications.

As an example, when I arrived at AtomJack, the first tool requested by our Animation Lead was a process to automate the conversion of a dozen robot animations into a format compatible with our game engine.  As a manual process, it was taking him more than four hours to complete.

So, working with Floyd to understand his workflow, I used a combination of MEL, Python, and C# to build a robust tool that would ensure any future Maya files (animated or not) could easily be converted, renamed, and transferred into our game engine.
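To give a feel for what a tool like this automates, here’s a toy sketch of just the renaming half of such a pipeline. The naming convention and filenames are invented for illustration - the real tool also drives Maya itself through MEL and Python:

```python
# Hypothetical sketch: map Maya scene files to an engine-side naming
# convention so a batch script can convert and rename in one pass.
# The "prefix_lowercase.fbx" rule here is invented, not our actual scheme.
import os

def engine_name(maya_file, prefix="robot"):
    """Turn e.g. 'Idle_Loop.ma' into 'robot_idle_loop.fbx'."""
    base, _ = os.path.splitext(os.path.basename(maya_file))
    return f"{prefix}_{base.lower()}.fbx"

scenes = ["Run.ma", "Idle_Loop.ma", "JumpStart.ma"]
exports = [engine_name(s) for s in scenes]
```

Even bookkeeping this trivial pays off once it is wrapped around the actual conversion step: the animator never has to remember (or mistype) the convention again.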

Although it took a few days of programmer time, we expect that over the length of this project the tool will save weeks’ worth of the animator’s time (and, perhaps more importantly, limit the animator’s frustration).

Development Environments:

Most software used by content creators already anticipates the need to extend or automate its functionality and offers an Application Programming Interface (API) or language extension for doing just that.  Autodesk’s Maya can run commands in its own Maya Embedded Language (MEL) and, better yet, through its Python API. Both are even available in “headless” mode (that is, without the graphical interface), which improves performance. Adobe’s Photoshop offers developers a robust scripting API in a variety of languages: AppleScript, JavaScript, and Visual Basic.

In other cases, it can be a matter of exporting data from one application (using its proprietary file format), processing the exported file, then importing the processed data into the other program. For these tools, I believe it is important to make the experience as seamless as possible. While technical artists are more common than they used to be, my experience is that most artists prefer simplicity over extensive features when moving data from one application to another.

However, if you are making a feature-rich application, you’ll frustrate your team if you haven’t included the functionality expected of modern applications (copy/paste, undo, file selection, etc.).  In these cases, I highly recommend using a language with a robust and simple implementation (like C#/.NET) for your windowed applications.

Unique Challenges:

A significant challenge when building tools that bridge off-the-shelf applications is that those programs are in constant flux.  New releases may change the definition of proprietary file types, API libraries, or even folder paths.

As a simple example, Maya stores FBX export presets in the user’s folder:


If the artist automatically updates to the new version of Maya, the directory changes:


So, even in a highly controlled studio environment, hard-coded paths will cause problems.

Whenever I’m working with directory paths, I try to automatically save the user’s last-used directories in a configuration file, then make that directory the default the next time the tool loads.  This ensures users are not required to repeatedly search through an extended folder hierarchy (a common result of combining large projects with version-control software).  Like good audio design, you know your tools are successful when the users don’t even realize they are there.
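The last-used-directory pattern is only a few lines in practice. Here’s a minimal sketch using a small JSON config file - the config filename and the `"export_dir"` key are illustrative, not from our actual tools:

```python
# Minimal "remember the last-used directory" pattern via a JSON config.
# CONFIG_PATH and the keys are illustrative placeholders.
import json
import os

CONFIG_PATH = os.path.join(os.path.expanduser("~"), ".my_tool_config.json")

def load_last_dir(key, fallback="."):
    """Return the saved directory for `key`, or `fallback` if none is saved."""
    try:
        with open(CONFIG_PATH) as f:
            return json.load(f).get(key, fallback)
    except (OSError, ValueError):
        return fallback

def save_last_dir(key, path):
    """Persist the directory the user just picked."""
    config = {}
    try:
        with open(CONFIG_PATH) as f:
            config = json.load(f)
    except (OSError, ValueError):
        pass  # first run, or a corrupt config: start fresh
    config[key] = path
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f)
```

Call `save_last_dir` whenever the user confirms a file dialog, and seed the next dialog with `load_last_dir`; the user never notices the mechanism, which is the point.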

Another challenge for a small studio is knowing when to build a tool - and when not to.  You may hear the statement, “At my last company we had a tool that could do XYZ.” But engineering resources vary significantly from studio to studio.  There is an old xkcd comic that does a great job of illustrating the time saved by automating a routine task versus the time it takes to build the automation.

In the end, scope, performance, reusability, available engineering resources, and the possibility of releasing tools publicly (through something like Unity’s Asset Store) should all factor into the decision of whether or not to develop a particular tool.
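The xkcd-style break-even math behind that decision fits in one function: a tool pays off when the time saved per run, multiplied by the number of runs, exceeds the build cost. The numbers below are invented, loosely modeled on the animation exporter example earlier:

```python
# Break-even math for building a tool. All inputs are illustrative.
def hours_saved(build_hours, manual_hours_per_run, tool_hours_per_run, runs):
    """Net hours gained (negative means the tool wasn't worth building)."""
    return (manual_hours_per_run - tool_hours_per_run) * runs - build_hours

# e.g. ~3 days to build (24 h), 4 h manual vs ~6 min automated,
# run roughly weekly over a year of development:
net = hours_saved(build_hours=24, manual_hours_per_run=4.0,
                  tool_hours_per_run=0.1, runs=52)
print(f"net saving: {net:.0f} hours")  # comfortably positive
```

Run the same function with `runs=5` and the answer flips negative - which is exactly the “when not to” case.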

More Sample Tools

Export and Import Level Data:

For our current project, we want the artists to be able to hand-paint the environmental art. However, the levels are built by our design team inside the game engine.  So, I wrote a set of tools to transfer that level information from Unity3D to Photoshop.  The tool moves a variety of context-rich level data and encodes it into individual art layers within a single Photoshop file. The information is then available for reference when painting environmental art.

Both of the animated GIFs shown here demonstrate the scripts running in real time.

Slice, Save, and Reassemble

With the high pixel density of modern consoles, it is important that the artists are able to paint at an extremely high resolution. And when the artists are painting across an entire room at a time (stretching across multiple screens as the game camera pans around), this can result in a lot of pixels. Unfortunately, even on high-end development PCs, Photoshop isn’t very responsive when working on images that can be up to 100,000 pixels wide.  To solve this, I’ve written a set of scripts that slice the PSD files into smaller pieces and reassemble them when opened for editing.  The script automatically finds neighboring images and assembles them into a single PSD.  In this example, each square represents 4096×4096 pixels, for a total 3×3 authoring size of 12,288 × 12,288 pixels.
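The bookkeeping behind slicing and reassembly is just tile math: carve the canvas into fixed-size squares and record each tile’s origin so slices can be pasted back where they came from. The real scripts run inside Photoshop; this sketch shows only that bookkeeping:

```python
# Tile math for slicing a huge canvas into 4096x4096 squares.
TILE = 4096

def tile_origins(width, height, tile=TILE):
    """Top-left (x, y) of every tile covering a width x height canvas."""
    return [(x, y)
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

tiles = tile_origins(12288, 12288)  # the 3x3 example from the post
# Reassembly is the inverse: paste each slice back at its recorded origin.
```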

Export into Engine

As a final step in the environmental art tool chain, the various Photoshop files must be exported in the sizes and formats required by the game engine.  In this case, I’ve created scripts that automatically subdivide a single 4096×4096 PSD, along with its associated layers, into 112 individual 1024×1024 PNGs spread across both orthographic and parallaxed backgrounds, as well as another set of lower-resolution thumbnails.  For performance, any texture patch that contains only transparent pixels is automatically removed.
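The transparent-patch culling at the end of that step can be sketched in a few lines. Here, the alpha channel is a plain nested list standing in for real pixel data, which keeps the example self-contained:

```python
# Sketch of the export step's tile culling: subdivide into fixed-size
# patches and keep only patches with at least one visible pixel.
PATCH = 1024

def opaque_patches(alpha, size=PATCH):
    """Yield (x, y) origins of patches that contain any non-zero alpha."""
    h, w = len(alpha), len(alpha[0])
    for y in range(0, h, size):
        for x in range(0, w, size):
            if any(alpha[yy][xx]
                   for yy in range(y, min(y + size, h))
                   for xx in range(x, min(x + size, w))):
                yield (x, y)
```

Dropping fully transparent patches costs nothing visually but saves texture memory and draw work for every empty corner of a level.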

More details on our orthographic and parallaxing system will be provided in a later post.

Unexpected Advantages

Tools development is not only an important task during pre-production; it can also be a lot of fun. While you are solving interesting problems that will benefit the entire game, you have the advantage of knowing that the users of your software are your colleagues.  If you’re open to the possibility, you can iterate quickly on immediate feedback, potentially resulting in really useful tools.  You’ll get to learn about the needs and work of the other disciplines, and by building strong relationships with those art and design teams, you’re more likely to anticipate their needs and provide tools they may never have imagined possible.

At least, that has been my philosophy.

Read more about tips and tricks for Scripting in Photoshop on my blog.

The A-Team

It seems like only a month ago we were writing about how the team at AtomJack is growing. Since then we’ve been heads down, nose to the grindstone, working on systems, fleshing out ideas, and putting controllers in hands to test our theories and find the fun. We will have more to share on the game itself in the coming year, so we thank you for your patience.


Since our last update we’ve added not one, not two, but three more fantastic people to the team, filling out our new space quite nicely. So, without further ado, we’d like to introduce you to the new team members.

Dan Rosas

Dan’s entire life has led up to designing games; he just didn’t know it. Throughout college, he studied a wide array of seemingly disconnected majors – psychology, audio, and business management, to name a few. After accidentally taking a computer programming course, he stumbled upon the missing ingredient that tied together all of his passions. Dan attended DigiPen Institute of Technology, where he worked with friends to create the award-winning music shoot ’em up Solace. After graduating in 2012 with a BS in “Real Time Interactive Simulation” (which is just a fancy term for “Computer Science”), he began his professional game design career at 5th Cell, making puzzles for the Scribblenauts franchise. When Dan isn’t making games at work, he’s making games at home, constantly posting new projects on his website, CoffeeShopGameDev.

You may know Dan from such games as Interactive Cave Shooting Simulator

Kari Toyama

Originally from O’ahu, Hawaii, Kari settled in the Pacific Northwest 12 years ago. She’s been playing games since she could reach a keyboard and spent a lot of time with her brother playing anything they could get their hands on. While studying Psychology in college, she started to play online multiplayer games and fell in love with Halo 2. Determined to have a career doing something she loved, Kari started her new path as a tester on Halo 3 and spent six years working at Microsoft Studios on several AAA titles. Kari comes to AtomJack from PopCap Games where she most recently worked as a QA Project Manager on Peggle 2. When Kari isn’t playing games, she’s usually working on a crochet project or traveling.

Kari at HaloFest

Last, but not least, we are very excited to be working with C Andrew Rohrmann, also known as scntfc, as our Audio Design Lead, to bring our game to life through your ears and hearts.


C Andrew Rohrmann – aka scntfc – has a long and diverse list of sound design and music projects on his resume. Scoring zombie-filled genre flicks? Check. Large-scale multi-channel sound installations? Check. The sounds coming from your Xbox Dashboard? He made some of those. Has he made music for ads for cars (Volkswagen), shoes (Adidas), and candy bars (Cadbury)? Yup, those too. He’s done a lot of stuff. He’s also only the second person in history to legitimately remix the music of Bob Dylan (for the soundtrack to Crackdown 2). He’s worked with such esteemed game creators as Superbrothers, DeNA, Funktronic Labs, 17-Bit, PopCap… and now AtomJack. That’s some good company!

He really does look like this.

Interested in being included in a future team update? Then come work with us! We are still hiring and you can find our open positions on the jobs page. We look forward to hearing from you!