

A History of Starcraft AI Competitions (and UAlbertaBot)


Written By: David Churchill dave.churchill@gmail.com

Note: This article was last updated in early 2016.

Introduction

Since the first Starcraft AI Competition in 2010, the field of Real-Time Strategy Game AI has continued to rise in popularity every year. Entrants to these competitions submit Starcraft AI bots which do battle in the retail version of Starcraft: Broodwar. Inspired by earlier RTS game competitions such as the Open RTS (ORTS) Competition, these Starcraft AI competitions have become the defining place to showcase state-of-the-art artificial intelligence agents for real-time strategy games. Starcraft AI agents are controlled using the Brood War Application Programming Interface (BWAPI), which was first developed in 2009 as a way for programmers to interact with and control the full game of Starcraft: Broodwar using C++. As the feature set and popularity of BWAPI grew and the first Starcraft AI agents started to appear, it finally became possible to hold a proper AI competition in Starcraft. We will now give a detailed history of each major Starcraft AI Competition, as well as the progression of UAlbertaBot, our entry into these competitions. Please note that since I have been the organizer of the AIIDE competitions since 2011, I naturally have more information written here about those particular competitions. Each competition is discussed in the chronological order in which it occurred, along with complete competition results and links to download bot source code and replay files for the AIIDE and CIG competitions.

If you would like to read a report of the 2015 AIIDE Starcraft AI Competition, you can do so here.

"Why not StarCraft 2?"

This is the question we always get asked when we tell people we are running a Starcraft: BroodWar AI competition. The competition relies completely on BWAPI as a programming interface to BroodWar. BWAPI was created by reverse engineering BroodWar, and it relies on reading and writing the program memory space of BroodWar in order to read data and issue commands to the game. Since the release of the StarCraft 2 AI API, we have been actively working with Blizzard to get the required functionality that would allow the AIIDE competition to support StarCraft 2, but it is not quite there yet.

There are other RTS game engines available for competitions as well. One such engine is ORTS - a free software RTS game engine which was used for several competitions until 2010, when BWAPI was released and the first AIIDE Starcraft AI Competition was held. Another engine is microRTS, a Java RTS engine which plays a simplified grid-based RTS game and is designed specifically for testing AI techniques.

RTS AI Techniques

For an excellent, up-to-date overview of the state of the art in StarCraft AI techniques and bot architecture descriptions, I highly recommend reading the following publications:

Acknowledgements

RTS AI research and competitions involve a tremendous amount of effort from many people, so I would like to thank those who have helped to organize current and past competitions, those who have submitted bots, and those who have helped promote RTS AI in general. I would first like to thank the University of Alberta (UofA) RTS AI research group, of which I am a member. The UofA has been involved with RTS AI research since Michael Buro's call to action paper in 2003. The UofA hosted the ORTS AI Competition from 2006-2009, as well as the AIIDE Starcraft AI Competition each year since 2011. I would like to personally thank the current and past members of the UofA RTS AI research group for all their help over the years with promoting, organizing, and running these competitions, as well as for continuing to do world-class research in the area. From left to right below are: Nicolas Barriga, David Churchill, Marius Stanescu, and Michael Buro. Former members not pictured include Tim Furtak, Sterling Oersten, Graham Erickson, Doug Schneider, Jason Lorenz, and Abdallah Saffidine.

I would also like to thank those who have organized and run past and current Starcraft AI Competitions. Thanks to Ben Weber for organizing the first AIIDE Starcraft AI Competition, which sparked worldwide interest in the field of RTS AI research. Michal Certicky has put forth a phenomenal effort every year in continuing to run and maintain the Student Starcraft AI Tournament, as well as the persistent online ladder and bot live stream, and has helped tremendously in promoting the field of RTS AI. Organizers of the CIG Starcraft AI Competitions have included Johan Hagelback, Mike Preuss, Ben Weber, Tobias Mahlmann, Kyung-Joong Kim, Ho-Chul Cho, In-Seok Oh, and Man-Je Kim. Thanks as well to Krasimir Krastev (krasi0) for maintaining the original Starcraft AI Bot Ladder. Also a big thanks to Santi Ontanon for developing the microRTS AI system. Last and certainly not least, I would like to extend a special thank you to Adam Heinermann for continuing to develop BWAPI, without which none of this research would be possible.

Starcraft AI Competitions

AIIDE 2010

The AIIDE Starcraft AI Competition was first run in 2010 by Ben Weber at the Expressive Intelligence Studio at University of California, Santa Cruz, as part of the AIIDE (Artificial Intelligence and Interactive Digital Entertainment) conference. A total of 26 entrants competed in four different game modes which varied from simple combat battles to the full game of Starcraft. As this was the first year of the competition, and little infrastructure had been created, each game of the tournament was run manually on two laptop computers and monitored by hand to record the results. Also, no persistent data was kept for bots to learn about opponents between matches. The 2010 competition had 4 different tournament categories in which to compete. Tournament 1 was a flat-terrain unit micro-management battle consisting of four separate unit composition games. Of the six competitors, FreSCBot won the competition with Sherbrooke coming in 2nd place. Tournament 2 was another micro-focused game with non-trivial terrain. Two competitors submitted for this category, with FreSCBot once again coming in 1st by beating Sherbrooke. Tournament 3 was a tech-limited StarCraft game on a single known map with no fog-of-war enforced. Players were only allowed to choose the Protoss race, with no late game units allowed. 8 bots faced off in this double-elimination tournament with MimicBot taking first place over Botnik in the final. As this was a perfect information variant of StarCraft, MimicBot adopted a strategy of "mimic its opponent's build order, gaining an economic advantage whenever possible", which worked quite well.

Tournament 4 was considered the main event, which involved playing the complete game of StarCraft: Brood War with fog-of-war enforced. The tournament was run with a random pairing double-elimination format with each match being best of 5 games. Competitors could play as any of the three races, with the only limitations in gameplay being those that were considered "cheating" in the StarCraft community. Since computer programs written with BWAPI have no limit to the number of actions they can issue to the Starcraft game engine, certain behaviours are possible which were not intended by the developers, such as sliding buildings and walking ground units over walls; these sorts of actions are considered cheating and were not allowed in the tournament. A map pool of 5 well-known professional maps was announced to competitors in advance, with a random map being chosen for each game. Tournament 4 was won by Overmind - a Zerg bot created by a large team from the University of California, Berkeley, who defeated the Terran bot Krasi0 by Krasimir Krastev in the finals.

Overmind relied heavily on the powerful and agile Zerg Mutalisk flying unit, which it controlled to great success using potential fields. Overmind's overall strategy consisted of an initial defense of Zerglings and Sunken Colonies (static defense towers) to protect its initial expansion while gathering enough resources to construct its initial Mutalisks. Once the Mutalisks had been constructed, it sent them to the enemy base and patrolled and harassed around the perimeter of the enemy base. If the bot did not win outright on the initial attack, it would slowly patrol and pick off any units which were undefended, eventually wearing down the enemy to the point where it could overwhelm it with a final attack. The 2nd place bot krasi0 played a very defensive Terran strategy, constructing Bunkers, Siege Tanks, and Missile Turrets for defense. After a certain number of units had been constructed it would then send an army of mechanical units to siege the enemy base. This bot performed quite well, only losing to Overmind during the competition. An excellent article about Overmind was written by Ars Technica in 2011. There was also a man vs. machine match which pitted an expert human player against the top AIs, which can be seen here:

The first version of UAlbertaBot was started in summer 2010 and submitted to the AIIDE 2010 competition that September. A group of 6 students led by myself and Michael Buro at the University of Alberta constructed the initial version of UAlbertaBot using BWAPI and the Brood War Standard Add-on Library (BWSAL), which provided functionality such as basic build order planning, building placement, and worker management. The 2010 version of UAlbertaBot played the Zerg race, with one main strategy which relied heavily on the Zerg Mutalisk flying unit. While the bot's combat and micromanagement were well implemented, several major logic bugs in the early game and build-order planning of the bot meant that its overall performance in the 2010 competition was weak, and it was eliminated in the 3rd round of the bracket style competition by the Terran bot krasi0. This first implementation of UAlbertaBot was plagued with technical problems, and so the decision was made after the competition to completely re-write the bot for the next year's competition.

CIG 2010

After the success of the AIIDE 2010 competition, there was an attempt to hold a Starcraft AI competition as part of the Computational Intelligence in Games (CIG) conference. Organized by Johan Hagelback, Mike Preuss, and Ben Weber, the CIG 2010 competition was to have a single game mode similar to the tech-limited Tournament 3 from the AIIDE 2010 competition, but using the Terran race instead of the Protoss race. Unfortunately the first year of the CIG competition met with several serious technical problems stemming from the decision to use custom-made Starcraft maps instead of traditional human-tested maps, which caused the Brood War Terrain Analysis (BWTA) software to crash for many of the bots. Because of these frequent crashes, it was decided that no winner could be announced for the competition. UAlbertaBot was not submitted as an entry for this competition as it was being completely re-written.

AIIDE 2011

In 2011 the AIIDE competition was hosted by the University of Alberta, where it has remained ever since, and is organized and run each year by myself and Michael Buro. Due to the low number of entries to Tournaments 1, 2, and 3 from the 2010 AIIDE competition, it was decided that the AIIDE competition for 2011 would only consist of the full game of Starcraft (with the same rules as the 2010 Tournament 4), with no smaller micromanagement tournaments. The 2011 tournament rules were also updated so that all entrants must submit the source code of their bot and allow it to be published after the competition is over, which was done for a few reasons. One reason was to lower the barrier to entry for future competitions - since programming a Starcraft AI bot is very time-consuming, future entrants could download and modify the source code of previous bots to save considerable effort. Another reason was to more easily prevent cheating - with thousands of games being played in the tournament we could no longer watch each game to detect whether any cheating tactics were being employed, which would be more easily detected by inspecting the source code (which was not allowed to be heavily obfuscated). The final reason was to help advance the state of the art in Starcraft AI by allowing future bots to borrow strategies and techniques of previous bots by inspecting their source code - ideally, all bots in future competitions should be at least as strong as the bots from the previous year. The 2011 competition received 13 entrants.

The 2010 tournament was run by Ben Weber on two laptops, and games were played by manually starting the Starcraft game and creating and joining games by hand. As the physical demand was quite high, a simple random-pairing double-elimination tournament was played with approximately 60 games in total. This drew some negative feedback that the results of this elimination-style tournament were quite dependent on pairing luck, so for the 2011 competition we set out to eliminate all randomness from the tournament and play a round robin format. Playing a round robin format requires far more games to be played, so for several months in the summer of 2011 Jason Lorenz (an undergraduate summer student of Michael Buro) and I wrote software which could automatically schedule and play round robin tournaments of Starcraft on an arbitrary number of locally networked computers. This software used a server/client architecture with a single server machine scheduling games and storing results, and each other machine running the client software which started the Starcraft game, monitored the game's progress, and recorded the results when finished. Bot files, replays, and final results were accessed by clients via a shared Windows folder on the local network which was visible to all client machines. The initial version of this software allowed for a total of 2340 games to be played in the same time period as the 2010 competition's 60 games, with each bot playing each other bot a total of 30 times. There were 10 total maps in the competition, chosen from expert human tournaments and known to be balanced for each race, and they were available for download several months in advance on the competition website. The AIIDE competition was modeled on human tournaments where the map pool and opponents are known in advance in order to allow for some expert knowledge and opponent modeling.

At the end of the five day competition, Skynet placed 1st, UAlbertaBot placed 2nd, and Aiur placed 3rd. Skynet is a Protoss bot written by Andrew Smith, a software engineer from the United Kingdom, and used a number of solid Protoss strategies such as a Zealot rush, a Dragoon / Zealot mid-game army, and a late game army consisting of Zealots, Dragoons, and Reavers. Its solid economic play and good early defense were able to hold off the more offensive Protoss bots UAlbertaBot and Aiur. UAlbertaBot now played the Protoss race, and is described in detail in the next paragraph. Aiur was written by Florian Richoux, a graduate student at the University of Nantes, and also played the Protoss race, implementing a few different strategies such as Zealot rushing and a Zealot / Dragoon army. An interesting rock-paper-scissors scenario occurred between the top 3 finishers, with Skynet beating UAlbertaBot 26 games out of 30, UAlbertaBot beating Aiur 29 games out of 30, and Aiur beating Skynet 19 games out of 30. Overmind, the bot which won the AIIDE 2010 competition, did not enter the 2011 competition; its team stated that it had been found to be very vulnerable to early game aggression and was easily beaten by rushes from all three races. The Overmind team also expressed that they did not want the source code of their 2010 bot to be published, which was now a rule in the 2011 competition. In its place, a team of undergraduate students from Berkeley submitted Undermind, a Terran bot which ended up placing 7th.

UAlbertaBot had been completely re-written in 2011 by myself and Sterling Oersten (an MSc student of Michael Buro) and now played the Protoss race instead of the Zerg race it played the previous year. The biggest reason for the switch to the Protoss race was that overall, we found the Protoss strategies were much easier to implement from a technical point of view, and the strategies performed much more consistently in testing. The Zerg race strategies relied heavily on intelligent building placement, a problem that we had not spent much effort exploring in our research at that time. Due to the Zerg being relatively weak defensively in the early parts of the game, its buildings must be placed in such a manner as to create a 'maze' into the base in an effort to delay the enemy from reaching their worker units. The Protoss race however did not have this issue and had a relatively strong early game defense due to its powerful Zealot and Dragoon units. Another reason for the switch to Protoss was the inclusion of the build order search system, which had been implemented with the simpler Protoss building infrastructure in mind, and did not work for the Zerg or Terran races. Given a goal as a set of desired unit type counts, this build order planning system was able to automatically plan time-optimal build orders for those goals in real-time during the competition, and produced far better results than the priority-based build order system of the BWSAL system used in the 2010 version of UAlbertaBot. The inclusion of the new build order planning system into UAlbertaBot was the first instance of such a complex search-based approach being used for planning in any of the bots in the Starcraft AI competition. This new version of UAlbertaBot implemented a very aggressive early game Zealot rush strategy which overwhelmed its opponents, winning many of its games in a matter of a few minutes. If the initial Zealot rush did not outright kill the enemy, the bot transitioned into making ranged Dragoon units for its mid to late game strategy. UAlbertaBot performed quite well, placing 2nd overall in the competition and only having a losing record against Skynet and Undermind. Skynet was able to stop UAlbertaBot's early rushes with its impressive early game Dragoon kiting, allowing it to kill several Zealots with a single Dragoon. Undermind's strategy consisted of building several Terran Bunkers as early defense which shut down UAlbertaBot's aggressive rush.
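To give a feel for how such a search operates, below is a minimal, self-contained sketch of a depth-first branch-and-bound build order search over a heavily simplified economy model. All unit names, costs, build times, and the mining rate are illustrative placeholders; a real system (including UAlbertaBot's) also models supply, gas, prerequisites in full, and concurrent production. This is a sketch of the idea, not the actual implementation.

    // Minimal sketch: depth-first branch-and-bound search for a build order that
    // reaches a goal set of unit counts as early as possible. Costs, build times,
    // and the mining rate are placeholder values, not real Starcraft data.
    #include <cstdio>
    #include <limits>
    #include <map>
    #include <string>
    #include <vector>

    struct Action {
        std::string name;
        std::string prerequisite;   // empty string means no prerequisite
        int         mineralCost;
        int         buildFrames;
        bool        isWorker;
    };

    struct State {
        int frame = 0;
        double minerals = 50.0;
        int workers = 4;
        std::map<std::string, int> counts;   // finished units by type
    };

    const double MINE_RATE = 0.045;   // minerals per worker per frame (rough guess)

    // Wait until the action is affordable, then start and finish it.
    // Completion is simplified: the bot idles (but keeps mining) until it is done.
    State apply(State s, const Action& a) {
        while (s.minerals < a.mineralCost) { s.frame++; s.minerals += MINE_RATE * s.workers; }
        s.minerals -= a.mineralCost;
        s.frame    += a.buildFrames;
        s.minerals += MINE_RATE * s.workers * a.buildFrames;
        if (a.isWorker) s.workers++;
        s.counts[a.name]++;
        return s;
    }

    bool legal(const State& s, const Action& a) {
        return a.prerequisite.empty() || s.counts.count(a.prerequisite) > 0;
    }

    bool goalMet(const State& s, const std::map<std::string, int>& goal) {
        for (const auto& g : goal) {
            auto it = s.counts.find(g.first);
            if (it == s.counts.end() || it->second < g.second) return false;
        }
        return true;
    }

    void search(const State& s, const std::vector<Action>& actions,
                const std::map<std::string, int>& goal,
                std::vector<std::string>& plan,
                std::vector<std::string>& bestPlan, int& bestFrame) {
        if (goalMet(s, goal)) {                       // goal reached: keep the earliest plan
            if (s.frame < bestFrame) { bestFrame = s.frame; bestPlan = plan; }
            return;
        }
        if (s.frame >= bestFrame || plan.size() >= 8) return;   // prune / depth limit
        for (const Action& a : actions) {
            if (!legal(s, a)) continue;
            plan.push_back(a.name);
            search(apply(s, a), actions, goal, plan, bestPlan, bestFrame);
            plan.pop_back();
        }
    }

    int main() {
        std::vector<Action> actions = {
            { "Probe",   "",        50, 300, true  },
            { "Gateway", "",       150, 900, false },
            { "Zealot",  "Gateway", 100, 600, false },
        };
        std::map<std::string, int> goal = { { "Zealot", 2 } };
        State start;
        std::vector<std::string> plan, bestPlan;
        int bestFrame = std::numeric_limits<int>::max();
        search(start, actions, goal, plan, bestPlan, bestFrame);
        printf("Earliest goal frame: %d\n", bestFrame);
        for (const auto& name : bestPlan) printf("  %s\n", name.c_str());
    }

Even this toy version shows the core trade-off the real planner faces: spending minerals on an extra worker delays the first military unit but can pay for itself over a longer goal horizon, and the branch-and-bound pruning is what keeps the search fast enough to run in real time.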

CIG 2011

The 2011 CIG competition was organized by Tobias Mahlmann from IT University Copenhagen and Mike Preuss from Technische Universitat Dortmund. With lessons learned from the previous year, this year's CIG competition was held with a pool of 5 standard human-tested maps; however, unlike the AIIDE competition, the maps were not revealed to the entrants ahead of time. The competition adopted the same rule set as the AIIDE 2011 tournament by playing the full game of Starcraft with fog of war enabled and not allowing cheating. The two major rule differences were that the CIG competition did not enforce an open source requirement for entry, and that the map pool was not announced ahead of time to the entrants. Since both the AIIDE and CIG competitions were held in August (due to the scheduling of the conferences) and often received the same bot submissions, the CIG organizers decided that changing the tournament rules slightly by not announcing the map pool ahead of time would lead to more varied results. A total of 10 bots were submitted to the competition, and since the CIG competition did not yet have access to any tournament automation software, games were run by hand on just a few computers. Because of this, instead of one large round robin tournament like AIIDE, the competition was split into two pools of 5 bots each and round robins of 10 games per bot pairing were played. After this group stage, the top 2 bots from each pool went into a final pool and another round robin of 10 games per pairing was played, with the final rankings being a result of this final pool round robin. Despite UAlbertaBot beating Skynet in the first pool's round robin, Skynet took 1st place in the final pool, with UAlbertaBot in a close 2nd, followed by Xelnaga in 3rd, and BroodwarBotQ in 4th place.

As both competitions were played within a few weeks of each other, no major strategy changes were made to either Skynet or UAlbertaBot for the CIG competition. The only change made to UAlbertaBot was the removal of a few pieces of hand-coded building placement information specific to the AIIDE competition maps, which was replaced by algorithmic solutions.

SSCAIT 2011 (Detailed Results)

The first Student Starcraft AI Tournament (SSCAIT) was held in the winter of 2011, organized by Michal Certicky at Comenius University, Bratislava. This tournament was founded as part of the "Introduction to AI" course taught by Michal Certicky at Comenius University, and as part of the course each student was required to submit a bot for the competition. As the course had many students, the competition had a total of 50 entrants consisting entirely of students. Custom software written by Michal Certicky was used to automatically schedule and run the games for the tournament. The tournament format split the 50 entries into 10 groups of 5 for the group stage, with the top 16 finishers advancing into a double elimination bracket; the final winner was Roman Danielis, a student from Comenius University. Not many details are known about the bots or strategies for this competition as it was not highly publicized outside the university, and as such UAlbertaBot did not participate.

AIIDE 2012

The AIIDE 2012 competition was again hosted at the University of Alberta, with a major difference from previous tournaments: persistent file storage which allowed the bots to learn throughout the course of the competition. The tournament managing software was updated so that each bot had access to a read folder and a write folder contained on a shared folder which was accessible to all the client machines. During each round bots could read from their 'read' folder and write to their 'write' folder, and at the end of each round robin (one game between each bot pairing on a single map) the contents of the write folder were copied to the read folder, giving access to all information written about previous rounds. This method of copying after each round ensured that no bot had an information advantage within a round due to match scheduling. 10 bots competed in the 2012 competition, and at the end of the 5 days 8279 games had been played - 184 between each bot pairing. The final results were almost identical to those of the 2011 competition with Skynet coming 1st, Aiur coming 2nd and UAlbertaBot coming in 3rd place. Aiur's performance was improved by the inclusion of a new strategy it called "cheese", an early game Photon Cannon rush strategy which other bots were not prepared for.
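As a rough illustration of that end-of-round synchronization step, here is a small standalone sketch that copies each bot's write folder over its read folder using std::filesystem. The folder names and directory layout are assumptions made for illustration, not the tournament manager's actual implementation.

    // Sketch of the end-of-round synchronization: each bot's 'write' folder is
    // copied over its 'read' folder so that the next round robin sees everything
    // written so far. The directory layout here is hypothetical.
    #include <filesystem>
    #include <iostream>
    #include <string>
    #include <vector>

    namespace fs = std::filesystem;

    void syncLearningFolders(const std::vector<std::string>& botNames, const fs::path& root) {
        for (const auto& bot : botNames) {
            fs::path write = root / bot / "write";
            fs::path read  = root / bot / "read";
            if (!fs::exists(write)) continue;
            fs::create_directories(read);
            // Overwrite the read folder with the latest contents of the write folder.
            fs::copy(write, read,
                     fs::copy_options::recursive | fs::copy_options::overwrite_existing);
            std::cout << "Synced learning data for " << bot << "\n";
        }
    }

    int main() {
        // Hypothetical bot list and data root.
        syncLearningFolders({ "UAlbertaBot", "Skynet", "Aiur" }, "tournament_data");
    }

Because the copy only happens between round robins, every bot in a given round sees the same snapshot of its own history, which is exactly the fairness property described above.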

The 2012 Man vs. Machine matches can be seen here:

For this competition, UAlbertaBot had one major update - the addition of the SparCraft combat simulation package. In the 2011 version, UAlbertaBot would simply wait for a threshold number of Zealots to be produced and then continuously send them at the enemy's base without ever retreating. The combat simulation module in the updated 2012 version allowed for the estimation of battle outcomes, and was used during battles to predict whether the current fight would result in a win for us or for the enemy. If we were predicted to win we would continue to attack, and if the enemy was predicted to win we would retreat back toward our base. This new tactic proved quite powerful in practice; however, Aiur's early game defense had improved dramatically from the previous year's competition and it ended up edging out UAlbertaBot for 2nd place. UAlbertaBot also implemented three distinct strategies for the 2012 competition: the Zealot rush from 2011, a Dragoon rush, and a Dark Templar rush strategy. The bot also used persistent file IO to store match data against specific opponents, and decided which strategy to use against a given bot by using the UCB-1 formula. This learning strategy worked fairly well, taking the bot from a 60% win rate at the beginning of the tournament to a 68.6% win rate at the end of the tournament. One of the major reasons why UAlbertaBot came behind Aiur in this tournament is that the Dragoon and Dark Templar strategies were poorly implemented, with the strategy selection learning algorithm eventually learning to always choose the Zealot rush strategy - wasting precious wins on exploring the other strategies. If the Zealot rush strategy had been picked every game, UAlbertaBot would have placed 2nd.
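To make the selection mechanism concrete, here is a small standalone sketch of UCB-1 applied to per-opponent strategy records. The strategy names, counts, and exploration constant are made up for illustration and are not UAlbertaBot's actual values; in the real bot the records would be read back from its persistent 'read' folder at the start of each game.

    // Minimal sketch of UCB-1 strategy selection over stored match results.
    #include <cmath>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct StrategyStats {
        std::string name;
        int wins = 0;
        int games = 0;
    };

    // Pick the strategy with the highest UCB-1 score:
    //   winRate + C * sqrt(ln(totalGames) / gamesWithStrategy)
    // Strategies that have never been tried are explored first.
    int selectStrategyUCB1(const std::vector<StrategyStats>& stats, double C = 0.7) {
        int total = 0;
        for (const auto& s : stats) total += s.games;
        int bestIndex = 0;
        double bestScore = -1.0;
        for (size_t i = 0; i < stats.size(); ++i) {
            if (stats[i].games == 0) return (int)i;   // explore untried strategies first
            double winRate = (double)stats[i].wins / stats[i].games;
            double bonus   = C * std::sqrt(std::log((double)total) / stats[i].games);
            double score   = winRate + bonus;
            if (score > bestScore) { bestScore = score; bestIndex = (int)i; }
        }
        return bestIndex;
    }

    int main() {
        // Hypothetical results against one particular opponent.
        std::vector<StrategyStats> stats = {
            { "ZealotRush",      7, 10 },
            { "DragoonRush",     2,  6 },
            { "DarkTemplarRush", 1,  4 },
        };
        int pick = selectStrategyUCB1(stats);
        printf("Selected strategy: %s\n", stats[pick].name.c_str());
    }

The exploration bonus is what caused the behaviour described above: even when the Zealot rush was clearly the best choice, the formula still occasionally forced games onto the weaker Dragoon and Dark Templar strategies, costing wins.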

CIG 2012

In 2012 the CIG competition used the AIIDE tournament managing software and was able to play many more games over the course of the tournament. There were 10 entrants to the competition, with many of the bots being the same as in the AIIDE competition. A pool of six unknown maps was used, different from those used the year before. A total of 4050 games were played, with each bot playing each other bot 90 times. As in the AIIDE competition, persistent file IO was available for bots to learn; however, due to networked folder differences between the AIIDE and CIG tournaments, it did not fully work as intended. Also of note was that the reported bot crash rate was over triple that of the AIIDE competition, so it was obvious that there were some technical issues in using the AIIDE tournament managing software. Skynet again won the competition, with UAlbertaBot coming in 2nd place, Aiur in 3rd place and Adjutant in 4th place. UAlbertaBot didn't use any file IO or learning for this competition, as it had ended up not working very well for the AIIDE 2012 competition; only the Zealot rush strategy was used, which allowed the bot to clinch 2nd place.

SSCAIT 2012 (Detailed Results)

A few months later in December was the second SSCAI tournament, which was more widely advertised to competitors outside of Comenius University. Again, the competition contained many bots from Michal Certicky's AI course and in total there were 52 entrants. The tournament format was a single round robin with each bot playing each other bot one time for a total of 51 games played per bot. After the round robin was complete, the final standings were split into two different categories: student division and mixed division. To qualify for the student division the bot had to be written by a single student author, and the final rankings were decided by a scoring system with 3 points for a win and 1 point for a draw. The final standings for the student division placed Matej Istenik (Dementor bot) from the University of Zilina in 1st, Marcin Bartnicki from Gdansk University of Technology in 2nd place, and UAlbertaBot in 3rd place. The mixed division was open to all competitors and saw the overall top 8 bots from the competition play off in a single elimination bracket. In the finals, IceBot defeated Marcin Bartnicki to take 1st place. For this competition, the CIG 2012 version of UAlbertaBot was used due to SSCAIT using an unknown map pool.

CIG 2013

In 2013 the CIG competition was scheduled a few weeks ahead of the AIIDE competition. The tournament managing software being re-written for the AIIDE competition was not yet finished, and so the 2012 version of the software was used, which meant the learning file IO system was still not working on the CIG setup for the same reasons as the previous year. Due to additional technical difficulties with the tournament setup only 1000 games were played, just a quarter of the games that were played in the previous year. The top 3 finishers of the competition were identical to the previous year, with Skynet placing 1st, UAlbertaBot getting 2nd place, and Aiur getting 3rd place. Xelnaga moved from its previous 6th place finish up to 4th place. UAlbertaBot was undergoing some major changes (described in the next section) which were not ready in time for the CIG 2013 competition, so the AIIDE 2012 version of the bot was submitted to compete while the changes were being finished for the AIIDE competition a few weeks later.

AIIDE 2013 (Detailed Competition Report)

For the 2013 AIIDE competition, the tournament managing software was almost completely re-written to be more robust to different types of network setups. Previous versions of the software had relied on the existence of a shared Windows folder for storing files, which was completely re-written to use Java socket communication instead. All bot files, replays, results, and IO folders were zipped and sent via Java sockets, meaning that the tournament could now be run on any network configuration which supported both TCP (for Java sockets) and UDP (for Starcraft's network play). A tutorial / demonstration video of this updated tournament manager software can be seen here:

There were only 8 total entrants to the 2013 competition, which was the smallest number of entries for any tournament to date; however, the quality of the submitted bots was quite high. A total of 5597 games were played, which allowed for 200 games per bot pairing, 20 on each of the 10 maps which remained unchanged from the previous year. The usual suspects did quite well again in the 2013 competition, however it was UAlbertaBot that won the competition, dethroning Skynet, the winner of the previous two competitions, who came in 2nd place. In 3rd place was Aiur, and in 4th place was Ximp, a new bot written by Tomas Vajda, a student from Michal Certicky's AI course at Comenius University. Ximp played the Protoss race and its strategy consisted of a very early economic expansion protected by Photon Cannons, playing extremely defensively while building an army of Carriers - very strong late-game flying units. Once a certain number of Carriers were constructed they would sweep across the map destroying everything in their path. Unfortunately a bug in Ximp's code resulted in it crashing 100% of its games on the Fortress map, resulting in many free wins for its opponents. If it had not had this bug it would have easily placed 3rd in the competition. Also of note was Aiur's impressive strategy learning over time, which took its initial 50% win rate gradually up to 58.51% by the end of the competition and moved it up from 4th place to 3rd place.

Several improvements were made to UAlbertaBot for the 2013 competition, the most important of which was the upgraded version of SparCraft. UAlbertaBot had previously included a simpler combat simulator which produced far less accurate results; the new SparCraft module correctly simulated all damage and armor types, which the previous system did not support, yielding a much more accurate simulation and greatly improving the early game performance of the bot. The Dragoon and Dark Templar strategies were removed from the bot and it reverted to only doing a Zealot rush, which was by far its strongest strategy. The only exception to this was against Skynet, where the bot performed a Dark Templar rush, as the previous year's competition had shown Skynet was quite weak to that strategy. A bug was found in the 2012 version of UAlbertaBot which caused all of the bot's worker units to chase any units which came to scout the bot before a certain time threshold was reached, which resulted in several games being outright lost to Skynet and Aiur as soon as this happened due to lost mining time. This major bug and several other smaller ones were fixed, and the overall winning percentage was by far the highest it had ever been, beating 2nd-place Skynet by almost 10% in win rate.
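To illustrate why modelling damage and armor types matters, here is a deliberately toy sketch that compares crude time-to-kill estimates between two groups of units, applying the Brood War size multipliers for Explosive and Concussive damage. The unit numbers are approximate and many mechanics (shields, upgrades, splash, movement, targeting) are ignored; real SparCraft simulates individual attacks and is far more detailed.

    // Toy illustration of why damage and armor types matter when predicting a
    // fight. This only compares crude time-to-kill estimates; it is not SparCraft.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    enum class DamageType { Normal, Explosive, Concussive };
    enum class UnitSize   { Small, Medium, Large };

    struct Unit {
        const char* name;
        double hp;            // hit points (approximate, shields folded in)
        double damage;        // damage per attack
        double cooldown;      // frames between attacks
        int    armor;
        DamageType dmgType;
        UnitSize   size;
    };

    // Brood War size multipliers: Explosive is weak vs small units,
    // Concussive is weak vs large units, Normal does full damage to everything.
    double sizeMultiplier(DamageType t, UnitSize s) {
        if (t == DamageType::Explosive)
            return s == UnitSize::Small ? 0.5 : (s == UnitSize::Medium ? 0.75 : 1.0);
        if (t == DamageType::Concussive)
            return s == UnitSize::Small ? 1.0 : (s == UnitSize::Medium ? 0.5 : 0.25);
        return 1.0;
    }

    // Very rough estimate: frames needed for 'attackers' to destroy 'defenders'.
    double framesToKill(const std::vector<Unit>& attackers, const std::vector<Unit>& defenders) {
        double totalHP = 0, totalDPF = 0;
        for (const Unit& d : defenders) totalHP += d.hp;
        for (const Unit& a : attackers)
            for (const Unit& d : defenders) {
                double perHit = std::max(0.5, a.damage * sizeMultiplier(a.dmgType, d.size) - d.armor);
                totalDPF += (perHit / a.cooldown) / defenders.size();   // averaged over targets
            }
        return totalDPF > 0 ? totalHP / totalDPF : 1e9;
    }

    int main() {
        Unit zealot  = { "Zealot",  160, 16, 22, 1, DamageType::Normal,    UnitSize::Small };
        Unit dragoon = { "Dragoon", 180, 20, 30, 1, DamageType::Explosive, UnitSize::Large };
        std::vector<Unit> mine   = { zealot, zealot, zealot };
        std::vector<Unit> theirs = { dragoon, dragoon };
        // Whichever side would need fewer frames to destroy the other is predicted to win.
        bool weWin = framesToKill(mine, theirs) < framesToKill(theirs, mine);
        printf("Prediction: %s\n", weWin ? "attack" : "retreat");
    }

A simulator that ignores the size multipliers would, for example, overestimate how much damage Explosive attacks do to small units like Zealots, and would therefore make the wrong attack-or-retreat call in exactly the kind of early game fights UAlbertaBot's rushes depended on.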

SSCAIT 2013 (Detailed Results)

The SSCAIT competition again received many bots from the AI course in 2013, and had over 50 participants. The 2013 tournament consisted of nearly twice as many games as the 2012 tournament, as two rounds of round robin were played between each bot on maps randomly selected from the map pool. Again, the results were split into two categories, student and mixed, with the same rules as the previous year. In the student division, Ximp took 1st place, 2nd place was awarded to the WOPR bot written by Soren Klett of Universitat Bielefeld, and UAlbertaBot again took 3rd place. Ximp's technical problems and early game mistakes from the AIIDE competition had been fixed and it performed very well during the round robin stage. For the mixed division, the top 8 bots from the round robin stage again played a single elimination bracket. UAlbertaBot was eliminated by IceBot in the quarter finals. The finals were played between IceBot and krasi0, with krasi0 winning and IceBot placing 2nd. The AIIDE 2013 version of UAlbertaBot was used for the competition with no modifications made.

CIG 2014

The CIG 2014 competition was organized by Kyung-Joong Kim, Ho-Chul Cho, and In-Seok Oh of Sejong University and received a total of 13 entrants. For 2014, the CIG competition used a total of 20 different maps which were unknown to the competitors before the competition started, which was by far the most maps ever used in a Starcraft AI competition up to this point. The tournament used the updated version of the AIIDE tournament managing software which meant that the file IO for learning was fully operational for the first time in a CIG competition. A total of 4680 games were played, which meant that each bot played each other bot 3 times on each of the 20 maps. Several previously strong bots got major updates for the 2014 competition, and the results reflected this.

IceBot won the competition, followed by Ximp in 2nd place, LetaBot in 3rd place, and Aiur in 4th place. IceBot had been competing since 2012 but had never placed higher than 6th in a previous competition; its strategies were completely revamped and a much larger team was working on the bot in 2014, resulting in a much more stable and robust system overall. With a very solid early game defense it was able to defend against the many early aggressive bots. Ximp continued its late game Carrier strategy with just some minor updates and small bug fixes. LetaBot was a new Terran bot written by Martin Rooijackers from Maastricht University, whose source code was based on the 2012 version of UAlbertaBot and adapted to play the Terran race. UAlbertaBot placed 5th in the competition due to it not having been updated for this year's competition, with the 2013 AIIDE version being submitted. Due to UAlbertaBot winning the AIIDE 2013 competition, several other bots implemented strategies specifically designed to beat it, and this, combined with IceBot, Ximp, and LetaBot being specifically designed to stop early game aggression, resulted in a 5th place finish.

AIIDE 2014

Due to the low entry numbers for 2013, much more advertising was done to attract entrants to the 2014 AIIDE competition. In addition to this, if a team which competed in the 2013 competition did not submit a new bot for 2014, their 2013 version was automatically resubmitted to the 2014 competition in order to measure how much the new submissions had improved. In total, 18 bots were submitted to the competition, making it the largest number of outside submissions to a competition to date. The 2013 versions of UAlbertaBot, Aiur, and Skynet were submitted to the 2014 competition since none of their authors had time to make any changes in the previous year. Since there were only two weeks between the CIG and AIIDE competitions, many of the submissions were identical and the results reflected this, with the top 4 finishers being exactly the same. IceBot came in 1st, Ximp in 2nd, LetaBot in 3rd, and Aiur in 4th. UAlbertaBot ended up placing 7th overall and was the same version that had been submitted to the CIG 2014 competition.

The 2014 Man vs. Machine matches can be seen here:

SSCAIT 2014 (Detailed Results)

In 2014 the SSCAIT was updated to use the AIIDE tournament managing software (modified to fit its infrastructure), which allowed it to use the same file IO structure for learning, as well as allowing it to play more games in a shorter amount of time. Since all 3 major competitions were now using this software, which was open source and available for competitors to use and test, the submission and running of tournaments was now more streamlined and entrants could also test their submissions to ensure that they would work for all 3 competitions. The tournament format and rules for 2014 were the same as 2013, with each of the 42 entrants playing each other bot twice for a total of 861 games played. Again the results were divided into student and mixed divisions. For the student division, LetaBot won 1st place, with WOPR coming in 2nd place and UAlbertaBot in 3rd. In the mixed division a single elimination top 8 was played and LetaBot ended up beating Ximp in the final to win the competition, with UAlbertaBot beating IceBot in the bronze medal match to take 3rd.

The version of UAlbertaBot submitted for this competition was the same as the AIIDE 2013 version, with just a few small behaviour tweaks regarding unit positioning and building placement. After the relatively poor showing in the CIG and AIIDE competitions it was surprising to see this older version of UAlbertaBot take 3rd place in both divisions.

CIG 2015

There were significant rule changes to the 2015 CIG competition, which was again organized by members from Sejong University. The most significant rule change was that the competition no longer enforced the open source policy for bot submissions, which surprised many people as it was organized as part of an academic conference. The second change was that multiple entries from a single author were allowed, which was controversial due to the possibility of collusion between the entries of a single author, in which one bot could automatically lose to the other or the two could share information about previous matches. Thankfully no such collusion was detected after the matches had been played. The 2015 competition was run for half as long as previous competitions due to some last-minute technical difficulties, resulting in only 2730 games being played between the 14 entrants.

The results for this competition were quite exciting, as the top 3 places were all taken by new entrants, all of them playing Zerg. The winner of the competition was ZZZKBot, written by software engineer Chris Coxe, which implemented a single strategy: a 4-pool Zergling rush, the quickest possible attack you can do in Starcraft. Despite the relatively simple strategy and implementation, not many of the other bots in the competition were ready for the quick attack and fell very quickly to it. In 2nd place was tscmoo-Z, a Zerg bot written by Vegard Mella, an independent programmer from Norway. Tscmoo's bot focused mainly on mid-late game strategies, implementing nearly a dozen different build orders and strategies and learning over time which to use against each opponent. In 3rd place was Overkill, another Zerg bot written by Sijia Xu, a data engineer from China. Overkill had several strategies in its arsenal, but relied mainly on Zerg Mutalisks in a similar manner to Overmind from the AIIDE 2010 competition. UAlbertaBot was undergoing major changes at the time of the CIG competition and the new version was not ready to be submitted, so once again the AIIDE 2013 version of the bot was submitted to this competition, and it ended up coming in 10th place overall.

AIIDE 2015 (Detailed Competition Report)

Since 2011, AIIDE competitions at the University of Alberta were held in an undergraduate computer lab utilizing 20 Windows XP machines. Since the computer lab was actively used by students, this meant that games could only be played between the end of one school term and the beginning of the next (usually at the end of August). In 2015 the competition was run on virtual machines so that games could be run for much longer - thanks to a significant effort by Nicolas Barriga (another PhD student in RTS AI with Michael Buro). In total, 4 Linux servers each hosted 3 Windows XP virtual machines, for a total of 12 virtual machines for the tournament to run on. This, combined with the fact that AIIDE was scheduled later than normal (in November), meant that the competition could run for a full two weeks - more than double the running time from the previous year. An additional advantage of virtual machines was that the competition could be monitored and controlled remotely with remote desktop software. By using KRDC through an ssh tunnel, all 12 machines could be controlled simultaneously and the tournament could be stopped and restarted in just a matter of minutes from home.

25 bots registered for the 2015 AIIDE competition, with 3 being withdrawn for technical reasons, leaving 22 total entrants for the competition from 12 different countries, making it the largest AIIDE competition ever with the most international representation. 2015 also had the most even race distribution of any competition to date. In previous years, Zerg had been poorly represented, but this year there were 5 new Zerg submissions, many of which also competed earlier in the CIG competition. Race distributions from all AIIDE competitions can be seen below.


This year also saw the first ever Random race submission: UAlbertaBot. By choosing Random, the bot gets randomly assigned one of the three races by Starcraft after the game has started. This means that it is harder to program a bot to play Random, since you need to have strategies for all three races; however, it does provide an advantage, since your opponent does not know what race you are until they actually scout you in the game. Several updates were also made to the tournament managing software, which fixed a bug in the persistent file storage that sometimes caused results to be overwritten. With the switch to virtual machines the 2015 competition was able to be run for as long as desired, and so the competition was run for 14 days, which resulted in a total of 20,788 games being played - 9 games between each of the 22 bots on each of the 10 maps - nearly twice the previous record for number of games played. For comparison, the number of games played in each major Starcraft AI competition is as follows:


The final results of the competition were extremely close, with 1st/2nd place and 3rd/4th place being statistical ties with less than a 1% difference in win percentage between them. The three new Zerg entrants did extremely well - with tscmoo coming 1st with 88.52% wins, ZZZKBot coming 2nd with 87.83% wins, and Overkill coming 3rd with 80.69% wins. UAlbertaBot came in 4th place with 80.2% wins while playing Random, which is also a great accomplishment, as playing Random is much more difficult to implement than playing a single race. Another accomplishment for UAlbertaBot was that it had a greater than 50% win percentage against every other bot in the competition; however, it did not get the highest overall winning percentage. The reason for this was that UAlbertaBot won only about 2/3 of its games against some of the lower ranking bots, since one of its 3 possible races did not win against those bots. The full results of each bot pairing can be seen in the official results. The overall strategies for tscmoo, ZZZKBot, and Overkill remained unchanged from the 2015 CIG competition, however each bot did implement several small bug fixes. We also held a man vs. machine match between an expert human Starcraft player and the top bots in the competition, which can be seen here.

Nearly every module of UAlbertaBot had been rewritten in the months leading up to the 2015 AIIDE competition, with the bot fully transitioning from only playing Protoss to being able to play all three races. This required a much more robust and general approach to micromanagement and build order planning, which had previously been tailored specifically to Protoss. The build order search system was updated so that it could now perform build order searches for any of the three races, and several bugs were fixed that had previously caused some crashes for UAlbertaBot during the competition. The new version of this software was renamed BOSS (Build Order Search System) and released on GitHub as part of the UAlbertaBot project. Another major change to UAlbertaBot was the creation of a configuration file of options for the bot, which is written in JSON and parsed by the bot at the start of every game. This configuration file contains many options for strategic and tactical decision making, such as which strategies to use for each race, unit micromanagement options, and debugging options. It also contains a database of opening build orders for UAlbertaBot which can be modified quickly and easily. By using this configuration file, all options and build orders can be edited without having to recompile the bot, leading to much faster development times and much easier modification and use by other programmers.
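As a rough sketch of how such a configuration might be loaded at game start, the snippet below reads a JSON file using the nlohmann/json library and pulls out a per-race strategy name and one debug flag. The key names, file name, and file layout are hypothetical, and nlohmann/json is simply a stand-in parser here - it is not necessarily the one UAlbertaBot actually uses.

    // Sketch of loading a JSON configuration at game start. The key names and
    // the choice of nlohmann/json are illustrative assumptions, not
    // UAlbertaBot's actual config schema or parser.
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <nlohmann/json.hpp>

    using json = nlohmann::json;

    struct BotConfig {
        std::string protossStrategy;
        std::string terranStrategy;
        std::string zergStrategy;
        bool drawDebugInfo = false;
    };

    BotConfig loadConfig(const std::string& path) {
        std::ifstream in(path);                    // config is re-read at the start of every game
        json j = json::parse(in);

        BotConfig cfg;
        cfg.protossStrategy = j["Strategy"]["Protoss"].get<std::string>();
        cfg.terranStrategy  = j["Strategy"]["Terran"].get<std::string>();
        cfg.zergStrategy    = j["Strategy"]["Zerg"].get<std::string>();
        cfg.drawDebugInfo   = j.value("DrawDebugInfo", false);   // optional key with a default
        return cfg;
    }

    int main() {
        // Hypothetical file, e.g. { "Strategy": { "Protoss": "ZealotRush", ... }, "DrawDebugInfo": true }
        BotConfig cfg = loadConfig("UAlbertaBot_Config.json");
        std::cout << "Protoss opening: " << cfg.protossStrategy << "\n";
    }

The practical benefit is exactly the one described above: changing a strategy or an opening build order becomes an edit to a text file rather than a recompile of the bot.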

Before the competition, several days of test games were run against many of the bots which had competed in the 2014 AIIDE and 2015 CIG competitions. A manual analysis of these matches was performed, and build orders and strategies were tailored to several of the bots such as Skynet, LetaBot, Ximp, and Aiur. For example: against Ximp, the bot would implement a heavy anti-air strategy since it was known that Ximp would always build Carriers. Against Aiur, if the bot played Terran it would make a large number of Vulture units, since they counter Aiur's Zealot rushes. This was a gamble, since any of the opponents could have changed their strategies before the AIIDE competition, but many of them did not and so the modeling paid off in the long run. If the bot played against an unknown opponent, it defaulted to one of three rushing strategies depending on which race it was assigned: a Zergling rush for Zerg, a Zealot rush for Protoss, or a Marine rush for Terran. This led to a very successful competition for UAlbertaBot, with a winning record against every other bot.
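A standalone sketch of that fallback logic is below. The opening names and the opponent-specific entries are placeholders for illustration; in a real BWAPI bot the assigned race would come from BWAPI::Broodwar->self()->getRace() and the openings would map to entries in the configuration file described earlier.

    // Sketch of the default-opening fallback for a Random-race bot: if the
    // opponent is unknown, pick a rush opening based on whichever race the game
    // assigned us. In an actual BWAPI bot the race would come from
    // BWAPI::Broodwar->self()->getRace(); this standalone version uses an enum.
    #include <iostream>
    #include <map>
    #include <string>

    enum class Race { Protoss, Terran, Zerg };

    std::string chooseOpening(Race myRace, const std::string& opponentName) {
        // Hypothetical openings tailored to specific, previously studied opponents.
        // (Simplified: the real tailoring also depended on which race we were assigned,
        // e.g. Vultures vs Aiur only made sense when playing Terran.)
        static const std::map<std::string, std::string> tailored = {
            { "Ximp", "AntiAirDragoon" },     // Ximp always builds Carriers
            { "Aiur", "VultureHarass"  },     // Vultures counter Aiur's Zealot rushes
        };
        auto it = tailored.find(opponentName);
        if (it != tailored.end()) return it->second;

        // Unknown opponent: default to the rush opening for our assigned race.
        switch (myRace) {
            case Race::Zerg:    return "ZerglingRush";
            case Race::Protoss: return "ZealotRush";
            case Race::Terran:  return "MarineRush";
        }
        return "ZealotRush";
    }

    int main() {
        std::cout << chooseOpening(Race::Terran, "UnknownBot") << "\n";   // MarineRush
        std::cout << chooseOpening(Race::Protoss, "Ximp")      << "\n";   // AntiAirDragoon
    }
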

SSCAIT 2015 (Detailed Results)