Q: What is the overall strategy/strategies of your bot? Why did you choose them?

PurpleWave plays a fairly complete package of pro-style Protoss strategies. Almost anything that's objectively good and can be executed with the existing micro skills and building placement is in there. There are aggressive strategies, economic strategies, and some delightfully cheesy strategies too. It features a graph of strategy selections, like opening build orders paired with mid-game transitions and late-game compositions, and learns to assemble the best combinations.

Q: Did you incorporate any of the following AI techniques in your bot? If you did, please be as specific as possible.

a) Search-Based AI (Path-Finding, A*, MiniMax, MCTS, etc): PurpleWave has a general-purpose pathfinding library using A*, which it uses for many purposes: threat-aware pathfinding for retreating or Shuttle usage, detecting wall-ins, and navigating map obstacles. (A minimal sketch of the threat-aware flavor follows this list.)

b) Offline Machine Learning (Supervised or Unsupervised, but not RL): PurpleWave can use pretrained data for its online learning (described below), but I didn't include any for AIIDE 2019.

c) Offline Reinforcement Learning: None.

d) Online Learning of any kind (Including competition file IO for strategy selection): PurpleWave detects what strategies opponents used (Spawning Pool timings, expansion timings, tech rushes, etc.). Using that data and the results of previous games, PurpleWave learns which strategies work best for use in future games.

e) Influence Maps: PurpleWave uses potential fields for navigating flying units, and as part of the cost function in the threat-aware pathfinding.

f) Custom Map Analysis: PurpleWave uses BWTA for region segmentation and identifying chokepoints.

g) Hard-coded or rule-based strategy / tactics: Most of what PurpleWave does is rule-based, though usually in sufficiently flexible ways that I'd hesitate to call it "hard-coded".

h) Analysis of bots from previous competitions / hard-coded specific bot counter strategies: I run large test sweeps against known, likely, or representative opponents. Against opponents who are returning unchanged or are unlikely to change, PurpleWave uses the best builds from training. For updated opponents, I give it a selection of builds appropriate to my priors about the opponent (for example, a small pool of conservative builds against weaker opponents, and a larger pool against opponents likely to show up with surprising or strong strategies). All of this is designed to minimize exploration time, which in a win-percentage tournament is crucial for winning 100% of games against weaker bots while not being exploitable by stronger bots.

i) Any techniques not mentioned here
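To make answers (a) and (e) a bit more concrete, here is a minimal sketch of threat-aware A* over a tile grid. The names (`Tile`, `threat`, `threatWeight`) are illustrative assumptions, not PurpleWave's actual API; the point is simply that each step pays an extra penalty for entering threatened tiles, so retreat paths trade distance for safety.

```scala
import scala.collection.mutable

// Hypothetical tile type for illustration; PurpleWave's real classes differ.
final case class Tile(x: Int, y: Int)

object ThreatAwareAStar {
  // A* over a tile grid where each step pays distance cost plus a penalty for
  // entering threatened tiles. `threat` would be sampled from an influence map
  // of expected enemy damage; `threatWeight` trades distance against safety.
  def find(start: Tile,
           goal: Tile,
           walkable: Tile => Boolean,
           threat: Tile => Double,
           threatWeight: Double = 4.0): Option[List[Tile]] = {

    // Manhattan distance; admissible because it ignores threat entirely
    def heuristic(t: Tile): Double =
      math.abs(t.x - goal.x) + math.abs(t.y - goal.y)

    def neighbors(t: Tile): Seq[Tile] =
      Seq(Tile(t.x + 1, t.y), Tile(t.x - 1, t.y),
          Tile(t.x, t.y + 1), Tile(t.x, t.y - 1)).filter(walkable)

    val cameFrom = mutable.Map.empty[Tile, Tile]
    val gScore   = mutable.Map(start -> 0.0)
    // PriorityQueue is a max-heap, so reverse the ordering to pop the lowest f-score
    val open = mutable.PriorityQueue((heuristic(start), start))(
      Ordering.by[(Double, Tile), Double](_._1).reverse)

    while (open.nonEmpty) {
      val (_, current) = open.dequeue()
      if (current == goal) {
        // Walk the cameFrom links backwards to reconstruct the path
        var path = List(goal)
        while (cameFrom.contains(path.head)) path = cameFrom(path.head) :: path
        return Some(path)
      }
      for (next <- neighbors(current)) {
        // Step cost: one tile of distance plus a penalty for entering threatened ground
        val tentative = gScore(current) + 1.0 + threatWeight * threat(next)
        if (tentative < gScore.getOrElse(next, Double.PositiveInfinity)) {
          cameFrom(next) = current
          gScore(next)   = tentative
          open.enqueue((tentative + heuristic(next), next))
        }
      }
    }
    None // Goal unreachable
  }
}
```

With `threatWeight` at zero this reduces to ordinary shortest-path A*; raising it makes retreating units or Shuttles prefer longer but safer routes.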
Q: How did you become interested in Starcraft AI?

Back in 5th grade, I loved building WarCraft II campaigns and customizing the AI scripts. Afterwards I got into game AI competitions and writing AI for WarCraft III custom maps. When I found out this year that Brood War AI was going strong, I felt compelled to try my hand at it! Frankly, I think I just like making little people on a screen do clever things.

Q: How long have you been working on your bot?

Since January 2017.

Q: About how many lines of code is your bot?

49,330

Q: Why did you choose the race of your bot?

I wanted the first version of PurpleWave to do something simple and effective that nobody else was doing, so I had it do a proxy 2-Gateway rush, and it has primarily played Protoss ever since. But I've intended for it to play all three races from the beginning. At this point, PurpleWave plays all three races competently. All the tournament preparation goes into Protoss, making it the strongest race, but it can compete as any of the three.

Q: Did you use any existing code as the basis for your bot? If so, why, and what did you change?

I use BWMirror (which ships with BWTA) for communication with BWAPI. PurpleWave also works with JBWAPI (which ships with a Java implementation of BWEM) and should switch to it full-time soon.

Q: What do you feel are the strongest and weakest parts of your bot's overall performance?

PurpleWave tends to have better overall strategies than its opponents; it wins games on macro alone. It also has some of the most robust unit control, able to move armies in coherent formations around the map. A major rough spot is building placement: the placement is not precise enough to safely execute a number of strategies that a player at its level should be able to.

Q: If you competed in previous tournaments, what did you change for this year's entry?

The threat-aware pathfinding is much better than in the last (COG 2019) version. The previous version did strictly downhill pathfinding as a performance-saver; since then I've gotten the A* pathfinding fast enough to use for all purposes.

Q: Have you tested your bot against humans? If so, how did it go?

You have to be a pretty competent StarCraft player to beat PurpleWave. It plays at around a 1750 level on ladder. I don't think I could beat it without exploiting known early-game weaknesses. You can see lots of games against it at https://www.youtube.com/channel/UC_E4wuEHs3CDj3K4mdWu4dQ/videos?view=0

Q: Any fun or interesting stories about the development / testing of your bot?

In spring 2017, I split PurpleWave into a series of independent systems which run in synchronous batches, in order to maximize the calculations it can do within tournament time limits. PurpleWave measures the distribution of how long each system's batch steps take, and each frame it tries to run only the systems it expects won't overrun the frame time (a rough sketch of this idea appears at the end of this Q&A). In the original version, the final step was to preemptively run garbage collection if all tasks had finished with time to spare. I figured this would reduce the odds of a garbage collection spike occurring mid-frame and causing a time spike. For the next year and a half, I aggressively profiled PurpleWave to improve its performance. Yet, as each system appeared to get faster, PurpleWave's overall performance became slower and slower, until it was a major issue. I finally stumbled on the cause: as PurpleWave's systems ran faster and faster, the preemptive garbage collection was invoked more and more frequently. And the manual garbage collection was *incredibly slow*. Hence the tragic, ironic result: the more performant PurpleWave's code got, the slower it ran. A real fun mystery of a bug -- and the fix was just deleting a two-year-old line of code!

Q: Any other projects you're working on that you'd like to advertise?

Folks interested in StarCraft AI may enjoy some of my team's work:

TorchCraftAI / CherryPi: https://torchcraft.github.io/TorchCraftAI/
TorchCraft: https://github.com/TorchCraft/TorchCraft
High-Level Strategy Selection under Partial Observability in StarCraft: Brood War: https://www.ias.informatik.tu-darmstadt.de/uploads/Team/JoniPajarinen/RLPO2018_paper_31.pdf
Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger: https://arxiv.org/pdf/1812.00054.pdf
Growing Action Spaces: https://arxiv.org/abs/1906.12266
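As a footnote to the frame-budgeting story above, here is a rough, hypothetical sketch of the idea: record how long each system's recent batch steps took, and before running a system, check whether a pessimistic estimate (its slowest recent step) still fits in the remaining frame budget. `BatchSystem` and `FrameScheduler` are illustrative names, not PurpleWave's actual classes.

```scala
import scala.collection.mutable

// Hypothetical interface; PurpleWave's real task system differs.
trait BatchSystem {
  def name: String
  def runOneBatch(): Unit
}

// Frame-budget scheduling sketch: each frame, run a system only if a
// pessimistic estimate of its runtime still fits within the remaining time
// budget. Skipped systems simply wait for a later frame.
final class FrameScheduler(budgetMs: Double) {
  private val recentMs = mutable.Map.empty[String, mutable.Queue[Double]]

  // Worst observed duration over a small sliding window; assume 1 ms if unknown.
  private def pessimisticMs(system: BatchSystem): Double =
    recentMs.get(system.name).filter(_.nonEmpty).map(_.max).getOrElse(1.0)

  def runFrame(systems: Seq[BatchSystem]): Unit = {
    val frameStart = System.nanoTime()
    def elapsedMs: Double = (System.nanoTime() - frameStart) / 1e6

    for (system <- systems) {
      if (elapsedMs + pessimisticMs(system) < budgetMs) {
        val start = System.nanoTime()
        system.runOneBatch()
        val tookMs  = (System.nanoTime() - start) / 1e6
        val samples = recentMs.getOrElseUpdate(system.name, mutable.Queue.empty[Double])
        samples.enqueue(tookMs)
        if (samples.size > 20) samples.dequeue() // keep only recent history
      }
    }
  }
}
```

The moral of the garbage-collection story applies to a design like this too: any "cleanup with leftover time" step runs more often as everything else gets faster, so its cost has to be measured like everything else's.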