AIIDE StarCraft AI Competition - Survey

Feel free to answer as many questions as you like, but it would be great if everyone answered everything! Many people are interested in learning as much as possible about the bots that competed! Please feel free to provide external references/links as necessary.

Bot Name: Steamhammer
Bot Race: Zerg
Author Name(s): Jay Scott
Affiliation(s): -
Nationality(s): USA
Occupation(s): -
Bot URL: http://satirist.org/ai/starcraft/steamhammer/
Personal URL: http://satirist.org/
Affiliation URL: -

Questions about your bot (please answer as many as you can, especially Q 1-3)

Q: What is the overall strategy/strategies of your bot? Why did you choose them?
Steamhammer knows close to 100 opening lines, ranging from fast rushes to extreme macro play, and can choose almost any reasonable unit mix depending on the situation: versatility first. The aim is systematic development to produce the best performance in the long run.

Q: Did you incorporate any of the following AI techniques in your bot? If you did, please be as specific as possible.

a) Search-Based AI (Path-Finding, A*, MiniMax, MCTS, etc)
Various trivial 1-ply searches with hand-written evaluators for local alternatives, like "where to expand to next?" or "where to cast Dark Swarm?" A sketch of one such search appears after this list.

b) Offline Machine Learning (Supervised or Unsupervised, but not RL)

c) Offline Reinforcement Learning

d) Online Learning of any kind (including competition file I/O for strategy selection)
Opponent modeling is a key skill. Steamhammer records game information, including the opening line it followed and the predicted and recognized enemy plan. Starting with the second game, it tries to choose an opening that counters the predicted enemy plan. With more experience, it tries to balance openings likely to win against exploration of alternatives. If the enemy appears to always follow the same plan, Steamhammer tries to exploit the predictability with a variant of epsilon-greedy, modified to account for the fact that Steamhammer knows more openings than it can explore even in a long tournament; a sketch of this selection scheme also appears after the list. If the opponent varies its play, Steamhammer uses an ad hoc weighted random choice algorithm to vary its own play in response, so that its choices are difficult to exploit.

e) Influence Maps

f) Custom Map Analysis
Yes. I'm in the process of replacing BWTA.

g) Hard-coded or rule-based strategy / tactics
Mostly.

h) Analysis of bots from previous competitions / hard-coded specific bot counter strategies
None. I concluded that in a long tournament it produces only small gains compared to learning an opponent model, so I spent the effort elsewhere.

i) Any techniques not mentioned here
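For illustration, here is a minimal sketch of the kind of trivial 1-ply search mentioned in (a), applied to "where to expand to next?". The Base struct, the evaluation terms, and the weights are invented for the example, not Steamhammer's actual code; the point is only the shape of the technique: enumerate the legal alternatives, score each with a hand-written evaluator, keep the best.

    #include <vector>

    // Hypothetical candidate base; Steamhammer's real data structures differ.
    struct Base
    {
        double minerals;        // remaining mineral value
        double gas;             // remaining gas value
        double distFromMain;    // ground distance from our main base
        double distFromEnemy;   // ground distance from the enemy main
        bool   taken;           // already occupied by either side
    };

    // Hand-written evaluator: prefer rich bases that are close to home and
    // far from the enemy. The weights are made up for the example.
    static double evaluateBase(const Base & base)
    {
        return base.minerals + 2.0 * base.gas
             - 3.0 * base.distFromMain
             + 1.0 * base.distFromEnemy;
    }

    // The whole "search": score every legal alternative once, keep the best.
    const Base * chooseNextExpansion(const std::vector<Base> & bases)
    {
        const Base * best = nullptr;
        double bestScore = 0.0;
        for (const Base & base : bases)
        {
            if (base.taken) continue;           // not a legal alternative
            const double score = evaluateBase(base);
            if (!best || score > bestScore)
            {
                best = &base;
                bestScore = score;
            }
        }
        return best;        // may be null if every base is taken
    }

And a sketch of the modified epsilon-greedy opening selection described in (d), assuming per-opening win/loss counts against the current opponent have been read back from the competition I/O files. OpeningRecord, the epsilon value, and the simple bias toward untried openings are illustrative stand-ins; Steamhammer's actual modification is more involved.

    #include <string>
    #include <vector>
    #include <random>

    struct OpeningRecord
    {
        std::string name;
        int wins = 0;
        int losses = 0;
        int games() const { return wins + losses; }
    };

    // Modified epsilon-greedy: exploit the best-scoring opening most of the
    // time; otherwise explore, preferring openings never yet tried against
    // this opponent, since the opening book is larger than can be fully
    // explored even in a long tournament. Assumes the book is non-empty.
    std::string chooseOpening(const std::vector<OpeningRecord> & book,
                              std::mt19937 & rng,
                              double epsilon = 0.1)
    {
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        if (coin(rng) < epsilon)
        {
            // Explore: untried openings first.
            std::vector<size_t> candidates;
            for (size_t i = 0; i < book.size(); ++i)
                if (book[i].games() == 0) candidates.push_back(i);
            if (candidates.empty())             // everything tried at least once
                for (size_t i = 0; i < book.size(); ++i) candidates.push_back(i);
            std::uniform_int_distribution<size_t> pick(0, candidates.size() - 1);
            return book[candidates[pick(rng)]].name;
        }
        // Exploit: highest win rate, with a +1/+2 prior so an unplayed
        // opening scores 0.5 rather than dividing by zero.
        size_t best = 0;
        double bestRate = -1.0;
        for (size_t i = 0; i < book.size(); ++i)
        {
            const double rate = (book[i].wins + 1.0) / (book[i].games() + 2.0);
            if (rate > bestRate) { bestRate = rate; best = i; }
        }
        return book[best].name;
    }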
Q: How did you become interested in StarCraft AI?
I love the game, and I have always been an AI person.

Q: How long have you been working on your bot?
Since December 2016.

Q: About how many lines of code is your bot?
wc counts 123K lines. That includes 82K in BOSS, which is not used for Zerg.

Q: Why did you choose the race of your bot?
It's the most fun.

Q: Did you use any existing code as the basis for your bot? If so, why, and what did you change?
UAlbertaBot is a strong base. The changes are many.

Q: What do you feel are the strongest and weakest parts of your bot's overall performance?
Tactical play and micro are improved this year, but still weaker than the strategic decisions. Worst is zergling micro, due to a bug introduced in this version; it especially affects ZvZ games.

Q: If you competed in previous tournaments, what did you change for this year's entry?
Too much to list. Few actions the bot takes are exactly the same as last year.

Q: Have you tested your bot against humans? If so, how did it go?
Antiga/Iruian (a skilled amateur) played a good number of games against it, experimenting with different builds, and lost 2 games. Steamhammer can win occasionally.

Q: Any fun or interesting stories about the development / testing of your bot?
Versions made shortly after the major tournaments seem to play the best. Apparently my tournament preparation timing is a little off.

Q: Any other projects you're working on that you'd like to advertise?
http://satirist.org/ai/starcraft/blog/
http://satirist.org/whale/

Optional Opinion Questions:

Q: What is your opinion on the current state of StarCraft AI? How long do you think before computers can beat humans in a best-of-7 match?
The best bots now have good chances against amateurs, but are still nothing next to pro players. We don't have a good strength measure to estimate how long it may take.

Q: What do you feel is the biggest hurdle (technological or otherwise) in improving your bot's AI?
I don't have the resources to do long learning runs. Also, before I even get that far, I have a lot of restructuring planned to put the bot into good shape to support machine learning at different levels of abstraction. It will take a long time.

Q: Which bots are the most interesting to you and why?
CUNYBot has an interesting economic model. SAIDA has the most detailed understanding of what its opponent is doing.

AIIDE Specific Question:

Q: Do you feel that the current format of iterated round-robin win percentage is a good indicator of bot skill ranking? If not, how would you change it?
Yes. It's simple and fair, and any alternative has a hill to climb. The underlying issue in tournament design is that one number cannot capture the strength of a participant. We can tell because the crosstable is not a smooth curve; each player does better against some opponents and worse against others of a given average strength. In the limit as the number of participants goes to infinity, the per-opponent anomalies average out to zero, but in practice the number of participants is small. Given that, the only fair choice is to weight all participants equally, as in a round robin. Theoretically you might be able to do a little better with a Bayesian analysis to choose the pairings with the highest information content and then calculate the fair results... but I've never seen an algorithm to do that, and if one exists, it would be difficult to sell people on.
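To make the crosstable point concrete, here is a toy calculation with invented numbers (not tournament data): three bots in a rock-paper-scissors relationship all finish a round robin at exactly 50%, even though no pairing between them is close to even.

    #include <cstdio>

    int main()
    {
        // Invented crosstable: winPct[i][j] = how often bot i beats bot j.
        // A dominates B, B dominates C, C dominates A.
        const char * names[3] = { "A", "B", "C" };
        const double winPct[3][3] = {
            { 0.0, 0.8, 0.2 },   // A vs A, B, C
            { 0.2, 0.0, 0.8 },   // B
            { 0.8, 0.2, 0.0 },   // C
        };
        for (int i = 0; i < 3; ++i)
        {
            double total = 0.0;
            for (int j = 0; j < 3; ++j)
                if (j != i) total += winPct[i][j];
            // Each bot averages exactly 50% despite the lopsided pairings.
            std::printf("%s: %.0f%%\n", names[i], 100.0 * total / 2);
        }
        return 0;
    }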