From cig2014@easychair.org Wed May 21 15:56:25 2014
Date: Wed, 21 May 2014 20:26:14 +0200
From: CIG 2014
To: Todd Wareham
Subject: CIG 2014 notification for paper 19

We regret to inform you that your submission to the 2014 IEEE Conference on Computational Intelligence and Games (CIG 2014) was not accepted. We received a large number of submissions, and each paper received careful consideration before decisions were made. The reviewers' comments on your paper are included below for your reference. We thank you for your submission and hope that you will nonetheless consider attending the conference and enjoying the exciting program planned.

Best regards,
Moshe Sipper and Mirjam P Eladhari, Program Chairs of CIG 2014

----------------------- REVIEW 1 ---------------------
PAPER: 19
TITLE: Exploring Options for Efficiently Evaluating the Playability of Computer Game Agents
AUTHORS: Todd Wareham and Scott Watson

----------- REVIEW -----------
This paper presents some interesting (although mostly expected) theoretical results around the difficulty of essentially testing interactions between NPCs and between NPCs and players for achieving given goals (which the authors call playability). As expected, without quite substantial limitations on the problem, this problem is intractable in the usual theoretical sense, i.e., not solvable in polynomial time.

As stated above, I find the results interesting and therefore evaluate the paper as a weak accept, but there are quite a few things that I consider problematic. First of all, the authors make a few assumptions that I consider problematic: treating any polynomial as tractable will only make game programmers laugh! People for whom the constant factor in linear-time behavior is scrutinized in order to shave off a factor of 1.5 or so do not consider just any polynomial tractable.
Also, as many games show, if there is a successful path through the interactions, that is acceptable, and the authors should keep in mind that game designers can manipulate the scenario or level to achieve that. (I recommend looking at Williams-King, D.; Denzinger, J.; Aycock, J.; Stephenson, B.: The Gold Standard: Automatically Generating Puzzle Game Levels, Proc. AIIDE 2012, Palo Alto, 2012, pp. 191-196, for how levels with (negative) interactions with NPCs can be generated that are solvable and do not require much run time.)

In general, I am not very happy with the references. First of all, there is a strong relation to multi-agent systems, and while I am not very familiar with theoretical results there, there are definitely people interested in the complexity of analyzing interactions between agents (MAS is all about interactions). Also, looking at the definitions, it is not possible to forget a fact, and that is definitely a limitation (although I expect it will not change the theoretical results). For me, the attacks that had me forgetting the properties of my weapons in Zangband were always the most annoying.

The biggest problem I have with the paper is that we do not even have proof sketches, or at least comments about the ideas behind the proofs. Giving a URL is definitely not enough! I would suggest using the one and a third pages that are empty, plus the space that can be freed by getting rid of the part of the discussion section starting with the paragraph "To conclude ..." (I have no clue what the authors think they accomplish with that part of the section), to provide proof sketches or at least proof ideas (i.e., what problem is reduced and what polynomial transformation function between problems is used).
----------------------- REVIEW 2 ---------------------
In this paper the authors present a complexity-theoretic method for evaluating the playability of computer game agents. In essence, the authors provide an interesting approach to evaluating the playability of (A)FSM-based agents. However, the authors provide little argumentation for a) why PCG should be extended to include the generation of agents, b) why it would be, even remotely, interesting for a game designer to hand over direction of the play experience to an (A)FSM, and c) why an (A)FSM would be the sought-after socially believable agent, in relation to 1) different game genres and 2) whether, in the exemplified genre, it would be enough to improve the play experience. The authors should motivate their research in the context of agents or AI controllers rather than, as they do, 1) trying without depth to shoehorn it into PCG after an extremely short motivation and 2) not relating it to industry/game-design needs. Just go for an MAS approach. PCG methods explore control and variety together with the quality of the content, which is not discussed at any length in the paper. I would also like to see an extended discussion of how the contribution can help PCG methods. In short, provide a better motivation and intended target audience for the paper and I am sure it will get accepted. MAS and AI controllers, perhaps?

----------------------- REVIEW 3 ---------------------
This could be an interesting paper. It proposes a framework for checking whether it is possible for a player to achieve desired outcomes in a game through interactions with other players or agents. I have three problems with the paper: 1) the maths is a little too hard for me to follow; 2) I am not sure how reasonable it is to have the proofs in a separate online location; 3) I am not sure that there is any CI here; the connection with CI seems tenuous. I am inclined to suggest that it is novel and potentially interesting enough for CIG, if only someone could check the proofs. One small typo: "whether are not" should be "whether or not".

----------------------- REVIEW 4 ---------------------
This paper describes a method for evaluating the playability or correctness of procedurally generated NPC agents for games, a problem which is shown to be NP-hard using complexity analysis for a specific category of agents. The paper suggests that certain conditions can be identified under which evaluating the playability of generated agents would be computationally tractable, by restricting the inputs through the game agents, the human players, or the game design. The paper also suggests that such tractability analysis could be used to identify game designs that would be playable and hence appealing to human players. Unfortunately, the technical aspects of the paper fall beyond my expertise, so I must defer the evaluation of the paper to the other reviewers.

----------------------- REVIEW 5 ---------------------
The paper proposes finite state machines to evaluate the playability of autonomous agents that play game characters. I was not sure whether the state machines also generate the behaviour of the characters, or whether they only represent the observable interactions while being implemented in some other form.
I assumed that the state machines implement the behaviour of the characters as well. Nevertheless, the generation of such state machines is not discussed. The paper begins by arguing for the need for procedural content generation and for validating the generated solutions; thus, I assumed that the state machines are generated automatically. However, this is not clear, and the authors should make it clearer.

The problem of assessing whether the autonomous characters in a game are providing the correct experience is quite relevant and important, and not very often tackled. For this reason, I believe that the paper deserves an opportunity. But the goals assessed for the player experience are quite limited, and, more importantly, the framework seems to assume that the agents are reactive and not proactive.

In the introduction it is stated "that variations of the search techniques described above could be adapted for this purpose", but no search techniques were described before. It is not clear why you need three examples (e.g., Figs. 1 and 2); those examples should be motivated better and explained in the text. On page 3 it is stated: "for simplicity, we only consider the case in which the other agent X is a human player". This needs further justification: what does this assumption simplify, and why? Table I is a bit hard to read, and the results A and G, mentioned in the text, are not there. You should better support the implications presented in the discussion. For example, in this one, "Only restrictions of groups of parameters for which playability is known to be tractable might affect the perceived human difficulty of gameplay", the relation that you draw to the human perception of difficulty is not trivial; please give more details regarding the argument for this. In the conclusions, you refer to a "promising augmentation of the classic finite-state machine", but the novelty and value of the "new" state machines is not demonstrated in the paper. Why are they promising?