Title: Increasing the Upper Bound for the EvoMan Game Competition
Conference: SYNASC 2020
Tags: Artificial Intelligence, artificial neural network, EvoMan, game-playing agent, Genetic Algorithm, Neuroevolution, Particle Swarm Optimization, Proximal Policy Optimization, Q-learning, Reinforcement Learning
Abstract: This paper compares algorithms for evolving agents able to play the game EvoMan. Our team took part in the "Evoman: Game-playing Competition for WCCI 2020" and won second place. Beyond finding a good agent that satisfies the competition requirements, which emphasise the ability to generalise, we surpassed the best-known non-general upper bound. We exceeded this upper bound with a Proximal Policy Optimization (PPO) algorithm by discarding the competition's generalisation requirements. We also present our other exploratory attempts: Q-learning, Genetic Algorithms, Particle Swarm Optimisation, and their PPO hybridizations. Finally, we map the behaviour of our algorithm across the space of game difficulty, generating plausible extensions to the existing upper bound.