ICFP Contest 2012: A new language makes for fun and frustration
by Mike Pavone
July 17, 2012
A new language
You can poke at the source for TP in the repository. In particular, check out the samples and modules directories for an idea of what TP code looks like.
The first half of Friday felt pretty good. My teammate, Bill, was picking up the language well, and despite having to stop and add some bindings for file IO to the language, I felt like our pace was good. As the day progressed, however, we started running into serious bugs in the language implementation. As a result, I spent most of my time during the first two days of the contest working on the compiler while Bill worked on the simulator.
Things started to look up on Sunday morning. We were able to get the simulator working for the basic game, and I was finally able to turn my attention to our bot. By the end of the contest I had a bot that could solve the smaller basic maps. Unfortunately, fatigue and deficiencies in the string library prevented Bill from successfully implementing any of the other game modes, and I was too busy trying to make our bot suck less to help.
Our bot performs a breadth-first search of all valid moves, and every k steps it prunes the frontier to the n best states according to a heuristic based on the number of lambdas collected, the distance to the next goal (either a lambda or the open lift), and whether the lift is still accessible. For our submission, k was set to 3 and n to 128. Additionally, we maintained a hash set of game states (only the hashes themselves were stored, for space reasons) to eliminate useless paths (waiting when no rocks are in motion, moving back and forth over the same spaces, etc.). We also stored the current best terminal state, so that if we failed to find a full solution we could return that instead.
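The search strategy above can be sketched in Python (the actual bot was written in TP, so this is illustrative only; the game-specific pieces `step`, `score`, `heuristic`, and `is_terminal` here are toy stand-ins, not the real mine rules):

```python
# Illustrative sketch of a breadth-first search that, every k levels,
# prunes the frontier to the n best states by heuristic. Visited states
# are deduplicated by storing only their hashes, and the best terminal
# state seen so far is kept as a fallback answer.

def search(start, step, heuristic, score, is_terminal, moves,
           k=3, n=128, max_depth=30):
    frontier = [(start, "")]
    seen = {hash(start)}              # store hashes only, for space
    best = (score(start), "")         # best (score, path) seen so far
    for depth in range(1, max_depth + 1):
        next_frontier = []
        for state, path in frontier:
            for move in moves:
                new = step(state, move)
                h = hash(new)
                if h in seen:         # skip repeats (useless waits, backtracking)
                    continue
                seen.add(h)
                s = score(new)
                if s > best[0]:
                    best = (s, path + move)
                if not is_terminal(new):
                    next_frontier.append((new, path + move))
        frontier = next_frontier
        if depth % k == 0:            # keep only the n most promising states
            frontier.sort(key=lambda sp: heuristic(sp[0]), reverse=True)
            frontier = frontier[:n]
        if not frontier:
            break
    return best

# Toy stand-in problem: walk along a number line from 0 to position 10.
TARGET = 10
toy_step = lambda pos, m: pos + {"L": -1, "R": 1}.get(m, 0)
toy_score = lambda pos: -abs(TARGET - pos)
toy_heuristic = toy_score             # closer to the goal looks better
toy_terminal = lambda pos: pos == TARGET

best = search(0, toy_step, toy_heuristic, toy_score, toy_terminal,
              "LRW", k=3, n=8, max_depth=15)
```

On this toy problem the search finds the direct path of ten `R` moves; swapping in real mine simulation and scoring recovers the shape of our bot.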
With the settings we used in our submission, our bot is able to solve contest1, contest3, contest4, contest5 and contest7 with scores matching the best ones on the web validator. It solves contest2 with a score of 280 (one less than the best). It manages a respectable 1129 on contest6. On contest8 it fails to stop the initial falling rock from blocking the lift and eventually gets trapped in the chamber on the right side of the map. It ends with a score of only 706. Unfortunately, for contest9 and contest10 it takes far too long to end its search, and I didn't have time to implement SIGINT handling, so it won't return its current best terminal solution when time is up. Eventually (after 10+ minutes) it comes up with a score of 1943 for contest9.
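The missing piece is small; a hedged sketch of the SIGINT handling, in Python rather than TP (as I recall, the organizers send SIGINT when time runs out and expect the bot to print its best route and exit), might look like:

```python
import signal
import sys

# Sketch only: hook SIGINT so that when time is up we print the best
# route found so far and exit cleanly instead of being killed mid-search.
best_route = "A"            # 'A' aborts; the search loop would update this

def on_sigint(signum, frame):
    print(best_route)       # emit the best route found so far
    sys.stdout.flush()
    sys.exit(0)

signal.signal(signal.SIGINT, on_sigint)
```

The search loop would keep `best_route` current as better terminal states are found, so the handler always has something sensible to print.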
Since support for the other game modes was never finished in the simulator, and the bot depends on the simulator code, it generally doesn't do well on maps that use them (though it does manage a 280 on flood2).
You can find our submission in our repo.
While I'm somewhat disappointed with the final performance of our bot (and I'm sure I'll be even more disappointed with it once I see the contest results), I'm pretty satisfied with our performance this year. The immaturity of the language was a major obstacle and time sink, but despite that we were still able to submit a reasonable entry. I also feel like we worked better as a team this year. The task was a lot of fun and a good challenge.