We played with the idea of using Subversion instead of a DVCS. I thought our workflow might require too many merges, but in the end Mercurial's branches worked well for us. They also made bringing old bots back to life pretty easy; doing the same with Subversion would probably have been more complicated.
Our implementation was done in Python. Field values were either integers or objects with run methods (aliased with __call__). Functions like attack, heal, and K that take more than one turn were implemented as nested class definitions. It was simple and worked well. S was the most deeply nested:
    class S(Func):
        def run(self, arg):
            class S_partial1(Func):
                def __repr__(next_self):
                    return "S(%s)" % arg
                def run(next_self1, next_arg1):
                    class S_partial2(Func):
                        def __repr__(next_self2):
                            return "S(%s)(%s)" % (arg, next_arg1)
                        def run(next_self2, next_arg2):
                            return arg(next_arg2)(next_arg1(next_arg2))
                    return S_partial2()
            return S_partial1()
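For context, the base class behind this might look something like the sketch below. This is a reconstruction of the "run aliased with __call__" idea described above, not our actual contest code, and the identity card I at the end is purely illustrative:

```python
class Func:
    """Base class for non-integer field values: calling the object runs it."""
    def run(self, arg):
        raise NotImplementedError

    def __call__(self, arg):
        # Alias application to run, so partial applications like
        # S(f)(g)(x) read naturally.
        return self.run(arg)

# Hypothetical example: the identity card, which returns its argument.
class I(Func):
    def __repr__(self):
        return "I"
    def run(self, arg):
        return arg
```

With this in place each multi-argument card is a chain of such objects, and each application peels off one layer, which matches the nested-class style of the S listing above.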
We started with some very basic bots and worked our way up to more complex behavior. Many of our strategies are rather sensitive to interference, as they were (initially) launched from the first few slots; if those slots are damaged by the opponent, everything fails. Trying to capitalize on this ourselves, we monitored the opponent's actions and attacked the slots that got the most "activity".
Unfortunately, the monitoring was rather complex and slow, so we ran into timeout issues on the test server.
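The idea itself is simple; a minimal sketch of the bookkeeping is below. The function names and the move format are assumptions for illustration, not our actual monitor (which did considerably more work, hence the timeouts):

```python
from collections import Counter

# Count how often each opponent slot number shows up in their moves,
# then target the busiest slot.
activity = Counter()

def observe(opponent_slot):
    """Record one opponent action touching the given slot."""
    activity[opponent_slot] += 1

def busiest_slot():
    """Slot with the most recorded activity, or None if nothing seen yet."""
    if not activity:
        return None
    return activity.most_common(1)[0][0]
```

A real version also has to decide *when* attacking the busy slot is worth more than advancing your own plan, which is where most of the complexity (and the slowness) came from.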
We also tried to detect when zombie cards are played and heal their targets ASAP. This seems to work quite well: it makes the opponent waste a few moves and messes with their state.
Overall we think our strategies were sufficiently smart, but some of them just took too many turns and too much processing time.
We wasted a few hours debugging a non-issue: I had simply forgotten to call stdout.flush(). Also, in the end we might have submitted a buggy version. Oops! We'll have to see how it goes. I should have made unit tests.
This year's ICFP was a lot of fun! The organizers did not make any of the mistakes made in previous years: The problem was well defined, there wasn't any obfuscation for the sake of obfuscation, the barrier to entry was low and there weren't any serious issues with the test server. It was all done very well. From their blog comments they always seemed friendly and in good spirits. It's subtle things like this that I think make a competition fun.
If there is any criticism, it's of the submission form. It probably made the organizers' job much easier, but it would have been nice to be able to see the state of your own submission.
Edit: We only bothered getting one camera working this year.