Monday, August 12, 2013

icfp 2013 contest

This year's ICFP contest is in program generation. Niel had the most experience with genetic programming and advised us while Loftie and I did most of the coding.

We used DEAP (an evolutionary computation framework) alongside plain random code generation (which does not take programs from previous iterations into account). We used the built-in Python multiprocessing library to run multiple versions with different parameters and strategies concurrently.

To run the generated lisp-like expressions, each program is translated into a Python source string (containing lambdas) and compiled with eval into a real Python function. Evaluating the compiled functions on inputs is fast enough (at least 70k evaluations per second on a single CPU); Python's eval compilation is more limiting (about 5k compilations per second). A better solution would probably generate partially evaluated functions, but there were variable scope issues that I didn't feel like debugging.
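
A minimal sketch of the translate-then-eval idea (the contest language also had operators like if0, fold and shifts; this toy subset is just illustrative):

```python
# Sketch of compiling a lisp-like expression (as nested tuples) into a real
# Python function via eval. The operator set here is a simplified stand-in.

def to_python(expr):
    """Translate a nested-tuple expression into Python source."""
    if isinstance(expr, str):   # variable reference
        return expr
    if isinstance(expr, int):   # literal
        return str(expr)
    op = expr[0]
    if op == "not":
        return "(~%s)" % to_python(expr[1])
    if op in ("and", "or", "xor", "plus"):
        sym = {"and": "&", "or": "|", "xor": "^", "plus": "+"}[op]
        return "(%s %s %s)" % (to_python(expr[1]), sym, to_python(expr[2]))
    raise ValueError("unknown op: %r" % (op,))

def compile_expr(expr):
    """Compile an expression into a callable via eval on a lambda string."""
    return eval("lambda x: " + to_python(expr))

f = compile_expr(("plus", ("and", "x", 3), 1))   # (x & 3) + 1
```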

This didn't matter much though because the processing time was distributed over the code base: Expression compilation, evaluation, expression parsing, DEAP's individual construction/mutation and individual fitness calculation all took about the same time.

We used PyPy 2.1, which seems to have helped. It's difficult to profile exactly, as it ran over different machines with different architectures and there was stochastic behavior. Most of the problems (up to size 12) we ran on an 8-core laptop. For later problems we started up a few servers and ended up running on 40 cores.

We had a good solution strategy that was solving problems at a good rate. Unfortunately we just started running things too late and ran out of time; there were still a lot of easy problems left on the servers. With another 2 hours we believe we could have ended in the top 25.

The contest was very well done. We encountered no problems with the server or problem specs. Many thanks to the organizers!

Monday, July 16, 2012

ICFP 2012 contest

With this year's ICFP contest I was mostly on my own. Albert helped interpret the spec at the start, though.

The source code is here:

For the lightning round I implemented the spec and wrote some typical Miner bots (greedy, random, ...). The update function was made faster by remembering rock positions and updating only those, instead of every position on the map. The Miner I submitted for the lightning round did an A* search that looked 7 moves ahead, using a hash of the map's state to avoid re-searching recurring positions.
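
The visited-state idea can be sketched like this (simplified to plain depth-limited search rather than A*; the map representation and scoring function are stand-ins for the real ones):

```python
# Depth-limited search with a visited table keyed on a hash of the map state,
# so recurring positions aren't re-searched with a smaller remaining budget.

def search(state, step, score, depth, visited):
    """Return the best score reachable from `state` within `depth` moves."""
    key = hash(state)
    if visited.get(key, -1) >= depth:
        return score(state)          # already searched from here with >= budget
    visited[key] = depth
    best = score(state)
    if depth:
        for nxt in step(state):      # all legal successor states
            best = max(best, search(nxt, step, score, depth - 1, visited))
    return best
```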

7 moves is not really deep enough for anything non-trivial, though: the maps are large enough that the interesting things happen further away. A full transposition table might have helped, and I'm sure the update function could have been much faster if implemented smarter and in Cython. PyPy runs the code in about a quarter of the CPython time. (The A* search also penalized a large Manhattan distance to the nearest lambda, to avoid ties and be attracted to lambdas further than 7 moves away, but I disabled that before submitting because I didn't trust the code.)

In any case, A* is boring; everyone was going to do that, and larger teams would find smarter branching rules in languages faster than Python.

First I implemented a Monte Carlo Miner. For each of the 6 actions, about 10,000 random action sequences of length 0-20 were generated, and the maximum score was used to decide which action to take; repeat for each time step. The primary problem with naive generation is that you need a lot of samples to estimate representative properties.
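
The core loop looks roughly like this (the simulator here is a stand-in; the real one stepped the full mine map):

```python
import random

# Monte Carlo action selection: for each candidate first action, run random
# rollouts and keep the best score seen; pick the action with the highest best.

def monte_carlo_action(state, actions, simulate, rollouts=1000, max_len=20):
    """Pick the action whose random rollouts reach the highest score."""
    best_action, best_score = None, float("-inf")
    for a in actions:
        top = float("-inf")
        for _ in range(rollouts):
            tail = [random.choice(actions)
                    for _ in range(random.randint(0, max_len))]
            top = max(top, simulate(state, [a] + tail))
        if top > best_score:
            best_action, best_score = a, top
    return best_action
```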

A quick read-up on tree searches for Go AIs suggests that "Upper Confidence Bounds for Trees" is state of the art. I took the literature's advice and thought about the problem as a multi-armed bandit, so I implemented an "epsilon-greedy Miner" to select an action, with the Monte Carlo Miner embedded to simulate each "pull of the lever". The results weren't bad; however, it's way too slow for the actual competition.
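
The bandit layer can be sketched like this (the pull function here is a stand-in for one embedded Monte Carlo simulation):

```python
import random

# Epsilon-greedy bandit: mostly pull the arm (action) with the best mean
# reward so far, but explore a random arm with probability epsilon.

def epsilon_greedy(pull, n_arms, iterations=200, epsilon=0.1, seed=None):
    """Return per-arm mean rewards after epsilon-greedy exploration."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    for _ in range(iterations):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(n_arms)      # explore (or try untried arms)
        else:
            arm = max(range(n_arms), key=lambda a: totals[a] / counts[a])
        totals[arm] += pull(arm)
        counts[arm] += 1
    return [totals[a] / counts[a] if counts[a] else 0.0 for a in range(n_arms)]
```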

I couldn't really keep up with the spec updates. It's not that much work, but without a team I just ran out of energy tracking down a few simple bugs. I had a testing framework that helped a lot: input maps and commands mapped against expected final states and scores (created with the example validator). Even with this, there was still a bug at the end where rising water killed the robot too quickly.

I gave up halfway into the competition. There was a lot I still wanted to do: an explicit search tree (instead of the implicit one created by recursive function calls), epsilon-greedy selection at each search level, and eliminating transposition branches in the tree. I also wanted to train parameters such as depth, epsilon and the number of iterations per branch estimate.

This year's problem was a lot of fun. It was simple to start with and had as much complexity as you wanted to put in.

To see my writeups of previous years go to the icfp label page.

Tuesday, June 21, 2011

icfp 2011 contest

Our approach was more relaxed this year. I suspect we were more in the mood for fun hacking than doing well on the leader board.

We played with the idea of using subversion instead of a DVCS. I thought our workflow might require too many merges, but in the end Mercurial's branches worked well for us. It also made bringing old bots back to life pretty easy. Doing the same with subversion would probably have been more complicated.

Our implementation was done in Python. Field values were either integers or objects with run methods (aliased with __call__). Functions like attack, heal and K that take more than one turn were implemented as nested class definitions. It was simple, and worked well. S was the most deeply nested:
class S(Func):
    # S combinator: S(x)(y)(z) == x(z)(y(z)), collecting one argument per call.
    def run(self, arg):

        class S_partial1(Func):

            def __repr__(next_self):
                return "S(%s)" % arg

            def run(next_self1, next_arg1):
                class S_partial2(Func):
                    def __repr__(next_self2):
                        return "S(%s)(%s)" % (arg, next_arg1)
                    def run(next_self2, next_arg2):
                        # All three arguments collected: apply the rule.
                        return arg(next_arg2)(next_arg1(next_arg2))
                return S_partial2()

        return S_partial1()
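
A minimal, self-contained version of the same idea, enough to check that S(K)(K) behaves as the identity (the Func and K classes here are reconstructions for illustration, not our contest code):

```python
# Field values are objects whose run method is aliased with __call__, so
# partial applications are just more objects.

class Func:
    def __call__(self, arg):
        return self.run(arg)

class K(Func):
    def run(self, x):
        class K_partial(Func):
            def run(self, y):
                return x              # K(x)(y) == x
        return K_partial()

class S(Func):
    def run(self, arg):
        class S_partial1(Func):
            def run(self, next_arg1):
                class S_partial2(Func):
                    def run(self, next_arg2):
                        return arg(next_arg2)(next_arg1(next_arg2))
                return S_partial2()
        return S_partial1()

identity = S()(K())(K())              # S K K == I
```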

We started with some very basic bots and worked our way up to more complex behavior. Many of our strategies were rather sensitive to interference, as they were (initially) launched from the first few slots: if those slots are damaged by the opponent, everything fails. Trying to capitalize on this ourselves, we tried to monitor the opponent's actions and attack the slots that get the most "activity".

Unfortunately the monitoring was rather complex and slow, so we ran into timeout issues on the test server.

We also tried to detect when zombie cards were played and heal their target ASAP. That seemed to work quite well: it makes the opponent waste a few moves and messes with their state.

Overall we think our strategies were sufficiently smart, but some of them just took too many turns and too much processing time.

We wasted a few hours debugging a non-issue... I just forgot to call stdout.flush(). Also, in the end we might have submitted a buggy version. Oops! Will have to see how it goes. I should have made unit tests.

This year's ICFP was a lot of fun! The organizers did not make any of the mistakes made in previous years: The problem was well defined, there wasn't any obfuscation for the sake of obfuscation, the barrier to entry was low and there weren't any serious issues with the test server. It was all done very well. From their blog comments they always seemed friendly and in good spirits. It's subtle things like this that I think make a competition fun.

If there is any criticism then it's with the submission form. It probably made the organizers' job much easier, but it would have been nice to see the state of your own submission.

Edit: We only bothered getting one camera working this year.

Thursday, July 08, 2010

ICFP 2010 contest

Our team had 5 members this year: myself, Loftie Ellis, Niel Thom, Pieter Holtzhausen and Albert Swart. It was Niel's first time participating, and he helped tremendously. We had a lot of fun, mostly due to the team dynamics. The contest itself... not so much.

We soon figured out the organisers tried to make things interesting by leaving out large parts of the spec. This might have been a good idea if there weren't multiple plausible interpretations of the single provided example. And Occam's razor did not hold true.
It took us about 24 hours to convince ourselves that our initial assumptions were incorrect: our interpretation did not produce a consistent mapping. If it weren't for a comment on the IRC channel, we might have wasted even more time tracking down non-existent bugs.

Overall I was disappointed in this year's problem. The barrier to entry was just too high. In fact, only about a quarter of the registered teams received any points, meaning the rest either couldn't figure out the circuit format or couldn't compute the prefix. That's just silly. It should be easy for a team to get going, after which things can get as complex as they want.

I doubt many teams got to the really interesting problems. If Loftie hadn't submitted the correct prefix at 5am on Monday (about 7 hours before the contest ended), we wouldn't have had any points.

I hope next year is organised by an American university. From my experience, theirs are usually more fun.

I again made some time-lapse videos of us working. We played a lot of StarCraft towards the end :) I might put them on YouTube some time.

Update: Videos follow

Monday, June 29, 2009

icfp 2009 contest

Here's a quick technical writeup about our entry for this year's ICFP contest.

Our team of 4 was called "xor eax, eax": myself, Loftie Ellis, Pieter Holtzhausen and Albert Swart.

After a bit of discussion, Loftie and I set to work coding up the virtual machine specification in Python. We used the builtin array type to manage the bytecode, but later switched to numpy. Albert translated it to a C version, which Pieter hooked into Python code with ctypes. I wrote a simple GUI in opencv.
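
The shape of such a VM is a simple dispatch loop over a flat memory array. Here is a toy version in the same spirit (the two opcodes and instruction format below are made up; the real instruction encoding came from the contest spec):

```python
from array import array

# Toy dispatch-loop VM: instructions are (op, dst, src1, src2) tuples acting
# on a flat array of doubles, like the builtin array type we started with.

ADD, MUL = 0, 1

def run_vm(code, mem):
    """Execute the instruction list against a copy of `mem`; return memory."""
    data = array("d", mem)
    for op, dst, a, b in code:
        if op == ADD:
            data[dst] = data[a] + data[b]
        elif op == MUL:
            data[dst] = data[a] * data[b]
        else:
            raise ValueError("bad opcode %d" % op)
    return list(data)
```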

We encountered a few issues switching between our 32-bit and 64-bit machines. We had an 8x2GHz machine at our disposal, but ended up not really using it.

Pieter wrote the basic Hohmann transfer code, which Loftie used to manually determine target orbits for the first 2 problem sets. I used Python's struct module to write the submissions file.

We finished well within the lightning round time; in fact most of the work happened in the first 8 hours. At one stage we were fourth on the leader board. Unfortunately we spent too much time on manual tweaking on Saturday and were only able to do problem sets one, two and a single scenario of problem set 3 for the lightning round.

We decided to approach problem 4 as a series of jumps between different orbital radii. In other words, to reach a target we do a Hohmann transfer to a radius that will cause the target and our ship to meet. We did not have enough sleep or energy to explore the optimal order of the jumps in much detail.
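
The two burns of a Hohmann transfer between circular orbits follow from the standard textbook formulas (this is the general calculation, not our contest code):

```python
import math

# Delta-v of a Hohmann transfer between circular orbits of radius r1 and r2;
# mu is the central body's gravitational parameter.

def hohmann_dv(mu, r1, r2):
    """Return (dv1, dv2): the burns entering and leaving the transfer ellipse."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1, dv2

# Example: LEO (~6678 km) to GEO (~42164 km) around Earth.
MU_EARTH = 3.986004418e14  # m^3/s^2
dv1, dv2 = hohmann_dv(MU_EARTH, 6678e3, 42164e3)
```

The LEO-to-GEO example totals a little under 4 km/s, the well-known ballpark for that transfer.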

So on Sunday I set to work setting up scipy's optimization library to search for the optimal radius. We ended up primarily using golden section search. That worked well and converged in around 35 iterations on average.
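
scipy did the searching for us, but the method itself is short enough to sketch in pure Python:

```python
import math

# Golden section search for the minimum of a unimodal function on [a, b]:
# each iteration shrinks the bracket by the golden ratio (~0.618).

def golden_section(f, a, b, tol=1e-6):
    """Return an approximate minimizer of f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c               # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d               # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Shrinking a bracket of width 5 down to 1e-6 takes about 32 iterations, consistent with the ~35 we saw.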

Monday we didn't really do anything (except for playing a game of StarCraft). The last update of the scoreboard placed us at position 77. Since then we raised our score to around 2200, if I recall correctly, primarily by completing the remaining scenarios of problem 3.

(Edit: The final scores are in. We finished 70th out of 328 with a score of 2275.)

We should have spent less time manually tweaking behavior and rather tried to find a more general solution. We also should have looked at the scenarios in more detail, since we later discovered they were simpler than the assumptions we were working with. I think these two factors sucked up more of our energy than we should have allowed.

I was hesitant when we began writing the VM in Python, but it was surprisingly efficient; the C version was only around 5x faster, I would guess. Some of the other members were surprised by some of Python's scoping strangeness. Its interpreted nature allowed us a lot of experimentation, however. We used a local subversion repository, which worked out well.
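
A classic example of the kind of scoping surprise Python springs (illustrative; not necessarily the exact quirk that bit us):

```python
# Closures capture variables, not values: every lambda below sees the final
# value of i after the loop has finished.
late = [lambda: i for i in range(3)]
late_results = [f() for f in late]        # [2, 2, 2], not [0, 1, 2]

# The usual fix: bind the current value through a default argument.
early = [lambda i=i: i for i in range(3)]
early_results = [f() for f in early]      # [0, 1, 2]
```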

All things considered it was a good competition. The only major upset was a re-release of the problem 4 binary, which invalidated a few hours of our work. I have never really read about astrodynamics before this contest and found it very interesting.

During the competition I continuously captured one image per minute from a webcam mounted on the wall with sticky tape. Using mencoder I compiled the images into a 25fps video, so each second of video represents 25 minutes of real time.

Tuesday, July 15, 2008

ICFP 2008 programming contest

It's been about a week since our team (myself, Pieter Holtzhausen, Loftie Ellis and Albert Swart) completed the ICFP programming contest. I think I've had enough sleep now for a little writeup.

The most important problem was that the provided sample maps and server were not necessarily representative of what we would be judged against. Assuming martians are not a threat (either not smart or too few), the optimal strategy is to ignore them, keep accelerating, and dodge craters and boulders on the way to the home base. If a map contains martians of significant intelligence, they have to be avoided.

Our approach was to continuously switch between 2 modes:

1. Given a goal position (initially the home base), move towards that goal. If there is an object in the path towards the goal, set the goal to be directly to the side of the obstacle. Once we arrive at a goal, reset the goal to the home base and repeat.

2. If our strategy is to take martians into account, predict if we will collide with a martian on our current velocity and path. If we will collide, change the goal position to be somewhere else.

We decided on our mode based on what was learned about the map. By default we start in the first mode. If we were killed by a martian, or observed more than 4 martians at once in a previous run, we start the next run in martian-avoidance mode. If we ran into a crater, we ignore the martians again by starting the next run in the first mode.
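
The per-run decision boils down to a few lines (the names and the threshold constant here are stand-ins; the real bot kept similar statistics between runs):

```python
# Choose which mode the next run starts in, based on how the last run ended.

GOAL_MODE, AVOID_MODE = "goal", "avoid-martians"

def next_mode(killed_by_martian, max_martians_seen, died_in_crater):
    """Decide the starting mode for the next run."""
    if died_in_crater:
        return GOAL_MODE              # craters killed us: focus on driving
    if killed_by_martian or max_martians_seen > 4:
        return AVOID_MODE             # martians are a real threat on this map
    return GOAL_MODE
```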

We wrote everything in Python. We spent most of the time tweaking the goal-following code, keeping the server communication stable, and fighting the lack of a Windows version of the server. Towards the end we tried hooking A* path finding into the goal-following system, but decided not to include it in our final version. We also had an alternative visualization system based on OpenCV.

I'm quite proud of our entry. We never collide with any craters or boulders if enemies are ignored. If enemies are taken into account, we effectively make running away from them our short-term mission. I'm a bit worried they will test our code against maps with dead ends, or that our martian-avoidance code fails (because it hasn't been thoroughly tested).

I predict the first few places will be taken by entries that can characterize the type of map quickly.

Friday, March 30, 2007

buying a katana

I've been bitten by the Kendo bug. I've been doing it for a few months now and I feel it's time I got myself a real blade.

However, I don't want a wallhanger. I want something that can cut. Fortunately Cheness focuses on quality cutting blades available for less than 300 USD.

I was really impressed by their "Shura". I've read a lot of really good reviews, especially the one at sword-buyers-guide and the destruction test at RS knives.

Living in South Africa, I knew there would be some problems getting it shipped here. So I decided to purchase it through Paul from sword-buyers-guide. I am very thankful to Paul for going out of his way to help me. His customer support is excellent.

The sword is now on its way. Hopefully it will get here safely.

Sunday, December 03, 2006

interactive recognition of hand-drawn circuit diagrams

After six years of study I completed my degree of BEng/MScEng (with computer science) last week. Yay! Here is a video of the final prototype:

You can download my full thesis or some slides.

Edit 2011/02/01: Here is the source code. It's a bit of a mess and it's going to take a bit of work to get it going on newer systems, because it probably requires an older wxWindows. The interesting parts are in src/. If you find it useful or reference my work, I'd love to hear about it.

One of my favourite opinionated parts:

It is generally believed that state coverage is superior to code coverage. It is the author's opinion, however, that state coverage is less important in Python than other languages. As Python typically processes collections (such as lists) using an iterator and not incrementing indices, there are less ways in which errors can occur. Since variables also need not be declared it is also less likely that null pointers would be dereferenced. This is, for example, one of the most frequent sources of errors in the Java programming language.