I’ve finally got time to study Windows Azure a bit more tonight.
The goal for this session was simple: reliably execute arbitrary code against the running cloud without even touching the Development Fabric.
This could be useful, for example, in automated benchmarking of evolutionary algorithms, triggered by the CC.NET integration server. Running a test against Rastrigin's Function in a multi-dimensional (8+) search space is one of the simplest ways to evaluate the speed of convergence, but it definitely takes a fair bit of CPU.
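For reference, Rastrigin's function in n dimensions is f(x) = 10·n + sum(x_i^2 − 10·cos(2π·x_i)): a single global minimum at the origin surrounded by a lot of local minima, which is exactly what makes it a convenient convergence benchmark. A quick C# sketch of the evaluation (just an illustration, not the code from my benchmark suite):

```csharp
using System;

static class Benchmarks
{
    // Rastrigin's function: f(x) = 10*n + sum(x[i]^2 - 10*cos(2*pi*x[i])).
    // Global minimum is 0 at the origin; the many deep local minima make it
    // a good stress test for the convergence speed of an evolutionary algorithm.
    public static double Rastrigin(double[] x)
    {
        double sum = 10.0 * x.Length;
        foreach (double xi in x)
            sum += xi * xi - 10.0 * Math.Cos(2.0 * Math.PI * xi);
        return sum;
    }
}
```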
So “outsourcing” heavy computational tasks from the integration server to the Windows Azure Cloud might come in really handy.
This could be done by:
- Configuring the cloud with a few worker roles that regularly check the Blob storage service for incoming messages. Each incoming message is loaded as an assembly and executed, and the output is saved to the outgoing queue (see the sketch after this list).
- Creating a simple console application that uploads the assemblies to Blob storage from the outside and retrieves the results back.
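Here is roughly what the worker side looks like. The storage access is hidden behind a hypothetical IJobStore helper (my naming, the actual Blob/Queue client code is omitted); the interesting part is how the uploaded assembly gets loaded and executed via reflection:

```csharp
using System.Reflection;

// Hypothetical storage wrapper (my naming); the real implementation talks to
// the Blob storage service for incoming assemblies and to the Queue for results.
public interface IJobStore
{
    byte[] TryGetNextJob();        // raw bytes of an uploaded assembly, or null
    void PutResult(string result); // pushed to the outgoing queue
}

public class JobRunner
{
    readonly IJobStore _store;

    public JobRunner(IJobStore store)
    {
        _store = store;
    }

    // Called regularly from the worker role's main loop.
    public void Poll()
    {
        byte[] payload = _store.TryGetNextJob();
        if (payload == null) return;

        // Load the uploaded assembly into the current AppDomain.
        // Once loaded, it cannot be unloaded again (see below).
        Assembly assembly = Assembly.Load(payload);

        // Convention assumed here (not a fixed contract): the payload exposes
        // a public static Program.Run() method that returns a string result.
        MethodInfo run = assembly.GetType("Program")
            .GetMethod("Run", BindingFlags.Public | BindingFlags.Static);

        string result = (string)run.Invoke(null, null);
        _store.PutResult(result);
    }
}
```

The console application on the other side just pushes the assembly bytes into the blob container and then polls the queue until the result shows up.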
This scenario was simple to implement, and it worked.
However, there was one small issue with reliability: memory leaks. As you know, a loaded assembly cannot be unloaded; only the whole AppDomain that hosts it can. So as the worker keeps loading and executing code, its memory footprint keeps growing. The cloud is supposed to kill worker nodes that get out of hand, but that is just not a nice approach.
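Since unloading an entire AppDomain is the only way the CLR gives the memory back, the cleaner (but more involved) fix would be to run each payload in a throwaway domain. This is a sketch of that standard isolation pattern, not something the simple worker above does:

```csharp
using System;
using System.Reflection;

// Runs a payload in a separate AppDomain so the loaded assembly can be thrown
// away together with the domain. Sketch of the standard CLR isolation pattern;
// error handling and the same Program.Run() convention as above are assumed.
public class IsolatedExecutor : MarshalByRefObject
{
    public string Execute(byte[] payload)
    {
        Assembly assembly = Assembly.Load(payload);
        MethodInfo run = assembly.GetType("Program")
            .GetMethod("Run", BindingFlags.Public | BindingFlags.Static);
        return (string)run.Invoke(null, null);
    }

    public static string ExecuteIsolated(byte[] payload)
    {
        AppDomain domain = AppDomain.CreateDomain("job-sandbox");
        try
        {
            var executor = (IsolatedExecutor)domain.CreateInstanceAndUnwrap(
                typeof(IsolatedExecutor).Assembly.FullName,
                typeof(IsolatedExecutor).FullName);
            return executor.Execute(payload);
        }
        finally
        {
            // Unloading the domain is the only way to release the payload assembly.
            AppDomain.Unload(domain);
        }
    }
}
```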
Theoretically we can set the role state to RoleStatus.Unhealthy to make the fabric reload it, but in practice this had no effect. Neither did calling RoleEntryPoint.Stop.
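By “setting the role state” I mean overriding the health check that the fabric polls on the RoleEntryPoint. As far as I can tell from the current CTP, that looks roughly like this (the GetHealthStatus override is my reading of the ServiceRuntime API; names may change in later SDKs):

```csharp
using System.Threading;
using Microsoft.ServiceHosting.ServiceRuntime;

// Worker role that reports itself as unhealthy once a recycle is requested,
// hoping that the fabric tears the instance down and starts a fresh one.
// Based on my reading of the current CTP ServiceRuntime API.
public class WorkerRole : RoleEntryPoint
{
    static volatile bool _recycleRequested;

    public static void RequestRecycle()
    {
        _recycleRequested = true;
    }

    public override void Start()
    {
        while (true)
        {
            // ... poll the blob store and execute incoming assemblies ...
            Thread.Sleep(10000);
        }
    }

    public override RoleStatus GetHealthStatus()
    {
        // In theory the fabric should recycle an unhealthy instance;
        // in practice, against the Development Fabric, nothing happened.
        return _recycleRequested ? RoleStatus.Unhealthy : RoleStatus.Healthy;
    }
}
```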
The lack of effect might be because the cloud is hosted in the Development Fabric (the real Windows Azure Fabric will most likely behave differently, but I do not have access to it yet), or simply because the technology is just getting started.
Time will tell. Anyway, so far the technology looks really promising and easy to use.