In my previous post I showed a way to process a batch of data concurrently using Haskell. One drawback to that approach was that all the data to be processed was loaded into memory. This evening I found a better way that only loads a small subset of data into memory at a time.
EDIT: I present a better way to do this in part 2 of this post.
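The streaming idea can be made concrete with a minimal, self-contained sketch (the function and names here are hypothetical, not the post's actual code): a fixed pool of worker threads drains a channel while the producer feeds it, so only the in-flight items need to live in memory. One caveat: a plain Chan is unbounded, so a bounded queue would be needed to strictly cap memory if the producer outruns the workers.

```haskell
import Control.Concurrent
  (forkIO, newChan, readChan, writeChan, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM_)

-- Run `consume` over `items` with a pool of `nWorkers` threads.
-- Items flow through a channel, so only those in flight are forced.
parallelEach :: Int -> [a] -> (a -> IO ()) -> IO ()
parallelEach nWorkers items consume = do
  chan <- newChan
  done <- newEmptyMVar
  -- Each worker loops until it reads the Nothing sentinel.
  replicateM_ nWorkers $ forkIO $
    let loop = readChan chan
                 >>= maybe (putMVar done ()) (\x -> consume x >> loop)
    in loop
  forM_ items (writeChan chan . Just)           -- feed the work lazily
  replicateM_ nWorkers (writeChan chan Nothing) -- one sentinel per worker
  replicateM_ nWorkers (takeMVar done)          -- wait for them all
```

Because the input list is consumed lazily as workers drain the channel, rows pulled from a lazy source are processed without materializing the whole batch.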
At my day job I often find myself needing to pull a bunch of rows from the database and process each one. The consumer is generally slow, executing IO-bound or CPU-bound tasks on each row (downloading files, resizing images, processing EPUBs, uploading files, etc.).
Today my friend Bryan gave me a great idea for a Rails monkey patch. It goes something like this:
Happstack makes it simple to return a JSON response with the correct Content-Type header using Aeson.
If you provide an API that allows users to log in with either an email address or a username, (<|>) can provide an elegant solution.
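As a base-library sketch of that idea (the data and lookup functions below are made up for illustration, with in-memory lists standing in for database queries), (<|>) on Maybe tries the email lookup first and falls back to the username lookup only when the first fails:

```haskell
import Control.Applicative ((<|>))
import Data.List (find)

data User = User { userName :: String, userEmail :: String }
  deriving (Eq, Show)

-- Hypothetical data; in a real API these would be database queries.
users :: [User]
users = [User "alice" "alice@example.com", User "bob" "bob@example.com"]

lookupByEmail, lookupByUsername :: String -> Maybe User
lookupByEmail e    = find ((== e) . userEmail) users
lookupByUsername n = find ((== n) . userName) users

-- Accept either identifier: for Maybe, Nothing <|> y = y,
-- and Just u <|> _ = Just u, so the email lookup wins when it succeeds.
findUser :: String -> Maybe User
findUser ident = lookupByEmail ident <|> lookupByUsername ident
```

With this, findUser "bob@example.com" and findUser "bob" both resolve to the same user, and the branching logic never appears explicitly.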
Sunday, September 15, 2013
One of Each
My friend Lance proposed an interesting problem that ended up having a fairly elegant solution, so I figured I'd blog it.
Given an array of arrays, create every possible combination using a single element from each array. For example, given [[1,2],[3,4]], produce [[1,3],[1,4],[2,3],[2,4]].
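One pleasingly small way to get this in Haskell (not necessarily the solution the post settles on) is sequence in the list monad: the outer list is read as a series of nondeterministic choices, and sequence enumerates every way to pick one element from each inner list.

```haskell
-- At [[a]], sequence picks one element from each inner list in every
-- possible way, thanks to the list monad's nondeterminism.
oneOfEach :: [[a]] -> [[a]]
oneOfEach = sequence
```

For example, oneOfEach [[1,2],[3,4]] yields [[1,3],[1,4],[2,3],[2,4]].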
Thursday, August 29, 2013
Making our build server talk
At my day job, we have an iMac that runs Jenkins and our test suite, among other things. It also happens to have some really nice speakers attached to it. One thing it was sorely lacking was the ability to remotely execute the OS X say command.
That was until today.
There comes a point in every developer's life when he or she needs to mix pure and monadic validations. Fortunately, with Digestive Functors, it is very straightforward.
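The shape of the problem can be sketched with the base library alone (this is not the Digestive Functors API; there, pure and effectful predicates go through its check and checkM combinators). The trick is the same either way: a pure rule can always be lifted into the monadic shape with return, after which both kinds compose in one pipeline. All names below are hypothetical.

```haskell
-- A validator either rejects a value with a message or passes it on.
type Validator m a = a -> m (Either String a)

-- A pure rule, lifted into any Monad with `return`.
notEmpty :: Monad m => Validator m String
notEmpty s = return (if null s then Left "may not be empty" else Right s)

-- A monadic rule; here a plain list stands in for a database
-- uniqueness query that would really need IO.
isUnique :: [String] -> Validator IO String
isUnique taken s =
  return (if s `elem` taken then Left "already taken" else Right s)

-- Chain two validators, stopping at the first failure.
andThen :: Monad m => Validator m a -> Validator m a -> Validator m a
andThen v w x = v x >>= either (return . Left) w
```

Because notEmpty is polymorphic in the monad, it slots into the IO pipeline unchanged: notEmpty `andThen` isUnique taken runs the cheap pure check first and only hits the "database" when it passes.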
By default, the underlying SAX parser used by the Scales pull parser validates the DTD on load. This can be really slow and requires an internet connection. You can ignore the DTD by creating an instance of SimpleUnboundedPool and telling Scales to use it.