I was listening to Bruce Tate on the Mostly Erlang podcast when, around the 57th minute of the episode, the conversation turned to the feasibility of FP becoming mainstream. He explains that FP has not emerged yet, but that it is reluctantly about to, with the end of Moore’s law as the main driver, since concurrent/parallel programming is becoming more critical.

I agree with him about this trend, although I also think it’s easy to underestimate the time it will take. I think it’s going to be very slow.

Another interesting idea he mentions is that, within the FP field, a clear winner may arise, as happened with C/C++ and all their descendants. The four candidates he postulates are:

He dismisses Haskell for not being approachable enough, similarly to when Lisp came out.

I also think there’ll be a winner, where by winner I mean the language most programmers and companies will use (i.e. Java would be a winner now). But I don’t see being a winner, in this sense, as so relevant. To me, what’s really important is how prominent the software written in that language is going to be, and how much of current business will run on code written in it. A language may not be considered mainstream by programmers and companies, and yet there may be several critical, widespread pieces of software that wouldn’t be possible without it. For example, I see this trend going on with Erlang being used in critical IM services, distributed databases, message queuing systems, etc. Current shops with expertise in Erlang, even if not mainstream, have a clear competitive advantage when writing scalable systems.

In this sense, I think Haskell may never be a winner, but the question remains open about how crucial it will be in the future. What will the business prospects be for the few companies proficient in Haskell? Who cares if there are few proficient Haskell programmers, as long as there is a pool of Haskell programmers to hire (which currently there seems to be), and taking into account the strong hint that someone proficient in Haskell already has very high chances of being a very productive programmer?

I guess this same argument was used with Lisp, but now there are some differences. First, Lisp idioms are inherently very heterogeneous, whereas Haskell, in spite of not having an equivalent of being Pythonic, is much more mutually understandable among programmers working on different projects. Secondly, there are certain parallelization/concurrency problems that are very hard to get right in traditional languages, and which Haskell tackles with features like STM. I’m not an expert in Lisp, but I guess it lacked the killer feature that makes using it a need rather than just a nice advantage.

In conclusion, my opinion is that Haskell has a very good chance of becoming very relevant even if it doesn’t become the FP language winner. I see its purity and side-effect management as enablers of killer features in the area of concurrent/parallel programming. Somehow, the vibrancy of the Haskell community these days reminds me of the Python community before Google made it popular. Will all this be enough to make Haskell relevant? I guess time will tell.

These days, with the pervasiveness of the Internet, an impressive wave of new spoken audio content is steadily making its way into the mainstream. I’ve always been a big fan of spoken radio, but now I find myself listening only to independent podcasts, mostly hosted by amateurs, who are able to provide fresh and genuine content in a medium where innovation has been stagnant for many years.

The least I can do for all the countless hours of entertainment they are giving me is to publicly express my admiration for them.

Hardcore History

When a new episode of this podcast comes out I try to wrap up everything I’m doing and take a hiking route with my dog long enough to listen to the new episode uninterrupted. I think what makes Dan Carlin outstanding as a narrator is his great ability to immerse the listener in the situation of the era and to describe how it felt to be in the shoes of the people living through it. All in all, after listening to several episodes, I came to realize we are not so different from our ancestors, and in most cases, no matter how horrible the consequences of some actions were, we’d probably have done the same.

HistoCast (Spanish)

This history podcast is run in a tertulia (round-table) format by a panel of history aficionados. The general tone is informal, but at the same time it’s quite rigorous regarding the information they provide and thorough in their analysis. The anecdotes they manage to find are priceless.

Colectivo Burbuja (Spanish)

Living in a country like Spain, with a government full of corrupt crooks who control practically all traditional media outlets, independent podcasts like this one are really appreciated. It mostly consists of debates in which speakers of diverse ideologies analyze current news, mainly economic. It really helps to understand what is actually going on in this country, full of strange contradictions.

Security Now

Steve Gibson could be accused of being an expert in PR more than an expert in information security but, in spite of this, I find it hard to deny his didactic ability to introduce the listener to different security concepts and to summarize the most important security events of the week.

Linux Outlaws

From the title one would expect a highly technical podcast but, more than a podcast about the Linux world, this is about using Linux as an excuse for numerous rants, cynical comments and nerdy jokes, which I find really funny.

12 Byzantine Rulers and Norman Centuries

Even if they are considered podcasts, they feel more like history audiobooks. Lars Brownworth fabulously narrates amazing, relatively unknown epochs which were highly influential but didn’t get the historical popularity they deserved.

The Joe Rogan Experience

Joe Rogan interviews diverse personalities in a funny and informal tone. I view these discussions as examples of honest, civilized conversations between individuals with very different backgrounds. I have also found very interesting podcasters through this podcast.

The HaskellCast

Because these days I’m coding full-time in Haskell, I really appreciate this podcast’s interviews with prominent figures in the community. What I like most about listening to them, instead of just reading what they write, is that somehow, by hearing them explain what they do, it’s easier to grasp the train of thought that led them to do what they did.

Revolutions

This history podcast is very informationally dense but fun to listen to nevertheless. Mike Duncan does a great job explaining the high complexity of the events that led to big changes in history. He’s also the author of The History of Rome, which I still haven’t gone through but hope to start at some point.

Pasajes de la Historia (Spanish)

This is one of the few podcasts I listen to that is taken from traditional media. Unfortunately, Juan Antonio Cebrián passed away prematurely, but he left an incredible legacy in the form of tales based on historical figures.

Let’s talk Bitcoin

This is also a podcast I follow for professional reasons. If I had to choose a single Bitcoin podcast this would be the one.

A large part of Haskell code is just about imports. Many programming languages leave no ambiguity about how to import, but Haskell leaves some room for personal style in this regard. There are some recommendations out there about importing style, but most is left to common sense. Your own judgment, once you are comfortable with Haskell, should be perfectly fine, but newcomers who care about style consistency might feel a bit lost when writing the import list for their own packages1, especially since there are many slightly different styles in the wild for Haskell imports. In this post, I’ll try to explain the rationale behind the style I follow.

One basic principle I’ll be following for all my criteria is that, as I guess for most programmers, code style is about reading code, not writing it. When writing, you can assume that someone editing the code has access to editing tools, whereas this assumption doesn’t hold so easily for the readers of your code.

Explicit imports

Anyone reading the Python official tutorial for the first time has to read through this when reaching the section about modules:

There is even a variant to import all names that a module defines:

from fibo import *

This imports all names except those beginning with an underscore (_). In most cases Python programmers do not use this facility since it introduces an unknown set of names into the interpreter, possibly hiding some things you have already defined.

Note that in general the practice of importing * from a module or package is frowned upon, since it often causes poorly readable code. However, it is okay to use it to save typing in interactive sessions.

This means that anything beyond a small built-in language core, which can be easily memorized, has to be explicitly imported to be in scope. This is great for newcomers reading any Python code: you are always aware, with no extra tools, of where everything is coming from.

Considering that Python is the first programming language I learned, you can understand why I get a bit annoyed when I’m reading another language I’m not so familiar with and names just pop up in scope without my knowing where they come from. And no, I don’t want to use ctags or a full-blown IDE every time I’m casually reading some code on GitHub.

So then, why doesn’t Haskell, a language with such a great reputation for being well designed, follow these Python principles, which look so obviously advantageous? To be fair to Haskell, we have to understand that the class system in Python is frequently (ab)used2 just for organizational purposes. Haskell, being a pure functional programming language, doesn’t add all the cruft of OOP classes just to deal with this issue. Instead, it uses a very limited module system, which could be argued to be a weakness of the language, but I believe it fits nicely with the unofficial Haskell slogan of avoid success at all costs.

From my understanding, this means that if there is no optimal solution for a core language feature, it’s preferable to keep the bare minimum everyone agrees on and not try to impose a half-baked solution that will have to be maintained forever for legacy reasons. Taking this into account, I’d rather have a dumb module system that is easy to understand than have to deal forever with the complexity of historical design accidents.3

So in Haskell we have to bite the bullet and accept that import lists are going to be quite complex. Making absolutely every import explicit, as in Python, would be too cumbersome for the programmer who is coding, so we have to reach some kind of trade-off between explicitness for the reader and convenience for the writer.

Internal imports

A popular style recommendation, like the one from Johan Tibell, is to import everything outside the package explicitly, and to make internal imports implicit.

I consider Tibell’s a good rule to follow for most projects. It’s a good compromise because, when reading a module from a given package, it’s reasonable to assume the rest of the modules of that package are usually nearby.
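
To make the rule concrete, here is a sketch of what a module header following it might look like (the package and module names are made up for illustration):

```haskell
module MyPackage.Server (runServer) where

-- External imports: always explicit
import Control.Monad (forever, unless)
import Data.Maybe    (fromMaybe)

-- Internal imports (same package): implicit is fine,
-- since these modules are nearby in the source tree
import MyPackage.Config
import MyPackage.Logging
```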

The other popular style, recommended in GHC, is to make everything implicit. Breaking Tibell’s rule in the case of GHC may be understandable because, in a project as large as GHC, the import lists would tend to be quite complex. I’d also assume anyone trying to hack on GHC is above beginner’s level and should be familiar with the internals of the project. But for most projects, I think Tibell’s recommendation is a good default.

It could be argued that if many internal modules are imported in the same module, it could become difficult to follow which module each name comes from. It’s true that this sometimes happens, but most of the time I’d attribute it to a code smell. When this happens, I’d look for the following:

  • Are the modules not modular enough? For good modularity, the communication between them should be as minimal as possible. Perhaps the code needs to be rearranged into entirely different modules to allow a better separation of concerns.
  • Is the package too large? Maybe it’s time to split the package into several smaller packages.
  • Are you always importing a group of modules that can somehow be logically grouped together? All these modules could be consolidated behind a single module that just re-exports everything for the module group. This is, in fact, a widely used pattern in Haskell for code organization.
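
The re-exporting pattern mentioned in the last point can be sketched like this (the module names are illustrative):

```haskell
-- An umbrella module that re-exports a logical group of modules,
-- so consumers can write a single import instead of several
module MyPackage.Types
  ( module MyPackage.Types.User
  , module MyPackage.Types.Order
  ) where

import MyPackage.Types.User
import MyPackage.Types.Order
```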

The only exception to external explicit importing is Prelude, which is almost always implicitly imported. Prelude is the closest you get to built-ins in Haskell.

Whenever I have a name clash with a Prelude name, I don’t hesitate to hide the Prelude version if the context makes it clear that the imported name is not the same as the one in Prelude. For example, I’d hide Prelude.span if I’m not using it and the imported span deals, say, with a span HTML element. But I wouldn’t hide Prelude.writeFile for Data.ByteString.writeFile, because it’d be misleading. In the case of not hiding, I’d use a qualified import, but I’ll comment more on those below.
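
As a sketch of the span example, assuming an HTML DSL such as blaze-html, which exports its own span:

```haskell
-- Hide the Prelude version only when the context makes
-- the imported name unmistakable
import Prelude hiding (span)
import Text.Blaze.Html5 (span)  -- renders an HTML <span> element
```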

Some people also grant built-in status to other very frequently used modules, such as Control.Monad or Data.Monoid in the base package. Even admitting that anyone with some experience in Haskell wouldn’t have any trouble with these imports being implicit, I still import them explicitly. I consider that, for experienced programmers who are not familiar with Haskell, the names in Prelude are enough to keep in mind, so, in my opinion, asking them to memorize more modules raises the barrier to entry too much. I suffered this myself when learning Haskell for the first time, so I swore to myself I wouldn’t do it in the future.

Type constructors

The usual convention for importing type constructors is to import them all at once with the (..) notation, but I don’t follow this convention, because if many type constructors are brought into scope we have the same problem as with functions.

I only use A(..) if there is only one constructor for type A and it’s named A as well, which is the usual convention. If that’s not the case, I import the single constructor explicitly too.4
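
In practice this means import lists like the following (the types are made up for illustration):

```haskell
import MyTypes (Point(..))              -- fine: the only constructor is also called Point
import MyTypes (Shape(Circle, Square))  -- several constructors: list them explicitly
import MyTypes (Handle(MkHandle))       -- single constructor with a different name: still explicit
```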

Qualified imports

They are frequently cited as the solution to module organization. However, I’m uneasy about them and try to use them as little as I can.

Maintainability

When there is a very long list of imports, it’s often argued that it’s better for maintainability to just use a single qualified import; otherwise it’s too much work to change the list of imports every time there is an API change in an external module. But I think it’s the other way around: maintaining that list makes sure you are using the API properly, and if something breaks when upgrading the API, you are more likely to get an import error, which can be easily spotted. On the other hand, with a qualified import, the module being upgraded can inadvertently introduce names into scope, provoking clash errors which may be harder to debug.

It’s true that it’s a bit of extra effort to constantly maintain a long list of imports, but with a decent code editor it shouldn’t be too much of a problem. I usually toggle between implicit and explicit imports while working out a good solution to some code I’m writing; when I’m satisfied, I make sure everything is imported explicitly again.

Letter soup

It’s quite usual to find qualified imports with a capital letter, like import qualified Data.ByteString as B, or import qualified Data.ByteString as S, or import qualified Data.ByteString.Lazy as B, or import Data.Binary as B… you get where I’m going.

The problem with qualified names of just a few characters is that the chance of clashing is very high, so the same qualified import ends up with different letters depending on the module, something I find confusing, especially once you get used to associating a particular character with a particular module. Aside from this, I don’t find it aesthetically pleasing to read single capital letters followed by some function all over the code, but this may be just me.

There are exceptions to this recommendation, of course, which I’ll explain below.

Long names

One obvious solution to the problem described above is to not use short qualified names but full words: ByteString instead of B, Binary instead of B, or Text instead of T. But then, what happens when you have a module using Data.ByteString and Data.ByteString.Lazy everywhere? Do you prepend every function with ByteString and ByteStringLazy? Common sense tells us that this is too verbose, especially for a language like Haskell, where terseness is one of its most touted features. I’ll explain below when to use long names for qualified imports.

Import list as an introduction

When I’m opening a module, I like going through the list of imports to prepare my mind for the context of the module. When I find something like import qualified Data.Binary as Binary, the first thing I ask myself is: is this module going to use just one function from Binary, or is it going to use many of them? I know I can have a quick glance at the rest of the module to get an idea, but this adds friction in cases where, for example, I want to quickly navigate through all the modules of a package to get a quick overview.

That’s why I prefer to have explicit lists, even when qualified imports are being used. In this case, however, I acknowledge that I don’t always follow my own advice. I consider them nice to have, but not very important.

When qualified imports are OK

The first broad scenario has to do with Prelude. Is the module being imported going to clash with several other functions from Prelude that I’ll also be using, or that are difficult to distinguish by context? If so, I’ll try to use a qualified import, especially when the original author recommends it. The usual suspects in this list are imports like Data.Foldable (F), Data.Traversable (T), bytestring (B, B8, L, L8), text (T, TIO, TL), containers (Map, Set), pipes (P, PP), etc. I try to follow the same letter convention everywhere. But notice that if I know I won’t use the Prelude version at all, and from the context it can be clearly distinguished that it is not the Prelude function, then I’ll hide it, as I explained above with the span example. It’s important to note that for these packages the types can usually be imported unqualified without any issue.
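
Put together, the usual suspects end up looking something like this, with the types imported unqualified:

```haskell
import Data.ByteString (ByteString)
import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as B8
import qualified Data.ByteString.Lazy as L
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.IO as TIO
import Data.Map (Map)
import qualified Data.Map as Map
```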

The second scenario is when two imported modules clash over the same names. In this case I’d use qualified names for just the conflicting functions, for example, Binary.decode and Cereal.decode. If the modules are the usual candidates for single-letter qualified names, like bytestring and text, I’ll keep using the single letter; otherwise I’d use a long name.

There is one last case where using long qualified names would be OK with me: when a function has a name so vague that it’s difficult to guess what it’s really about, it may be appropriate to prepend it with the module name. For example, the get and put functions from the State monad are much easier to identify when written as State.get and State.put.
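
A small sketch of the State example; only the vaguely named functions are qualified:

```haskell
import Control.Monad.State (State, runState)
import qualified Control.Monad.State as State (get, put)

-- Return the current counter and increment it.
-- runState tick 0 == (0, 1)
tick :: State Int Int
tick = do
  n <- State.get       -- much clearer than a bare 'get'
  State.put (n + 1)
  return n
```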

Order of imports

Having some criteria for ordering imports is important because it makes it possible to predict in which order modules appear. If you get used to the same pattern of appearance, you can quickly find what’s in the module and what is not.

Import groups

In Tibell’s guide it’s recommended to group the imports into standard library, third-party software, and local package imports. I follow this too but, firstly, within the standard library I distinguish the modules coming from base, the GHC libraries, and the packages belonging to the Haskell Platform. Secondly, where Tibell recommends sorting alphabetically within groups, I try to follow the rule of which package is (or should be) most frequently used overall and, within each package, which module is most prominent. When this is not obvious, alphabetical sorting should be used.

Of course, there is no precise way to define which package is more frequently used. I’d leave this entirely to your own personal experience, but you can get some idea by checking the reverse dependencies of a package, the downloads on Hackage, or a grep <module> | wc -l over a bunch of popular packages.

The main purpose of this rule is to make it easier to skim through the most usual imports first and focus at the end on the rarer modules. This is also important when trying to minimize dependencies: you can quickly spot which ones you can try to drop.

For example:

import Control.Applicative (...)
import Control.Monad (...)
import Data.Monoid (...)
import Data.Foldable (...)

Here, I put Applicative before Monad because, even though in practice it might be less used, my own judgement tells me it’s more general than Monad, so it should be more frequent. Between the Control and Data module names, I choose to sort alphabetically; I don’t know which one is more usual. Whatever you decide, it’s always better to stick with the same preference everywhere.

Notice also that I don’t take into account the length of the import list or how frequently the functions appear in the module itself. Those would, perhaps, be valid criteria, but they wouldn’t make the import list very repeatable.

Types and functions

I group the types with their constructors first; next, infix functions; and lastly, all other functions.

When there is a mixture of qualified and unqualified imports for the same module, I still group them together, with the unqualified names going first. I don’t like having the qualified and unqualified imports grouped separately, because I usually find myself moving functions back and forth between them.

There is an exception here, though. When the module being imported re-exports names defined in other modules, I group those after the names defined directly in the imported module.

Multiline imports

I use multiple lines when the list of imports exceeds the specified text width, with an indentation of 2 spaces when that happens.

I also add spaces within module import lists, but not within constructor lists, just to give a quick hint that they are constructors. For example:

import module1 (A(A1,A2), B, (-|-), func1, func2)

… unless the constructors are multiline, which is not that frequent though:

import module
  ( A ( A1
      , A2
      , AN
      )
  , B
  , func1
  )
I know there are editing tools that make vertical alignment very simple but, personally, I don’t find that vertical alignment improves readability that much. The words in the same line tend to be too separated.

Canonical example

The import style followed by Cloud Haskell packages aligns quite well with my particular style. This is a modified version taken from Control.Distributed.Process.Node: 5

import Prelude hiding (catch)
-- 'base' imports
import Control.Category ((>>>))
import Control.Applicative ((<$>))
import Control.Monad (void, when)
import Control.Concurrent (forkIO)
import Data.Foldable (forM_)
import Data.Maybe (isJust, isNothing, catMaybes)
import Data.Typeable (Typeable)
import Control.Exception (throwIO, SomeException, Exception, throwTo, catch)
import System.IO (fixIO, hPutStrLn, stderr)
import System.Mem.Weak (Weak, deRefWeak)
-- imports from the rest of the Haskell Platform
import Control.Monad.IO.Class (MonadIO, liftIO) -- 'transformers' package
import Control.Monad.State.Strict (MonadState, StateT, evalStateT, gets)
-- these are likely to clash with local bindings
import qualified Control.Monad.State.Strict as StateT (get, put)
import Control.Monad.Reader (MonadReader, ReaderT, runReaderT, ask)
import Data.ByteString.Lazy (fromChunks)
import Data.Map (Map, partitionWithKey, filterWithKey, foldlWithKey)
-- these are likely to clash with other names
import qualified Data.Map as Map
  ( empty
  , toList
  , fromList
  , filter
  , elems
  )
import Data.Set (Set)
import qualified Data.Set as Set
  ( empty
  , insert
  , delete
  , member
  , toList
  )
import Data.Binary (decode)
import Network.Transport
  ( Transport
  , EndPoint
    -- Assuming there is only the 'Event' constructor
  , Event(..)
  , EventErrorCode(..)
  , TransportError(..)
  , ConnectionId
  , Connection
  , newEndPoint
  , closeEndPoint
    -- These are re-exports from 'Network.Transport'
  , EndPointAddress
  , Reliability(ReliableOrdered)
  )
-- qualified because the names are too vague
import qualified Network.Transport as NT
  ( receive
  , address
  , close
  )

Conclusion

I was keeping all these rules in my head until, after a constructive discussion with Roman Cheplyaka on the topic, I decided to write them down in a post that I could use as a reference for myself and for my colleagues. By no means am I trying to claim my style is better than any other; this is just what I follow as of today, and it will surely evolve as my experience with Haskell grows.

If you just got into Haskell and find yourself trying to follow some consistent importing style throughout your code, but lack the hands-on experience to assess what’s best for you (and if you are a control freak like me), you might want to follow this blindly until you have more skin in the game and can make a more confident decision about which style to stick with. One advantage of this style is that, even if other Haskell programmers don’t like it because of its extra editing work, it’s still easily readable.

But remember one thing: none of this matters if a project already follows its own style. Consistency is always better for readability, even if you don’t like the style. So always trust your common sense more than any style guide, for which it’s impossible to define every scenario you may encounter in real life.

  1. For code contributions it’s easy: just follow what the original author is already doing.

  2. This could arguably be considered the main use case for OOP in most languages.

  3. There is some research going on but still a long way to reach consensus.

  4. For exports I think it’s alright to always use (..), since the constructors are in the same module.

  5. My modifications will surely break the code, this is just a sample demonstration.

One recommendation you often hear when reaching an acceptable level of basic Haskell is to make your code more polymorphic. The Haskell Prelude is heavily biased towards lists, so an immediate gain in polymorphism is to make your code work not only for lists but for any instance of Traversable or Foldable.1

Since a Haskell list is an instance of Traversable and Foldable, we can still operate as usual on lists with the new polymorphic code:

>>> import qualified Data.Foldable as F
>>> mapM_ print ['a'..'c']
'a'
'b'
'c'
>>> F.traverse_ print ['a'..'c']
'a'
'b'
'c'

Notice here that we’ve gained the extra advantage of being able to use Applicative instead of Monad. Here we don’t need the extra power of Monad, and Applicative, being weaker, is also more general.

But aside from lists, there is another instance of Traversable/Foldable defined by default for us: Maybe. You can think of a Maybe as a list of one or zero elements, so when you traverse it you either do something with the element, if present, or do the default Applicative/Monad action (pure and return, respectively) if not. How is this useful, then? Have you ever found yourself writing case expressions like this?

>>> :{
>>> let printM m = case m of
>>>                     Just n  -> print n
>>>                     Nothing -> return ()
>>> :}
>>> let m1 = Just 1 :: Maybe Int
>>> let m_ = Nothing :: Maybe Int
>>> printM m1
1
>>> printM m_

The function maybe would improve things a bit, syntactically speaking: maybe (return ()) print.

Maybe it’s just me, but that return () smells too much like a default behavior to me. Somehow, there should be a way to avoid it. Well, here is where the Foldable instance of Maybe comes in handy:

>>> :set -XScopedTypeVariables
>>> let printM' :: Maybe Int -> IO () = F.traverse_ print
>>> printM' m1
1
>>> printM' m_

To be fair, for this trivial example it would be a bit frivolous to use the Maybe Foldable instance just to avoid the case expression, but when you are in an intricate ladder of case expressions, this idiom can make your code much more readable.
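
As a quick illustration of the Traversable/Foldable distinction: traverse performs an action per element while keeping the structure (a Maybe stays a Maybe), whereas foldMap collapses the elements into a monoid. Continuing the session above:

```haskell
>>> traverse print (Just 3)
3
Just ()
>>> F.foldMap show [1, 2, 3]
"123"
```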

  1. Roughly speaking, the Traversable instance is used for operations that do something with each element of the structure while maintaining the same structure in the output. A Foldable instance is used for collapsing the elements into anything else.

I finally found some time to migrate my blog from Wordpress.com to Octopress. The critical reason to migrate from Wordpress has been the support for nice code syntax highlighting, something I couldn’t have on wordpress.com, at least for free. I know there are very nice wordpress plugins for syntax highlighting, but in order to use them I would have to host the blog myself, and I don’t want to go through the hassle of maintaining a typical PHP/MySQL stack or to be worried about being slashdotted.

Having worked with an excellent documentation tool like Sphinx, I started looking into static blog generators. It turned out that Manu Viera, a colleague working with me at Yaco, shared the same itch and had already looked at several static web generators in Python, which is our main language at Yaco. Manu found pelican to be the best candidate, but I still found it a bit immature, not quite at the level of something like Jekyll.

Then I found Octopress, a framework built on top of Jekyll with several plugins, including syntax highlighting and automatic support for Disqus comments.

The migration from wordpress was not too painful. I used the default Jekyll script to import the wordpress posts, and the disqus importer for the comments. After some sed commands, I got nicely markdown-formatted posts.

I had some trouble in the beginning configuring an isolated Ruby runtime on Arch Linux just for Octopress but, after discovering rbenv, everything went smoothly. (I prefer rbenv over RVM; with rbenv I know at any moment what it’s doing.)

Deploying an Octopress-generated site to GitHub Pages is as easy as pie.

Aside from nice Python syntax highlighting, I now have some extra advantages I didn’t have with wordpress.com:

  • Markdown syntax when writing my posts.

  • I can use the best text editor known to mankind: vim :P

  • My blog data becomes more manageable. If at some point I don’t want to host it on GitHub, I can just push it somewhere else with no modification.

  • I got a very nice default theme for free that, aside from looking good, is also very easy to tweak and maintain.

  • Now I have a good excuse to learn Ruby outside of the RoR influence. Ruby is one of those languages I wish I were better at, even if Python remains my main working language.

In any case, I must say the service provided by wordpress.com has been quite good, but this is one of those cases where you have to say: “Sorry, it’s not you, it’s just me”.

Pyramid is a WSGI application framework that primarily follows a request-response mechanism. However, if you need to work with events, you can still use them. It comes with some default event types that are emitted implicitly by Pyramid as long as you have a subscriber for them. For most applications the default event types are enough, but what if you want to write your own custom event type and emit it explicitly from your code? It turns out that the application registry that Pyramid uses by default comes with a handy notify method. Pyramid uses this method internally for its default events. Here is how you would take advantage of it:

from pyramid.events import subscriber

class MyCustomEventType(object):
    def __init__(self, msg):
        self.msg = msg

@subscriber(MyCustomEventType)
def my_subscriber(event):
    print(event.msg)

def my_view(request):
    request.registry.notify(MyCustomEventType("Here it comes"))
    return {}

When running the application, every time a request goes through my_view, an event with a message is emitted, in this case, “Here it comes”. The subscriber then handles the event by printing the message, but it could do anything you want.

Notice that I’m using a decorator to hook my_subscriber. In order for the decorator to work you have to make sure you call the scan method when configuring the application.
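To make it clearer what notify does under the hood, here is a minimal, self-contained sketch of type-based event dispatch. This is illustrative only; Pyramid’s actual registry is based on zope.interface and is considerably more sophisticated:

```python
class Registry(object):
    """Toy registry: maps an event type to a list of handlers."""

    def __init__(self):
        self.subscribers = {}

    def add_subscriber(self, handler, event_type):
        self.subscribers.setdefault(event_type, []).append(handler)

    def notify(self, event):
        # Call every handler registered for this event's type
        for handler in self.subscribers.get(type(event), []):
            handler(event)


class MyCustomEventType(object):
    def __init__(self, msg):
        self.msg = msg


registry = Registry()
seen = []
registry.add_subscriber(lambda event: seen.append(event.msg), MyCustomEventType)
registry.notify(MyCustomEventType("Here it comes"))
print(seen)  # ['Here it comes']
```

The @subscriber decorator plus config.scan() is essentially a declarative way of performing the add_subscriber call above.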

Be aware, though, that all these events are synchronous: because Pyramid is primarily a request-response framework, every event emitted blocks until the subscribers are done. If you want non-blocking events in Pyramid, you could spawn a process from the subscriber or come up with some other solution.
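For instance, a subscriber that must not block the request could hand the work off to a child process. This is a hedged, self-contained sketch of the idea; slow_task and the results queue are made up for the example, not part of Pyramid:

```python
from multiprocessing import Process, Queue

results = Queue()

def slow_task(msg, out):
    # Stand-in for the actual heavy work the subscriber delegates
    out.put("handled: " + msg)

def my_subscriber(event_msg):
    # Return immediately; the heavy lifting happens in a child process,
    # so the request/response cycle is not blocked.
    proc = Process(target=slow_task, args=(event_msg, results))
    proc.start()
    return proc
```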

But events in Pyramid are just one more piece of functionality it offers. Pyramid is not an event-oriented framework; if you want to go all the way with async events, you should look into Twisted or Tornado.

I have been using Arch Linux for 3 years now. I still use Debian and Ubuntu for the servers I administer, but I acknowledge Arch Linux has taught me many valuable lessons.

With Arch Linux there is very little in your system that you are not aware of. You have to configure everything yourself by editing config files. The process is not that hard because all those configuration files are meant to be tweaked, and you can also count on an excellent wiki to help you.

The Arch Linux philosophy doesn’t try to shield the user from complexity with extra layers. Instead it focuses on making the direct configuration as simple as possible. For example, writing a proper boot script is much more straightforward than in other distros. At the same time, if you are not careful, you have more chances of really screwing everything up.

Arch Linux aggressively updates from upstream sources, with all the advantages and disadvantages of always being on the bleeding edge. I also like the idea of putting more responsibility for the stability of software on developers than on packagers, as long as you are aware of this as a user: you have to assume the responsibility of living at the cutting edge. Things may not always go smoothly, but you can count on excellent tools to manage the chaos.

That brings me to the real killer feature that makes Arch Linux shine over the rest: the packaging system. Pacman, ABS, AUR, makepkg and the PKGBUILD format are just great. You usually don’t have to mess with packaging that much; everything installs nicely and dependencies are correctly handled, especially if you stick to the official repositories.

But if you don’t like something about a package, or need another version, you have all the tools in place for the creation and introspection of packages without disrupting pacman’s bookkeeping (pacman is the equivalent of dpkg/apt-get in Debian). Let me illustrate all this with something I had to deal with this week.

I decided to use Compass to make my stormy relationship with CSS smoother. Compass is a Ruby gem, and the usual way to install gems is through Ruby’s own packaging system, but I don’t want to mess with the Ruby libraries already installed in the system by pacman. If I install those gems as root, pacman will not be able to keep track of them, and everything could break in the future, and most importantly, without an easy solution.

A way to deal with this issue is to install the Compass gem in some directory and handle the runtime somehow. You usually end up with a new runtime environment for each project you start. There are excellent tools to manage Ruby runtimes, like rbenv, but boy, I already have enough managing my Python virtualenvs.

I see that Compass is already in AUR. AUR is a very liberal package repository where anyone can upload source packages. When you install from AUR, you usually review the PKGBUILD and the comments of other users, and check how many users have voted for the package to be included in the official repositories. With tools like yaourt the whole process is very smooth.

Alright, the ruby-compass PKGBUILD looks good to me, so I install it. Now Compass is a good system citizen and can be updated, installed and uninstalled through pacman. Compass works as expected, but it turns out that the most interesting feature I wanted to use is only available in the latest version of Compass, and the version in AUR is not the latest one.

No problem, it’ll probably take some version bumps and I’ll be done. I download the PKGBUILD, bump the versions and build the package again, but then I realize that the new version depends on new Ruby gems that are not in AUR.

At this point I would normally avoid getting into dependency hell and fall back to an isolated rbenv runtime, but wait, I’m using Arch Linux; let’s see what happens if I continue with the Arch flow.

I take the PKGBUILD of Compass as a template, which is generic enough for any Ruby gem, and use it for the Ruby dependencies. I update the licenses, versions and checksums, build them, and done, everything works. They all come from rubyforge and follow the same build conventions, making my life as a packager easy.
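A gem PKGBUILD of the kind described is short enough to sketch. Everything below (gem name, version, checksum placeholder) is illustrative, not the actual ruby-compass recipe:

```shell
# Hedged sketch of a generic Ruby gem PKGBUILD
pkgname=ruby-somegem
pkgver=1.0.0
pkgrel=1
pkgdesc="Some gem packaged for pacman"
arch=('any')
url="http://rubygems.org/gems/somegem"
license=('MIT')
depends=('ruby')
source=("http://rubygems.org/downloads/somegem-$pkgver.gem")
noextract=("somegem-$pkgver.gem")
md5sums=('PLACEHOLDER')  # regenerate with makepkg -g

package() {
  cd "$srcdir"
  local _gemdir="$(ruby -e 'puts Gem.default_dir')"
  # Install the gem into the package staging area ($pkgdir),
  # not into the live system, so pacman keeps full ownership.
  gem install --ignore-dependencies -i "$pkgdir$_gemdir" \
    "somegem-$pkgver.gem"
}
```

For a new dependency, bumping pkgname, pkgver, the source URL and the checksums is usually all it takes.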

I upload the PKGBUILDs to AUR with a single burp command. Now I can install the latest version of Compass through pacman without any issue. I then send my modified version of the PKGBUILD to the original Compass packager, who updates it. That’s it, now anyone can install the latest version of Compass with all its dependencies from AUR. At home I can install it with just one command: yaourt -Sy ruby-compass.

Now I just have to keep an eye on new updates to the dependencies I’m now maintaining in AUR, but rubyforge offers an excellent notification system for gem updates.

That’s it. The whole thing took less than 30 minutes.

I don’t know if writing a DEB package spec is that hard nowadays; I admit I never tried. The tutorials I found about it drove me away when I considered it some years ago.

It’s not only the packaging format itself; there are also the community and policy aspects. Editing your PKGBUILDs is something that every Arch Linux user does. For AUR there is very little regulation, which makes packaging a smoother process at the expense of shifting the trust in the packages to the user. In general, most packages in AUR are good enough, but for production machines I still put more trust in the Debian and Ubuntu package maintainers.

That’s where the open source community shines: you have many choices.

If you want to prevent Chameleon from rendering some portions of an HTML template you might be tempted to do something like this:

<!-- <div>${context.name}</div> -->

However, Chameleon will still evaluate what’s inside the ${…} block even if it’s within an HTML comment. Chameleon must do this because you might want to insert conditional comments.

This dummy tal:condition block will do the job:

<span tal:condition="None"> <div>${context.name}</div> </span>

Chameleon ignores anything inside the condition block.

After almost 3 years in The Netherlands working as a proteomics informatician at Albert Heck’s lab, I’m moving to Seville, Spain, to work as a web developer for Yaco Sistemas, a fresh and dynamic open-source-friendly company.

This is an important shift in my career, since I won’t be working on proteomics informatics and academic research anymore. I have mixed feelings about leaving proteomics. On one hand, I like the area because there are plenty of tough challenges to be solved. On the other hand, I’m glad I can dedicate all my time to developing web applications that, while perhaps not as sophisticated as proteomics software, will be immediately useful to the masses. I love web development and the Python community, but from within proteomics I could only intersect with the Python web development community sporadically. Now I’ll have the chance to be part of it full time.

Personally, The Netherlands is the most comfortable and easy-going country I have ever lived in. Here I had the chance to work with very smart people and made friends I will never forget. What I have learned during these years is priceless.

But I can’t deny my origins; Spain is where I feel at home, even if sometimes I don’t find it too exciting because I’m too familiar with the culture. However, Seville is quite far from my hometown in the North of Spain. The culture in the South is very different from the North, so in a way I’ll be another foreigner, excited about the peculiarities I discover about Andalusian culture.

After speaking with Marius Gedminas on freenode, he gave me enough hints to rewrite my previous async view example with locks instead of Value, which is prone to race conditions. I also added a queue so jobs can wait to be processed.

import time
from multiprocessing import Process, Lock, Queue

job = 0
q = Queue(maxsize=3)
lock = Lock()

def work():
    time.sleep(8)  # simulate a long-running task
    job = q.get()
    print("Job done: {0}".format(job))
    print("Queue size: {0}\n".format(q.qsize()))
    if not q.empty():
        # Keep working until the queue is drained
        work()
    else:
        # Let the next request start a new worker
        lock.release()

def my_view(request):
    global job
    if not q.full():
        job += 1
        q.put(job)
        # Not running: start a worker process
        if lock.acquire(False):
            print("Job {0} submitted and working on it".format(job))
            Process(target=work).start()
        else:
            print("Job {0} submitted while working".format(job))
    else:
        print("Queue is full")
    print("Queue size: {0}\n".format(q.qsize()))
    return {'project': 'asyncapp'}

With every request a job is submitted. Here the queue accepts up to 3 jobs. The recursion in work makes sure there is only one process working at a time.

I will leave my previous example with Value up because it’s easier to understand, but this version is much safer.

Update: You can avoid the use of locks by using 2 queues.
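The post doesn’t show the two-queue version, but here is one possible interpretation of it, under the assumption that the second queue is used by a single long-lived worker to report finished jobs: the worker’s blocking get() replaces the lock entirely.

```python
from multiprocessing import Process, Queue

job = 0
jobs = Queue(maxsize=3)  # pending work, bounded like before
done = Queue()           # finished jobs, reported back by the worker

def worker():
    # One long-lived worker; blocking get() means no lock is needed.
    while True:
        job_id = jobs.get()
        if job_id is None:  # sentinel: shut down
            break
        done.put("Job done: {0}".format(job_id))

def my_view(request):
    global job
    if not jobs.full():
        job += 1
        jobs.put(job)
        print("Job {0} submitted".format(job))
    else:
        print("Queue is full")
    return {'project': 'asyncapp'}
```

The worker is started once (Process(target=worker).start()) when the application boots, instead of being spawned per request.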