Implementing for today (not for tomorrow)

When we start programming professionally for a company, it's all too easy to fall into the trap of implementing things with too many "just in cases" and "what ifs".

I used to say that we were taught that way when we studied Software Engineering or Programming, but now I'm not so sure. I think we just missed the point. When we were told about writing reusable code, we automatically thought: "okay, this is about writing code today so that I can reuse it tomorrow, right?".

Then we realize that we really don't know what tomorrow's requirements will be, and we end up with lots of awesome classes and methods and interfaces that are now completely useless. We still need to maintain them and they are difficult to adapt, so in the short term they start to smell bad, and within a couple of months that code is completely rotten.

So, the solution seems to be easy at first:

Implement to fit today's requirements. As long as your code follows the SOLID principles, KISS, DRY and a couple more, you are safe and sound.

First things first: we software engineers are cool at inventing acronyms, aren't we?

Okay, but let's get to the point…

The problem is that sometimes you come up with a solution that is "surely" SRP but only "poorly" OCP, "reasonably" LSP and "yeah, why not?" DIP.

Then you ask a workmate and they suggest an approach that is "pitiful" on SRP but a "confident 100%" on OCP.

Regarding patterns, sometimes Iterator is almost right, but it couples the "iterated" a little to the "iterator". Sometimes the best choice is not obvious: Adapter, Mediator or Proxy?

Regarding detailed object-oriented design: inheritance, interfaces, delegation or plain composition?

Which one is better?

Let's consider an example like the following: you have to implement a system that handles employees. Some employees are paid at a monthly rate and others by the hour; some have a base salary while the rest depends on the sales they achieve that month. Finally, some employees can choose how they want to be paid: by the hour or at a monthly rate.

Regarding the database, it's easy to identify an inheritance relationship in the Entity Relationship diagram. Inheritance can be mapped in one of three ways:

  • Implement both the parent table and the child tables
  • Implement only the parent table
  • Implement only the child tables
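As an illustration, here is a minimal sketch of the "parent table and child tables" option, including the JOIN it requires, using Python's built-in sqlite3 module. The table and column names are made up for the example, not taken from any real schema:

```python
# Common columns live in EMPLOYEE; payment-specific columns live in child
# tables, and loading a full record requires a JOIN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE hourly_employee (
        employee_id INTEGER PRIMARY KEY REFERENCES employee(id),
        hourly_rate REAL NOT NULL
    );
    CREATE TABLE monthly_employee (
        employee_id    INTEGER PRIMARY KEY REFERENCES employee(id),
        monthly_salary REAL NOT NULL
    );
""")
conn.execute("INSERT INTO employee (id, name) VALUES (1, 'Alice')")
conn.execute("INSERT INTO hourly_employee VALUES (1, 25.0)")

# The JOIN needed to load the full record of an hourly employee:
row = conn.execute("""
    SELECT e.name, h.hourly_rate
    FROM employee e JOIN hourly_employee h ON h.employee_id = e.id
    WHERE e.id = 1
""").fetchone()
print(row)  # ('Alice', 25.0)
```

No NULLs and no redundant columns, but every full load pays the cost of that JOIN, which is exactly the trade-off discussed below.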

Regarding the code, you can go for the Template Method pattern, using inheritance with an abstract Employee class and the concrete Employee types deriving from it, or go for the Strategy pattern and use delegation (assuming .NET programming) to calculate the income.
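To make the Strategy option concrete, here is a minimal sketch in Python (the class names are mine, purely illustrative): the income calculation is delegated to an interchangeable strategy object instead of being fixed by a class hierarchy.

```python
from abc import ABC, abstractmethod

class PaymentStrategy(ABC):
    @abstractmethod
    def income(self, employee):
        ...

class HourlyPayment(PaymentStrategy):
    def income(self, employee):
        return employee.hours_worked * employee.hourly_rate

class MonthlyPayment(PaymentStrategy):
    def income(self, employee):
        return employee.monthly_salary

class Employee:
    def __init__(self, name, payment_strategy):
        self.name = name
        self.payment_strategy = payment_strategy  # the delegation point

    def income(self):
        return self.payment_strategy.income(self)

# An employee who chose hourly pay...
bob = Employee("Bob", HourlyPayment())
bob.hours_worked, bob.hourly_rate = 160, 20.0
print(bob.income())  # 3200.0

# ...can switch payment mode later with no change to Employee itself,
# which matters for those employees who may choose how they are paid.
bob.monthly_salary = 3000.0
bob.payment_strategy = MonthlyPayment()
print(bob.income())  # 3000.0
```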

That's what comes to my mind as a first approach.

So, which one is better?

It depends.

It depends on what? We implement for today, don’t we? So there must be a perfect solution to this problem.

But silver bullets never exist in software, remember?

The trick here for experienced developers is to anticipate change somehow. If I were in a situation like the one depicted above, I would ask my customer something like:

  • Is it possible that an employee changes his or her payment mode?
  • Is it possible that new payment options will arise?
  • Does all the employee data need to be loaded in every operation?

Going back to the database, we know that if we use only one EMPLOYEE table we will have lots of NULLs in the columns that don't apply to some specific payment types. That's not great: we are wasting disk space. If we use the "children-only" option we will have lots of redundancy in the common columns (again, a waste). Finally, if we use the parent table AND the child tables we will need a JOIN to load all the employee information, which may hurt the overall performance of the system.

About the code, we all know that inheritance is hard to maintain: if new child classes appear that don't fit the common parent, we start adding levels to the hierarchy, and in a couple of years the hierarchy is too rigid, hard to maintain, hard to test and hard to refactor without breaking anything. On the other hand, if payment options never change, then the Template Method may be easier to understand and more natural and elegant than the Strategy pattern.
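For contrast, here is a minimal Python sketch of the Template Method option (again with illustrative names): the abstract Employee class fixes the payroll skeleton, and each subclass fills in only the income step.

```python
from abc import ABC, abstractmethod

class Employee(ABC):
    def __init__(self, name):
        self.name = name

    def payslip(self):
        # The template method: the invariant steps live in the parent...
        return "%s earns %.2f" % (self.name, self.income())

    @abstractmethod
    def income(self):
        ...  # ...and the variable step is deferred to each child class.

class HourlyEmployee(Employee):
    def __init__(self, name, hours_worked, hourly_rate):
        super().__init__(name)
        self.hours_worked = hours_worked
        self.hourly_rate = hourly_rate

    def income(self):
        return self.hours_worked * self.hourly_rate

class MonthlyEmployee(Employee):
    def __init__(self, name, monthly_salary):
        super().__init__(name)
        self.monthly_salary = monthly_salary

    def income(self):
        return self.monthly_salary

print(HourlyEmployee("Alice", 160, 20.0).payslip())  # Alice earns 3200.00
```

Simple and natural while the set of payment modes is stable; the rigidity only bites when new modes stop fitting the common parent.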

So, software designs are rarely good for every single situation. Solutions are good for some requirements at a certain point in time, and we should base our decisions on that. As long as we have a clean design, if we eventually need to make a change and realize that our code is not prepared for that kind of change, we refactor with the support of our tests. But let's face it: there's no perfect design. It depends on the requirements. Even the very best piece of software design may fail the SOLID principles under some twisted change of requirements. Let's embrace refactoring, then.

The important thing here is that when we need to solve a certain problem, we should come up with several candidate solutions so that the right questions arise; those questions will lead us to the most appropriate solution according to the answers we get. Then we choose the simplest, cleanest, most SOLID solution possible for today's requirements, anticipating tomorrow's changes if (and only if) we have strong certainty about what they are.


Currently I'm working in a team where almost everyone has less experience than me with .NET technologies and Web development. I have to admit that I have been programming in .NET for about ten years now (not all the time; I hate being asked this question when applying for a job because I consider it plain stupid: no one does the same thing for more than a couple of months at a stretch).

But I still have little Web experience: just a couple of years of ASP.NET development, and a couple more of PHP when I started working as a professional developer… and I remember very little of PHP programming.

So, at the very beginning we had to get things done with little experience. I put on the senior developer's shoes and started studying ORMs, Dependency Injection tools, the new stuff in ASP.NET MVC 5, jQuery, Bootstrap, AngularJS, SharePoint…

Then I started to train the rest of the team in the things I read and learnt on weekends. Did that make me an expert in anything? Absolutely not; I just wanted to share what I was learning with my pals.

Eventually I had the opportunity to give a talk at a software event. Several of my colleagues were invited to participate as well, but they were a little reluctant at the very beginning:

– What if someone in the audience knows more than me and corrects me or makes a fool of me in public?

I haven't spoken in public more than a few times, but I've never found anyone that rude. People go to events with the positive intention of learning something or sharing insights about the topics covered in the talk; it rarely happens that an expert attends a talk just to make a fool of the speaker… and I don't think the audience would have much fun with such a troll.

If that ever happens, there are ways to work around uncomfortable situations:

– Hum, I’m not sure about that; maybe you’re right.

– I agree with you at some extent, but…

– I don’t have the answer right now; lemme check it and I’ll come back to you then; could you give me your e-mail address after the talk, please?

– I see your point but you have to consider that…

– If you don't mind, I'll be glad to discuss this with you after the talk; for now let's move on to the next topic so the audience doesn't get bored and we don't run out of time. I hope you understand; do you agree?

And don't be afraid of admitting that you're wrong. Be humble upfront. If you feel you know a little about a topic, share it! You can start by being humble: "hey guys, I've worked with this for some time, but I don't know all the gory details yet". Besides, realizing you were wrong some time after you wrote or said something is OK: it means you've kept working on it, that you've learnt, that you have a deeper understanding of the topic. That's an improvement; good for you!

Please don't get me wrong: I'm not saying it's good to talk about something after merely reading about it. It's good to train, to play, to research a little further, and then talk about it or teach other people.

To give an example: I used to ask my workmates to avoid conversions such as .ToList() just to iterate over a collection with a foreach loop, because it's nonsense. But then I was introduced to Entity Framework and the practice of buffering a query in EF to avoid keeping connections open, so I had to go back and say: "Hey, when I said never, ever use .ToList()… well, when it's about EF, things are not that easy…".
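The underlying idea translates beyond .NET. Here is a rough Python analogue of the same trade-off (my own sketch, not Entity Framework): iterating a source lazily keeps it "open", while materializing it up front lets you close the source early and reuse the results.

```python
def read_rows(log):
    """Pretend data source: the 'connection' is open while we iterate."""
    log.append("open")
    try:
        for row in range(3):
            yield row
    finally:
        log.append("close")

log = []
rows = read_rows(log)   # nothing happens yet: lazy, like an unbuffered query
buffered = list(rows)   # materialize: the source is consumed and closed now
assert log == ["open", "close"]
assert buffered == [0, 1, 2]

# With the buffered copy we can iterate as many times as we like without
# touching the source again: the same reason one would call .ToList() in EF.
total = sum(buffered) + sum(buffered)
print(total)  # 6
```

Materializing just to loop once is wasted work; materializing to release a connection early is exactly the point.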

So, I'd like to emphasize: share your knowledge with others. If someone eventually finds a mistake in something you said or wrote, that's cool: you can learn from your mistakes. If you don't share your insights, there's little room for improvement and learning. No one will blame you for being wrong. All of us are eventually wrong about something, and that's okay.

The Inversion of Control Pattern (IoC)

Hi there! It’s me, remember me?

It's been a while and a lot of things have happened; good things, don't worry… I just forgot about this blog, and recently I read some good material that could make a good article… so here we are!

Today I'll talk about a pattern I've been using for more than a year and a half, and it has proven its usefulness: the Inversion of Control pattern (IoC).


When a class (say ClassA) depends on another class (say ClassB) and needs an instance of it in order to do its job, we say that ClassA is coupled to ClassB. If ClassA needs to know a lot of details of ClassB, then we say it's tightly coupled.

public class ClassB
{
    public void SomethingCool() { /* ... */ }
}

public class ClassA
{
    private ClassB svc;

    public ClassA() { svc = new ClassB(); }

    public void DoSomethingInteresting() { svc.SomethingCool(); }
}

Coupling is a problem mainly for two reasons:

  • Changes in ClassB risk breaking ClassA.
  • The design is not flexible: it's difficult to modify ClassA so that it uses the services provided by ClassB differently.

Normally, the best way to reduce coupling is to introduce an abstraction layer between the two, implemented as an abstract class or interface, so that ClassA depends on ClassB only in terms of that abstraction.

public interface ICoolService
{
    void SomethingCool();
}

public class ClassB : ICoolService
{
    public void SomethingCool() { /* ... */ }
}

public class ClassA
{
    private ICoolService svc;

    public ClassA() { svc = new ClassB(); }

    public void DoSomethingInteresting() { svc.SomethingCool(); }
}

This solves almost everything; however, ClassA still decides which implementation of the abstraction it will use.

Moving the creation of the dependency (ClassB) out of the dependent class (ClassA) is what the IoC (Inversion of Control) pattern tries to achieve. The name reflects what it does: it inverts the control between the "depender" (ClassA) and the "dependee" (ClassB).

IoC is an abstract pattern that doesn't prescribe how to achieve this. Two common ways to implement it are:

  • Service Locator
  • Dependency Injection

The Service Locator Pattern (SL)

The aim of this pattern is to achieve IoC by making software components obtain their dependencies from an external component called a Service Locator (SL).

There are two kinds of SL:

Strongly typed

public interface IServiceLocator
{
    ICoolService GetServiceB();
}

and then:

public class ClassA
{
    private ICoolService svc;

    public ClassA(IServiceLocator sl) { svc = sl.GetServiceB(); }

    public void DoSomethingInteresting() { svc.SomethingCool(); }
}

The code that instantiates ClassA needs to provide an instance of a service implementing IServiceLocator. This interface returns instances of the service interfaces (IService1, IService2, …) so that all client classes (such as ClassA) are loosely coupled to the services. As you can see, we delegated to the SL the problem of instantiating the concrete ClassB.

The problem with this approach is that IServiceLocator is not flexible regarding the services it provides. Adding new services after it has been defined may require modifying existing classes in order to use them.

Weakly typed

public interface IServiceLocator
{
    TService GetServiceB<TService>();
}


public class ClassA
{
    private ICoolService svc;

    public ClassA(IServiceLocator sl) { svc = sl.GetServiceB<ICoolService>(); }

    public void DoSomethingInteresting() { svc.SomethingCool(); }
}

Thus, the SL can provide new services without knowing about them ahead of time, reducing maintenance.

On the other hand, the weakly-typed SL can do little custom configuration for each service provided, since it is so generic.

The Dependency Injection Pattern (DI)

Instead of having an intermediary object that hands out all the dependencies, they are received directly at instantiation time as constructor parameters. This kind of DI is called Constructor Injection.

public class ClassA
{
    private ICoolService svc;

    public ClassA(ICoolService someSvc) { svc = someSvc; }
}

This way dependencies are explicit in the constructor and, by reading it, we can figure out every dependency ClassA has.

Conversely, when using the SL we need to read the whole of ClassA, looking for SL calls, to get a picture of the dependencies ClassA has.

An alternative to Constructor Injection is Property Injection, which consists of defining in the client class (ClassA) a property typed as the interface implemented by the service it depends on, and relying on someone to fill it in.

public class ClassA
{
    private ICoolService svc;

    public ICoolService CoolService
    {
        get { return svc; }
        set { svc = value; }
    }
}

The constructor is cleaner (or even unnecessary), since it doesn't have to take all the service parameters.

The problem with this approach is that we trust that someone will set the property; otherwise a null reference exception will be raised… unless we guard every single access to the service with a null check, which could hide bugs in the code.

That’s why Constructor Injection is strongly recommended.

Dependency Injection Containers

There's still an unanswered question:

how do we provide the implementations of the dependencies in the first place?

So far we've moved the dependencies out of the business code, but that just moves the problem somewhere else instead of solving it. If the whole system is based on DI, then someone has to provide implementations for… everyone else.

This is where Dependency Injection Containers come into play. DI Containers are libraries that implement exactly this. They provide an API to define who implements what, so that when someone needs the what, some who is provided by the container. It's like an SL extracted into an external library.
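As a toy illustration (my own sketch in Python, not any real DI library), a container can be as small as this: you register "who implements what", and the container resolves a class's dependencies by inspecting its constructor parameters.

```python
import inspect

class Container:
    def __init__(self):
        self._registry = {}  # abstraction -> concrete class

    def register(self, abstraction, implementation):
        self._registry[abstraction] = implementation

    def resolve(self, abstraction):
        implementation = self._registry.get(abstraction, abstraction)
        # Recursively resolve constructor-injected dependencies by annotation.
        params = inspect.signature(implementation.__init__).parameters
        kwargs = {
            name: self.resolve(p.annotation)
            for name, p in params.items()
            if name != "self" and p.annotation is not inspect.Parameter.empty
        }
        return implementation(**kwargs)

class ICoolService:          # the abstraction
    def something_cool(self):
        raise NotImplementedError

class ClassB(ICoolService):  # the implementation
    def something_cool(self):
        return "cool!"

class ClassA:                # constructor injection, as above
    def __init__(self, svc: ICoolService):
        self.svc = svc

container = Container()
container.register(ICoolService, ClassB)  # "who implements what"
a = container.resolve(ClassA)
print(a.svc.something_cool())  # cool!
```

Real containers add lifetimes, configuration and much more, but the core idea is just this lookup.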


Testing gets huge benefits from IoC, since we can mock dependencies and inject them into the code just by configuring the DI Container a little, telling it that when testing, the implementations are provided by mocks rather than by the real implementations. Therefore, with no extra fluff in the code, we can easily test it.
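A minimal sketch of that testing benefit in Python, using the standard library's unittest.mock (the names echo the ClassA example above, purely illustrative): since ClassA receives its dependency from outside, a test can hand it a mock instead of the real implementation.

```python
from unittest.mock import Mock

class ClassA:
    def __init__(self, svc):
        self.svc = svc  # constructor injection

    def do_something_interesting(self):
        return self.svc.something_cool().upper()

# No real ClassB needed: a mock stands in for the dependency.
mock_service = Mock()
mock_service.something_cool.return_value = "cool!"

a = ClassA(mock_service)  # inject the mock
assert a.do_something_interesting() == "COOL!"
mock_service.something_cool.assert_called_once()
print("test passed")
```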

Stack Browser app

Hi there, it's been a very long time since the last post… I promise to write more often and keep this good ol' blog updated…

Today I’m glad to announce that I’ve just published my first Android app: Stack Browser!


Main Stack Browser screen

What stackoverflow is

Stack Browser is intended for software engineers who regularly use stackoverflow. In case you don't know it, this website is probably the best-known Q&A site about computing. It was created by Joel Spolsky (FogBugz, among others) and Jeff Atwood (Coding Horror). Since it started back in 2008, stackoverflow has become the most popular Q&A website, and almost anything you google related to programming will probably lead you to this site. Due to its success, lots of sibling websites have appeared since then, all based on the idea that someone asks something and the community answers; both questions and answers are voted on by users, so users earn high rankings and their geek ego gets a boost (I'm just kidding). Nowadays all those sites are grouped under the Stack Exchange concept (yeah, let's call it a concept, or an idea).


Bookmarks management.

What Stack Browser is

So, this unofficial app lets users browse (really browse) stackoverflow questions and answers. This is the first release of my first app, so the feature set is still rather small:

  • Browse different categories of latest questions: active, most viewed / answered, featured, new, unanswered, most popular in the week / month
  • Filter the listed results by a search term.
  • Search on the website by title, tags and answers.
  • Read the detail of the questions and answers.
  • Bookmark your favourite questions to read them later.

Detail of a question and its answers.

My first aim with this release was to make it easy for users to browse questions related to a specific term or issue, read the answers, and keep them in a handy place to find later. For now I'm not that interested in showing who wrote what, posting or voting. I was interested in the content: getting to the point.

There are other apps available in Google Play; most of them are not very useful or are just web browsers embedded in Android applications. A couple of them are very nice, but they don't make searching or reading a topic as easy as I'd like to see in a stackoverflow app.

My concern was to provide an app that makes reading a topic easy in a reduced-size screen.



Question expanded.

Tablets are one of my future targets, but first things first, I wanted an app in my smartphone to read stackoverflow questions in an easy way.

During the development of this app I had to figure out solutions to provide an acceptable way of reading questions and answers: letting users expand the question to use all the available space, showing an answer in a different activity when it is touched, and providing different layouts for portrait and landscape orientations.

That said, I'd like to develop more features I have planned, if users like the app and rate it. In the meantime I'll be developing new Android applications, but if the app gets some acceptance from the community, these are the features I'd like to build in the near future:


Landscape layout for a specific question.

  • Tablet specific layouts to fully use all the space available.
  • Translate the app into Spanish, German…
  • Log in and voting features (cool!)
  • List users and their profiles.
  • List tags and information about them.
  • Support for other Stack Exchange sites (cool!)

So, I hope you like this app: download it, rate it and send us feedback via Facebook, Twitter or Google+, rate it in Google Play, or leave a comment just below!

The Professional Services Law and Computer Engineering

This controversial law, which currently exists only as a draft but should be confirmed in the coming months, has stirred up a great deal of debate. In this article I will try to clarify and explain, as far as I know, how the LSP (Ley de Servicios Profesionales, the Professional Services Law) affects us in Computer Engineering.

What the law says

In essence, the law eliminates most professional attributions. This means that an industrial engineer's signature will no longer be needed for a project to build an industrial warehouse, nor an architect's to build a house, nor a bridge engineer's to build a bridge. A few very specific attributions are kept, but the more general ones are removed, on the understanding that you don't need to be an engineer in A to sign off a project of type A; it is enough to know the ins and outs of that project and have some experience.

This may sound outrageous at first (forestry engineers signing off on bridge construction projects?!), but perhaps it isn't so bad, because certain specific attributions are preserved while the rest are eliminated (I don't know which are kept and which removed), and the idea is perhaps a noble one: to reduce the bureaucratic burden and lower the cost of the paperwork required to carry out works. If that is the case, I wish it affected not only engineering and architecture but also lawyers and notaries, right?

How it affects us Computer Engineers

The draft establishes that computing professionals may keep the title of "Engineers", although we are not included in the regulations laid down for the other engineering disciplines. And that is the crux of the matter, friends: we are not grouped with the major, long-established engineering disciplines. Still, it could be worse: other new engineering disciplines have even lost the right to be called engineering at all…

Where does all this come from?

From the "Ley Guerra" of 1986, so called because it was enacted by the then minister Alfonso Guerra. That law regulated the competencies of the various engineering disciplines; Computer Engineering did not yet exist in Spain, the few professionals in the sector were not yet organised, and so they did not raise their hands to be heard.

Back then the rules were put in place, whereas now the aim is the opposite: to deregulate attributions.

What is the real problem?

The real problem lies in the fact that we are not treated like the other engineering disciplines, that we are treated as a "minor" engineering. That is the problem today, since the deregulation of attributions does not, in principle, affect us at all. However, the current situation opens the door to many future problems:

  • If the tables turn in the future and attributions are re-regulated (it has been done before and will probably be done again), those of us left out of the bag will stay out, and potential attributions such as Computer Security, Cryptography or Project Management will be taken out of our hands and controlled by industrial or telecommunications engineers (sigh). Given that we don't go off building bridges, you can imagine what it means to leave tasks of such importance in the hands of unqualified personnel.
  • Being belittled in one way or another will push many future professionals towards other, more prestigious studies; our sector will suffer a brain drain, and since technology is possibly the field with the brightest future in Spain and in the world, that would be an irreparable loss for the country and for the progress of our global society.
  • Fair treatment, respect, equality… those basic things every individual is supposed to be granted, and which sometimes you have to appeal to just to be heard…

Who is the villain here?

This law began brewing under the previous Socialist government and is being carried on by the current Popular Party government. As I noted before, the law itself is neither good nor bad; time will tell, and the matter is more complex than it seems. The point is that we are being left out, and Spain cannot turn its back on more than 150,000 leading professionals who help the country move forward. To give you an idea, mining engineers, who are indeed in the group of the Chosen, number fewer than 5,000.

In fact there are Socialist, UPyD and Popular Party politicians who support us, understand the situation and even recognise the humiliation the current draft represents for us.

The real problem lies with the technocrats, the power lobbies entrenched in the key positions of the Administration who will not budge. They have the influence to advise on and decide the final text, and they don't want anyone taking a slice of their juicy pie. These lobbies are made up of industrial, civil and telecommunications engineers… who say these computer guys have no claim here, that they are not engineers and that anyone could do their job.

Given that the work of Computer Engineering is strongly governed by engineering procedures inherited and adapted from other engineering disciplines, rather than by scientific or artistic procedures, we firmly believe (and our degree curricula stipulate as much) that we are as much engineers as anyone else.

What do we want, then?

To be treated like the other engineering disciplines. No more, and no less. We want to be wherever the rest of the engineering disciplines are.

What can we do?

Spread the message. Make every Computer Engineering professional aware that this is a problem that affects us all; it won't take away our daily bread, but we could lose a KEY opportunity in future decisions that concern our sector and that belong to us out of plain and simple justice.

Above us are the professional associations of Computer Engineering ("Colegios de Informática") and various associations and pressure groups trying to talk to politicians, spread the message and gather support; the main barrier, however, lies in the lobbies I mentioned earlier.


In the coming months the text will stop being a draft and become law. The Socialist group tried to pass it in the previous legislature, but backed out at the last minute given the weak consensus and our protests in the streets, leaving it for the next Government and washing their hands of the mess. Now it is time to decide, and now is when the stakes are high.

Howto: Execute a command and use the output in Python

Let’s do something very simple to show a couple of things:

  • How to script Plastic SCM commands;
  • How to execute a program in Python and do something useful with the output.

Say that you want to check if there’s been some activity in a specific Plastic SCM server in a period of time. To do that, a good idea is to check if new changesets have been created. But let’s suppose that you have hundreds of repositories, so you really don’t want to check ‘em all one by one, do you?

Right, so let’s write some Python code to achieve this:

from subprocess import Popen, PIPE

# List every repository in the server; --format={1} keeps just the names.
proc = Popen(["cm", "lrep", "--format={1}"],
    stdout=PIPE, stderr=PIPE)

repositories = proc.stdout.readlines()

for repository in repositories:
    repository = repository.replace('\n', '')
    print "Repository: %r" % repository
    # Find the changesets created since 2013-01-01 in this repository.
    proc = Popen(
        ["cm", "find",
         "changesets where date > '01/01/2013' on repository '" +
         repository + "@localhost:8084'"],
        stdout=PIPE, stderr=PIPE)
    print proc.stdout.read()
    print "------------------------------"

First, we import what we need to execute subprocesses and redirect output and errors; subprocess is a quite common library. Then we create a child process to execute the command "cm lrep" to get the list of repositories in that server. The format string just cuts the output so that we get only the names of the repositories.

Note that we are redirecting the standard output and the standard error to a PIPE, and then we read the output of the command. This is one of the beauties of Python: with one single line you get the output parsed into a list of lines :-).

Now we iterate over the list of repositories we've just obtained, remove the EOL character, and run the appropriate query for new changesets since the start of the year (the date could have been calculated from the current date, or passed as an argument to the script).

We execute the cm find changesets command, get the output, and just print it to the console.
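For what it's worth, in modern Python 3 the same pattern is usually written with subprocess.run, which waits for the child process and captures its output in one call. The command below is a generic echo stand-in, since the cm client may not be installed on your machine:

```python
import subprocess

result = subprocess.run(
    ["echo", "repo1\nrepo2"],  # stand-in for: ["cm", "lrep", "--format={1}"]
    capture_output=True,       # same effect as stdout=PIPE, stderr=PIPE
    text=True,                 # decode bytes to str automatically
    check=True,                # raise CalledProcessError if the command fails
)

# splitlines() replaces the readlines() / replace('\n', '') dance above.
repositories = result.stdout.splitlines()
print(repositories)  # ['repo1', 'repo2']
```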

A short story

The year is 2072. Like every morning, Fred woke up at 8 a.m. He had had a restless night, probably because it was spring; the weather was quite unstable and he dreamed a lot. He dressed quickly, had a coffee with a couple of biscuits, grabbed his All-In-One and went out, on his way to work.

An All-In-One was a device that served as mobile phone, radio, audio player, camera, calendar, event manager, social network synchroniser and a thousand other useless things. More or less like today, but smaller and without a screen, since it worked through a kind of hologram that you controlled with your fingers, tracing certain movements in the air.

Fred worked as head of the IT department at MeetLove, the Meetic of the day. Its creator, John Marrison, had patented an innovative, groundbreaking idea ten years earlier: a computer system that, based on very precise and sophisticated sociological parameters of each client, searched for their perfect partner, thanks to an extremely complex algorithm running on a server of the highest capacity, whose care was now Fred's responsibility.

By 2072 the pace of life had caused more divorces and breakups than ever before in history; people were more independent, but also more unhappy. People were utterly unwilling to put up with another person's quirks or habits, and at the slightest argument one of the two would grab the door and leave, and start all over again. Many men and women died alone in private care homes, paid for with a great deal of money, and in many cases childless. John's idea, which he cleverly decided to patent, covered precisely a need that many longed for, or dreamed of when they heard the old folks talk about it: love, fidelity, respect, the complicity of a couple.

Since then, thanks to John's system, the divorce rate over the last ten years had fallen to just 5% of all new couples, or at least that's what they claimed. "Why waste time with people who aren't worth it? Meet your other half without suffering heartbreak and breakups!" Such was the advertising campaign that John himself fronted on the billboards covering the cities' buildings.

And the idea caught on, boy did it catch on. John's profits ran into the millions and people were happy. You paid, you went through the procedure, and you found your perfect partner. People went to their first date with the conviction of someone who knows they have already won. Everything was easier. Nothing could fail; the system was infallible. Many tried to decipher the algorithm, without success: the secrecy protecting it was absolute, and the server running the search was guarded by the strictest security measures.

That morning, Fred had had a nightmare. Something was wrong. His predecessor in the job, John Marrison himself in person, had told him that the system was so stable that his job would be a piece of cake; it would barely give him any work and he would almost never have to check that the system was running. In the dream, the system crashed, the server stopped working and everything was utter chaos; he was fired on the spot and everyone laughed at him.

That is why that morning he dressed quickly and almost ran to work, without taking public transport. When he arrived at his desk, he had to check his notes to remember how to access the central system, the most important one, the one that ran the matching algorithm. He authenticated himself through the system's various protection levels until he reached the core. Then he turned pale when he discovered that the core was down. After several minutes of desperate searching, he understood that the algorithm had never existed; the machine had been running on empty for the last ten years.

Fred understood his boss's play, and smiled with an ironic grimace. He looked at the photo hanging in his office; in it, John looked radiant, arms crossed, in a pose both confident and defiant. "What a bastard," he thought. He got up and made himself a good cup of coffee.