PureMVC, Pipes and taking Paperboy to the next level

The previous iteration focused on supporting new data sources. From the beginning, we tried to keep a clean architecture and to leverage design patterns to improve maintainability. Nevertheless, when we tried to tackle the challenge of supporting new data sources, one shortcoming cried out from the code base: a lack of isolation.

For the Flex part of the application (that is, 75% of the code base), we use PureMVC as a micro-architecture framework. It is heavily based on GoF design patterns: observer, proxy, mediator, command, and the MVC separation of concerns. It lets you organize your code in a loosely coupled way with a minimal boilerplate setup. But it had one drawback: it was based on a singleton facade. The two side effects of that:

1) It was hard to test, because you couldn't get a fresh instance for each test.

2) Even if the separation of concerns between Model / View / Controller (let's call it "vertical separation") was great, as the code base grew, the different parts of the application almost inevitably developed too much intimacy with one another (let's call that a lack of "horizontal separation"). In our example of generalizing data sources, the model building engine knew too much about Google Spreadsheet, our first data source.
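The first drawback is easy to show concretely. Here is a minimal TypeScript sketch (hypothetical names, not the real PureMVC API) of why a singleton facade leaks state from one test to the next:

```typescript
// Hypothetical minimal facade, standing in for a singleton-based MVC facade.
class Facade {
  private static instance: Facade | null = null;
  private proxies = new Map<string, unknown>();

  static getInstance(): Facade {
    if (Facade.instance === null) Facade.instance = new Facade();
    return Facade.instance;
  }

  registerProxy(name: string, proxy: unknown): void {
    this.proxies.set(name, proxy);
  }

  hasProxy(name: string): boolean {
    return this.proxies.has(name);
  }
}

// "Test 1" registers a proxy on the one and only facade...
Facade.getInstance().registerProxy("dataSourceProxy", {});

// ..."Test 2", run later in the same process, sees the leftover state:
console.log(Facade.getInstance().hasProxy("dataSourceProxy"));
```

Because `getInstance()` always returns the same object, there is no way to start a test from a blank slate without adding explicit reset hooks.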

Enter the MultiCore version of PureMVC and its Pipes utility. The MultiCore version of the PureMVC framework allows you to have several MVC stacks, called cores, in your application; it is no longer singleton based. This lets you really isolate the different functions of your application. In our case study about data sources, we used MultiCore as a kind of giant strategy pattern: we encapsulated all the knowledge of a data source (including UI such as forms) in a core, and we can swap between cores at runtime. The shell application knows nothing of a data source except its name and a set of messages it can send or receive. The same is true for the data source core.
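The strategy idea can be sketched in TypeScript (all names here are hypothetical, not actual PureMVC classes): the shell addresses a data source core only through its name and a message contract, never through a concrete type.

```typescript
// Hypothetical contract: the only thing the shell knows about a data source.
interface DataSourceCore {
  readonly name: string;
  handleMessage(message: { action: string; body?: unknown }): string;
}

// Two interchangeable strategies; their internals are invisible to the shell.
class GoogleSpreadsheetCore implements DataSourceCore {
  readonly name = "googleSpreadsheet";
  handleMessage(message: { action: string }): string {
    return `googleSpreadsheet handled ${message.action}`;
  }
}

class CsvCore implements DataSourceCore {
  readonly name = "csv";
  handleMessage(message: { action: string }): string {
    return `csv handled ${message.action}`;
  }
}

// The shell swaps strategies at runtime, by name only.
class Shell {
  private cores = new Map<string, DataSourceCore>();

  register(core: DataSourceCore): void {
    this.cores.set(core.name, core);
  }

  send(name: string, action: string): string {
    const core = this.cores.get(name);
    if (!core) throw new Error(`unknown data source: ${name}`);
    return core.handleMessage({ action });
  }
}
```

Adding a data source then means registering one more implementation of the contract; nothing in the shell changes.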

The sequence looks like this:

  • As soon as the shell core needs something from a data source, it sends a notification with the action it wants to perform.
  • This notification is caught by a command that checks whether the data source core still needs to be loaded.
  • Once the core is loaded, the command passes the notification to a junction mediator, which sends a message to the data source core (using the Pipes utility).
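The steps above can be sketched as follows (a simplified TypeScript model in which a plain queue stands in for a PureMVC pipe; all names are hypothetical):

```typescript
type Notification = { name: string; body?: unknown };

// Stand-in for a pipe: the data source core would drain this queue.
class Pipe {
  messages: Notification[] = [];
  write(message: Notification): void {
    this.messages.push(message);
  }
}

// The junction mediator translates shell notifications into pipe messages.
class JunctionMediator {
  constructor(private pipe: Pipe) {}
  forward(note: Notification): void {
    this.pipe.write(note);
  }
}

// The command that catches the shell's notification.
class LoadDataSourceCommand {
  private loaded = new Map<string, JunctionMediator>();
  constructor(private loadCore: (name: string) => JunctionMediator) {}

  execute(sourceName: string, note: Notification): void {
    // Step 2: check whether the data source core is already loaded.
    let mediator = this.loaded.get(sourceName);
    if (!mediator) {
      mediator = this.loadCore(sourceName); // load on demand, once
      this.loaded.set(sourceName, mediator);
    }
    // Step 3: hand the notification over to the junction mediator.
    mediator.forward(note);
  }
}
```

The point of the sketch is that the loading check sits in one command: callers just fire notifications, and the core is fetched at most once.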

This means our data sources’ cores are completely dynamic: they are only loaded when needed, which means faster initialization at startup. The long-term aim is to allow you, our users, to define data source connectors the same way we do. That is, you register a new data source in Paperboy bime, point to a URL where your module is hosted, and that's it: you have a new data source supported in Paperboy bime. It is like the old world of database connectivity (JDBC, ODBC, etc.) but with the power of the new web world and distributed systems: the source file is hosted on only one computer, it is always up to date, and no installation is needed.
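A connector registry along these lines could look like this minimal sketch (hypothetical names and URL; in the real application, loading would fetch a Flex module over HTTP, which we only simulate here):

```typescript
// Hypothetical registry: a data source connector is just a name plus the
// URL where its module is hosted. Nothing is fetched until first use.
class ConnectorRegistry {
  private urls = new Map<string, string>();
  private fetched: string[] = [];

  register(name: string, url: string): void {
    this.urls.set(name, url);
  }

  // In the real application this would download and initialize a module;
  // here we only record which URL would have been fetched.
  load(name: string): string {
    const url = this.urls.get(name);
    if (!url) throw new Error(`no connector registered for ${name}`);
    this.fetched.push(url);
    return url;
  }

  fetchedUrls(): string[] {
    return [...this.fetched];
  }
}
```

Because the registry stores only a name-to-URL mapping, registering a new connector costs nothing at startup; the module is resolved from its host the moment a user first picks that data source.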

We feel really good about this plug-in architecture, because supporting a new data source is now really easy, can be done in parallel, won't break anything, and won’t increase startup time. Moreover, we have laid the foundation to let our users define their own custom data sources.