Crowdfunding is against the point of open source

Crowdfunding has expanded in recent years to scales unimagined before. Kickstarter – currently the largest crowdfunding platform – recently announced that over 5.7 million people have pledged north of 1 billion US dollars to campaigns promoted via the website.

Although the success stories of projects realised through the collective effort of the crowd have altered several industries, particularly game development and publishing, crowdfunding has arguably reached a saturation point: the bar has been raised so high and competition has become so fierce that the chances of success for smaller projects are falling by the hour.

One field that is enjoying increasing attention in the crowdfunding world is the production of open source software and hardware. At first glance the two seem like a perfect match; after all, crowdfunding originated as a way for creators to gather the resources required for projects that would benefit the communities they were part of. However, with the rise in campaign targets and project polish, the altruistic side of crowd-funded open source development is giving way to the business models governing the field. Creators are investing increasing amounts in advertising and other side costs, while ‘open source’ is becoming little more than a buzzword for attracting a specific target audience and selling it a plain old consumer product.

The decay of truly independent small-scale campaigns

Changing standards

Take a quick glimpse at Kickstarter’s homepage and you will immediately notice the beautiful photos and cover art of the featured campaigns – the top crowdfunding efforts handpicked by the website’s staff. These are projects that have often taken months to envision and plan, crafted with attention by professionals and properly backed up by extensive research.

Now compare this to what Linus Torvalds shared with his fellow Usenet users prior to releasing the source code of Linux – probably the most famous open source software today:

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready.

… It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that’s all I have :-(.”

—Linus Torvalds

One of the core values of open source appears extinct on the front page of Kickstarter – the joy of discovering and creating an elegant solution with no other incentive than satisfying your curiosity or reacting to the frustration caused by some existing product.

Hidden costs

The Wikipedia article on crowdfunding does not contain a single mention of the word ‘costs’, nor any explanation of the potential overheads of running a successfully funded campaign.

My recent experience helping a friend create his Kickstarter campaign – which, at the time of writing, is halfway through its 30 days of funding – shows that 5 months of careful engineering and considerable investment, 2 days of video shooting with an expensive DSLR camera and numerous revisions of the campaign’s marketing material may not be quite enough to make it. He recently decided to invest further in advertising, as none of the technology blogs he had contacted were willing to publish the news about his gadget without first accepting a generous fee for the service.

Not to mention the roughly 10% subtracted from the total earnings in order to aid the noble cause undertaken by Kickstarter’s staff.

To put this in a form easier for computer scientists to understand:

Crowdfunding == Business;

Surprisingly low success rates

Last but not least, let’s consider the entrance barrier for the club of successfully backed crowdfunded projects. This study on the topic states that three quarters of gaming crowdfunding campaigns fail to meet their backing target, while recently just 7.5% of open source projects have been successful.

Last time I checked, the success rate of my GitHub open source projects was close to 100%, with a staggering number of active contributors: one. A humble victory, but a victory nonetheless, compared to a dead Kickstarter project.

The problem with large crowdfunding initiatives

Ghost Blogging

Ghost is a NodeJS-based blogging platform envisioned by John O’Nolan, a former WordPress (a.k.a. the open source blogging platform) developer, which aims to get blog writing back to its basics – that is: beautiful typefaces, contemporary design and Markdown editing.

Naturally, the platform is intended for the free speakers among us by being completely open source – with the tiny note that the backers’ funds contribute towards a hosted service that will provide Ghost in all of its beauty for a tiny annual fee, alongside the development of the open source project.

In my humble opinion – this is called business, not open-sourcing.

Fast-forward 9 months after the successful funding of Ghost, which exceeded its target by nearly 800%, and the HEAD of the project’s GitHub repository sits at version 0.4.

During a recent attempt to develop a simple Ghost theme that separates blog posts based on their tag (there are no categories as of yet) into two content pages – posts and gallery – I discovered that the Handlebars template engine behind Ghost supports only built-in helpers such as post.title() and post.content(). In other words, the simplest possible extension of the system would require changes in core, trashing the simple upgradeability of my website forever.
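What the theme needed was, in essence, nothing more than a helper that filters posts by tag – something along the lines of this plain-JavaScript sketch (the function name and post structure are hypothetical, purely for illustration; Ghost exposed no hook for registering anything like it at the time):

```javascript
// Hypothetical helper a theme would need: select only the posts
// carrying a given tag, so they can be rendered on a separate page.
function postsWithTag(posts, tag) {
  return posts.filter(function (post) {
    return post.tags.indexOf(tag) !== -1;
  });
}

// Illustrative data in the shape a blogging platform might provide.
var posts = [
  { title: 'Hello world', tags: ['posts'] },
  { title: 'Holiday photos', tags: ['gallery'] },
  { title: 'On crowdfunding', tags: ['posts'] }
];

// Selecting the 'gallery' posts leaves a single entry: 'Holiday photos'.
var gallery = postsWithTag(posts, 'gallery');
```

A dozen lines of logic – yet without a way to plug them into the template engine, they are of no use to a theme author.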

In the meantime, John O’Nolan’s team has been busy rebranding the Ghost Platform-as-a-Service and advertising its growing feature set.

As I may have previously mentioned:

Crowdfunding == Business;


Since at this point one could easily accuse me of ranting against the single black sheep of the flock, let me tell you the similar story of an open source hardware project I recently backed and received in my mailbox a month ago – DigiX – “The ultimate 100% Arduino Due compatible dev board with Wifi and Mesh networking, Audio, USB OTG, microSD, and 99 i/o pins!” Did I mention that “[t]he DigiX – like everything we produce at Digistump, is Open Source Hardware.”?

My initial excitement after unpacking the beautiful piece of hardware and plugging it into my PC was quickly suppressed once I discovered how poor the documentation for the device was. I realised I had been sold a fairly expensive device advertised as the better alternative to the widely popular Arduino – which, you guessed it, is properly open source and funded by the sales of a truly original hardware platform that has been tested and improved by many, proving itself an excellent product worth its price. Yet my new toy was no match for the Arduino in terms of support, documentation, absence of bugs and active development effort.

An alternative? (for lack of a better conclusion)

At this point you might be wondering whether I am arguing that developers of open source hardware and software do not deserve to be rewarded for their hard work. Or perhaps that I dream of some romantic ideal of open source, where every effort is fully repaid simply by earning the community’s recognition and respect.

This is certainly not the case: there are genuinely innovative initiatives that have expanded into commercial products, but at their core continue to support independent creators on a budget by giving them full power without hidden costs.

I believe the preceding sections have made the difference between those projects and crowdfunded ones quite clear.

Response to “BDD: TDD done right”

The author of BDD: TDD done right gives an excellent overview of Behaviour Driven Development and a clear motivation for why BDD can be seen as the logical continuation of – and indeed the better way to do – Test Driven Development.

Furthermore, the author gives an interesting topic for discussion, drawing a parallel between BDD and Expert Systems.

In this response I would like to extend the discussion and clarify certain aspects of BDD, as well as challenge the author’s claim that Behaviour Driven Development is, at its core, the same as Test Driven Development.

Benefits of using BDD

Behaviour Driven Development really comes into its own when the traditional model of splitting development tasks into independent features is replaced with the implementation of user stories. This model is particularly common in mobile and web application development, where a developer takes ownership of a flow of business logic and implements it throughout the entire stack (by modifying database models, controller logic and views).

This approach makes good sense, since it encourages organic growth of the entire system (i.e. it is impossible to end up in a situation where the back-end API has expanded immensely, while none of its services are available via the front-end). Furthermore, by enforcing a complete user interaction as the smallest development chunk in a project, it is easier to prioritise feature requests and adjust milestones based on user feedback.

All of that ties in with the idea that BDD does not rely on specific tooling; rather, it is a practice that can be adopted even in legacy projects by altering the naming conventions of unit tests and the minimum requirements for a pull request. And while this flexibility can clearly improve the development workflow, there are some subtle difficulties that need to be addressed before jumping on the BDD bandwagon.

Dangers of using BDD

In his introduction to BDD, Dan North stated that the idea behind the practice was to address recurring issues in the teaching of Test Driven Development. However, successfully implementing BDD requires knowledge of a greater domain than TDD does. A thorough understanding of the business implications of a given feature, knowledge of the system architecture from front-end to back-end, and the potential need to deal with numerous programming languages and interfacing components are just a few of the added overheads when development is driven by behaviour.

All of this makes it difficult to recommend BDD to novice programmers who are not yet familiar with a wide range of TDD topics.

BDD viewed as an Expert System

Apart from being a development practice better suited to expert software developers, BDD may share some characteristics with Expert Systems, as discussed by the author of BDD: TDD done right.

While there are some fundamental differences between a Knowledge-based System (the overarching domain of Expert Systems) and a BDD framework, both encourage the principle of the “Five Whys” when reasoning about a design solution or inspecting the output of a complex logical inference. The Five Whys technique is fundamental to BDD as described by the Agile Alliance, and it can furthermore be viewed as a pragmatic approach to nesting rules and reasoning about inferences in a Knowledge-based System of higher complexity.


Clearly BDD builds on the foundations of TDD and can be viewed as its logical extension in the right project setting. However, BDD is a set of recommendations and good practices rather than a solid approach, complete with tooling, that could be handed to a novice developer.


The New Web: The Tools That Will Change Software Development

When I think back to 2006, the year I added “Freelance Web Developer” to my resume, memories of a simpler and somehow more welcoming Internet come to mind.

In 2006 one of the most critical design decisions that a web developer had to make was: “Should we target 1024×768 or 1280×800 displays?” The Mobile Web was mainly mentioned in newspaper articles telling the stories of unfortunate men who were to pay hundreds of pounds to their mobile carriers for accidentally pressing a mystical button with a tiny globe icon on their new Nokia.

The iPhone did not exist, neither did Android, and it was still OK to create an entire website using Adobe Flash.

Don’t get me wrong though – this is not some nostalgic note on times past, nor am I trying to argue that we were happier designing a button with rounded corners out of 9 PNG images. Internet Explorer dominated the Internet with a market share of well over 60%, and implementing a two-column layout that looked identical in IE6 and IE7 involved more lines of CSS hacks than lines of style definitions.

Fast-forward 8 years and we have transitioned into an entirely new epoch of the Web. Web design is responsive – it reacts “intelligently” to the user’s device and makes no assumptions about the medium it is reproduced on. Websites have turned into web apps, and users’ browsers do the majority of the heavy lifting instead of the server. New tools have emerged that manage the entire lifecycle of a web application, from dependency management, through provisioning and testing, all the way to user profiling and the immediate pushing of hot-fixes.

All of that has allowed software developers to utilize web technologies to build amazingly complex applications using just HTML, CSS and JavaScript. In this article I will briefly introduce some of the exciting new tools available to developers and share my humble predictions about the future of the web and software engineering over the next couple of years.

Platform as a Service

Eight years after the launch of Amazon Web Services [AWS], the Infrastructure as a Service model is hardly a new concept for anyone involved in software engineering. Platform as a Service [PaaS] is the logical continuation of the AWS model, made a whole lot easier. Heroku and OpenShift by Red Hat are two excellent examples of the powerful change happening in the field. These providers allow developers to deploy cloud applications in a matter of minutes, giving them elastic scalability, load balancing and continuous deployment. PaaS providers expose APIs and command line tools that make releasing a new application version as easy as running git push paas_provider.

I have been using Heroku’s free tier for well over a year now and it has allowed me to ship application prototypes and publish hot-fixes in unparalleled time. Its big disadvantage is the price of a production deployment compared to Amazon Web Services.

Arguably, the future of PaaS lies in the further development of technologies such as Docker, which essentially allows a team to host its own PaaS on any server environment it chooses.

JavaScript for clients and servers

NodeJS is one of the server-side technologies supported by all major PaaS providers and the rate of its expansion certainly hints at a bright future for JavaScript on servers.

Node is a platform built on Chrome’s V8 JavaScript engine. It has been developed specifically for distributed real-time applications that operate on a lot of data. The power of NodeJS lies in its simplicity and low-level applicability – it is equally suited to creating game servers, chat applications or content management systems. In fact, a member of the WordPress dev team recently raised nearly 8 times his goal in a Kickstarter campaign for a blogging platform based on NodeJS.

The reason I am an advocate of JavaScript on the server is the reduced tension between front-end and back-end development. It is ideal for small projects, where the developer does not need to switch languages and browse multiple documentation sources. But the real power of NodeJS emerges in large teams where, provided appropriate coding standards are followed, a developer is no longer responsible for a single side of the coin, but instead works on an entire flow of business logic that may span both client and server.

A notable companion of Node is ExpressJS, a web application framework that builds on top of Node, adding services such as routing, user sessions, hooks into database engines and many other features essential to building dynamic websites.
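To illustrate what ‘routing’ buys you, here is a toy router in plain JavaScript – not Express’s actual API, just a sketch of the idea of mapping an HTTP method and path to a handler function:

```javascript
// A minimal routing table: (HTTP method, path) → handler function.
function createRouter() {
  var routes = {};
  return {
    get: function (path, handler) { routes['GET ' + path] = handler; },
    dispatch: function (method, path) {
      var handler = routes[method + ' ' + path];
      return handler ? handler() : '404 Not Found';
    }
  };
}

var app = createRouter();
app.get('/', function () { return 'Hello, world!'; });
app.get('/about', function () { return 'About this blog'; });
```

Express handles the same registration with a similar app.get(path, handler) call, but on top of that it parses URLs and request bodies, manages sessions and plugs into template engines, so you never write the dispatch table yourself.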

Client-side applications

Ever wondered how Google made applications such as Gmail respond to user actions in real time, without refreshing your browser? Here is a hint: they did not use PHP.

In fact, with the advancement of JavaScript browser engines and the adoption of techniques such as AJAX, developers have been able to offload the task of rendering views from the server onto the user’s browser. The event-driven nature of JavaScript, on the other hand, allows UI elements to respond and views to be swapped as soon as the underlying data model changes.

There are a number of technologies that aid the development of such applications, and the main difference between them lies in the number of assumptions the given framework makes on behalf of the developer.

Backbone is arguably the most powerful and flexible client-side application framework to date. It leaves the developer the duty of setting up data binding between interface elements and data models, as well as choosing engines for templating and routing. It is ideal for cases where performance is the highest priority and the developers are ready to make the majority of the design decisions themselves.
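The kind of wiring Backbone leaves to the developer can be sketched in a few lines of plain JavaScript: a hypothetical observable model that notifies a bound ‘view’ whenever an attribute changes (this illustrates the concept only – the names are invented and this is not Backbone’s actual API):

```javascript
// A hypothetical observable model: views subscribe to changes
// and re-render as soon as an attribute is updated.
function createModel(attributes) {
  var listeners = [];
  return {
    get: function (key) { return attributes[key]; },
    set: function (key, value) {
      attributes[key] = value;
      listeners.forEach(function (fn) { fn(key, value); }); // notify bound views
    },
    onChange: function (fn) { listeners.push(fn); }
  };
}

var post = createModel({ title: 'Draft' });
var rendered = '';
post.onChange(function (key, value) { rendered = '<h1>' + value + '</h1>'; });
post.set('title', 'The New Web');
// rendered now holds the freshly re-drawn view.
```

Writing this glue by hand is exactly the flexibility – and the burden – that Backbone offers over more opinionated frameworks.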

Angular, on the other hand, is Google’s own interpretation of the future of the web. Apparently it was born when a JavaScript engineer at the Silicon Valley giant decided he could cut 6 months of development down to a few weeks if he took the time to complete his free-time project. Now Angular powers a number of Google applications and is supported by an overwhelming number of open source projects.

Certainly, technologies such as Angular, Backbone or any of the frameworks “benchmarked” at TodoMVC are overkill for simple websites with a few pages. But for large-scale browser applications – something that is becoming increasingly common – where speed makes the difference between keeping and losing a client, these technologies are simply a must.

A few final words

The web has entered a new era, and I find it increasingly surprising when software developers using more “traditional” languages and tools label development for the web as a job for beginners. The web has truly allowed us to be more efficient, to develop faster and to have a bigger impact on our users. Some have even gone as far as to state that JavaScript is the new Assembly.

I believe that the time has come for computer scientists to embrace the change and start making more beautiful, accessible and future-proof applications using the new Web.