“In-house” open source – code reuse can be your friend


A big problem nowadays when dealing with large-scale software projects is deciding whether to reuse existing code or to design a fresh chunk of code that would do approximately the same thing. At first glance the decision should be clear and code reuse should be chosen. However, there are strong arguments that back up both choices.

In this article we will illustrate some of the drawbacks of both approaches and propose a trade-off solution that would improve code quality and time management within a large software company. We will then present the potential improvements alongside the challenges that such a solution would pose.


A large project is usually split into smaller, more manageable chunks, which can be developed separately and integrated afterwards, with specific requirements outlined at the beginning. When developing a sub-task of the project, the algorithms/methods that compose that sub-task are usually not new concepts, but rather a rearrangement of them in order to produce different results.

Given this, when looking at the two options available (reuse or design from scratch), the first one would seem the better solution, as it should be less time-consuming and less intellectual effort would be wasted on things that are already developed. However, the code that is to be reused was usually written by another developer, and this represents one of the challenges, as we will see next.

Developers are very different when it comes to designing and implementing code. It is usually harder to read another person's code and understand the full extent of its functionality than to design new code. This is why code reuse is often regarded as a more difficult and messy approach, and the second choice (designing new code) is mostly regarded as preferable.

A solution to this problem would be software maintenance and well-documented code. However, given the low probability that a developer will ever reuse his own code, little effort is put into making the structure and line of thought clear. In most cases the code will never be reused, and at best it will only serve for mild inspirational purposes.

We will now present a comparison between the two approaches together with their advantages and drawbacks in order to get a better idea about what can be improved.

The two Rs: Reusing vs. Redesigning

When considering a task it is very common to divide it into small “atomic” chunks and deal with them separately. Most of the software currently being developed has a lot of these “atomic” chunks in common. Reuse of existing code would save a lot of time when dealing with familiar or already developed parts. However, the code that is to be reused is usually not in a very friendly form and has to be refactored and adapted to the current set-up.

On the other hand, redesigning everything from scratch is viewed as easier and more convenient, given the level of concentration needed to identify potential flaws and inconsistencies in the existing code when trying to integrate it with the rest of the project. A new, clean version of the required code gives the developer a better overview of it and can work towards a better understanding of the underlying structure and hidden advantages. The downside of this is, as mentioned before, wasted time: whenever redundant work is performed, time is considered to be wasted.

A mix between the two would significantly improve code quality and time management within a large software company.

In-house open source – better code quality

First we will define the concept of “in-house” open source and then proceed to describe the underlying aspects and additional measures that would have to be implemented in order to produce quality code and encourage reuse.

This type of open source refers to source code that is made available within a large software company. We can look at it as an intranet: a “public” code pool that is nevertheless private to the company. The idea is to focus on a smaller group of developers who can be motivated to create code that not only satisfies the project-wise requirements but is also “friendly” enough to be reused.

A database would have to be set up in order to hold all these reusable chunks. We can look at it as a virtual code library where each “atomic” chunk falls into a category and/or has specific tags that make it identifiable with a given task.
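As a rough illustration, such a virtual code library could be organised as a tag-indexed registry. The sketch below is a minimal Python example; the class, snippet names and tags are all hypothetical and not part of any system described in this article:

```python
from collections import defaultdict

class CodeLibrary:
    """Minimal tag-indexed registry of reusable code chunks (illustrative only)."""

    def __init__(self):
        self.snippets = {}              # name -> source text
        self.by_tag = defaultdict(set)  # tag  -> names of matching snippets

    def add(self, name, source, tags):
        """Register a reusable chunk under one or more searchable tags."""
        self.snippets[name] = source
        for tag in tags:
            self.by_tag[tag.lower()].add(name)

    def search(self, *tags):
        """Return the names of snippets matching ALL of the given tags."""
        sets = [self.by_tag[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()

lib = CodeLibrary()
lib.add("quicksort", "def quicksort(xs): ...", tags=["sorting", "algorithm"])
lib.add("csv_loader", "def load_csv(path): ...", tags=["io", "parsing"])
print(lib.search("sorting", "algorithm"))  # {'quicksort'}
```

A real deployment would of course use a proper database with full-text search, but the key idea is the same: each chunk must be findable from the vocabulary of the task at hand, not from the project it was originally written for.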

Also, in order to motivate the people involved in creation and reuse, incentives must be provided. We will consider the initialization of such a system and its potential evolution.

In the beginning all code will have to be new, in order to secure its quality. The chunks that are identified as general and reusable would be well documented and structured. At this stage, extra work is required from the developers, as they have to perform two tasks instead of one. However, once the code library starts to be populated, the advantages of such an approach would begin to show. Assuming the code library has reached a considerable size, developers dealing with new projects would now have to work less than before, given the reusability of existing code.

In order to set up such a system, some means of motivation are required. If the extra work had no advantages, the drive to work towards a common goal would disappear. An internal referencing system would solve the acknowledgement issue, while a bonus-driven system would address the incentive problem.
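In its simplest form, the internal referencing system could count how often each developer's library chunks are reused and convert that count into bonuses. A minimal sketch, with an entirely hypothetical reuse log and bonus figure:

```python
from collections import Counter

# Hypothetical reuse log: one (author, snippet) record is appended each time
# a chunk from the library is reused in a new project.
reuse_log = [
    ("alice", "quicksort"),
    ("bob", "csv_loader"),
    ("alice", "quicksort"),
    ("alice", "binary_search"),
]

# Count "citations" per author.
citations = Counter(author for author, _ in reuse_log)

BONUS_PER_CITATION = 50  # illustrative figure, not from the article
bonuses = {author: n * BONUS_PER_CITATION for author, n in citations.items()}
print(bonuses)  # {'alice': 150, 'bob': 50}
```

The point is not the arithmetic but the feedback loop: reusable code is measurably rewarded, so producing it stops being unpaid extra work.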

Advantages and challenges of the approach

To sum up the discussion presented above, we will identify some of the aspects that the proposed approach aims to address and improve, together with potential challenges that it may encounter.

  • the initial development phase is the most important and the most difficult: new code is being created that has to be both functional for the current task and reusable. The second property can prove to be the most challenging, as the code must be easy to read and understand from an objective point of view (that of another developer)
  • once the library is set up, it will serve the developers with good-quality reusable code and thus spare them a lot of time that would otherwise have been spent on redundant development.
  • the efficiency of the approach is directly proportional to the passage of time and the size of the company.
  • more time in the long run would allow developers to focus on the key aspects of an idea/algorithm and deliver better-quality code
  • bonuses awarded to developers who are “cited” mean that they are motivated to produce even more reusable content
  • when considering the company’s success there are two possible leads: either keep the library private and thus increase efficiency and delivery rate, or make the library available for purchase with the trade-off that the competitive advantage is lost.


Even though code reuse can be viewed as an improbable action when dealing with new projects, given the right circumstances and set-up it can prove to be a very powerful tool and count towards improving the efficiency of the whole company that promotes it.

Response article – “Conservatism has no place in project management”

This is a response article to “Conservatism has no place in project management” by s0952140.


In the above-mentioned article the author describes some of the drawbacks that can arise from project managers being sceptical about implementing/integrating new technologies in their projects. The focus is mainly on the benefits of updating the tools and support systems of the project whenever such upgrades become available.

However, there are more factors that have to be taken into consideration when arguing about changes in a project's structure. This response aims to identify some of the reasons why change is not that easy to make and the circumstances under which such updates are recommended.

Implications and analysis of change

First we will look at the planning and design part of a project in order to illustrate the factors involved in the process and how change would affect them. Because the focus of the discussion is on large-scale projects, we will assume that the time needed to finish such a project is also quite long and that changes would have to be performed during the development process.

The key elements of a project are, without argument, the developers and the methods and technologies they use. Their expertise and abilities are the main drivers of the development process. This is why, when considering change within a project, we have to extrapolate this change to include the consequences it produces. When comparing technologies, the manager should look not only at the final results, but at all the ramifications that the upgrade/update would imply.

“Project managers (or in worst case: their bosses) often over estimate the cost of learning new technologies and underestimate the benefit of that.”

When looking at possible updates there are two possible situations: either the upgrade is pitched by a developer with extra knowledge in his field, or the manager himself considers the alternative. In both cases, if the people involved in this process have the necessary expertise, notions such as underestimating benefits or overestimating costs should not arise, except in situations where precise information cannot be obtained.

When considering the change, the project manager would perform sensitivity analysis on the variables that are to be modified and would thus have a clear idea about the implications of such a change. Some further qualitative analysis would be made regarding the adaptability of the developers.
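As an illustration of what such a sensitivity analysis might look like, the toy model below varies the estimated learning time for a new technology and compares the resulting project cost with the status quo. The cost model, rates and speed-up factor are all illustrative assumptions, not figures from either article:

```python
def project_cost(dev_weeks, weekly_rate, learning_weeks=0.0, speedup=1.0):
    """Total cost: learning time plus development time scaled by any speed-up.

    This linear model and all its parameters are hypothetical; a real analysis
    would use the project's own cost structure.
    """
    return (learning_weeks + dev_weeks / speedup) * weekly_rate

# Baseline: stick with the current technology.
old_tech = project_cost(dev_weeks=40, weekly_rate=1000)

# Sensitivity of the 'upgrade' scenario to the estimated learning time,
# assuming the new technology makes development 25% faster.
for learning in (2, 6, 12):
    new_tech = project_cost(dev_weeks=40, weekly_rate=1000,
                            learning_weeks=learning, speedup=1.25)
    verdict = "cheaper" if new_tech < old_tech else "more expensive"
    print(f"learning={learning} weeks -> cost {new_tech:.0f} ({verdict})")
```

Even this crude sweep makes the manager's problem concrete: the upgrade pays off only while the learning overhead stays below the break-even point, which is exactly the kind of threshold a sensitivity analysis is meant to expose.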

The quantitative and qualitative analysis of the results would enable the managers to illustrate and compare the possible scenarios in order to choose the best option. At this stage it is also important to note that the project coordinator has to have good knowledge about the capabilities of the developers regarding the new approaches and their adaptability to change.

New Technologies: benefits and challenges

The world is in constant evolution, especially in the software development field, where the pace is higher than average. This is why it’s important to be aware of all the technological advances on the market and to have a broad view of the alternatives.

However, when considering new platforms or procedures, a common mistake is to compare only the final results of the existing system and its alternative, while ignoring the cost of the transition itself.

“What you should never do, is stick with the old, because it works. Don’t be conservative that way.”

Sometimes the “old that works” is a better choice in terms of costs for an ongoing project. The idea is not to stick with old technologies indefinitely, but rather to consider the new ones for future projects. Upgrading technologies in the middle of the development process would produce change throughout the project, starting with redesigning the initial structure and ending with integrating the new technologies with the old components.

“When you decide to be conservative, you are losing an immeasurable amount of time, your developers might not be working on the most important aspects of the task, and your users might not have the best user experience they could have.”

Of course, if such an upgrade would produce better results, the managers would notice that based on their evaluation and would proceed to implement the new design. The assumption made here is that the “conservative” manager rejects any new proposal based on personal experience rather than on a thorough analysis of the outcomes.

Possible outcomes and approaches

Given the situation where new technologies applicable to a project are available, there are three main options a manager has:

1. Consider the upgrades, but postpone the transfer for the next project. In this situation the sensitivity analysis would show that even though new and better technologies are available, the functionality of the project would not present justifiable improvement when considering the costs of implementation.

2. Consider the upgrades, implement part of them and postpone the rest for future projects. This decision would be applied for the cases where only some improvements would justify their integration in the project when considering the cost per earnings (either monetary or in terms of functionality) ratio.

3. Redesign the project in order to implement all the proposed upgrades. In this situation, integrating all the new solutions would significantly increase the final project’s value and would justify all the extra costs that such a change would imply.


In the original article, the author stated that: “Should I keep to what I am doing or should I try something new. Always try something new.”

However, the idea of blindly integrating new technologies into ongoing projects can prove to be detrimental to the final result. This is why analyses have to be performed before making any big decisions with respect to the project structure.

Considering the arguments presented above, we conclude that “Conservatism” has its role in the management of a project, given that it’s backed up with enough empirical evidence.


Duality of Project Management: Objective vs. Subjective factors

1. Introduction

Objective factors = data and procedures regarding the project management process
Subjective factors = human component in the development of the project

Project management is an important part of any large-scale project that requires the coordinator to oversee all the activities in order to synchronize them efficiently. Several techniques and analyses have been developed in order to aid the managers in optimizing costs and time.

The main purpose of project management is thus to help managers develop and implement complex plans of execution and to make the most out of the available resources. However, it has traditionally been viewed from a logistic point of view rather than a motivational one. This article aims to uncover another side, and perhaps a potential advantage, of such a system that has not yet been widely discussed.

Instead of focusing only on planning and coordinating the sub-tasks of a large-scale project, managers should also consider the human factor involved in such a project. As a developer of a small part of the project (relative to the whole), it’s easy to lose track of the main goal and focus only on the specific milestones that you have to achieve. This should be the point, right? Do your part and don’t worry about what others are doing. Well, I say there is another aspect that few people consider. What if the developer would show more motivation if he knew how the project should develop, what the final goal is, and how his work would be reflected in the end?

In the next sections I will describe the standard techniques for dealing with a large-scale project from a management point of view and what benefits can be added to that already optimized process.

2. What is Project Management?

In principle, project management is responsible for dividing the task into small, “atomic” goals and planning them with respect to the availability of resources. In other words, it assigns people to tasks in a specific order under some given constraints (one or more tasks must finish before another one can commence).

Over the years, this approach has proved very useful, especially for managers, who can have a broad view over the project and adjust the variables that compose it in order to reach the optimal solution.

The classical model is divided into five main categories: Planning, Organising, Communication, Control and Evaluation. Next, we will discuss only those aspects of a project that can be improved by stimulating its “roots”, the individual developers, focusing on three of these categories: planning, organising and communication.

3. Project Management methods: can objective techniques be translated into subjective ones?

Regarding procedures for tackling the project, two main methods stand out: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). CPM uses deterministic estimates of task duration and focuses more on the trade-off between cost and time. PERT, while also using estimates of task duration, adopts a more probabilistic approach in order to predict the likelihood of on-time project completion. Usually the two techniques are used together to produce more precise data. Several software programs currently on the market implement these methods in order to reduce the complexity of the job.
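PERT's probabilistic flavour is usually captured with the classic three-point estimate, which combines optimistic, most likely and pessimistic durations. The small sketch below (with made-up durations) shows the standard formulas; the article itself does not go into this level of detail:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate of a task's duration.

    Expected duration weighs the most likely value 4x; the standard
    deviation is one sixth of the optimistic-pessimistic spread.
    """
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical task: 4 weeks at best, 6 most likely, 14 at worst.
expected, sd = pert_estimate(4, 6, 14)
print(expected, sd)  # 7.0 and roughly 1.67
```

The standard deviation is what lets PERT speak about the *likelihood* of on-time completion rather than a single fixed date, which is exactly the contrast with CPM's deterministic estimates.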

This combined approach is divided into a series of sub-tasks: defining the project, splitting the project into sub-projects, defining dependencies etc. This gives a clear guideline for the manager, telling him what to do at each step. However, the developer is only concerned with following the instructions, without having the whole “picture” in mind. Therefore he doesn’t always know how he contributes to the whole project.

It is important to note at this point that the participants in the project do not need to know every aspect of the management process, but only those parts that relate directly to them. In order to discuss these aspects, some technical details of the management process need to be defined.

Every process has a start and an end state: these represent the terminal nodes of the process network. Each task is defined by its earliest start time, latest start time, earliest finish time, latest finish time and duration:

  • Earliest finish time = earliest start time + duration
  • Latest start time = latest finish time – duration
  • Earliest start time = largest earliest finish time of all immediate predecessors
  • Latest finish time = smallest latest start time of all immediate successors

Thus, project duration = largest early finish time of all activities.


  • Total float = time by which activity can be delayed without affecting project duration: Late start time – Early start time OR 0 if activity is critical
  • Free float = time by which activity can be delayed without affecting project duration or the early start times of subsequent activities: smallest early start time of immediate successors – early finish time
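The definitions above translate directly into a forward/backward pass over the task network. The sketch below uses a small, entirely hypothetical network of four tasks to compute the earliest/latest times and the total float of each activity:

```python
# Hypothetical task network: name -> (duration, [immediate predecessors])
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

order = ["A", "B", "C", "D"]  # already topologically sorted

# Forward pass: earliest start = largest earliest finish of predecessors,
# earliest finish = earliest start + duration.
es, ef = {}, {}
for t in order:
    dur, preds = tasks[t]
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

project_duration = max(ef.values())  # largest early finish of all activities

# Backward pass: latest finish = smallest latest start of successors,
# latest start = latest finish - duration.
lf, ls = {}, {}
for t in reversed(order):
    succs = [s for s, (_, ps) in tasks.items() if t in ps]
    lf[t] = min((ls[s] for s in succs), default=project_duration)
    ls[t] = lf[t] - tasks[t][0]

# Total float = latest start - earliest start (0 on the critical path).
total_float = {t: ls[t] - es[t] for t in order}
print(project_duration)  # 8
print(total_float)       # {'A': 0, 'B': 2, 'C': 0, 'D': 0}
```

Here A, C and D form the critical path (zero float), while B can slip by up to two time units without delaying the project, which is precisely the "degree of freedom" discussed in the next section.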

4. A basic example: variables capable of pushing optimality even further

In the example below, the main project has been divided into separate activities, each with its specific time and resource requirements. These are plotted on a timeline, following the restrictions and dependencies between them. However, because some conditions are weaker than others, or because some activities are estimated to take less time than others, discrepancies arise. These discrepancies give some tasks a certain degree of freedom in terms of start time.

The activities (marked by the green bars in the figure) can pivot within their free and total float ranges. This means that even if a task is completed ahead of time, the whole process can continue only when all the conditions are satisfied (i.e. all other required activities are completed). This can be compared to the bottleneck effect present in most production lines (factories, etc.): “The system moves as fast as the slowest of its components”.

(Figure: the project’s activities plotted against time, together with their resource requirements.)

5. What can be done to drive the optimization even further?

A way of overcoming this bottleneck effect, or at least diminishing its impact, would be to stimulate the developers by adding a competition factor or by allowing a redistribution of forces. This can be done only when all the participants in the project are aware of the key variables in the equation. For example, say two activities have to finish in order for the process to continue. The developers involved in those activities, knowing that they are the key components at that moment of the project, will work as if the project were a personal target that they must achieve.

Also, consider the overall project at a macro level. Developers having this kind of information would not only treat the sub-tasks as personal targets, but would also know exactly to what extent their contribution will be useful in the end. This gives them a sense of belonging that further stimulates them to work towards the final goal. It is only by doing the small tasks flawlessly that the final result can be optimal.

6. Conclusion

In conclusion, although project management is primarily addressed to managers, in order to perform better and make the project optimal in terms of time and costs, it can also benefit the employees, who would then contribute even more to that optimality by stimulating the human factor in the process. This can be achieved only through a proper communication channel between coordinators and executors.