The right way to develop software in foreign countries

Motivation behind developing software abroad

As more countries open to foreign trade, firms gain access to a wide pool of opportunities to seek resources abroad. They can export products to new markets, establish relationships with foreign firms or move production to new locations. The trend is reflected in the industry: 50% of American Fortune 500 companies use offshore software development in their business[1]. Countries offer resources at different prices; for instance, developing the same software costs roughly 50% less in India than in the US[2]. The savings come mainly from lower labour costs. Moving software production to another country can also give the firm access to a wider labour pool and therefore to new skills and expertise.

Software development business models

Software development can be either onshore, meaning that it stays in the same country as the firm, or offshore, meaning that it takes place abroad. It can also be insourced, meaning that the company needing the software develops it itself, or outsourced, meaning that the client company contracts a vendor to do the development. The business models of developing software therefore fall into four categories: onshore insourcing, onshore outsourcing, offshore insourcing and offshore outsourcing. The complexity of conducting these business models is displayed in Figure 1.
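The four categories arise from two independent choices: where development takes place and who performs it. A minimal sketch of this 2×2 taxonomy (the class and field names here are hypothetical, purely for illustration):

```python
# Hypothetical illustration: the four business models are combinations of two
# independent choices -- location (onshore/offshore) and sourcing
# (insourcing/outsourcing).
from dataclasses import dataclass

@dataclass(frozen=True)
class DevelopmentSetup:
    same_country: bool   # is development in the client's own country?
    same_company: bool   # is development done by the client company itself?

    def business_model(self) -> str:
        location = "onshore" if self.same_country else "offshore"
        sourcing = "insourcing" if self.same_company else "outsourcing"
        return f"{location} {sourcing}"

# A US firm contracting an Indian vendor:
print(DevelopmentSetup(same_country=False, same_company=False).business_model())
# -> offshore outsourcing
```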

1.png

Figure 1: Process complexity of different software development business models

Source: Prikladnicki, R. et al., 2007. “Distributed Software Development: Practices and challenges in different business strategies of offshoring and onshoring”. International Conference on Global Software Engineering (ICGSE 2007), pp. 262–274[3]

Challenges of offshore development

Companies that engage in offshore software outsourcing face challenges related to the cultures of the people involved in the process. Because two nationalities with different organisational cultures are exposed to working together, there is a high chance of a clash where the work practices of the two sides do not match. For instance, if a German company sends its development to India, it may find itself adjusting to a more relaxed attitude towards punctuality. If the company offshores to a country where language is an issue, it may suffer miscommunication problems, such as a requirements specification incorrectly understood by the developer. Also, because the company is entering a new partnership, it may initially suffer from trust issues: the objectives of the developer may not be aligned with those of the client. The developer may, for instance, want to simply implement the specification and cash in on it without regard for long-term code quality.

Offshoring development may mean that the overall cost of production is lower given lower labour costs. However, there are many transaction costs associated with it, for instance the cost of additional coordination, management and supervision of the offshore developer. The country's infrastructure also has to be taken into account: if the educational institutions in a particular state are weak, then the labour pool, even though cheaper, may not be suitable for employment or may require more investment in initial training. There is also the risk of miscommunication between the companies involved, which may mean that the wrong product ends up being developed, leading to a potentially costly project failure. Gartner reports that 50% of offshore software development projects fail because of the challenges they face[4].

Deciding on the right business model

Because of all the hurdles of global software outsourcing, it is often considered best to relocate only highly structured work. The creative part, such as problem understanding, requirements engineering and specification writing, is best kept in house, whereas the “manual work”, such as highly structured programming, can be relocated abroad. This ensures that savings are achieved through workload relocation while product quality is not negatively impacted, as the client company retains control of the specification and testing.

In his blog post “In Defense of Not-Invented-Here Syndrome”[5], Joel Spolsky arrives at a similar conclusion. He argues that the core of the business, where its competencies shine, has to be developed in house, whereas functions that are not core and can be substituted may be relocated elsewhere. Companies should not let go of their unique capabilities, but should not be afraid to relocate substitutable activities. For instance, if a company is great at requirements engineering but mediocre at programming, it should do the former in house and not be afraid to substitute the latter with more beneficial alternatives such as outsourcing. However, if the company is exceptional at programming, it should recognise that as its unique capability and not outsource it. This situation is illustrated by Figure 2 below.

2.png

Figure 2: Parties in Outsourced Software Development

Source: Batra, D. 2009. “Modified Agile Practices for Outsourced Software Projects” Communications of the ACM, vol. 52, no. 9, pp. 143-148[6]

Having interned at a web development vendor in India, I have seen many of our US and UK clients approach outsourcing incorrectly. They contracted my office to develop their core product, for instance the main web portal that would generate the client's revenue. The specifications we received for these projects were not specific enough and led us to make many design mistakes during development. For instance, we had to invent the database structure ourselves, only to realise later that we had not completely understood the client's needs and had to redesign it. This was one of several instances where the client outsourced not only the programming but also parts of the requirements engineering, involving us in decision-making rather than just coding.

In my opinion, the decision whether to develop software abroad is context specific and depends on the nature and size of the project and the capabilities of both the client and the vendor. If a company decides to develop abroad to gain cost advantages, it should first weigh the costs and benefits of both offshore insourcing and offshore outsourcing. With the first model, it will be able to control its production, but it has to understand the culture and infrastructure of the country it relocates to in order to avoid project failures related to the risks of international expansion. With offshore outsourcing, it can achieve higher cost savings, as it does not have to worry about running a firm in a foreign environment, but at the expense of less control over product quality.

In my opinion, offshore insourcing is the better business model, as the company has more control over the quality of the product, which in software development is the most crucial competitive advantage a company can have. Offshore outsourcing may be more lucrative in terms of savings, but it is also much riskier, as the outcome of such a partnership is harder for the client company to control. In either scenario, before investing heavily, the company first has to gain experience in how best to handle outsourcing. To minimise the risk, it may want to start small, possibly with onshore outsourcing, see how the process develops, and then expand to offshore opportunities.

Conclusion

On the surface, seeking cheaper labour in foreign countries may seem like an attractive idea. However, looked at holistically, offshoring software development may introduce many problems and hidden costs that can turn the initiative into a loss. The decision to outsource is therefore context specific and depends on the project and the capabilities of the client, the vendor and the countries involved.

References:

[1] Carmel, E. and Agarwal, R. 2002. “The maturation of offshore sourcing of information technology” MIS Quarterly Executive, vol. 1, no. 2, pp. 65-78

[2] Carmel, E. 2003b. “The new software exporting nations: success factors” The Electronic Journal on Information Systems in Developing Countries, vol. 13, no. 4, pp. 1-12

[3] Prikladnicki, R. et al., 2007. “Distributed Software Development: Practices and challenges in different business strategies of offshoring and onshoring”. International Conference on Global Software Engineering (ICGSE 2007), pp. 262–274

[4] “Gartner Says Half Of Outsourcing Projects Doomed To Failure”. URL: http://www.crn.com/news/channel-programs/18822227/gartner-says-half-of-outsourcing-projects-doomed-to-failure.htm. Date Accessed: 10/03/2014

[5] “In Defense of Not-Invented-Here Syndrome” by Joel Spolsky. URL: http://www.joelonsoftware.com/articles/fog0000000007.html. Date Accessed: 10/03/2014

[6] Batra, D. 2009. “Modified Agile Practices for Outsourced Software Projects” Communications of the ACM, vol. 52, no. 9, pp. 143-148

Response article to the “New Role of Requirements Engineering in Software Development”

This is a response article to the “New Role of Requirements Engineering in Software Development” article by s1214282.

Summary of the main article

In the article, the author argues that emerging trends in the business environment have an impact on software engineering and proposes ways of coping with them. The author states that nowadays stakeholders expect software to be of good quality, delivered on time and within budget. To meet this, developing companies have to gather software requirements from the project stakeholders using a requirements engineering process. Weighing the pros and cons of the traditional requirements-first model against the agile, iterative requirements elicitation model, the author concludes that the iterative model is the most applicable one for both large and small software development initiatives.

Even though on the surface the author may be by and large right, I disagree with some of the points made in the article.

The need for Requirements Engineering

defect_chart.gif

Figure 1. Source: Applied Software Measurement, Capers Jones, 1996

As displayed in the graph above, the cost of amending software code increases as a project progresses through its lifecycle. Projects that start developing the right software (meeting customers' functionality expectations) as early as possible gain significant time and budget advantages, as error fixing is not left until later in the project, when code changes are more expensive. I therefore agree with the author's statement that it is important to have requirements engineering processes in place in the business. There are many methods of requirements engineering, depending on the development processes in place or the nature of the project. The two prominent categories are the requirements-first approach, where requirements are specified before coding begins, and the coding-first approach, where coding starts even if the requirements are still vague.
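The economic argument above can be made concrete with a small calculation. The phase multipliers below are hypothetical, chosen only to mirror the shape of the chart, not taken from Jones' data:

```python
# Illustrative only: relative cost of fixing a defect depending on the phase
# in which it is found. The multipliers are hypothetical assumptions in the
# spirit of the defect chart above, not figures from Jones' measurements.
FIX_COST_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "maintenance": 100,
}

def total_fix_cost(defects_found: dict, unit_cost: float = 100.0) -> float:
    """Total cost of fixing defects, given how many were caught in each phase."""
    return sum(unit_cost * FIX_COST_MULTIPLIER[phase] * count
               for phase, count in defects_found.items())

# The same 10 defects, caught mostly early vs. mostly late:
early = total_fix_cost({"requirements": 8, "testing": 2})   # 800 + 4000 = 4800.0
late  = total_fix_cost({"testing": 8, "maintenance": 2})    # 16000 + 20000 = 36000.0
```

Under these assumed multipliers, the late-discovery scenario costs 7.5 times more for the same defects, which is the advantage the requirements-first camp is arguing for.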

Requirements Engineering in the context of large software development

A software product may be born large. For example, a new information system for the NHS would have to be considered holistically from the beginning, rather than starting small with only a couple of functions and then expanding. This may be because many functions would be interdependent and would have to be included in the initial release, to account for the large number of stakeholders the system would cater for and the environment elements, such as massive databases, it would interact with.

A large system such as the NHS information system would be developed over a long period of time, possibly even several years, to embrace all the variables it has to cover. It would also have a significant budget and therefore a lot of money at stake should the project fail. Risk management would be of the highest priority for a project of this size: the decision-making process would have to take account of many risks, such as future errors or obstacles in the NHS IT infrastructure. The initial requirements engineering would therefore take a long time before the plan for developing the system is clear and ready to be implemented.

A requirements-first approach would therefore be better in the context of such a large system. It would bring structure to the development process, as engineers would have clarity on what functionality will be developed and when. A clear specification would help their day-to-day decision making and make them less likely to have to amend the code later on. Clarity would also benefit the project's investors, who would want to see tangible project plans with a clear scope and deadlines to assess their return and justify their investment. Agile methodologies may not provide such clarity: they are based on a code-first principle and start development without knowing exactly what the finished product will do or look like. Agile methodologies may therefore find better use in contexts where risk management is less of a challenge.

Requirements Engineering in the context of small software development

The author states that concurrent development is better than requirements-first, as:

  • “Process overheads may be lower. Less time will be spent on analysis and documentation for requirements, since requirements can be captured more precisely and rapidly.”

This is correct, but only applicable in the context of small software development. There, analysis and documentation are not as important as time-to-market, so limiting bureaucracy will benefit these projects, leaving more time for value-adding activities such as coding. In the context of large systems, however, it is this bureaucracy that may prove highly value-adding, as it diminishes the project's risk and therefore the likelihood of a costly failure.

  • “Critical requirements can be identified and implemented in the early stage. Communications between software engineers and stakeholders can be more efficient, because of the concurrency of different RE activities.”

That is true in contexts where there are not many stakeholders. For instance, having a customer of the product on site to advise on development whenever needed works for small agile teams with one customer as the main stakeholder. In a bigger context, communication may not be as efficient, as there are many stakeholders to contact and negotiate changes with, something agile methodologies do not make much time for.

  • “Respond quickly to requirement changes. Due to the iterative cycles of requirement identification and documentation, we can respond to requirement changes more quickly with relatively lower costs.”

Responding “quickly” and “with relatively lower costs” may, again, only be relevant in a small development context. In large and complex systems, changing one area of the code may ripple into many others. It is therefore crucial to clarify any anticipated obstacles at the requirements stage, which diminishes the likelihood of future, expensive changes.

Conclusion

I tend to agree with the conclusion that agile methodologies bring advantages to software development that improve the time-to-market, budget and quality of finished software products. However, in the context of large scale systems, I believe that traditional requirements engineering works better.


Agile is Better for Knowledge Management in Large Software Development

Software engineering is a knowledge-intensive activity. Software companies are more likely to beat their competitors by creating novel ideas than by simply using large amounts of capital to achieve economies of scale. Because of that, these firms depend proportionally more on intangible assets, such as the skills of their workers, than on tangible assets like offices or machinery. Software companies achieve competitive advantage by leveraging their unique resources, one of the most important being their intellectual capital. Those that know more, for instance through experience in using software design patterns to solve performance issues, are able to create products of better quality and achieve a better return on investment. As a result, knowledge management has become an important consideration for large businesses, and 80% of large corporations now have knowledge management initiatives[4]. In principle it may seem obvious that the more experience a company has, the lower the likelihood of developers getting stuck and repeating past errors, but in reality it is not that simple for a large software firm to successfully accumulate its experience and knowledge and achieve sustainable organisational learning.

Why do we need knowledge management?

Advancements in technology and the demand for software require software companies to improve their productivity proportionally more than the increase in their resources. Companies would want their performance to keep improving from project to project. They would want their workers to grow their own experience bank by adding new information gathered during the course of their last development and to use it to tackle new tasks more easily. In the current volatile environment, however, developers come and go, and their knowledge comes and goes with them. When new employees are hired, they are not able to use the experience of the people they are replacing: the knowledge that previous workers used to solve bugs or make design decisions is not accessible. Faced with problems similar to ones the company has solved in the past, new developers may not know how to approach them. With every staff change there is therefore a knowledge drain and a loss of organisational memory, due to the lack of a perfect transition of knowledge from one employee to another.

Knowledge management is a currently topical discipline that aims at diminishing this issue. It encourages mechanisms for developers to utilise previous engineers' work, whether code or information stored in a knowledge repository on how to approach certain problems, which leads to less rework and faster development progress. Implementing a knowledge management system involves introducing not only new technology, but also organisational change, such as culture or human resources adjustments. For instance, because the knowledge repository may benefit workers not now but a few years from now, they may see contributing to the knowledge pool as a burden, as it does not directly benefit them or their current performance. They may also feel that the technology used today will be obsolete in the future and see little sense in describing what they learned from using it, as future workers may not even need the information. This calls for, among other things, cultural change inside the business, whereby the leadership of the organisation strongly supports and encourages workers to contribute to knowledge management initiatives. As much as 50-60% of new knowledge management initiatives still fail because technological change is not introduced in parallel with process change[4].

How can knowledge be shared?

Knowledge carried by workers can be divided into two types: explicit and tacit. The former refers to easy-to-articulate information that can be represented as documents, graphs or tables; for instance, a document displaying a software duration estimation model developed by previous employees and explaining the reasoning behind it. The latter covers the intangible, difficult-to-pin-down know-how of a person: their gut feeling, experiences, perceptions or attitudes. Examples could be how to deal with a specific customer or a specific design issue. To make the best use of the knowledge being created in the business, it would be in the company's interest to establish mechanisms that allow knowledge to flow between team members. For instance, a company could set up an information system as a repository for explicit knowledge: an organised catalogue of all the evaluations of previous projects, stored in files accessible to workers at any point. Through these repositories, developers could find out how to approach a new project based on what did or did not work in the previous one. They would internalise this explicit knowledge and turn it into tacit knowledge by increasing their know-how. At the end of their project, they could evaluate their work, for instance through a post-mortem analysis, and sustain the acquired knowledge inside the business by publishing the evaluation document to the repository for future colleagues to use and learn from. Knowledge management systems could also grow larger and span not only project evaluations but all the files used in a project, which could make them either potential knowledge gold mines filled with interesting past learning, or collections of unusable, obsolete files that work against the purpose of achieving competitive advantage through improved decision making.
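The repository described above is, at its simplest, a catalogue of tagged evaluation documents that future teams can search. A minimal sketch, with entirely hypothetical class and field names:

```python
# Hypothetical sketch of a minimal explicit-knowledge repository: project
# post-mortems stored as tagged entries, retrievable by keyword. The design
# is an illustrative assumption, not a description of any real system.
from collections import defaultdict

class KnowledgeRepository:
    def __init__(self):
        self._entries = []                 # every published evaluation document
        self._by_tag = defaultdict(list)   # tag -> entries, for fast lookup

    def publish(self, project: str, lessons: str, tags: list):
        """Sustain a project's lessons in the business after the post-mortem."""
        entry = {"project": project, "lessons": lessons, "tags": tags}
        self._entries.append(entry)
        for tag in tags:
            self._by_tag[tag].append(entry)

    def search(self, tag: str) -> list:
        """Return past evaluations relevant to a given topic."""
        return self._by_tag.get(tag, [])

repo = KnowledgeRepository()
repo.publish("Portal v1", "Index the user table early; queries were slow.",
             ["database", "performance"])
hits = repo.search("database")   # a future team retrieves the lesson
```

Even in this toy form, the failure modes discussed later are visible: if everything is published with vague tags, `search` returns noise, and the repository stops being worth the contribution effort.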

Would this experience-database scenario be viable in real life? In many cases it is not.

Knowledge management in traditional software development methodologies

Software methodologies are divided into traditional, which follow a waterfall-like model, and agile. Traditional approaches divide development into a series of independent steps: eliciting requirements for the product, analysing them, negotiating them with stakeholders, clarifying them, developing against them, testing and delivering the finished product to the customer. The development team in a traditional approach is divided into sections, each with specialised capabilities. These work on their own responsibilities and do not participate directly in the work of the other groups. For instance, during requirements engineering at the beginning of the process, the team allocated to this step constructs documents listing all the use cases the system needs to capture, and then passes them to the development team, which translates them into code. The requirements engineers do not step into coding and, vice versa, the programmers do not interfere with eliciting product features.

Development using traditional methodologies is long, as it allocates a lot of effort to pre-production before any coding starts. Structured approaches allow more space to analyse each step and make educated decisions on how best to approach it, as these decisions will later be difficult to amend, since the process cannot easily iterate back to previous steps. They allow time to retrieve previously generated knowledge from the repositories and use it to improve the current project's decision making and therefore performance. They are also able to introduce structured, systematic methods of creating knowledge artefacts and sharing them with the organisation.

Traditional methodologies may be more suitable for large software development. Large products are complex, and one design decision may have proportionally more impact on the future progress of the application than if the system were smaller in scale. The time spent by workers analysing previous knowledge, and the firm's expenditure on developing and maintaining knowledge repositories, may be more justifiable, as it may result in better decision-making and fewer errors that would hurt the return on investment in the product. If not managed well, however, the process can become inefficient. If all employees, say two hundred, are required to explicitly state what they learn and contribute that knowledge to the repository, the result may be a database filled with irrelevant information that is difficult to browse and use to aid future decision making. A low-quality repository stops knowledge from being retrieved and, in turn, stops workers from contributing to it, as they feel it simply does not make sense to spend time on an unusable information system.

Examples of knowledge management in the industry

The large software development company Infosys has for many years pushed an approach of storing all the knowledge gained by employees in an organised, quality-monitored repository. Initially, after introducing the knowledge management information system, the company did not see much effort from employees to share their experience and evaluate their work: in its first year, only 5% of developers contributed to the knowledge pool, and even fewer used it to retrieve information[2]. Infosys realised that voluntary contributions would not work and that cultural change and an incentive mechanism were necessary for the knowledge pool to grow. It introduced a gamified system whereby developers could earn virtual currency for sharing their experiences, which could then be exchanged for real products or money. This attracted workers to contribute, with 20% of them sharing information[2]. Unsurprisingly, however, after several months the company realised that the knowledge inside its database was of low quality and revised the incentive scheme, which had a negative impact on the amount of knowledge shared. Infosys still runs a common knowledge repository today and states that its biggest challenges lie in extracting knowledge from software teams working together across many locations.

DaimlerChrysler, which develops software for its car electronics, established a unit inside the organisation called the Software Experience Center. The SEC was aimed at investigating how the software development process can best capture the knowledge it creates and codify it for future use and learning. It used the notion of an “experience factory”, whereby a group of people inside the organisation is specifically assigned to project analysis and evaluation, identifying aspects that may improve future projects. These findings are then shared in a common knowledge repository, which ensures the quality of its content. DaimlerChrysler found this approach viable, but only when managed properly, and cited organisational and human resource challenges as the biggest obstacles to successful knowledge management. For instance, knowledge leaders tended to overestimate the motivation of workers to contribute to knowledge initiatives[1].

Both companies are large and have established methods that turn workers' experiences into explicit knowledge available to the rest of the organisation. Their strict processes allow standardisation of what knowledge, and of what quality, is put into the repositories, contributing to better knowledge retrieval in the future. The constraints of today's business world, however, require firms to develop products rapidly and efficiently, as time and budget are among the most prominent customer expectations. They also put pressure on development firms to accommodate changes throughout their software projects, as product requirements may have changed since the inception of the development. Instead of traditional methodologies, companies therefore often use agile methodologies such as extreme programming to accommodate the dynamic software environment and reduce the risk of developing a product that will not satisfy its stakeholders.

Is agile the answer?

Agile methodologies create organisational cultures that encourage cooperation and learning from one another. They have interpersonal communication at their core, reflected as “Individuals and interactions over processes and tools” in the Agile Manifesto. Companies using agile design their work practices to include group meetings and constant information sharing between developers. These practices help with transferring tacit knowledge: the difficult-to-articulate attitudes and perceptions of workers. These can be shared through methods such as pair programming, where a worker who has dealt with a particular design issue in the past can show their partner how best to approach and solve it in the current context. Through pair rotation, the entire team comes to share similar knowledge, as partners transfer experience between each other each time they rotate. This increases organisational learning and sustains knowledge in the business: if one worker leaves the company, the information remains with the others. Agile methodologies diminish the barrier of work processes being separated from knowledge management processes, as in agile, knowledge management happens simultaneously with development.

Transferring tacit knowledge between team members may prove beneficial, but to truly sustain all knowledge in the business, the company would want to transform workers' tacit knowledge into explicit knowledge available to all employees in the organisation (for example by publishing documents to the previously mentioned repositories). Because of their intensity, however, agile methodologies do not leave much space for sustainable knowledge creation and retrieval. Coding starts from day one, as soon as some initial meetings have taken place, without much room for analysing previous work to find patterns or approaches that could be used in the current project. They aim at lowering the product's time-to-market and do not leave much space for workers to articulate what they learn throughout their work. Also, once the project is finished and developers have time for evaluation, the context-specific nature of the software they developed does not allow them to properly share ideas and information for future use: they would either have to convert these into an abstract format applicable in many contexts, which would take a lot of their time, or share them as they are, which may not be applicable to future projects.

Agile methodologies share knowledge well inside small teams located in a common workspace. They are weaker at knowledge transfer in distributed software development, for instance when one company has a few agile teams working in separate offices distributed globally. Because there is no common knowledge repository, workers can transfer both explicit and tacit knowledge only within their own workspace, and only explicit knowledge, for instance in the form of version control, with workers in distributed locations.

Conclusion

In my opinion, knowledge management appears promising only on the surface. In theory it can help a company achieve a unique competitive advantage over its rivals and utilise the intellectual capital of the business. However, as shown above, no approach is ideal for facilitating knowledge sharing in an organisation. The traditional approach to software development may seem better for large-scale projects, as it incorporates space for evaluation, formal decision making and sharing explicit knowledge, which sustainably grows the organisational knowledge pool. The bureaucratic fashion of knowledge sharing in these methodologies may capture all the knowledge created by workers, but that may not reflect itself in improved performance if knowledge repositories are badly managed, never used or obsolete. Because of the many challenges to optimal knowledge sharing, company leaders have to not necessarily optimise knowledge sharing but “satisfice”: satisfy some part of it by sacrificing another. Agile methodologies work better at transferring tacit knowledge between workers, which arguably may contribute more to the firm's competitive advantage than explicit knowledge, as tacit knowledge directly improves the skills and know-how of developers. Agile may be an answer to knowledge management on large projects, as it satisfies tacit knowledge transfer reasonably well, while sacrificing risky bureaucratic practices whose output may not result in competitive advantage in the future.

References

[1] Schneider, K., von Hunnius, J.-P., Basili, V.R. 2002. “Experience in Implementing a Learning Software Organization”, IEEE Software, 19(3), pp. 46-49.

[2] Kimble, C. 2013. “What Cost Knowledge Management? The Example of Infosys”, Global Business and Organizational Excellence, 32(3), pp. 6-14.

[3] Schneider, K. 2002. “What to Expect From Software Experience Exploitation”, Journal of Universal Computer Science, 8(6), pp. 570-580.

[4] Rus, I., Lindvall, M. 2002. “Knowledge Management in Software Engineering”, IEEE Software, 19(3), pp. 26-38.

[5] Levy, M., Hazzan, O. 2009. “Knowledge management in practice: The case of agile software development”, ICSE Workshop on Cooperative and Human Aspects on Software Engineering, 2009. CHASE ’09. pp. 60-65.

[6] Dorairaj, S., Noble, J., Malik, P. 2012. “Knowledge Management in Distributed Agile Software Development”, Agile Conference (AGILE) 13-17 Aug. 2012, pp. 64 – 73.

[7] Desouza, K.C. 2003. “Barriers to Effective Use of Knowledge Management Systems in Software Engineering”, Communications of the ACM, 46(1), pp. 99-101.

[8] Bjørnson, F.B., Dingsøyr, T. “A Survey of Perceptions on Knowledge Management Schools in Agile and Traditional Software Development Environments”, Agile Processes in Software Engineering and Extreme Programming, Lecture Notes in Business Information Processing, Vol. 31, pp. 94-103.