Native & Device-Centric Apps, The wise choice for successful mobile applications

Introduction

Nowadays, a variety of platforms exist in web, mobile, and desktop forms, each with a rapidly growing user base. As a result, it has become common practice to design portable systems that can be reused on other platforms with a minimum of modification. In a recent discussion, the developer of a well-known Windows Phone app told me that he follows this practice so that he can port his app to other platforms, since building it from the ground up for each one is expensive. However, when software that is native to one platform is ported to other platforms, with different capabilities and requirements, users end up with an inferior product (e.g. Flash on OS X). In my view this policy is greedy and ultimately a bad practice. I will give evidence that device-centric, native applications are of higher quality and, as a result, attract more users.

Background

For the purposes of this article, when referring to platforms my focus is on mobile operating systems, for two reasons. The first is that mobile applications have recently become extremely popular, and thus profitable, and have attracted the interest of many major software companies. The second is that each of the major mobile platforms differs fundamentally from the rest and comes with its own set of native tools, programming languages and UI style.

These two factors can be a recipe for disaster, as has been proven multiple times even by large software companies. Consider, for example, an app developed natively for iOS; after it becomes successful, a decision is made to port it to Android, Windows Phone, a web version and so on. Most of us have seen this happen many times as users, and it rarely succeeds. The ported version of such an application will often have worse performance than its native counterpart, more limited functionality and a UI that feels out of place, all of which frustrate end users. Users, in turn, will not give a second chance to an app that does not satisfy them as long as there is an alternative, and in a world of mobile platforms with millions of applications there will most likely be a better alternative.

What makes a native application superior?

While it is expensive to have an application built from the ground up for each new platform, there are certain advantages [2] to native applications that users appreciate. Firstly, native applications can be designed to leverage platform-specific hardware and software [3]. They are able to use all the specific APIs of each platform and integrate well with the built-in apps of an operating system, such as the camera, address book and so on. Additionally, being written in the platform’s native language, they can exploit the capabilities it offers, provide increased security [5] and work around its limitations, resulting in better performance. Secondly, each of the modern mobile platforms has certain guidelines [4] for UI design and other aspects of development, to ensure that the usability, appearance and feel of each app meet the same standard as all other native apps. Users have come to expect applications, free or not, to be of such high quality, hence it is imperative to their success that native applications follow these guidelines.

In addition to the above, I believe that the best possible practice is for an application to be designed not just in a platform-centric way, but even more specifically in a device-centric way [7]: there should be different versions of the app, for the same platform, depending on the specific device it will run on. Even though two devices may technically share the same platform, the physical characteristics of a device may set it completely apart. Users, for example, expect more of a tablet app than of a phone app, both in UI and in usability. Tablet apps whose only difference from their phone version is a UI scaled up to fit the bigger screen are badly received. Nevertheless, even well-known companies with millions of users tend to overlook this, and in return receive poor ratings or lose users [6].

Conclusion

It is attractive and easy to port a mobile application across different platforms, but that does not mean it is beneficial. On the contrary, as we have seen, doing so may damage even a reputable company. A ported app of lower standards will cost less in the short term, but the damaged reputation from disappointed users will cost a lot in the long term. As such, it is better not to offer an app on every platform and device than to offer a bad one. Apps that are tailored to the platforms and devices on which they are intended to be used will result in happier users. Happy users translate into good ratings, and thus a better reputation; a good reputation attracts more users, and the larger the user base, the bigger the profit. To conclude: design native apps → profit.

Scripting languages and novice programmers – Response Article

This article is a response to “On scripting languages and rockstar programmers”  by .

Introduction

The original article describes scripting languages and makes some good points about how their use is an advantage when working with novice programmers. In my opinion, though, scripting languages are more often a dangerous tool in the hands of an inexperienced programmer than low-level languages are. Additionally, I would like to discuss the advantages of compilation over interpretation, as I think it is a very relevant and overlooked dimension of language choice in a project.

Scripting is easier right?

The author states that it is easier for a novice programmer to write efficient code in higher-level scripting languages. However, I find that a programmer needs a deep knowledge and understanding of a scripting language before being able to produce truly efficient code in it. In such languages a single line of code might raise complexity by an order of magnitude, and a programmer who doesn’t know how each construct is implemented under the hood won’t know why the software suddenly became slow. In contrast, lower-level languages, in which each line maps far more directly to machine instructions, are more straightforward to reason about and thus harder to make such mistakes in.
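A minimal sketch of the kind of hidden cost described above, using Python purely as an example of a high-level language: two functions that remove duplicates from a list look almost identical, yet one is quadratic and the other linear, and nothing on the surface of the code says so.

```python
# Membership tests read identically for lists and sets, but a list
# membership test scans every element (O(n)) while a set uses a hash
# lookup (O(1) on average).

def dedupe_slow(items):
    seen = []                 # list: 'x not in seen' is a linear scan
    out = []
    for x in items:
        if x not in seen:     # O(n) per check -> O(n^2) overall
            seen.append(x)
            out.append(x)
    return out

def dedupe_fast(items):
    seen = set()              # set: 'x not in seen' is a hash lookup
    out = []
    for x in items:
        if x not in seen:     # O(1) average per check -> O(n) overall
            seen.add(x)
            out.append(x)
    return out
```

Both return the same result; only a programmer who knows how `in` is implemented for each container would predict that the first version becomes unusably slow on large inputs.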

In the article it is also argued that it is hard to write in a low-level language because programmers need experience with manual memory allocation and pointers. This is indeed the case with C, but nowadays C is not the only option below the scripting layer, and many modern compiled languages are quite different: Java and C#, for example, handle memory allocation and garbage collection automatically.

Scripting and not interpreted, how?

The author states that scripting languages are interpreted. While this is usually the case, the truth is that languages themselves can be both compiled and interpreted. A great example of this is Java [1]. We usually think of Java as a compiled language, but it can also be interpreted, for example through bsh (BeanShell). In fact, Java isn’t a compiled language in the same sense that C or C++ are: it is compiled into what is called bytecode and then executed by a JVM, which can do just-in-time (JIT) compilation to the native machine language. Likewise, many modern compiled languages are not purely machine-code based, and most interpreted languages are actually compiled into bytecode forms before execution. My point here is that the landscape of programming languages has evolved to such an extent that the compiled/interpreted categorisation of a language is starting to become irrelevant. That being said, there is a valid question of whether compilation or interpretation, in general, is more suitable for a large-scale task, and I believe this is a very relevant extension to the language choice debate.
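Python itself is a handy illustration of this blurring. As a small sketch, the standard `compile` and `dis` tools show that "interpreted" Python source is first compiled to a bytecode object, and it is that bytecode, not the source text, which the interpreter executes:

```python
import dis

# Compile a source expression into a code (bytecode) object.
code = compile("x * 2 + 1", "<expr>", "eval")

print(type(code).__name__)     # the result is a 'code' object, not text
print(eval(code, {"x": 20}))   # the interpreter runs the bytecode: 41
dis.dis(code)                  # human-readable listing of the bytecode
```

So even a canonical "interpreted" language has a compilation stage; the difference from C is where and when that stage runs, not whether it exists.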

But how is this interpretation – compilation thing relevant?

I do believe that the topic of compilation versus interpretation is important, and especially in the case of large-scale projects I believe there are significant advantages to compiled code over interpreted code. Firstly, native applications produced through compilation are more resistant to reverse engineering, as it is usually very difficult to recover source code from an executable [2]; this is an exposure of interpretation that one must take into account, since interpreted programs typically ship as readable source. Secondly, with compiled languages we pay the cost of compilation only once, and in return get a fast and efficient executable. Interpretation, on the other hand, carries a high execution cost [2], because the program needs to be parsed and interpreted every time it is run. Another disadvantage is that in large, complex projects identical pieces of code will surely exist and, with an interpreted language, each copy has to be interpreted and optimised separately. This might not make much difference in a small project, but it might be what makes or breaks a product in a big one.
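The "pay the compilation cost once" point can be sketched in a few lines of Python: compiling an expression once and reusing the resulting code object avoids re-parsing and re-compiling it on every evaluation, which is exactly the amortisation a compiled language gives you across the whole program.

```python
# Parse and compile the expression a single time...
expr = "a * b + 1"
code = compile(expr, "<expr>", "eval")

# ...then reuse the compiled code object for every evaluation.
# Calling eval(expr, ...) on the raw string instead would repeat
# the parse/compile work on each iteration.
results = [eval(code, {"a": a, "b": 2}) for a in range(5)]
print(results)   # [1, 3, 5, 7, 9]
```

This is only an in-language illustration of the principle; a compiled language takes it further by doing all of that work ahead of time and emitting native machine code.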

Conclusion

To conclude, there are cases where scripting languages are a better choice and others where system-level programming is preferable. Since we are discussing large-scale projects, however, I believe there is more to be gained from lower-level languages and compilation than from scripting languages and interpretation. While it is true that, as languages evolve, the differences between the two models have become smaller, I find that it is still safer to use a low-level compiled language, even when dealing with novice programmers on a given team. Moreover, in cases where a high-level language must be used, compilation should remain a priority. When the team’s skill level is high, a combination of the two approaches would likely produce the best results and eliminate the disadvantages [3] of either method.

References

[1] http://stackoverflow.com/questions/3265357/compiled-vs-interpreted-languages

[2] http://www.codeproject.com/Articles/696764/Differences-between-compiled-and-Interpreted-Langu

[3] http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zappldev/zappldev_85.htm

Sharing the Knowledge: The Role of Communication Across Teams

An easily overlooked part of coordinating a large-scale project is communication across teams. While it is trivial to ask people to write some documentation and organize technical meetings, there is a lot more to exploiting the benefits of good communication, and to avoiding the negative effects of bad communication. As Frederick P. Brooks notes in [1], even a project with a clear goal, enough people with suitable experience and knowledge, and sufficient resources may easily fail if communication fails. Communication, and subsequently coordination, across and within teams is a determining factor in large-scale projects that can significantly boost [6] or hinder productivity. To investigate the matter, we will discuss how teams and processes must be organized to exploit the benefits of communication and avoid the risks associated with the lack of it.

Is communication really that important?

Tasks on a single large project are rarely completely independent, so teams must be organized such that efficient communication takes place across the relevant ones. Functional defects, low-quality buggy products, and an inability to keep to the schedule can all result from development teams being unaware of the work done by fellow teams. Consider, for example, a piece of code that isn’t designed to be fast because it is rarely used by the system. Another team may choose to reuse it, as it fits a new feature they are developing, and will then sacrifice a lot of valuable time trying to figure out why their code isn’t performing as intended. In other cases such things may go unnoticed and buggy products may ship. A thin spread of domain knowledge and fluctuating or conflicting requirements are also likely to cause problems. All of this means time, money and quality will be wasted unless sufficient care is taken to design a development process with communication in mind.

On the other hand, it is not only that the lack of communication causes problems; there are also significant benefits to be found in sharing knowledge among teams. Hayward P. Andres suggests in [3] that success in software development depends on knowledge acquisition, information sharing and integration, and the minimization of communication breakdowns. The first two factors imply that sharing and using knowledge acquired by different teams can be key to increasing productivity. In small teams, where group members communicate directly and regularly, this is easy to achieve, and it has been shown [2, 7] that productivity and confidence are much higher in these teams than in large development teams. In small teams, knowledge and ideas spread effortlessly, allowing for flexibility, efficient coordination and autonomy. The differentiating factor between small teams of a few people and big teams of hundreds of developers is the communication, or lack of it, amongst the members.

Modern organizations that work on large-scale products have realized the importance of communication and divide development teams into small agile groups that work on distinct tasks. However, this is not enough: communication within teams may be excellent, but communication and dependencies across different teams still need to be managed.

So how should cross-team communication be managed?

There is no single best way to deal with this issue, so let us start with the basic prerequisite methods that have to be used in any software development project. Firstly, documentation and code comments should always exist, as they are the simplest form of communication. A coding style that is standardised across all development teams, and the usage of common tools, will further help make code readable and easy to modify. Additionally, interval builds (daily, weekly or similar) enable the early detection of problems and encourage teams to communicate and find their causes. Communication both formal (regular meetings, technical briefings) and informal (telephone calls, discussion outside meetings) must be encouraged to make sure that implementation details, decisions and changes are quickly communicated across the relevant teams.

While all of the above are important and should be taken care of, the way teams are created, assigned and located is the determining factor in exploiting the benefits of efficient communication. It is clear that teams should be as small as possible, since small groups work more efficiently. But since there will always be dependencies across teams, how does their location matter? Hayward P. Andres concludes in [3] that face-to-face communication is such a rich form of interaction that teams collaborating through face-to-face meetings may have as much as double the productivity of teams that interact without social presence (email, video conferencing). The latter form of communication was also found to result in low confidence, delays and communication failures. Moreover, [5] and [8] agree that however many channels exist for communication between geographically distributed teams, and even under ideal conditions, the levels of coordination, communication and productivity achievable can never match those of teams working on a single site. All this shows how important it is not only to hold regular meetings and briefings but also to have all the teams working on a project under the same roof. Bill Gates himself [2] realised the importance of being able to go and find the person who wrote the piece of code you have a problem with, and made sure that each of his projects was developed on a single site. Even in cases where this is not possible, teams should be divided across sites in such a way that the sub-projects of each site are largely independent, with teams rotated when dependencies change.

Another approach successfully used by large companies such as Nokia, described in [4], is to minimize the need for communication across different teams. The idea here is that, since small teams have proven to be very productive, especially when working autonomously, dividing a given project into tasks in a way that eliminates dependencies across different teams will minimize the communication overhead. Microsoft is also known to use such techniques; in [2] it is stated that teams are assigned to features that mirror the structure of the product and are as independent as possible. Of course, it is recognised that communication is needed to share and integrate knowledge, and that there will always be tasks whose completion depends on more than a single team. The method Nokia uses [7] to address this is very interesting: to make sure that newly acquired knowledge can be exploited by all teams, and that issues affecting multiple teams are efficiently solved, cross-team workshops are organised regularly. In these events, people from different teams are gathered to work on specific tasks or to solve issues that affect the work of multiple teams. These methods, while they may not be applicable to all kinds of software projects, have been found in practice to increase both overall productivity and individual team performance.

What is the lesson to be learned here?

To conclude, it should be apparent that the coordination of teams, their interrelationships and the means of communication between them are critical to efficient large-scale software development. They can be the single factor that makes or breaks the final product. In practice, the majority of the methods described here could, and should, be combined and applied in any large-scale project.

References

[1] Brooks Jr, Frederick P. The Mythical Man-Month, Anniversary Edition: Essays on Software Engineering. Pearson Education (1995): 73-83.

[2] Cusumano, Michael A. “How Microsoft makes large teams work like small teams.” Sloan Management Review 39 (1997): 9-20.

[3] Andres, Hayward P. “A comparison of face-to-face and virtual software development teams.” Team Performance Management 8.1/2 (2002): 39-48.

[4] Lindvall, Mikael, et al. “Agile software development in large organizations.” Computer 37.12 (2004): 26-34.

[5] Espinosa, J. Alberto, et al. “Team knowledge and coordination in geographically distributed software development.” Journal of Management Information Systems 24.1 (2007): 135-169.

[6] Hoegl, Martin, and Hans Georg Gemuenden. “Teamwork quality and the success of innovative projects: A theoretical concept and empirical evidence.” Organization science 12.4 (2001): 435-449.

[7] Kahkonen, Tuomo. “Agile methods for large organizations-building communities of practice.” Agile Development Conference, 2004. IEEE, 2004.

[8] Grinter, Rebecca E., James D. Herbsleb, and Dewayne E. Perry. “The geography of coordination: dealing with distance in R&D work.” Proceedings of the international ACM SIGGROUP conference on Supporting group work. ACM, 1999.