This is an interesting question, and you probably have a personal opinion on this too. But I would argue there are certain things you can and should do to greatly improve the chance of project success. By deploying best practices, you will be less likely to end up with project delays, a malfunctioning product, or a product that does not fulfil user expectations.
If you are a development manager with responsibility for business success of the product being developed, there are a couple of other items you need to consider. Project delay or even failure may become very costly if the wrong decisions are made early in the project. So what are the key factors for project success, and how can you reduce project risk to a minimum?
First of all, you will need to have a clear requirements definition. One of the most common causes of delay or failure in software development projects is a vague or insufficient requirements specification. Requirements have been changed or added during development in almost every software project I have had insight into, increasing costs and causing delays. This makes software development a moving target, which is not a good foundation for solid project success.
Spend enough time early in the project and create a clear and detailed requirements definition, such that stakeholders and everyone in the project know exactly what you are building. You as the software development organization may not be the main problem here. If you are developing software for an external customer, the customer often does not know in sufficient detail what product they need or are ordering. In such cases, help the customer and explain why it is important to have a detailed requirements specification before you start to write code.
Secondly, you need to manage the development project professionally. This is not only about traditional project management, such as using Gantt diagrams in Microsoft Project, or financial tracking. This is of course still important, but in terms of successful software development, it is perhaps even more about managing the source code, features, defects and work tasks.
I argue (you may disagree, but I will not change my mind here) that any software development project being run for a commercial purpose needs to use a version control system and a bug/feature tracking database.
Version control systems like Subversion or Git are free of charge and provide many benefits; change tracking is an obvious one, but there are more, such as reverting to older, more stable versions, comparing versions, or using feature or release branches to aid in parallel development threads. If you develop software for any other purpose than student or hobby reasons, you do need to manage your source code in a version control system. Even if you are the only developer in the project.
Similarly, bug or issue management databases are free too, also offering great benefits for single- or multi-developer teams alike. Using issue management systems like Trac, Mantis or Bugzilla, you can track bugs, feature requests and other to-do items. You can track their status, and tag them for inclusion in certain releases. It is then easy to see what changes went into a particular release, or what bug fixes or feature requests are planned for what coming release. The issue management system is also a great work planning tool, and is often the basis for the weekly team meeting in many projects. In my mind, there is no valid reason not to use an issue management system to organize your software development.
With the requirements definition and software development infrastructure out of the way, what more can software development projects do to improve project success? By using automated static source code analysis, you can check your source code against best-practice coding standards, reducing the risk of introducing bugs and improving readability and portability. MISRA-C is the most popular coding standard for embedded systems these days, and I strongly recommend that you check that your source code complies with the MISRA-C coding standard.
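As a rough illustration (a hypothetical function of my own; MISRA-C contains well over a hundred guidelines, so this only hints at the style), here is code written with two commonly cited MISRA-C recommendations in mind: fixed-width integer types instead of plain int, and a single point of exit per function:

```c
#include <stdint.h>

/* Non-compliant style, for comparison:
 *   int scale(int x) { if (x < 0) return 0; return x * 2; }
 * Plain 'int' has a platform-dependent size, and the early 'return'
 * gives the function two exit points. */

/* A style closer to MISRA-C recommendations: */
static uint16_t scale(uint16_t x)
{
    uint16_t result;

    if (x > 0x7FFFu)                     /* guard: x * 2 would not fit */
    {
        result = 0xFFFFu;                /* saturate instead of wrapping */
    }
    else
    {
        result = (uint16_t)(x * 2u);     /* explicit narrowing cast */
    }

    return result;                       /* single exit point */
}
```

A static analysis tool checks rules like these automatically across the whole code base, which is far more reliable than spotting them by eye.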
Likewise, you can greatly reduce the risk of introducing software errors by measuring and managing your code complexity. This is often done by analysing the source code and measuring the number of independent potential execution paths in every C function. This measure is called cyclomatic complexity, or the McCabe index, after Thomas McCabe, who introduced it in 1976.
If you continuously measure the code complexity of each C function, and make sure it does not exceed 10 on the McCabe index, you are far more likely to end up with code that is largely bug-free, easy to understand and easy to maintain. If your C functions have a higher code complexity, you are likely to introduce more bugs, and the code is more difficult to understand and maintain. If your code complexity is 20 or above, the C function is more or less untestable and almost impossible to maintain.
Measure the code complexity of your C functions every day, and rewrite or refactor any C function with a code complexity over 10. That may be one of the single most effective strategies to improve your software quality!
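To make the counting concrete, here is a hypothetical function (not from any real project; exact counting rules vary slightly between analysis tools) annotated with how its McCabe index adds up. The index is the number of decision points plus one:

```c
#include <stdbool.h>

/* McCabe index of this function: 'if' (+1), '&&' (+1), 'else if' (+1),
 * plus the baseline of 1, gives a cyclomatic complexity of 4 -- well
 * under the recommended limit of 10. */
static int classify(int temperature, bool sensor_ok)
{
    int category;

    if (sensor_ok && (temperature > 100))  /* +1 for if, +1 for && */
    {
        category = 2;                      /* overheated */
    }
    else if (temperature < 0)              /* +1 for else if */
    {
        category = 1;                      /* freezing */
    }
    else
    {
        category = 0;                      /* normal (or sensor fault) */
    }

    return category;
}
```

Every additional if, loop, case label or short-circuit operator adds another path, which is why deeply nested functions quickly pass 10 and become candidates for refactoring into smaller helpers.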
In addition to coding standards compliance and code complexity measurement, both of which can be automated using software tools, you can use manual source code reviews. In a source code review, colleagues study each other’s code and try to find problems, such as logic errors, portability problems, efficiency problems or other types of bugs or potential problems. Once the code has been subject to the peer review, the reviewers meet in a code review meeting and discuss the potential problems that have been found. The meeting will then decide what review comments need to be addressed by changing the code.
What about debugging? If you are using popular Cortex-M devices like STM32, Kinetis, LPC or EFM32, you should make sure you are aware of what debugger capabilities are available to shorten development time and find really tricky bugs.
If you develop on Cortex-M, make sure you are fully aware of the debugging benefits this CPU core offers. You may for example need to record and analyse execution history using ETM/ETB instruction tracing, or perform real-time system analysis using SWV/SWO/ITM event and data tracing. And one of the most valuable debugger features for really tricky bugs is a crash analyser that works out why a Cortex-M system crashed due to a hard fault exception. Common reasons for this are division by zero, pointer errors, execution of illegal instructions, or memory errors like accessing a word on a misaligned memory address.
What other factors for success can be found? Development managers should be careful to select well-proven quality tools with commercial support. It can be enormously expensive to halt an almost-finished development project due to a tool problem that no-one takes responsibility for.
So, what are the key factors for project success in Cortex-M development projects?
Have a clear and defined requirements specification that is not a moving target
Manage bugs, features and work-tasks using an issue management system
Manage source code and its change history using a version control system
Write the source code to comply with the MISRA-C best practice coding standard
Keep the code complexity of your C functions under 10 on the McCabe index
Try to find code problems using manual source code reviews (peer review)
Make sure the debugger supports ETM/ETB and SWV/SWO/ITM tracing
Make sure the debugger has a Cortex-M hard fault crash analyser
Use a commercially supported tool
By following the above recommendations, you are in a much better position to deliver a high-quality product that fulfils its requirements, on time and on budget. That is project success!
When developing Cortex-M projects, you should look for tools that support these important capabilities. A great choice is Atollic TrueSTUDIO with built-in support for the above features. TrueSTUDIO is an ECLIPSE/GNU-based IDE for Cortex-M development.
Read more on ARM Cortex development and debugging in this free white paper: