What is cyclomatic complexity and why should you care?

Posted by Magnus Unemyr on Apr 22, 2015 11:08:52 AM

Do you know the cyclomatic complexity of your code? Or what your McCabe index is? If you are like most embedded developers, you probably answered no to these questions. And why would you care in the first place? As it turns out, there are very good reasons why every embedded developer should care about this; and it doesn’t matter if you are developing for STM32, Kinetis, EFM32, LPC or any other popular Cortex-M device. Or non-ARM devices either, for that matter.

As embedded systems contain far more software today than just a few years ago, projects face an increased risk of shipping products with software defects. A good strategy for improving the situation is to avoid problems in the first place. This can be done by measuring and managing code complexity, as software with a high complexity level is likely to contain more bugs than software of lower complexity. By deploying the methods and tools outlined in this blog article, you and your team can deliver higher-quality software with less effort.

Code complexity is usually expressed using a standard measure called cyclomatic complexity. This measure was introduced by Thomas McCabe in 1976, and is also called the McCabe index. It quantifies the amount of decision logic in a software block, typically a C function. It is therefore a good predictor of bug probability, and can also be used to judge the soundness of, and confidence in, a software implementation in general.

Cyclomatic complexity measures the number of linearly independent execution paths (the structural complexity) through a function, assigning a numerical value to each C function. The more loops and conditional code a function contains, the higher its complexity level.
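As a rough sketch of how the metric is counted (the function below is a hypothetical example, not from any particular codebase): for structured C code, each decision point such as an `if`, `for`, `while` or `case` adds one to the count, and the complexity is the number of decision points plus one.

```c
#include <stdio.h>

/* Hypothetical example: each decision point ('if', 'for', 'while',
   'case', '&&', '||') adds one. Cyclomatic complexity = decisions + 1. */
int classify(int temperature, int humidity)
{
    int risk = 0;
    if (temperature > 30)              /* decision 1 */
        risk += 2;
    if (humidity > 80)                 /* decision 2 */
        risk += 1;
    for (int i = 0; i < risk; i++)     /* decision 3 */
        printf("warning %d\n", i);
    return risk;
}
/* Three decision points => cyclomatic complexity of 4. */
```

A static analysis tool arrives at the same number by counting branch nodes in the function's control-flow graph, so no instrumentation or execution is required.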

A complex function is likely to contain more errors, and is more difficult to understand, test and maintain. Overly complex functions should be simplified by rewriting them, or by splitting them into several smaller functions, thus creating less error-prone functions of reduced complexity.
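One common way to split a function, sketched below with hypothetical helper names: extract each self-contained decision into its own small predicate, so that every function keeps a low complexity of its own and can be tested in isolation.

```c
/* Hypothetical sketch: decision logic extracted into small helpers,
   each with cyclomatic complexity 2, instead of one nested function. */
static int is_digit(char c) { return c >= '0' && c <= '9'; }
static int is_sign(char c)  { return c == '+' || c == '-'; }

/* parse_int: with the predicates factored out, the remaining control
   flow is shallow and easier to read, review and cover with tests. */
int parse_int(const char *s)
{
    int sign = 1, value = 0;
    if (is_sign(*s)) {
        if (*s == '-')
            sign = -1;
        s++;
    }
    while (is_digit(*s))
        value = value * 10 + (*s++ - '0');
    return sign * value;
}
```

The total amount of decision logic is unchanged, but no single function concentrates it, which is exactly what the complexity metric rewards.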


In addition to aiding decisions on rewriting and refactoring source code, code complexity analysis can also guide manual source code inspections. For a number of reasons, many companies do not perform source code reviews, even though reviews are one of the most cost-effective ways of improving software quality quickly. Complexity analysis can flag particularly risky functions, which would benefit from a code inspection even if most other functions are not inspected manually.

The cyclomatic complexity level also equals the number of linearly independent paths through a C function, and thus gives an upper bound on the number of test cases needed to achieve full branch coverage during software testing. Therefore, code complexity analysis helps in creating or assessing test procedures as well. And in general, it is advisable to test functions with a high complexity level especially carefully.

So how can a Cortex-M developer measure the code complexity of their C functions? Modern software tools, like the Atollic TrueSTUDIO C/C++ IDE for ARM Cortex development, include this feature, offering a code complexity analysis view in the IDE:


With code complexity measurement integrated into the ARM Cortex IDE, developers get a very simple and efficient tool solution that enables the development of software of higher quality.

To learn more on measuring and managing the code complexity of embedded software, read this whitepaper:

Read our code complexity whitepaper!

To learn more on ARM Cortex development using the Atollic TrueSTUDIO IDE, read this whitepaper:

Read our ARM development whitepaper!



Topics: Software quality, Atollic TrueSTUDIO, Embedded Software Development