To test or not to test, that is not the question (part 2)

 

In part 1 of this blog post, we looked at the coverage metrics that are highly recommended in ISO 26262 for software unit testing of ASIL A through D applications. We continue this theme by looking at object code coverage and how well it is accepted compared to source code coverage metrics. In the last section of part 1, we noted that we can’t simply sweep the differences between our testing environments and reality under the carpet. This is highlighted in Section 9.4.6 of ISO 26262, which requires that “…the differences in the source and object code, and differences between the test environment and target environment, shall be analysed…” with the aim of specifying further tests that ensure the code is fully tested in the target environment.

Here we discuss an example of how, in day-to-day code development, we can end up in a situation where our source and object code don’t match each other. A full winIDEA project is available as the “Oven Controller” project over on Bitbucket. In the original code from Lisa Simone’s book, the new PWM controller value depends on some pre-defined values, such as ‘GAIN’.

delta_temp = actual_temp - ref_temp;

newPwmValue = NOMINAL_TEMP_PW + (GAIN * delta_temp);

If ‘GAIN’ were to be defined with the value 2, an integer, the resulting binary code is easy to link back to the source code:

Assembler output for calculation of "newPwmValue" with GAIN defined as 2

Here the multiplication by two is implemented as a simple "lsl" or shift-left instruction (address 0x416A).

However, if ‘GAIN’ were defined with the value 2.7, as it was originally in the book (If I Only Changed the Software, Why Is the Phone on Fire?), the GNU GCC tool chain links in standard library code to support the floating-point multiplication in this section of code, since our target Cortex-M0+ MCU does not natively support floating-point numbers.
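To make the difference concrete, here is a minimal, self-contained sketch of the calculation. The identifiers follow the book’s example, but the types and values are our assumptions for illustration:

#include <stdint.h>

#define NOMINAL_TEMP_PW  128     /* hypothetical nominal pulse width */
#define GAIN             2       /* integer: the multiply becomes a shift */
/* #define GAIN          2.7 */  /* float: soft-float library calls are linked in */

int16_t update_pwm(int16_t actual_temp, int16_t ref_temp)
{
    int16_t delta_temp = actual_temp - ref_temp;

    /* With GAIN defined as 2, GCC implements the multiplication as a
       single shift-left. With GAIN defined as 2.7 the expression is
       promoted to double, and on a Cortex-M0+ (no FPU) GCC instead
       emits calls into its runtime support library. */
    return NOMINAL_TEMP_PW + (GAIN * delta_temp);
}

The source line is identical in both cases; only the generated object code changes underneath it.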

Assembler output for calculation of "newPwmValue" with GAIN defined as 2.7

As a result, the code at address 0x416A now prepares the CPU's registers for the subsequent call to a library routine supporting floating-point multiplication.

If our tools are only analysing our application code at source-code level, how can we prove to a certifying body, such as the TÜV, that we have actually checked all our code? One method that we include within our winIDEA development environment is Original Binary Code Coverage (OBCC), which, in the majority of cases, performs as well as the source-code-based coverage methods mentioned earlier, with the exception of MC/DC – we’ll provide more details on this later.

OBCC uses the binary code generated by the MCU’s tool chain, e.g. GNU GCC, to determine the coverage of the application code. With trace enabled during code execution, it is possible to prove that all MCU instructions were executed and, when a branch instruction was executed, which path the branch took. Because OBCC uses the in-built code trace capability of the MCU (if present), there is no need to instrument the code or make a new build just to perform coverage analysis when running your tests – you simply analyse your production release binaries on your target MCU, at full speed, in your target system environment. In addition, the relationship between your binaries and the original source code is ensured through the ELF (Executable and Linkable Format) file format. This file format stores the names of your code’s functions together with the addresses of the code sections to which they pertain, thus establishing the relationship between the C source code and the assembly instructions in the MCU’s memory.
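To appreciate why skipping instrumentation matters, consider what a typical source-level coverage tool has to inject into the code before it is compiled. The probe below is purely hypothetical, but the principle is common to instrumenting tools:

extern void heater_off(void);          /* hypothetical actuator function */
extern void __cov_hit(unsigned id);    /* hypothetical coverage probe */

/* Original application code */
void check_temp(int temp, int limit)
{
    if (temp > limit)
        heater_off();
}

/* The same code after typical source-level instrumentation
   (illustrative only) */
void check_temp_instrumented(int temp, int limit)
{
    if (temp > limit) {
        __cov_hit(17);    /* record: branch taken */
        heater_off();
    } else {
        __cov_hit(18);    /* record: branch not taken */
    }
}

Every probe adds code size, memory traffic and execution time, so the binary being measured is no longer the binary being shipped. OBCC sidesteps this entirely by observing the unmodified production binary through trace.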

Looking back at our Oven Controller example, we can trace the executed code that calculates “newPwmValue”. The result in the source code window looks fine. The calculation of “newPwmValue” is marked green in the gutter, indicating that this line of code was exercised during testing. So everything is OK…isn’t it?

The "newPwmValue" calculation is highlighted green by our coverage metrics...but did we really test everything behind this line of code?

The result in the winIDEA disassembly window tells a different story. Through the icons in the gutter on the left-hand side, we can see which assembler instructions have been executed, as they are marked with a green square. The four “bl” instructions used to call the library functions needed to perform the floating-point calculation are all marked green. We can therefore prove that the library functions were actually called and executed as part of the calculation of “newPwmValue”.

The disassembly listing shows that several library functions were called...
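For reference, the four calls are typical of the soft-float helper routines that GCC’s runtime library (libgcc) provides under the ARM EABI. The exact sequence depends on types and optimisation settings, but for our expression it would plausibly be:

/* newPwmValue = NOMINAL_TEMP_PW + (GAIN * delta_temp);   GAIN defined as 2.7

   bl  __aeabi_i2d    convert delta_temp from int to double
   bl  __aeabi_dmul   multiply by GAIN
   bl  __aeabi_dadd   add NOMINAL_TEMP_PW
   bl  __aeabi_d2iz   convert the result back to an integer              */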

Anyone checking this application for certification purposes would then, of course, want to know if the library code had been fully tested as well. In winIDEA we can navigate from the disassembly window to the called code area and check which parts of the library were actually used, as shown below (the library code in question starts at address 0x6F54). In the gutter we can see that some code has been executed (marked green), whilst other instructions were never reached (marked red). In addition, the “beq” instruction at address 0x6F58 was only observed to have been exercised for one of its two possible outcomes. With this method we have a way of proving to a certifying authority that all our code has been exercised as part of our testing activities – or, as in this example, that our testing is not yet complete.

...but in reality, only part of the included library was actually exercised during testing.
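This “half-tested branch” situation is easy to reproduce at source level. In the hypothetical sketch below, a test that only ever passes in-range values marks the line as covered, yet exercises only one of the two outcomes of the underlying conditional branch – just like the “beq” at 0x6F58:

#define PWM_MAX 255   /* hypothetical limit */

int clamp_pwm(int value)
{
    if (value > PWM_MAX)   /* compiles to a compare plus conditional branch */
        value = PWM_MAX;
    return value;
}

/* clamp_pwm(10) on its own leaves the branch half tested: the source line
   shows as covered, but the taken path is never exercised. Full branch
   coverage also needs a call such as clamp_pwm(PWM_MAX + 1). */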

Now of course, most of us would do our best to avoid any sort of floating-point mathematics on an embedded system. This example does, however, show the potential pitfalls that have to be tackled when testing code, especially in languages like C++, where the compiler inserts entry and exit code around constructors and destructors.

Since most engineering teams are currently more familiar with source-code-level coverage, how well is the object code coverage method accepted for certification? Well, the Certification Authorities Software Team (CAST), a body of software specialists from the aerospace certification authorities of the United States, Europe and Canada, has developed guidelines for approving source code to object code traceability. In addition, the Federal Aviation Administration undertook a study comparing object code and source code structural coverage for object-oriented technology verification and came to the conclusion that, with the exception of the MC/DC metric, object code analysis was as good as source code analysis. In fact, some types of analysis, such as the coverage of constructors and initialisers, were only possible with object-code coverage techniques.
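MC/DC remains the exception because it reasons about the individual conditions inside a decision, and that structure is partly lost once the compiler has translated the decision into machine-level branches. As a quick hypothetical illustration:

extern int door_closed, temp_in_range;   /* hypothetical inputs */
extern void start_heating(void);

void control(void)
{
    /* A decision with two conditions */
    if (door_closed && temp_in_range)
        start_heating();
}

/* MC/DC requires tests showing that each condition independently
   affects the outcome, e.g.:

     door_closed = 1, temp_in_range = 1   ->  true
     door_closed = 0, temp_in_range = 1   ->  false (door_closed decisive)
     door_closed = 1, temp_in_range = 0   ->  false (temp_in_range decisive)

   Due to short-circuit evaluation the compiler emits two separate
   branches here, so mapping the observed branch outcomes back to these
   condition/decision pairs needs source-level knowledge that the object
   code alone does not provide. */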

If you are looking for a way to improve code quality through testing and code coverage analysis, but are not sure about the investment required for source-code coverage analysis tools, then it might be worthwhile checking out the coverage capabilities provided by winIDEA and the BlueBox family of On-Chip Analysers. And, by using testIDEA, you can even develop a battery of tests that perform the coverage analysis as part of the regular testing of your application code on your hardware target, using the same binary output by your compiler that will go into your end product.

If you wish to try out some of the techniques mentioned here, feel free to download our free development IDE, winIDEA Open, and give it a try. In the meantime, Happy BugHunting!

The content of this post would not have been possible without the help of Anja Visnikar, iSYSTEM's Test and Qualification Manager.