For embedded TDD, don't worry so much about testing on the target

As an embedded software developer, you're used to downloading and running your code on your target processor. This is useful for testing your complete application or for testing how it interacts with the hardware.

If you want to do embedded test-driven development (TDD) though, running your automated unit tests on the target is too slow. When you're test-driving, you're running the tests very frequently. You will not want to wait for the tests to download to the target. It will disrupt your flow and you'll get more easily distracted.

It's much more effective to compile and run your unit tests on your host PC.

Yes, some of your application software will need to touch the hardware. You will not want to compile this code for the host. But if you can keep these hardware dependencies contained to a few clearly defined software modules, then you should be able to test-drive most of your code on the host.

Why do you think you need to test on the target?

I think there's a sense that we have to use the target to run our code, because that's just what we do with embedded software. It runs on some particular hardware.

But embedded software is still just software. Yes, it can be compiled for your embedded target, but you can also compile it for your host computer.

What about my hardware?

Okay, so your embedded software probably has some unique hardware dependencies. But you can still test a lot of your embedded software by 1) designing your application to isolate hardware dependencies and 2) mocking the hardware dependencies during your tests.

You should not be accessing hardware registers from most of your code. For example, if you want to turn on an LED from some code that you're test driving, instead of directly accessing some register with PORTE |= 0x0800, use a function like led_turn_on(POWER_LED). This has the added benefit of being much more descriptive.
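
As a sketch of the idea (the led.h/led.c file names, the led_t type, and the mask name here are hypothetical), you put the register access behind a small interface so that only one tiny file ever touches the hardware:

// led.h -- hardware-independent interface
typedef enum { POWER_LED, STATUS_LED } led_t;

void led_turn_on(led_t led);

// led.c -- the only file that touches the register
#include "led.h"
#include "target_registers.h"    // hypothetical device header that defines PORTE

#define POWER_LED_MASK 0x0800    // bit for the power LED on port E

void led_turn_on(led_t led)
{
  if (led == POWER_LED)
  {
    PORTE |= POWER_LED_MASK;
  }
}

On the host, your tests link against a mock of led.h instead of this implementation, so the rest of your code never needs the target's headers.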

When most of your code doesn't access the hardware, then you can test most of it on the host.

But I need interrupts, right?

Again, yes, your application probably has to handle some interrupts. But you don't necessarily need to simulate interrupts to test it. This gets back to hardware isolation.

For example, if you're getting a new character over the serial port on an interrupt, you don't actually need to simulate the ISR being called. To test your application you can just call the function that the ISR calls to accept a new character. Or, depending on your application, you could simulate a new character being available in some receive queue.

Getting the character isn't likely to be as interesting as what you do with it after it's received. And what happens if you get a bad character... or a thousand characters instead of one? Those are the sorts of tests that are really helpful.
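
As a sketch (the register name and function names here are hypothetical), the ISR stays trivially thin and the interesting behavior lives in an ordinary function that your tests can call directly:

#include <stdint.h>

void protocol_handle_char(uint8_t c);  // the application code under test

// In the target build, the ISR only grabs the byte from the hardware...
void uart_rx_isr(void)
{
  uint8_t c = UART_RX_DATA;       // hypothetical receive data register
  protocol_handle_char(c);        // ...and hands it to ordinary application code
}

// On the host, a test skips the ISR entirely and calls the handler directly.
void test_ignores_a_bad_character(void)
{
  protocol_handle_char(0xFF);     // feed in an invalid character
  // ...assert on the protocol state or on mocked outputs here
}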

What about my RTOS?

As with hardware accesses, calls into the RTOS can permeate your code, making it difficult to test. What you want to do is isolate your application logic from the RTOS so that it's easier to test.

For example, consider a task responsible for controlling some LEDs. It consumes events and calls functions to set the LEDs appropriately. It might look something like:

void led_task(void)
{
  event_t next_event;

  while (true)
  {
    rtos_dequeue(&next_event);      // Wait for next event.
    led_process_event(&next_event); // Process next event.
  }
}

The task just waits for the next event and then calls a function to process it. Inside led_process_event we'd expect to call functions that set the LEDs.

Without involving the RTOS, our unit tests can test the LED behavior by calling led_process_event directly, passing it any event (or sequence of events) that we want. We'd mock out the LED control functions to check that the correct ones are called (or not called) in each case.
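
Here's a minimal sketch of such a test, with a hand-rolled mock standing in for the real led_turn_on (a mocking tool would normally generate this for you; the event contents are hypothetical):

#include <assert.h>

// Test double that replaces the real led_turn_on in the host build.
static int power_led_on_count = 0;

void led_turn_on(led_t led)
{
  if (led == POWER_LED)
  {
    power_led_on_count++;
  }
}

void test_power_up_event_turns_on_the_power_led(void)
{
  event_t event = { .type = EVENT_POWER_UP };  // hypothetical event contents

  led_process_event(&event);

  assert(power_led_on_count == 1);  // the correct LED function was called exactly once
}

No RTOS is involved: the test feeds an event straight to led_process_event and checks what the mock recorded.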

TDD helps a lot here, because it gets you thinking about how you're going to test before you write any code.

How do I know my code will work on the target?

It is possible that your target compiler supports only a limited set of language features, and you don't want your code to rely on features it lacks. But you still don't need to run tests on the target to detect this sort of problem.

You can test for this simply by compiling your application code for the target. You don't need to compile the test code, and you don't need to run anything on the target. Your code just won't compile if this is a problem.

There are a few reasons to test on the target

There are some real reasons why you would want to test on the target:

  1. To test the target compiler.
  2. To test for target-only logic errors (e.g. endianness problems).
  3. To test the actual hardware and hardware interfaces.

The first two kinds of problems, while possible, are relatively unlikely or insignificant compared to errors in your application logic.

The third reason -- to test the actual hardware and drivers -- is the real reason to test on the target.

One problem, though, is that these tests are difficult to automate because they likely require some external hardware or human intervention. For example, if you're testing an I2C interface, you'd need some hardware on the other end of the I2C bus to send and receive test data... and someone to manually check that things are working correctly.

Since this type of test is difficult to automate, it doesn't make much sense to use one as an automated unit test while test-driving. Instead, you might create some integration or system tests that run on the target. Because there's special setup required, you run these tests manually and less frequently.

It is possible to test some of these hardware interfaces on the host, but I'm not convinced this is worth the effort. Consider this approach:

You can mock out the interfaces to the hardware registers, and then use your unit tests to verify that you've configured and accessed the hardware registers correctly. For example when writing a byte to I2C, you might need to set up some address and control registers and then write your byte to the data register. You could have a test that verifies that these register accesses are done correctly.
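
For illustration, here's roughly what such a test could look like. Everything in it (the reg_write shim, the register names, and the i2c_write_byte call) is hypothetical:

#include <assert.h>
#include <stdint.h>

// Test double for a register-write shim that the driver calls instead of
// touching hardware addresses directly; it just records each write.
static struct { uint32_t reg; uint8_t value; } writes[8];
static int write_count = 0;

void reg_write(uint32_t reg, uint8_t value)
{
  writes[write_count].reg = reg;
  writes[write_count].value = value;
  write_count++;
}

void test_i2c_write_byte_accesses_registers_in_order(void)
{
  i2c_write_byte(0x50, 0xAB);  // hypothetical driver call: device address, data byte

  assert(writes[0].reg == I2C_ADDR_REG && writes[0].value == 0x50);  // address set up
  assert(writes[1].reg == I2C_CTRL_REG);                             // transfer started
  assert(writes[2].reg == I2C_DATA_REG && writes[2].value == 0xAB);  // byte written
}

Notice how the expectations end up restating the driver almost line for line.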

There are two problems with this host-based approach, though. First, the register accesses specified by the test might themselves be wrong (for example, if the datasheet isn't clear). Second, these tests are tightly coupled to your driver code. In fact, they probably mirror your driver code with a complicated series of function call expectations. This is where I'd prefer to use a manual integration test instead.

Where to go from here

In the context of test-driving, it's important to point out that the purpose of the unit tests you create is not to test the entire system at once. The point of an automated unit test is to verify a specific behavior of a single software unit.

These sorts of tests don't need hardware, interrupts or the RTOS.

There is certainly value in running tests on the target, but testing on the host is much more important to set up first -- especially for TDD, where you need the tests to run quickly.

If you find yourself struggling to write tests that will run on the host, try investing your time in making your code less dependent on the hardware (or the RTOS)... instead of trying to run tests on the target.

When you start to figure it out, you'll realize that you can test-drive plenty of your application without the hardware.

If you think this is some good advice, please get the word out by sharing this article. Thanks!

-- Matt