Time Intervals#

The need to measure time intervals arises frequently in microcontroller programming. Examples include detecting the duration of pulses in a Morse-code-like signal, such as those emitted by infrared remote controls, or measuring the timing of button presses for debouncing.

In principle, microcontrollers are very well equipped for the task. Thanks to their high clock speeds, they can resolve microsecond or even nanosecond intervals in software, without resorting to special hardware features. Conversely, using counters, they can easily track years or even millennia, provided that the power stays on and someone is there to care.

This large dynamic range, nanoseconds to millennia, poses some challenges that, if not properly dealt with, can result in erratic behavior.


In principle, representing such intervals is quite easy. A 64-bit integer can represent 584,942 years with microsecond resolution. The problem is that most microcontrollers use 32-bit integers. MicroPython “loses” a bit or two to tag the datatype, and an additional bit is used for the sign, leaving us with 29 bits. That’s sufficient to represent just under 9 minutes with microsecond resolution. With millisecond resolution we get a little over 6 days.
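These limits are easy to verify with a bit of arithmetic. The sketch below runs in plain Python and assumes the 29 usable bits derived above:

```python
# Back-of-the-envelope limits for small-int timestamps,
# assuming 29 usable magnitude bits (see text above).
usable_bits = 29
max_ticks = 2**usable_bits          # 536,870,912 counts

# with microsecond resolution (1 tick = 1 us):
max_us = max_ticks / 1e6            # maximum interval in seconds
print(max_us / 60, "minutes")       # just under 9 minutes

# with millisecond resolution (1 tick = 1 ms):
max_ms = max_ticks / 1e3            # maximum interval in seconds
print(max_ms / 86400, "days")       # a little over 6 days
```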

Measuring time is typically accomplished with a counter that is initialized to zero on power-up and then incremented every millisecond or so. MicroPython has a function time.ticks_ms() that returns the time in milliseconds since the device was turned on.

For example, to measure the duration of button presses (or some sensor input, let’s say the duration of a flash of light), we take the time when the button was pressed and when it was released and compute the difference.

from time import ticks_ms

# wait for button press
# ...
start = ticks_ms()

# wait for button released
# ...
stop = ticks_ms()

# calculate the duration of the button press
duration = stop - start    # BUG - see below

This mostly works, except that roughly every six days the measured duration is incorrect because the timer “overflows”. Needless to say, such bugs are extremely difficult to diagnose since they occur so infrequently. Because of this they often escape testing, showing up only after the product has been released to customers, with possibly disastrous results.
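The failure mode is easy to reproduce in plain Python by simulating a counter that wraps at a power of two. The period of 2**30 used here is only a typical value; the actual roll-over period of MicroPython’s ticks is port-specific.

```python
# Simulate the roll-over bug with a counter that wraps at 2**30
# (an assumed, typical period; the real one is port-specific).
TICKS_PERIOD = 2**30

# counter value 100 ms before the roll-over:
start = TICKS_PERIOD - 100

# 250 ms later the counter has wrapped around:
stop = (start + 250) % TICKS_PERIOD

duration = stop - start   # naive subtraction
print(duration)           # a huge negative number, not 250
```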


C-Python has a function time.monotonic() that returns the time (with an arbitrary offset) in seconds as a 64-bit floating point number. With floating point the resolution decreases for long durations, but it drops below a microsecond only after a century or so of uptime; most of us won’t be around to notice.

CircuitPython, a fork of MicroPython, also offers time.monotonic(), but it returns a 32-bit rather than a 64-bit float. Now the resolution drops quite quickly, within days or even hours. Many applications keep microcontrollers busy for much longer than that.

%connect huzzah32

# simulate `time.monotonic()` just after powerup:
start = 0

# simulate 1ms interval:
stop = start+0.001

print("duration =", stop-start)
Connected to huzzah32 @ serial:///dev/ttyUSB0
duration = 0.001

The measured duration is 1ms, as it should be.

Now let’s repeat the same experiment, but after the microcontroller has been running for a day:

# simulate `time.monotonic()` just after a day:
start = 3600*24.0

# simulate 1ms interval:
stop = start+0.001

print("duration =", stop-start)
duration = 0.0

Zero? That’s clearly wrong, a consequence of the limited resolution of 32-bit floats. If you play around with the above code you will find that after one day, the smallest interval that can be resolved is 4ms, although the duration is then reported as about 7ms.

Perhaps you do not need millisecond resolution. But after four days, even 10ms intervals can no longer be resolved.
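If you don’t have a microcontroller handy, the loss of resolution can also be computed in standard Python by emulating 32-bit floats with the struct module. The helper float32_ulp below is written just for this illustration:

```python
import struct

def float32_ulp(t):
    """Gap to the next representable 32-bit float above t."""
    # reinterpret the float32 bit pattern as an integer,
    # add one, and convert back to a float
    bits = struct.unpack("<I", struct.pack("<f", t))[0]
    nxt = struct.unpack("<f", struct.pack("<I", bits + 1))[0]
    return nxt - t

print(float32_ulp(3600 * 24.0))      # after 1 day:  0.0078125 s
print(float32_ulp(3600 * 24 * 4.0))  # after 4 days: 0.03125 s
```

After one day the representable values are about 8ms apart (so a 10ms interval still registers, barely), and after four days they are over 30ms apart.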


There are several solutions to this. We could allocate more bits for representing time, for example by using Python bigints. In fact, (Circuit)Python’s time.monotonic_ns() does just that.

This works correctly and has the great benefit of being standard Python (i.e. the code also works in C-Python), but it has potential drawbacks. First, bigints are less efficient than (small) ints or floats. Second, and more importantly, they are allocated on the heap. Because of this they cannot be used in interrupt service routines. Since interrupts are precisely the situation where measuring time is most often needed, that is a quite significant gotcha.
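In standard Python (and CircuitPython) the bigint approach looks like this; plain integer nanoseconds are immune to both roll-over and precision loss:

```python
import time

start = time.monotonic_ns()

# something to measure the duration of:
time.sleep(0.01)

stop = time.monotonic_ns()

# integer subtraction, no roll-over to worry about
duration_ms = (stop - start) / 1_000_000
print("duration:", duration_ms, "ms")   # roughly 10 ms
```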

Note that CircuitPython does not implement interrupts in user code. That’s also a solution, if you don’t need them.


MicroPython chooses a different route. time.ticks_ms() and time.ticks_us() both return the time as small ints. Because of this they “roll over” after a few days (ticks_ms) or minutes (ticks_us). The special function time.ticks_diff() accounts for the roll-over.


import time

start = time.ticks_us()  # or time.ticks_ms()

# do whatever you want to measure the duration of
# ...

stop = time.ticks_us()

To get the duration, use

duration = time.ticks_diff(stop, start)  # CORRECT

Rather than the (incorrect)

duration = stop - start  # WRONG in case of roll-over

The “correct” option works properly as long as no more than one roll-over occurs during the measurement interval. This limits the maximum duration that can be measured with time.ticks_ms() to about six days. With time.ticks_us() the maximum is about nine minutes.
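For durations beyond these limits, one workaround is to poll ticks_diff() more often than once per half roll-over period and accumulate the deltas in a regular Python integer. The sketch below simulates this idea in plain Python: LongTimer is a hypothetical helper, ticks_diff mirrors MicroPython’s documented wrap-aware difference, and the period of 2**30 is only an assumed, typical value.

```python
TICKS_PERIOD = 2**30   # assumed roll-over period (port-specific)

def ticks_diff(end, start):
    # wrap-aware difference, mirroring time.ticks_diff
    return ((end - start + TICKS_PERIOD // 2) % TICKS_PERIOD) \
        - TICKS_PERIOD // 2

class LongTimer:
    """Accumulates elapsed ticks across roll-overs.
    poll() must be called at least once per half period."""
    def __init__(self, now):
        self.last = now
        self.elapsed = 0

    def poll(self, now):
        self.elapsed += ticks_diff(now, self.last)
        self.last = now
        return self.elapsed

# simulate a tick counter that wraps around ten times:
timer = LongTimer(0)
for t in range(TICKS_PERIOD // 4, 10 * TICKS_PERIOD, TICKS_PERIOD // 4):
    timer.poll(t % TICKS_PERIOD)

total = 10 * TICKS_PERIOD + 12345   # far longer than one period
print(timer.poll(total % TICKS_PERIOD) == total)   # True
```

On a real device the polling would be driven by the main loop or a periodic timer rather than a for loop.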

Admittedly, using time.ticks_diff() to compute time differences is a bit clumsy. But it works correctly, is efficient, and can be used even in interrupt service routines. After pondering the alternatives (and even implementing custom helpers in C), I always come back to using ticks_XXX.