Overview
| PyPi module | N/A |
| git repository | https://bitbucket.org/arrizza-public/one-millisecond-loop |
| git command | git clone git@bitbucket.org:arrizza-public/one-millisecond-loop.git |
| Verification Report | https://arrizza.com/web-ver/one-millisecond-loop-report.html |
| Version Info | |
- installation: see https://arrizza.com/setup-common
Summary
Running a loop at a precise frequency can be very useful. What is a simple and accurate way to do this on a full, non-real-time OS like Ubuntu or OSX? On embedded devices, where you have control over all the resources and timer capabilities, it is fairly straightforward to do.
The following shows one technique that is quite accurate and requires minimal code.
Scripts
- See Quick Start for information on using scripts.
- See xplat-utils submodule for information on the submodule.
How it works
The main part of the code is the function MillisecondPerLoop::do_loop(). It captures the current time with a nanosecond clock (see get_current_clock_ns()) and then calculates the delay from the current time to the next millisecond tick. After sleeping for that delay, the time should be exactly the expected tick time.
Results
The results vary from run to run, but a typical run looks like this:
Per trial statistics:
Avg (nS) mS err(mS) err(uS)
[ 0] 1000029101 1000.029 0.029 29.101
[ 1] 1000052793 1000.053 0.053 52.793
[ 2] 1000029546 1000.030 0.030 29.546
[ 3] 999962249 999.962 -0.038 -37.751
[ 4] 999969750 999.970 -0.030 -30.250
[ 5] 1000024419 1000.024 0.024 24.419
[ 6] 999989385 999.989 -0.011 -10.615
[ 7] 1000009707 1000.010 0.010 9.707
[ 8] 1000013002 1000.013 0.013 13.002
[ 9] 999952672 999.953 -0.047 -47.328
Overall statistics:
Expected total time: 10000.000 mS
Actual total time : 10000.033 mS
loop StdDev : 55.501 uS
loop Mean : 999.970 uS
loop Overall : 999.970 uS +/- 111.001 uS (with 95% confidence)
average error : 3.262 uS
The first table is a set of per-trial statistics (in this case 10 trials of 1000 loops each). Each trial takes 1000 * 1 mS = 1 second. Each line shows:
- the trial's total time in nanoseconds; since a trial is 1000 loops of 1 mS, the expected value is 1,000,000,000 nS
- that same value in mS, rounded to microsecond resolution
- the difference between the actual and expected values in mS
- that same difference in uS
Overall Statistics
The next section shows the overall statistics:
- The expected total time for 10 trials of 1000 1 mS loops each is 10 seconds.
- The actual total time shows a slight difference across the full run (33 uS).
- The standard deviation of 55.501 uS shows that individual loop times vary quite a bit.
- The mean duration per tick is 999.970 uS, just 0.030 uS under the 1 mS target on average.
- The overall stats, then, are 999.970 +/- 111 uS. This means that the actual loop times varied from roughly 889 to 1111 uS over 95% of the loops that were tried (10 trials * 1000 loops = 10,000 loops in all).
Notes:
- The standard deviation is calculated on all the available per-loop times on an ongoing basis, and so shows the variability of the loop times more clearly. See https://www.johndcook.com/blog/standard_deviation for the background on the ongoing calculation. The Per Trial values are averaged over 1000 loops and so are much more consistent.
- The testing here is not 100% accurate, since it uses the same timing mechanism (my PC's crystal) for both the loop and the measurement. A much more accurate test would get a signal (quickly!) out of the PC and use an oscilloscope or some other tool to measure the actual timing.
- The on_loop() function uses a widely ranging (1 to 900 microsecond) sleep to simulate very erratic loop behavior. Even with that, the loop timing is very consistent!
Variability
After running this many times, the StdDev is between 50 and 60uS fairly consistently, but it can sometimes rise above that range, e.g.
Overall statistics:
Expected total time: 10000.000 mS
Actual total time : 9999.991 mS
loop StdDev : 131.593 uS
loop Mean : 999.959 uS
loop Overall : 999.959 uS +/- 263.186 uS (with 95% confidence)
average error : -0.918 uS
The average loop time can be slightly above or slightly below 1000.
I have tried running background processes (e.g. an apt-get update & upgrade) to see if they would change the results, but there seems to be no consistent side effect.
The Loop
MillisecondPerLoop has two callbacks: done() and on_tick(); see main.cpp for an example. When done() returns true, the loop ends. on_tick() is the code you want to execute every millisecond.
The do_loop function
- initializes the current time (in nanoseconds)
- checks done() at the top of every loop
- calls on_tick()
- calculates how long to sleep until the next time interval, i.e. the next mS tick
Note that since the loop period is fixed (1 mS), the exact expected time of each tick is known. The current time is also known, and therefore the precise time to sleep can be calculated.
The variability comes from the OS sleep() function. It is a request to the OS to suspend for at least that amount of time and to return to this executable as quickly as possible afterward. But the OS usually takes an extra bit of time every loop to return.
Unfortunately that "extra bit of time" is not predictable. It depends on all other system-level behaviors.
To counteract this, I've added a periodically updated error average. It is not exactly a moving average. It calculates the error over the last 100 entries and then applies that new average. It repeats this continually and so it should accommodate some variance in the OS behavior.
This and the other feedback mechanisms I've tried do not have a significant impact on the StdDev, but they do make the overall time more accurate over thousands of loops. For example, I ran it for 3 minutes, i.e. 180,000 mS, and the total loop time was 179999.418 mS, i.e. it was off by 0.6 mS in 3 minutes. The loop mean was 1000.047 uS.
The negative side effect is that some loops are less than 1mS. These compensate for the loops that are naturally over. That may be a problem in some applications.
As an aside, here's what I've tried:
- hardcoded values that I tweaked manually
- using the StdDev value as the offset or multiples of it
- using the Variance value as the offset
- using a Kalman filter https://github.com/simondlevy/TinyEKF