As part of the OSIE project (Open Source Industrial Edge), we decided to provide a real-time Linux image (with a PREEMPT_RT kernel and isolated CPUs) for the A20-OLinuXino-LIME2 board, suited for Time Sensitive Networking (TSN) - a set of standards that guarantee packet transport with bounded latency, low packet delay variation and no packet loss.
We have already explained how to create the image we provide in this blog post; here we present the various real-time tests we ran on it and their results.
The first test to run when setting up a PREEMPT_RT image is a thread wake-up latency test with cyclictest. cyclictest creates a given number of threads, wakes them up periodically, and measures the difference between the scheduled wake-up time and the actual wake-up time.
The following options were used for the test:
- `-p 98`: set the thread priority to 98 (the maximum is 99)
- `-a 1`: pin the thread to CPU1 ("CPU" actually means core in this case)
- `-t 1`: create one thread
- `-n`: use the high-precision nanosecond clock (clock_nanosleep)
- `-h 200`: log latencies in a histogram with values from 0 to 200us
- `-q`: don't show real-time output
- `-i 200`: set the thread wake-up interval to 200us
For comparison, the same test was run on a Shuttle PC (8-core x86_64 Intel CPU) with a similar setup (PREEMPT_RT, isolated CPUs, etc.).
Wake-up latency measure
This means we can expect at least 73us of jitter for all our real-time triggers, unless some kind of hardware offloading is used.
Synchronizing all clocks is necessary to build a real-time network. The standard way to do this is PTP (Precision Time Protocol), implemented on Linux by linuxPTP, which uses transmission and reception timestamps to synchronize clocks.
linuxPTP logs two important values:
- master offset: estimated offset between the slave and master clocks
- path delay: estimated propagation delay between the two clocks
We set up PTP between two Lime2 boards and a Shuttle, with each board connected directly to one of the Shuttle's Ethernet interfaces.
This tells us all three devices have a common time reference down to a precision of around 70us.
Once devices are synchronized, we only care about sending packets before a deadline, since the common time reference will be used to specify at which time we want events to trigger.
We therefore need to measure the delay with which we can send packets. One way to do this is to synchronize the boards, generate a timestamp just before sending a packet and another upon receiving it; given the synchronization above, the results carry an imprecision of about 70us. We ran the tests both with and without AF_XDP sockets.
For these tests, we connected a Lime2 board to one of our Shuttles. The Shuttle ran a PREEMPT_RT kernel and, as the first test shows, has much lower latencies than the board.
All the previous tests relied on software measurements, which are not entirely reliable. We therefore ran a final test using GPIO pulses and a logic analyzer, to check both the synchronization of the boards and their ability to meet deadlines.
GPIO pulses were chosen because they are simple to trigger and incur little overhead.
For the setup, we connected two Lime2 boards to a Shuttle (the Shuttle has two Ethernet interfaces) and connected a logic analyzer (a high-precision Saleae Logic 8) directly to a GPIO pin on each board. The server sent packets at a regular interval, each carrying a timestamp equal to the time at which the packet was scheduled to be sent, incremented by a pre-defined delay. Upon receiving a packet, each board scheduled a pulse on its GPIO pin at the timestamp contained in the packet.
The test consisted of measuring the shift between the pulses sent by the two boards; this shift tells us how well they are synchronized.
The final results show that the boards are able to trigger events with a worst-case precision of ~100us at a 500us cycle, with each event scheduled by a central server only 250us in advance. No packets were lost and no deadlines were missed during the 15-hour test.