Using Connext Pro with Linux RT_PREEMPT Kernel

Connext applications can enjoy consistent "me-first" execution by running at an elevated priority on the Linux RT_PREEMPT kernel.  RT_PREEMPT is a kernel enhancement that enables real-time preemption and settable priorities for processes and threads, similar to an RTOS.  Running a Connext application at a higher priority reduces the chance that an intervening task will add latency to the application.

You can update the Linux kernel on a machine to RT_PREEMPT in at least two ways: build a patched kernel from source, or install a prebuilt kernel package using apt.

Building a new kernel
There are several guides online covering the steps needed to build an RT_PREEMPT kernel for your system. Be sure to note the kernel version currently installed on your target machine (using 'uname -a'), and select an RT_PREEMPT patch for that kernel version or higher; this helps avoid library incompatibilities when the new kernel is installed.
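
A typical build flow looks roughly like the following sketch. The version numbers and file names are placeholders, and the configuration and install steps vary by distribution, so treat this only as an outline and follow the guide for your specific kernel:

    uname -r                                  # note the currently installed kernel version
    # download the matching kernel source and the corresponding RT_PREEMPT patch
    # (kernel.org hosts the patches under pub/linux/kernel/projects/rt/)
    tar xf linux-<version>.tar.xz
    cd linux-<version>
    xzcat ../patch-<version>-rt<N>.patch.xz | patch -p1   # apply the RT_PREEMPT patch
    cp /boot/config-$(uname -r) .config       # reuse the running kernel's config as a base
    make olddefconfig
    make menuconfig                           # select the fully preemptible (PREEMPT_RT) preemption model
    make -j$(nproc)
    sudo make modules_install install
    sudo reboot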

Patching a kernel using apt
If you are running Debian Linux, prebuilt RT_PREEMPT kernel packages are available for installation using apt. Check the Debian package repository for a close match to the kernel version used on your system.
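
For example, on a Debian amd64 system, commands along these lines would install a prebuilt real-time kernel (the package name shown is an example; available names vary by release and architecture, so search first):

    sudo apt update
    apt-cache search linux-image-rt           # list the available RT kernel packages
    sudo apt install linux-image-rt-amd64     # example metapackage name for amd64
    sudo reboot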

A patched kernel will include "PREEMPT_RT" (or "PREEMPT RT" on older kernels) in the output of the uname -a command.

Running Connext Applications at Higher Priority 

The Linux chrt command (manipulate the real-time attributes of a process) can be used to set the scheduling policy and priority of a process, for example:

    sudo chrt -r 98 ./my_connext_application 

This example launches "my_connext_application" at real-time priority 98 (where 1 = min and 99 = max) using the round-robin (SCHED_RR) scheduling policy.
This ensures that your Connext application will get execution priority over other lower-priority processes and threads.
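
chrt can also be used to inspect or change a process that is already running, by PID. Note that on Linux this affects only the specified task, so threads the process has already created keep their own scheduling settings:

    chrt -p <pid>                  # show the current policy and priority of <pid>
    sudo chrt -r -p 98 <pid>       # switch <pid> to round-robin scheduling at priority 98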

Let's talk about Threads
The Connext libraries will spawn threads to support your application; the priority of these threads can be controlled independently of the application, enabling greater flexibility in system scheduling.
When elevating the priority of your application, be sure to also elevate the priority of the Connext receiver_pool and event threads.
This can be done either in the XML QoS file, or in the application code.

XML QoS file implementation example:
Place the following into the <participant_qos> section of your USER_QOS_PROFILES.xml (or equivalent):

  <receiver_pool>
    <buffer_size> 65536 </buffer_size>
    <thread>
      <priority> 98 </priority>
      <mask> REALTIME_PRIORITY | PRIORITY_ENFORCE </mask>
    </thread>
  </receiver_pool>
  <event>
    <thread>
      <mask> REALTIME_PRIORITY | PRIORITY_ENFORCE </mask>
      <priority> 98 </priority>
      <stack_size> THREAD_STACK_SIZE_DEFAULT </stack_size>
    </thread>
    <initial_count>256</initial_count>
    <max_count>LENGTH_UNLIMITED</max_count>
  </event>

Edit the above settings to match the needs of your application.

 

C++11 source code implementation example:
Add a DomainParticipantQos object to your source code, just prior to creating the DDS domain participant:


  // QoS with thread priority settings for the domain participant
  dds::domain::qos::DomainParticipantQos domainQos;
  domainQos.extensions().receiver_pool.thread().priority(98);
  domainQos.extensions().receiver_pool.thread().mask(rti::core::ThreadSettingsKindMask::realtime_priority());
  domainQos.extensions().event.thread().priority(98);
  domainQos.extensions().event.thread().mask(rti::core::ThreadSettingsKindMask::realtime_priority());
  // create the domain participant
  dds::domain::DomainParticipant participant(domain_id, domainQos);

Edit the above settings to match the needs of your application.

 

How to determine current priorities
Using the Linux command "top -H -p <pid>" (where <pid> is the process ID of your Connext application) should reveal the current priorities of the application and its threads, similar to this:

   PID USER  PR NI   VIRT   RES   SHR S %CPU %MEM   TIME+ COMMAND
  21059 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.11 my_connext_app
  21060 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.00 rCo21640####Dtb
  21061 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.00 rCo21640####Evt
  21062 root  20  0 636168 23860 15152 S  0.0  0.3 0:00.01 rTr21640UDP4ITr
  21063 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.00 rCo21640##00Rcv
  21064 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.00 rCo21640##01Rcv
  21065 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.00 rCo21640##02Rcv
  21066 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.00 rCo21640##03Rcv
  21067 root -99  0 636168 23860 15152 S  0.0  0.3 0:00.00 rCo21640##04Rcv

Note that the PR (priority) column may not show the same numerical value used when setting the thread priority.
Note also the suffix characters on the thread names in the COMMAND column; Connext uses these to indicate the purpose of each thread (receive pool, event, database, etc.), as documented in the Connext Core Libraries User's Manual.

 

Connext Performance under RT_PREEMPT and Elevated Priority

When running a simple pub-sub latency test application at normal and elevated (98) priority, the improvement in consistency is dramatic.
The following table is a histogram of the results of the latency test run 50,000 times.    Each column represents a 100uS bin.

Configuration           0-99uS   100-199   200-299   300-399   400-499   500-599   600-699   700-799   800-899   Over 900uS
No Priority Setting        223     40153      8839       583        95        73        23        10         1            -
Priority set to 98         745     47659      1587         9         -         -         -         -         -            -

Note that this is the same test application in both cases; the difference in latency is due to scheduling by the operating system.

The elevated priority produces faster and more consistent results than running with no priority elevation.

 

RTI PerfTest
The RTI PerfTest application also supports options for setting thread priorities, and produces results similar to those above.
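
For example, a latency run at elevated priority might look like the following. The -threadPriorities option and its h:h:h value format are an assumption based on the PerfTest documentation for recent versions; consult the PerfTest documentation for the exact option name and syntax in your version, and note that chrt can be used exactly as shown earlier in any case:

    # publisher side of a latency test, with elevated application and thread priorities
    sudo chrt -r 98 ./perftest_cpp -pub -latencyTest -threadPriorities h:h:h
    # subscriber side
    sudo chrt -r 98 ./perftest_cpp -sub -threadPriorities h:h:h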

 
