RTI Connext C++ API  Version 5.0.0
DDS_ReceiverPoolQosPolicy Struct Reference

Configures threads used by RTI Connext to receive and process data from transports (for example, UDP sockets).

Public Attributes

struct DDS_ThreadSettings_t thread
 Receiver pool thread(s).
 
DDS_Long buffer_size
 The receive buffer size.
 
DDS_Long buffer_alignment
 The receive buffer alignment.
 

Detailed Description

Configures threads used by RTI Connext to receive and process data from transports (for example, UDP sockets).

This QoS policy is an extension to the DDS standard.

Entity:
DDSDomainParticipant
Properties:
RxO = N/A
Changeable = NO
See Also
Controlling CPU Core Affinity for RTI Threads

Usage

This QoS policy sets the thread properties such as priority level and stack size for the threads used by the middleware to receive and process data from transports.

RTI uses a separate receive thread per port, per transport plug-in. To force RTI Connext to use a separate thread to process the data for a DDSDataReader, set a unique port in the DDS_TransportUnicastQosPolicy or DDS_TransportMulticastQosPolicy of that DDSDataReader.
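For example (a minimal sketch using the classic C++ API; the port number 7411 and the helper function name are placeholders chosen for illustration):

#include "ndds/ndds_cpp.h"

/* Give one DataReader its own receive thread by assigning it a unique
 * unicast receive port that no other DataReader in the participant uses. */
bool set_dedicated_unicast_port(DDS_DataReaderQos& reader_qos)
{
    if (!reader_qos.unicast.value.ensure_length(1, 1)) {
        return false;  /* sequence resize failed */
    }
    reader_qos.unicast.value[0].receive_port = 7411;  /* placeholder port */
    return true;
}

The modified DDS_DataReaderQos is then passed to create_datareader() as usual.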

This QoS policy also sets the size of the buffer used to store packets received from a transport. This buffer size will limit the largest single packet of data that a DDSDomainParticipant will accept from a transport. Users will often set this size to the largest packet that any of the transports used by their application will deliver. For many applications, the value 65,536 (64 K) is a good choice; this value is the largest packet that can be sent/received via UDP.
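For example (a minimal sketch using the classic C++ API; the priority and stack-size values are placeholders, since valid values are platform-specific):

#include "ndds/ndds_cpp.h"

/* Adjust the receiver pool thread settings before creating the
 * DomainParticipant; this policy cannot be changed afterwards. */
DDSDomainParticipant* create_participant_with_receiver_pool(int domain_id)
{
    DDS_DomainParticipantQos participant_qos;
    if (DDSTheParticipantFactory->get_default_participant_qos(participant_qos)
            != DDS_RETCODE_OK) {
        return NULL;
    }

    participant_qos.receiver_pool.thread.priority   = 2;             /* placeholder */
    participant_qos.receiver_pool.thread.stack_size = 4 * 16 * 1024; /* placeholder */

    return DDSTheParticipantFactory->create_participant(
        domain_id, participant_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
}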

Member Data Documentation

struct DDS_ThreadSettings_t DDS_ReceiverPoolQosPolicy::thread

Receiver pool thread(s).

There is at least one receive thread, possibly more.

[default] priority above normal.
The actual value depends on your architecture:

For Windows: 2
For Solaris: OS default priority
For Linux: OS default priority
For LynxOS: 29
For INTEGRITY: 100
For VxWorks: 71

For all others: OS default priority.

[default] stack size. The actual value depends on your architecture:

For Windows: OS default stack size
For Solaris: OS default stack size
For Linux: OS default stack size
For LynxOS: 4*16*1024
For INTEGRITY: 4*20*1024
For VxWorks: 4*16*1024

For all others: OS default stack size.

[default] mask DDS_THREAD_SETTINGS_FLOATING_POINT | DDS_THREAD_SETTINGS_STDIO

DDS_Long DDS_ReceiverPoolQosPolicy::buffer_size

The receive buffer size.

The receive buffer is used by the receive thread to store the raw data that arrives over the transport.

In many applications, users will change the configuration of the built-in transport ::NDDS_Transport_Property_t::message_size_max to increase the size of the largest data packet that can be sent or received through the transport. Typically, users will change the UDPv4 transport plugin's ::NDDS_Transport_Property_t::message_size_max to 65536 (64 K), which is the largest packet that can be sent/received via UDP.

The ReceiverPoolQosPolicy's buffer_size should be set to the same value as the maximum ::NDDS_Transport_Property_t::message_size_max across all of the transports being used.

If you are using the default configuration of the built-in transports, you should not need to change this buffer size.

In addition, if your application only uses transports that support zero-copy, then you do not need to modify the value of buffer_size, even if the ::NDDS_Transport_Property_t::message_size_max of the transport is changed. Transports that support zero-copy do not copy their data into the buffer provided by the receive thread. Instead, they provide the receive thread data in a buffer allocated by the transport itself. The only built-in transport that supports zero-copy is the UDPv4 transport on VxWorks platforms.

[default] 9216

[range] [1, 1 GB]
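
For example (a minimal sketch using the classic C++ API; the property name follows RTI's documented naming for the built-in UDPv4 transport, and 65536 is used because it is the largest UDP datagram):

#include "ndds/ndds_cpp.h"

/* Raise the built-in UDPv4 transport's message_size_max via the PROPERTY
 * QoS and keep receiver_pool.buffer_size in step with it. */
DDS_ReturnCode_t match_buffer_size_to_transport(DDS_DomainParticipantQos& qos)
{
    DDS_ReturnCode_t retcode = DDSPropertyQosPolicyHelper::add_property(
        qos.property,
        "dds.transport.UDPv4.builtin.parent.message_size_max",
        "65536",
        DDS_BOOLEAN_FALSE /* do not propagate via discovery */);
    if (retcode != DDS_RETCODE_OK) {
        return retcode;
    }

    /* buffer_size should match the largest message_size_max in use. */
    qos.receiver_pool.buffer_size = 65536;
    return DDS_RETCODE_OK;
}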

DDS_Long DDS_ReceiverPoolQosPolicy::buffer_alignment

The receive buffer alignment.

Most users will not need to change this alignment.

[default] 16

[range] [1,1024] Value must be a power of 2.


Copyright © 2012 Real-Time Innovations, Inc.