RTI Connext
Core Libraries and Utilities
User’s Manual
Part 2 — Core Concepts
Chapters
Version 5.0
© 2012 Real-Time Innovations, Inc.
All rights reserved.
Printed in U.S.A. First printing.
August 2012.
Trademarks
Copy and Use Restrictions
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form (including electronic, mechanical, photocopy, and facsimile) without the prior written permission of Real-Time Innovations, Inc. The software described in this document is furnished under and subject to the RTI software license agreement. The software may be used or copied only under the terms of the license agreement.
Note: In this section, "the Software" refers to
This product implements the DCPS layer of the Data Distribution Service (DDS) specification version 1.2 and the DDS Interoperability Wire Protocol specification version 2.1, both of which are owned by the Object Management Group, Inc.
Portions of this product were developed using ANTLR (www.ANTLR.org). This product includes software developed by the University of California, Berkeley and its contributors.
Portions of this product were developed using AspectJ, which is distributed per the CPL license. AspectJ source code may be obtained from Eclipse. This product includes software developed by the University of California, Berkeley and its contributors.
Portions of this product were developed using MD5 from Aladdin Enterprises.
Portions of this product include software derived from Fnmatch, (c) 1989, 1993, 1994 The Regents of the University of California. All rights reserved. The Regents and contributors provide this software "as is" without warranty.
Portions of this product were developed using EXPAT from Thai Open Source Software Center Ltd and Clark Cooper Copyright (c) 1998, 1999, 2000 Thai Open Source Software Center Ltd and Clark Cooper Copyright (c) 2001, 2002 Expat maintainers. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
Technical Support
232 E. Java Drive
Sunnyvale, CA 94089
Phone: (408)
Email: support@rti.com
Website:
Contents, Part 2
3.4 Creating User Data Types with Extensible Markup Language (XML)
4.1.7.1 Changing the QoS Defaults Used to Create Entities: set_default_*_qos()
4.2.1 QoS Requested vs. Offered
4.4.2 Creating and Deleting Listeners
4.6.1 Creating and Deleting WaitSets
4.6.4 Processing Triggered Conditions
5.1.3.2 Changing QoS Settings After the Topic Has Been Created
5.1.4 Copying QoS From a Topic to a DataWriter or DataReader
5.4.2 Where Filtering is Applied
Getting and Setting the Publisher's Default QoS Profile and Library
6.3.14 Managing Data Instances (Working with Keyed Data Types)
Propagating Serialized Keys with Disposed Samples
6.6.5 Creating and Configuring Custom FlowControllers with Property QoS
6.6.8 Getting/Setting Properties for a Specific FlowController
7.2.4.1 Configuring QoS Settings when the Subscriber is Created
Getting and Setting the Subscriber's Default QoS Profile and Library
7.3.8.1 Configuring QoS Settings when the DataReader is Created
7.3.8.2 Changing QoS Settings After DataReader Has Been Created
7.3.8.3 Using a Topic's QoS to Initialize a DataWriter's QoS
read_next_instance_w_condition and take_next_instance_w_condition
8.2.1.1 Getting and Setting the DomainParticipantFactory's Default QoS Profile and Library
Configuring QoS Settings when the DomainParticipant is Created
Changing QoS Settings After the DomainParticipant Has Been Created
Getting and Setting the DomainParticipant's Default QoS Profile and Library
8.5.3.3 Controlling the Reliable Protocol Used by
8.5.4 DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension)
8.5.4.1 Configuring Resource Limits for Asynchronous DataWriters
9.4.2 Using Visual Studio .NET, Visual Studio .NET 2003, or Visual Studio 2005
Chapter 3 Data Types and Data Samples
How data is stored or laid out in memory can vary from language to language, compiler to compiler, operating system to operating system, and processor to processor. This combination of language/compiler/operating system/processor is called a platform. Any modern middleware must be able to take data from one specific platform (say C/gcc.3.2.2/Solaris/Sparc) and transparently deliver it to another (for example, Java/JDK 1.6/Windows XP/Pentium). This process is commonly called serialization/deserialization, or marshalling/demarshalling.
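To make the problem concrete: the same 32-bit value occupies memory in a different byte order on little-endian and big-endian processors, so copying raw struct memory between machines is not portable; a serializer must instead write an agreed-upon wire layout. A minimal, library-independent sketch of such an explicit layout (an illustration only, not Connext code):

```cpp
#include <cstdint>

// Write a 32-bit value in an explicit big-endian wire layout,
// independent of the CPU's native byte order.
void serialize_be32(uint32_t value, unsigned char out[4]) {
    out[0] = static_cast<unsigned char>(value >> 24);
    out[1] = static_cast<unsigned char>(value >> 16);
    out[2] = static_cast<unsigned char>(value >> 8);
    out[3] = static_cast<unsigned char>(value);
}

// Reassemble the value on the receiving side, again without relying
// on the local byte order.
uint32_t deserialize_be32(const unsigned char in[4]) {
    return (static_cast<uint32_t>(in[0]) << 24) |
           (static_cast<uint32_t>(in[1]) << 16) |
           (static_cast<uint32_t>(in[2]) << 8) |
            static_cast<uint32_t>(in[3]);
}
```

A middleware must perform this kind of translation (plus alignment, padding, and structure traversal) for every field of every sample, which is why it needs to know the type of the data.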
Messaging products have typically taken one of two approaches to this problem:
1.Do nothing. Messages consist only of opaque streams of bytes. The JMS BytesMessage is an example of this approach.
2.Send everything, every time.
The “do nothing” approach is lightweight on its surface but forces you, the user of the middleware API, to consider all data encoding, alignment, and padding issues. The “send everything” alternative results in large amounts of redundant information being sent with every packet, impacting performance.
Connext takes an intermediate approach. Just as objects in your application program belong to some data type, data samples sent on the same Connext topic share a data type. This type defines the fields that exist in the data samples and what their constituent types are. The middleware stores and propagates this type information and uses it to serialize and deserialize samples efficiently.
To publish and/or subscribe to data with Connext, you will carry out the following steps:
1.Select a type to describe your data.
You have a number of choices. You can choose one of these options, or you can mix and match them.
•Use a built-in type provided by the middleware.
This option may be sufficient if your data typing needs are very simple. If your data is highly structured, or you need to be able to examine fields within that data for filtering or other purposes, this option may not be appropriate. The built-in types are described in Built-in Data Types (Section 3.2).
•Use the RTI code generator, rtiddsgen, to define a type at compile time using a language-independent description file.
Code generation offers two strong benefits not available with dynamic type definition: (1) it allows you to share type definitions across programming languages, and (2) because the structure of the type is known at compile time, it provides rigorous static type safety.
The code generator accepts input in a number of formats to make it easy to integrate Connext with your development processes and IT infrastructure:
•OMG IDL. This format is a standard component of both the DDS and CORBA specifications. It describes data types with a C++-like syntax. This format is described in Creating User Data Types with IDL (Section 3.3).
•XML schema (XSD), either independent or embedded in a WSDL file. XSD should be the format of choice for those using Connext alongside or connected to a web- services infrastructure. This format is described in Creating User Data Types with XML Schemas (XSD) (Section 3.5).
•XML in a DDS-specific format. This format is described in Creating User Data Types with Extensible Markup Language (XML) (Section 3.4).
•Define a type programmatically at run time.
This method may be appropriate for applications with dynamic data description needs: applications for which types change frequently or cannot be known ahead of time. It is described in Defining New Types (Section 3.8.2).
2.Register your type with a logical name.
If you've chosen to use a built-in type, this step is handled for you: the built-in types are registered automatically by default (see Section 3.2.1).
This step is described in Defining New Types (Section 3.8.2).
3.Create a Topic using the type name you previously registered.
If you've chosen to use a built-in type, use the type name returned by the corresponding get_type_name() operation (see Section 3.2.2).
Creating and working with Topics is discussed in Chapter 5: Topics.
4.Create one or more DataWriters to publish your data and one or more DataReaders to subscribe to it.
The concrete types of these objects depend on the concrete data type you've selected, in order to provide you with a measure of type safety.
Creating and working with DataWriters and DataReaders are described in Chapter 6: Sending Data and Chapter 7: Receiving Data, respectively.
Whether publishing or subscribing to data, you will need to know how to create and delete data samples and how to get and set their fields. These tasks are described in Working with Data Samples (Section 3.9).
This chapter describes:
❏Introduction to the Type System (Section 3.1)
❏Built-in Data Types (Section 3.2)
❏Creating User Data Types with IDL (Section 3.3)
❏Creating User Data Types with Extensible Markup Language (XML) (Section 3.4)
❏Creating User Data Types with XML Schemas (XSD) (Section 3.5)
❏Using rtiddsgen (Section 3.6)
❏Using Generated Types without Connext (Standalone) (Section 3.7)
❏Interacting Dynamically with User Data Types (Section 3.8)
❏Working with Data Samples (Section 3.9)
3.1 Introduction to the Type System
A user data type is any custom type that your application defines for use with Connext. It may be a structure, a union, a value type, an enumeration, or a typedef (or language equivalents).
Your application can have any number of user data types. They can be composed of any of the primitive data types listed below or of other user data types.
Only structures, unions, and value types may be read and written directly by Connext; enums, typedefs, and primitive types must be contained within a structure, union, or value type. In order for a DataReader and DataWriter to communicate with each other, the data types associated with their respective Topic definitions must be identical.
❏octet, char, wchar
❏short, unsigned short
❏long, unsigned long
❏long long, unsigned long long
❏float
❏double, long double
❏boolean
❏enum (with or without explicit values)
❏bounded and unbounded string and wstring
The following constructs are also supported:
❏module (also called a package or namespace)
❏pointer
❏array of primitive or user type elements
❏bounded/unbounded sequence of elements1
❏typedef
❏bitfield2
❏union
❏struct
❏value type, a complex type that supports inheritance and other object-oriented features
1.Sequences of sequences are not supported directly. To work around this constraint, typedef the inner sequence and form a sequence of that new type.
2.Data types containing bitfield members are not supported by DynamicData.
To use a data type with Connext, you must define that type in a way the middleware understands and then register the type with the middleware. These steps allow Connext to serialize, deserialize, and otherwise operate on specific types. They will be described in detail in the following sections.
3.1.1 Sequences
A sequence contains an ordered collection of elements that are all of the same type. The operations supported in the sequence are documented in the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Infrastructure Module, Sequence Support).
Java sequences implement the java.util.List interface from the standard Collections framework.
C++ users will find sequences conceptually similar to the deque class in the Standard Template Library (STL).
Elements in a sequence are accessed with their index, just like elements in an array. Indices start from zero. Unlike arrays, however, sequences can grow in size. A sequence has two sizes associated with it: a physical size (the "maximum") and a logical size (the "length"). The physical size indicates how many elements are currently allocated by the sequence to hold; the logical size indicates how many valid elements the sequence actually holds. The length can vary from zero up to the maximum. Elements cannot be accessed at indices beyond the current length.
A sequence may be declared as bounded or unbounded. A sequence's "bound" is the maximum number of elements that the sequence can contain at any one time. The bound is very important because it allows Connext to preallocate buffers to hold serialized and deserialized samples of your types; these buffers are used when communicating with other nodes in your distributed system. If a sequence had no bound, Connext would not know how large to allocate its buffers and would therefore have to allocate them on the fly as individual samples were read and written, significantly degrading performance.
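The two-sizes model can be illustrated with a small stand-in class — a stdlib-only illustration of the model, not the actual Connext sequence API: the maximum is the allocated capacity, the length is the number of valid elements, and access past the length is an error.

```cpp
#include <stdexcept>
#include <vector>

// Illustrative stand-in for a bounded sequence: "maximum" is the
// physical (allocated) size, "length" is the logical size.
class IntSeq {
public:
    explicit IntSeq(int maximum) : storage_(maximum), length_(0) {}
    int maximum() const { return static_cast<int>(storage_.size()); }
    int length() const { return length_; }
    // The length can grow, but only up to the maximum.
    void set_length(int len) {
        if (len < 0 || len > maximum())
            throw std::out_of_range("length exceeds maximum");
        length_ = len;
    }
    // Elements can only be accessed below the current length.
    int& operator[](int i) {
        if (i < 0 || i >= length_)
            throw std::out_of_range("index beyond length");
        return storage_[i];
    }
private:
    std::vector<int> storage_;
    int length_;
};
```

Because the storage is allocated once at the maximum, growing and shrinking the length never reallocates — the same property the bound gives Connext's preallocated buffers.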
3.1.2 Strings and Wide Strings
Connext supports both strings consisting of single-byte characters (string) and strings consisting of wide characters (wstring).
Like sequences, strings may be bounded or unbounded. A string's "bound" is its maximum length (not counting the trailing NULL character in C and C++).
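In C and C++ terms, a bound of N means the storage for the string must be N + 1 bytes, since the trailing NULL is not counted in the bound (an illustrative snippet, not Connext code):

```cpp
#include <cstring>

// A string with bound 5 can hold up to 5 characters of data;
// its C storage still needs a 6th byte for the trailing NUL.
const int BOUND = 5;
char bounded_value[BOUND + 1] = "hello";  // exactly at the bound
```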
3.1.3 Introduction to TypeCode
Type codes are enumerated values, defined by the TCKind enumeration, that identify the kind of a type:
enum TCKind {
    TK_NULL, TK_SHORT, TK_LONG, TK_USHORT, TK_ULONG,
    TK_FLOAT, TK_DOUBLE, TK_BOOLEAN, TK_CHAR, TK_OCTET,
    TK_STRUCT, TK_UNION, TK_ENUM, TK_STRING, TK_SEQUENCE,
    TK_ARRAY, TK_ALIAS, TK_LONGLONG, TK_ULONGLONG,
    TK_LONGDOUBLE, TK_WCHAR, TK_WSTRING, TK_VALUE, TK_SPARSE
}
Type codes unambiguously match type representations and provide a more reliable test than comparing the string type names.
The TypeCode class, modeled after the corresponding CORBA API, provides access to type- code information. For details on the available operations for the TypeCode class, see the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Topic Module, Type Code Support).
3.1.3.1 Sending TypeCodes on the Network
In addition to being used locally, serialized type codes are typically published automatically during discovery as part of the builtin topic data that describes each DataWriter and DataReader.
Note: Type codes are not cached by Connext upon receipt and are therefore not available from the
DataReader's get_matched_publication_data() operation.
If your data type has an especially complex type code, you may need to increase the value of the type_code_max_serialized_length field in the DomainParticipant's
DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4). Or, to prevent the propagation of type codes altogether, you can set this value to zero (0). Be aware that some features of monitoring tools, as well as some features of the middleware itself (such as ContentFilteredTopics) will not work correctly if you disable TypeCode propagation.
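For example, the limit might be raised in an XML QoS profile along these lines (a sketch only — verify the exact element names against the XML QoS schema shipped with your Connext version):

```xml
<participant_qos>
    <resource_limits>
        <!-- Allow larger serialized type codes to be propagated;
             setting this to 0 disables type code propagation. -->
        <type_code_max_serialized_length>4096</type_code_max_serialized_length>
    </resource_limits>
</participant_qos>
```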
3.2 Built-in Data Types
Connext provides a set of standard types that are built into the middleware. These types can be used immediately; they do not require writing IDL, invoking the rtiddsgen utility (see Section 3.6), or using the dynamic type API (see Section 3.8).
The supported built-in types are String, KeyedString, Octets, and KeyedOctets.
The following aspects of the built-in types are covered in this section:
❏Registering Built-in Types
❏Creating Topics for Built-in Types
❏Creating ContentFilteredTopics for Built-in Types
❏String Built-in Type
❏KeyedString Built-in Type
❏Octets Built-in Type
❏KeyedOctets Built-in Type
❏Type Codes for Built-in Types
3.2.1 Registering Built-in Types
By default, the built-in types are automatically registered when a DomainParticipant is created, so your application does not need to call register_type() for them.
3.2.2 Creating Topics for Built-in Types
To create a topic for a built-in type, use the standard DomainParticipant create_topic() operation; for the type_name parameter, use the value returned by the get_type_name() operation, shown below for each API.
Note: In the following examples, you will see the sentinel "<BuiltinType>."
For C and C++: <BuiltinType> = String, KeyedString, Octets or KeyedOctets
For Java and .NET1: <BuiltinType> = String, KeyedString, Bytes or KeyedBytes
C API:
const char* DDS_<BuiltinType>TypeSupport_get_type_name();
C++ API with namespace:
const char* DDS::<BuiltinType>TypeSupport::get_type_name();
C++ API without namespace:
const char* DDS<BuiltinType>TypeSupport::get_type_name();
C++/CLI API:
System::String^ DDS::<BuiltinType>TypeSupport::get_type_name();
C# API:
System.String DDS.<BuiltinType>TypeSupport.get_type_name();
1. RTI Connext .NET language binding is currently supported for C# and C++/CLI.
Java API:
String com.rti.dds.type.builtin.<BuiltinType>TypeSupport.get_type_name();
3.2.2.1 Topic Creation Examples
For simplicity, error handling is not shown in the following examples.
C Example:
DDS_Topic * topic = NULL;
/* Create a builtin type Topic */
topic = DDS_DomainParticipant_create_topic( participant, "StringTopic",
DDS_StringTypeSupport_get_type_name(), &DDS_TOPIC_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
C++ Example with Namespaces:
using namespace DDS;
...
/* Create a String builtin type Topic */
Topic * topic = participant->create_topic(
    "StringTopic", StringTypeSupport::get_type_name(),
    DDS_TOPIC_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
C++/CLI Example:
using namespace DDS;
...
/* Create a builtin type Topic */
Topic^ topic = participant->create_topic(
"StringTopic", StringTypeSupport::get_type_name(), DomainParticipant::TOPIC_QOS_DEFAULT,
nullptr, StatusMask::STATUS_MASK_NONE);
C# Example:
using namespace DDS;
...
/* Create a builtin type Topic */
Topic topic = participant.create_topic(
"StringTopic", StringTypeSupport.get_type_name(), DomainParticipant.TOPIC_QOS_DEFAULT,
null, StatusMask.STATUS_MASK_NONE);
Java Example:
import com.rti.dds.type.builtin.*;
...
/* Create a builtin type Topic */
Topic topic = participant.create_topic(
"StringTopic", StringTypeSupport.get_type_name(), DomainParticipant.TOPIC_QOS_DEFAULT,
null, StatusKind.STATUS_MASK_NONE);
3.2.3 Creating ContentFilteredTopics for Built-in Types
To create a ContentFilteredTopic for a built-in type, use the standard DomainParticipant create_contentfilteredtopic() operation.
The field names used in the filter expressions for the built-in types match the fields of the types themselves; for example, the String type is filtered on its value field, as shown below.
3.2.3.1 ContentFilteredTopic Creation Examples
For simplicity, error handling is not shown in the following examples.
C Example:
DDS_Topic * topic = NULL;
DDS_ContentFilteredTopic * contentFilteredTopic = NULL; struct DDS_StringSeq parameters = DDS_SEQUENCE_INITIALIZER;
/* Create a string ContentFilteredTopic */ topic = DDS_DomainParticipant_create_topic(
participant, "StringTopic", DDS_StringTypeSupport_get_type_name(), &DDS_TOPIC_QOS_DEFAULT,NULL, DDS_STATUS_MASK_NONE);
contentFilteredTopic = DDS_DomainParticipant_create_contentfilteredtopic( participant, "StringContentFilteredTopic",
topic, "value = 'Hello World!'", &parameters);
C++ Example with Namespaces:
using namespace DDS;
...
/* Create a String ContentFilteredTopic */
Topic * topic = participant->create_topic(
    "StringTopic", StringTypeSupport::get_type_name(),
    TOPIC_QOS_DEFAULT, NULL, STATUS_MASK_NONE);
StringSeq parameters;
ContentFilteredTopic * contentFilteredTopic =
    participant->create_contentfilteredtopic(
        "StringContentFilteredTopic", topic,
        "value = 'Hello World!'", parameters);
C++/CLI Example:
using namespace DDS;
...
/* Create a String ContentFilteredTopic */
Topic^ topic = participant->create_topic(
    "StringTopic", StringTypeSupport::get_type_name(),
    DomainParticipant::TOPIC_QOS_DEFAULT,
    nullptr, StatusMask::STATUS_MASK_NONE);
StringSeq^ parameters = gcnew StringSeq();
ContentFilteredTopic^ contentFilteredTopic =
    participant->create_contentfilteredtopic(
        "StringContentFilteredTopic", topic,
        "value = 'Hello World!'", parameters);
C# Example:
using namespace DDS;
...
/* Create a String ContentFilteredTopic */ Topic topic = participant.create_topic(
"StringTopic", StringTypeSupport.get_type_name(), DomainParticipant.TOPIC_QOS_DEFAULT,
null, StatusMask.STATUS_MASK_NONE);
StringSeq parameters = new StringSeq();
ContentFilteredTopic contentFilteredTopic = participant.create_contentfilteredtopic(
"StringContentFilteredTopic", topic, "value = 'Hello World!'", parameters);
Java Example:
import com.rti.dds.type.builtin.*;
...
/* Create a String ContentFilteredTopic */ Topic topic = participant.create_topic(
"StringTopic", StringTypeSupport.get_type_name(), DomainParticipant.TOPIC_QOS_DEFAULT,
null, StatusKind.STATUS_MASK_NONE);
StringSeq parameters = new StringSeq();
ContentFilteredTopic contentFilteredTopic = participant.create_contentfilteredtopic(
"StringContentFilteredTopic", topic, "value = 'Hello World!'", parameters);
3.2.4 String Built-in Type
The String built-in type is represented by a NULL-terminated character string; it maps to char* in C and C++ and to the String class in Java and .NET.
3.2.4.1 Creating and Deleting Strings
In C and C++, Connext provides a set of operations to create (DDS::String_alloc()), destroy (DDS::String_free()), and clone strings (DDS::String_dup()). For details, see the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Infrastructure Module, String Support).
Memory Considerations in Copy Operations:
When the read/take operations that take a sequence of strings as a parameter are used in copy mode, Connext allocates the memory for the string elements in the sequence if they are initialized to NULL.
If the elements are not initialized to NULL, the behavior depends on the language:
•In Java and .NET, the memory associated with the elements is reallocated with every sample, because strings are immutable objects.
•In C and C++, the memory associated with the elements must be large enough to hold the received data. Insufficient memory may result in crashes.
When take_next_sample() and read_next_sample() are called in C and C++, you must make sure that the input string has enough memory to hold the received data. Insufficient memory may result in crashes.
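The allocate-if-NULL contract described above can be sketched with a small stdlib-only helper (an illustration of the rule, not the Connext implementation):

```cpp
#include <cstdlib>
#include <cstring>

// If *dest is NULL, allocate memory for the caller; otherwise assume
// the caller's buffer is already large enough for the received data.
// (A too-small non-NULL buffer is the crash scenario the text warns about.)
void copy_string_sample(char** dest, const char* received) {
    if (*dest == NULL) {
        *dest = static_cast<char*>(std::malloc(std::strlen(received) + 1));
    }
    std::strcpy(*dest, received);
}
```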
3.2.4.2 String DataWriter
The string DataWriter API matches the standard DataWriter API (see Chapter 6: Sending Data); no extensions are required.
The following examples show how to write simple strings with a string built-in type DataWriter.
C Example:
DDS_StringDataWriter * stringWriter = ... ; DDS_ReturnCode_t retCode;
char * str = NULL;
/* Write some data */
retCode = DDS_StringDataWriter_write(
stringWriter, "Hello World!", &DDS_HANDLE_NIL);
str = DDS_String_dup("Hello World!");
retCode = DDS_StringDataWriter_write(stringWriter, str, &DDS_HANDLE_NIL); DDS_String_free(str);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h" using namespace DDS;
...
StringDataWriter * stringWriter = ... ;
/* Write some data */
ReturnCode_t retCode = stringWriter->write("Hello World!", HANDLE_NIL);
char * str = DDS::String_dup("Hello World!");
retCode = stringWriter->write(str, HANDLE_NIL);
DDS::String_free(str);
C++/CLI Example:
using namespace System; using namespace DDS;
...
StringDataWriter^ stringWriter = ... ;
/* Write some data */
stringWriter->write("Hello World!", InstanceHandle_t::HANDLE_NIL);
String^ str = "Hello World!";
stringWriter->write(str, InstanceHandle_t::HANDLE_NIL);
C# Example:
using System; using DDS;
...
StringDataWriter stringWriter = ... ;
/* Write some data */
stringWriter.write("Hello World!", InstanceHandle_t.HANDLE_NIL); String str = "Hello World!";
stringWriter.write(str, InstanceHandle_t.HANDLE_NIL);
Java Example:
import com.rti.dds.publication.*; import com.rti.dds.type.builtin.*; import com.rti.dds.infrastructure.*;
...
StringDataWriter stringWriter = ... ;
/* Write some data */
stringWriter.write("Hello World!", InstanceHandle_t.HANDLE_NIL); String str = "Hello World!";
stringWriter.write(str, InstanceHandle_t.HANDLE_NIL);
3.2.4.3 String DataReader
The string DataReader API matches the standard DataReader API (see Chapter 7: Receiving Data); no extensions are required.
The following examples show how to read simple strings with a string built-in type DataReader.
C Example:
struct DDS_StringSeq dataSeq = DDS_SEQUENCE_INITIALIZER;
struct DDS_SampleInfoSeq infoSeq = DDS_SEQUENCE_INITIALIZER;
DDS_StringDataReader * stringReader = ... ;
DDS_ReturnCode_t retCode;
int i;
/* Take and print the data */
retCode = DDS_StringDataReader_take(stringReader, &dataSeq, &infoSeq,
    DDS_LENGTH_UNLIMITED, DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
    DDS_ANY_INSTANCE_STATE);
for (i = 0; i < DDS_StringSeq_get_length(&dataSeq); ++i) {
    if (DDS_SampleInfoSeq_get_reference(&infoSeq, i)->valid_data) {
        DDS_StringTypeSupport_print_data(DDS_StringSeq_get(&dataSeq, i));
    }
}
/* Return loan */
retCode = DDS_StringDataReader_return_loan(stringReader, &dataSeq, &infoSeq);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h" using namespace DDS;
...
StringSeq dataSeq;
SampleInfoSeq infoSeq;
StringDataReader * stringReader = ... ;
/* Take and print the data */
ReturnCode_t retCode = stringReader->take(dataSeq, infoSeq,
    LENGTH_UNLIMITED, ANY_SAMPLE_STATE, ANY_VIEW_STATE, ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length(); ++i) {
    if (infoSeq[i].valid_data) {
        StringTypeSupport::print_data(dataSeq[i]);
    }
}
/* Return loan */
retCode = stringReader->return_loan(dataSeq, infoSeq);
C++/CLI Example:
using namespace System; using namespace DDS;
...
StringSeq^ dataSeq = gcnew StringSeq();
SampleInfoSeq^ infoSeq = gcnew SampleInfoSeq();
StringDataReader^ stringReader = ... ;
/* Take and print the data */
stringReader->take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy::LENGTH_UNLIMITED,
    SampleStateKind::ANY_SAMPLE_STATE, ViewStateKind::ANY_VIEW_STATE,
    InstanceStateKind::ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq->length(); ++i) {
    if (infoSeq->get_at(i)->valid_data) {
        StringTypeSupport::print_data(dataSeq->get_at(i));
    }
}
/* Return loan */
stringReader->return_loan(dataSeq, infoSeq);
C# Example:
using System; using DDS;
...
StringSeq dataSeq = new StringSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
StringDataReader stringReader = ... ;
/* Take and print the data */ stringReader.take(dataSeq, infoSeq,
ResourceLimitsQosPolicy.LENGTH_UNLIMITED, SampleStateKind.ANY_SAMPLE_STATE, ViewStateKind.ANY_VIEW_STATE, InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length(); ++i) {
    if (infoSeq.get_at(i).valid_data) {
        StringTypeSupport.print_data(dataSeq.get_at(i));
    }
}
/* Return loan */
stringReader.return_loan(dataSeq, infoSeq);
Java Example:
import com.rti.dds.infrastructure.*; import com.rti.dds.subscription.*; import com.rti.dds.type.builtin.*;
...
StringSeq dataSeq = new StringSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
StringDataReader stringReader = ... ;
/* Take and print the data */ stringReader.take(dataSeq, infoSeq,
ResourceLimitsQosPolicy.LENGTH_UNLIMITED, SampleStateKind.ANY_SAMPLE_STATE, ViewStateKind.ANY_VIEW_STATE, InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length(); ++i) {
if (((SampleInfo)infoSeq.get(i)).valid_data) { System.out.println((String)dataSeq.get(i));
}
}
/* Return loan */ stringReader.return_loan(dataSeq, infoSeq);
3.2.5 KeyedString Built-in Type
The Keyed String built-in type consists of a pair of strings, a key and a value, represented as follows in each language:
C/C++ Representation (without namespaces):
struct DDS_KeyedString {
    char * key;
    char * value;
};
C++/CLI Representation:
namespace DDS {
public ref struct KeyedString { public:
System::String^ key; System::String^ value;
...
};
};
C# Representation:
namespace DDS {
public class KeyedString { public System.String key; public System.String value;
};
};
Java Representation:
public class KeyedString {
    public String key;
    public String value;
}
3.2.5.1 Creating and Deleting Keyed Strings
Connext provides a set of constructors/destructors to create/destroy Keyed Strings. For details, see the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Topic Module, Keyed String Support).
If you want to manipulate the memory of the fields 'value' and 'key' in the KeyedString struct in C/C++, use the operations DDS::String_alloc(), DDS::String_dup(), and DDS::String_free(), as described in the API Reference HTML documentation (select Modules, DDS API Reference, Infrastructure Module, String Support).
3.2.5.2 Keyed String DataWriter
The keyed string DataWriter API is extended with the following methods (in addition to the standard methods described in Chapter 6: Sending Data):
DDS::ReturnCode_t DDS::KeyedStringDataWriter::dispose(
    const char* key,
    const DDS::InstanceHandle_t* instance_handle);

DDS::ReturnCode_t DDS::KeyedStringDataWriter::dispose_w_timestamp(
    const char* key,
    const DDS::InstanceHandle_t* instance_handle,
    const struct DDS::Time_t* source_timestamp);

DDS::ReturnCode_t DDS::KeyedStringDataWriter::get_key_value(
    char* key,
    const DDS::InstanceHandle_t* handle);

DDS::InstanceHandle_t DDS::KeyedStringDataWriter::lookup_instance(
    const char* key);

DDS::InstanceHandle_t DDS::KeyedStringDataWriter::register_instance(
    const char* key);

DDS::InstanceHandle_t DDS::KeyedStringDataWriter::register_instance_w_timestamp(
    const char* key,
    const struct DDS::Time_t* source_timestamp);

DDS::ReturnCode_t DDS::KeyedStringDataWriter::unregister_instance(
    const char* key,
    const DDS::InstanceHandle_t* handle);

DDS::ReturnCode_t DDS::KeyedStringDataWriter::unregister_instance_w_timestamp(
    const char* key,
    const DDS::InstanceHandle_t* handle,
    const struct DDS::Time_t* source_timestamp);

DDS::ReturnCode_t DDS::KeyedStringDataWriter::write(
    const char* key,
    const char* str,
    const DDS::InstanceHandle_t* handle);

DDS::ReturnCode_t DDS::KeyedStringDataWriter::write_w_timestamp(
    const char* key,
    const char* str,
    const DDS::InstanceHandle_t* handle,
    const struct DDS::Time_t* source_timestamp);
These operations are introduced to provide maximum flexibility in the format of the input parameters for the write and instance management operations. For additional information and a complete description of the operations, see the API Reference HTML documentation, which is available for all supported programming languages.
The following examples show how to write keyed strings using a keyed string built-in type DataWriter.
C Example:
DDS_KeyedStringDataWriter * stringWriter = ... ;
DDS_ReturnCode_t retCode;
struct DDS_KeyedString * keyedStr = NULL;
char * str = NULL;
/* Write some data using the KeyedString structure */
keyedStr = DDS_KeyedString_new(255, 255);
strcpy(keyedStr->key, "Key 1");
strcpy(keyedStr->value, "Value 1");
retCode = DDS_KeyedStringDataWriter_write(
    stringWriter, keyedStr, &DDS_HANDLE_NIL);
DDS_KeyedString_delete(keyedStr);
/* Write some data using individual strings */
retCode = DDS_KeyedStringDataWriter_write_string_w_key(
    stringWriter, "Key 1", "Value 1", &DDS_HANDLE_NIL);
str = DDS_String_dup("Value 2");
retCode = DDS_KeyedStringDataWriter_write_string_w_key(
    stringWriter, "Key 1", str, &DDS_HANDLE_NIL);
DDS_String_free(str);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h"
using namespace DDS;
...
KeyedStringDataWriter * stringWriter = ... ;
/* Write some data using the KeyedString */
KeyedString * keyedStr = new KeyedString(255, 255);
strcpy(keyedStr->key, "Key 1");
strcpy(keyedStr->value, "Value 1");
ReturnCode_t retCode = stringWriter->write(*keyedStr, HANDLE_NIL);
delete keyedStr;
/* Write some data using individual strings */
retCode = stringWriter->write("Key 1", "Value 1", HANDLE_NIL);
char * str = DDS::String_dup("Value 2");
retCode = stringWriter->write("Key 1", str, HANDLE_NIL);
DDS::String_free(str);
C++/CLI Example:
using namespace System;
using namespace DDS;
...
KeyedStringDataWriter^ stringWriter = ... ;

/* Write some data using the KeyedString */
KeyedString^ keyedStr = gcnew KeyedString();
keyedStr->key = "Key 1";
keyedStr->value = "Value 1";
stringWriter->write(keyedStr, InstanceHandle_t::HANDLE_NIL);

/* Write some data using individual strings */
stringWriter->write("Key 1", "Value 1", InstanceHandle_t::HANDLE_NIL);
String^ str = "Value 2";
stringWriter->write("Key 1", str, InstanceHandle_t::HANDLE_NIL);
C# Example:
using System;
using DDS;
...
KeyedStringDataWriter stringWriter = ... ;

/* Write some data using the KeyedString */
KeyedString keyedStr = new KeyedString();
keyedStr.key = "Key 1";
keyedStr.value = "Value 1";
stringWriter.write(keyedStr, InstanceHandle_t.HANDLE_NIL);

/* Write some data using individual strings */
stringWriter.write("Key 1", "Value 1", InstanceHandle_t.HANDLE_NIL);
String str = "Value 2";
stringWriter.write("Key 1", str, InstanceHandle_t.HANDLE_NIL);
Java Example:
import com.rti.dds.publication.*;
import com.rti.dds.type.builtin.*;
import com.rti.dds.infrastructure.*;
...
KeyedStringDataWriter stringWriter = ... ;

/* Write some data using the KeyedString */
KeyedString keyedStr = new KeyedString();
keyedStr.key = "Key 1";
keyedStr.value = "Value 1";
stringWriter.write(keyedStr, InstanceHandle_t.HANDLE_NIL);

/* Write some data using individual strings */
stringWriter.write("Key 1", "Value 1", InstanceHandle_t.HANDLE_NIL);
String str = "Value 2";
stringWriter.write("Key 1", str, InstanceHandle_t.HANDLE_NIL);
3.2.5.3 Keyed String DataReader
The keyed string DataReader API is extended with the following operations (in addition to the standard DataReader methods):
DDS::ReturnCode_t DDS::KeyedStringDataReader::get_key_value(
char * key, const DDS::InstanceHandle_t* handle);
DDS::InstanceHandle_t DDS::KeyedStringDataReader::lookup_instance(
const char * key);
For additional information and a complete description of these operations, see the API Reference HTML documentation, which is available for all supported programming languages.
Memory considerations in copy operations:
For read/take operations with copy semantics, such as read_next_sample() and take_next_sample(), Connext allocates memory for the fields 'value' and 'key' if they are initialized to NULL.
If the fields are not initialized to NULL, the behavior depends on the language:
•In Java and .NET, the memory associated with the fields 'value' and 'key' will be reallocated with every sample.
•In C and C++, the memory associated with the fields 'value' and 'key' must be large enough to hold the received data. Insufficient memory may result in crashes.
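For instance, a C application that uses take_next_sample() can preallocate the 'key' and 'value' buffers once and reuse them across samples, so that Connext does not allocate them on every call. The following is only a sketch: the reader variable and the 255-byte buffer sizes are assumptions for this example, and the buffers must be large enough for the largest sample the application expects to receive.

```c
/* Sketch: preallocating 'key' and 'value' so the buffers are reused
 * across take_next_sample() calls instead of being allocated each time.
 * The 255-byte sizes are example assumptions. */
DDS_KeyedStringDataReader * stringReader = ... ;
struct DDS_KeyedString keyedStr;
struct DDS_SampleInfo info;
DDS_ReturnCode_t retCode;

keyedStr.key = DDS_String_alloc(255);   /* reused for every sample */
keyedStr.value = DDS_String_alloc(255);

retCode = DDS_KeyedStringDataReader_take_next_sample(
    stringReader, &keyedStr, &info);
if (retCode == DDS_RETCODE_OK && info.valid_data) {
    printf("%s -> %s\n", keyedStr.key, keyedStr.value);
}

DDS_String_free(keyedStr.key);
DDS_String_free(keyedStr.value);
```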
The following examples show how to read keyed strings with a keyed string DataReader.
C Example:
struct DDS_KeyedStringSeq dataSeq = DDS_SEQUENCE_INITIALIZER;
struct DDS_SampleInfoSeq infoSeq = DDS_SEQUENCE_INITIALIZER;
DDS_KeyedStringDataReader * stringReader = ... ;
DDS_ReturnCode_t retCode;
int i;

/* Take and print the data */
retCode = DDS_KeyedStringDataReader_take(stringReader,
    &dataSeq, &infoSeq, DDS_LENGTH_UNLIMITED, DDS_ANY_SAMPLE_STATE,
    DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);
for (i = 0; i < DDS_KeyedStringSeq_get_length(&dataSeq); ++i) {
    if (DDS_SampleInfoSeq_get_reference(&infoSeq, i)->valid_data) {
        DDS_KeyedStringTypeSupport_print_data(
            DDS_KeyedStringSeq_get_reference(&dataSeq, i));
    }
}

/* Return loan */
retCode = DDS_KeyedStringDataReader_return_loan(
    stringReader, &dataSeq, &infoSeq);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h"
using namespace DDS;
...
KeyedStringSeq dataSeq;
SampleInfoSeq infoSeq;
KeyedStringDataReader * stringReader = ... ;

/* Take and print the data */
ReturnCode_t retCode = stringReader->take(dataSeq, infoSeq,
    LENGTH_UNLIMITED, ANY_SAMPLE_STATE, ANY_VIEW_STATE, ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length(); ++i) {
    if (infoSeq[i].valid_data) {
        KeyedStringTypeSupport::print_data(&dataSeq[i]);
    }
}

/* Return loan */
retCode = stringReader->return_loan(dataSeq, infoSeq);
C++/CLI Example:
using namespace System;
using namespace DDS;
...
KeyedStringSeq^ dataSeq = gcnew KeyedStringSeq();
SampleInfoSeq^ infoSeq = gcnew SampleInfoSeq();
KeyedStringDataReader^ stringReader = ... ;

/* Take and print the data */
stringReader->take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy::LENGTH_UNLIMITED,
    SampleStateKind::ANY_SAMPLE_STATE,
    ViewStateKind::ANY_VIEW_STATE,
    InstanceStateKind::ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq->length; ++i) {
    if (infoSeq->get_at(i)->valid_data) {
        KeyedStringTypeSupport::print_data(dataSeq->get_at(i));
    }
}

/* Return loan */
stringReader->return_loan(dataSeq, infoSeq);
C# Example:
using System;
using DDS;
...
KeyedStringSeq dataSeq = new KeyedStringSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
KeyedStringDataReader stringReader = ... ;

/* Take and print the data */
stringReader.take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
    SampleStateKind.ANY_SAMPLE_STATE,
    ViewStateKind.ANY_VIEW_STATE,
    InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length; ++i) {
    if (infoSeq.get_at(i).valid_data) {
        KeyedStringTypeSupport.print_data(dataSeq.get_at(i));
    }
}

/* Return loan */
stringReader.return_loan(dataSeq, infoSeq);
Java Example:
import com.rti.dds.infrastructure.*;
import com.rti.dds.subscription.*;
import com.rti.dds.type.builtin.*;
...
KeyedStringSeq dataSeq = new KeyedStringSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
KeyedStringDataReader stringReader = ... ;

/* Take and print the data */
stringReader.take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
    SampleStateKind.ANY_SAMPLE_STATE,
    ViewStateKind.ANY_VIEW_STATE,
    InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.size(); ++i) {
    if (((SampleInfo)infoSeq.get(i)).valid_data) {
        System.out.println(((KeyedString)dataSeq.get(i)).toString());
    }
}

/* Return loan */
stringReader.return_loan(dataSeq, infoSeq);
3.2.6 Octets
The octets built-in type is used to publish and subscribe to opaque sequences of bytes. It is represented as follows:
C/C++ Representation (without Namespaces):
struct DDS_Octets {
    int length;
    unsigned char * value;
};
C++/CLI Representation:
namespace DDS {
    public ref struct Bytes {
    public:
        System::Int32 length;
        System::Int32 offset;
        array<System::Byte>^ value;
        ...
    };
};
C# Representation:
namespace DDS {
    public class Bytes {
        public System.Int32 length;
        public System.Int32 offset;
        public System.Byte[] value;
        ...
    };
};
Java Representation:
package com.rti.dds.type.builtin;
public class Bytes implements Copyable {
    public int length;
    public int offset;
    public byte[] value;
    ...
};
3.2.6.1 Creating and Deleting Octets
Connext provides a set of constructors/destructors to create and destroy Octets objects. For details, see the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Topic Module, Octets).
If you want to manipulate the memory of the value field inside the Octets struct in C/C++, use the operations DDS::OctetBuffer_alloc(), DDS::OctetBuffer_dup(), and
DDS::OctetBuffer_free(), described in the API Reference HTML documentation (select
Modules, DDS API Reference, Infrastructure Module, Octet Buffer Support).
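For instance, the value buffer of an Octets sample can be managed by hand with these operations. This is a sketch only; the 1024-byte size is an assumption for the example, and the write step is elided.

```c
/* Sketch: managing the 'value' buffer of a DDS_Octets sample with the
 * OctetBuffer support operations. The 1024-byte size is an example. */
struct DDS_Octets octets;

octets.value = DDS_OctetBuffer_alloc(1024); /* allocate the buffer */
octets.length = 2;
octets.value[0] = 46;
octets.value[1] = 47;

/* ... write the sample with an OctetsDataWriter ... */

DDS_OctetBuffer_free(octets.value);         /* release the buffer */
octets.value = NULL;
```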
3.2.6.2 Octets DataWriter
In addition to the standard DataWriter methods, the octets DataWriter API is extended with the following operations:
DDS::ReturnCode_t DDS::OctetsDataWriter::write(
const DDS::OctetSeq & octets,
const DDS::InstanceHandle_t & handle);
DDS::ReturnCode_t DDS::OctetsDataWriter::write(
const unsigned char * octets, int length,
const DDS::InstanceHandle_t& handle);
DDS::ReturnCode_t DDS::OctetsDataWriter::write_w_timestamp( const DDS::OctetSeq & octets,
const DDS::InstanceHandle_t & handle, const DDS::Time_t & source_timestamp);
DDS::ReturnCode_t DDS::OctetsDataWriter::write_w_timestamp( const unsigned char * octets, int length,
const DDS::InstanceHandle_t& handle, const DDS::Time_t& source_timestamp);
These methods are introduced to provide maximum flexibility in the format of the input parameters for the write operations. For additional information and a complete description of these operations in all supported languages, see the API Reference HTML documentation.
The following examples show how to write an array of octets using an octets DataWriter.
C Example:
DDS_OctetsDataWriter * octetsWriter = ... ;
DDS_ReturnCode_t retCode;
struct DDS_Octets * octets = NULL;
unsigned char * octetArray = NULL;

/* Write some data using the Octets structure */
octets = DDS_Octets_new_w_size(1024);
octets->length = 2;
octets->value[0] = 46;
octets->value[1] = 47;
retCode = DDS_OctetsDataWriter_write(octetsWriter, octets, &DDS_HANDLE_NIL);
DDS_Octets_delete(octets);

/* Write some data using an octet array */
octetArray = (unsigned char *)malloc(1024);
octetArray[0] = 46;
octetArray[1] = 47;
retCode = DDS_OctetsDataWriter_write_octets(
    octetsWriter, octetArray, 2, &DDS_HANDLE_NIL);
free(octetArray);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h"
using namespace DDS;
...
OctetsDataWriter * octetsWriter = ... ;

/* Write some data using the Octets structure */
Octets * octets = new Octets(1024);
octets->length = 2;
octets->value[0] = 46;
octets->value[1] = 47;
ReturnCode_t retCode = octetsWriter->write(*octets, HANDLE_NIL);
delete octets;

/* Write some data using an octet array */
unsigned char * octetArray = new unsigned char[1024];
octetArray[0] = 46;
octetArray[1] = 47;
retCode = octetsWriter->write(octetArray, 2, HANDLE_NIL);
delete []octetArray;
C++/CLI Example:
using namespace System;
using namespace DDS;
...
BytesDataWriter^ octetsWriter = ...;

/* Write some data using Bytes */
Bytes^ octets = gcnew Bytes(1024);
octets->value[0] = 46;
octets->value[1] = 47;
octets->length = 2;
octets->offset = 0;
octetsWriter->write(octets, InstanceHandle_t::HANDLE_NIL);

/* Write some data using a byte array */
array<Byte>^ octetArray = gcnew array<Byte>(1024);
octetArray[0] = 46;
octetArray[1] = 47;
octetsWriter->write(octetArray, 0, 2, InstanceHandle_t::HANDLE_NIL);
C# Example:
using System;
using DDS;
...
BytesDataWriter octetsWriter = ...;

/* Write some data using the Bytes */
Bytes octets = new Bytes(1024);
octets.value[0] = 46;
octets.value[1] = 47;
octets.length = 2;
octets.offset = 0;
octetsWriter.write(octets, InstanceHandle_t.HANDLE_NIL);

/* Write some data using a byte array */
byte[] octetArray = new byte[1024];
octetArray[0] = 46;
octetArray[1] = 47;
octetsWriter.write(octetArray, 0, 2, InstanceHandle_t.HANDLE_NIL);
Java Example:
import com.rti.dds.publication.*;
import com.rti.dds.type.builtin.*;
import com.rti.dds.infrastructure.*;
...
BytesDataWriter octetsWriter = ... ;

/* Write some data using the Bytes class */
Bytes octets = new Bytes(1024);
octets.length = 2;
octets.offset = 0;
octets.value[0] = 46;
octets.value[1] = 47;
octetsWriter.write(octets, InstanceHandle_t.HANDLE_NIL);

/* Write some data using a byte array */
byte[] octetArray = new byte[1024];
octetArray[0] = 46;
octetArray[1] = 47;
octetsWriter.write(octetArray, 0, 2, InstanceHandle_t.HANDLE_NIL);
3.2.6.3 Octets DataReader
The octets DataReader API matches the standard DataReader API; it is not extended with additional operations.
Memory considerations in copy operations:
For read/take operations with copy semantics, such as read_next_sample() and take_next_sample(), Connext allocates memory for the field 'value' if it is initialized to NULL.
If the field 'value' is not initialized to NULL, the behavior depends on the language:
•In Java and .NET, the memory for the field 'value' will be reallocated if the current size is not large enough to hold the received data.
•In C and C++, the memory associated with the field 'value' must be big enough to hold the received data. Insufficient memory may result in crashes.
The following examples show how to read octets with an octets DataReader.
C Example:
struct DDS_OctetsSeq dataSeq = DDS_SEQUENCE_INITIALIZER;
struct DDS_SampleInfoSeq infoSeq = DDS_SEQUENCE_INITIALIZER;
DDS_OctetsDataReader * octetsReader = ... ;
DDS_ReturnCode_t retCode;
int i;

/* Take and print the data */
retCode = DDS_OctetsDataReader_take(octetsReader,
    &dataSeq, &infoSeq, DDS_LENGTH_UNLIMITED, DDS_ANY_SAMPLE_STATE,
    DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);
for (i = 0; i < DDS_OctetsSeq_get_length(&dataSeq); ++i) {
    if (DDS_SampleInfoSeq_get_reference(&infoSeq, i)->valid_data) {
        DDS_OctetsTypeSupport_print_data(
            DDS_OctetsSeq_get_reference(&dataSeq, i));
    }
}

/* Return loan */
retCode = DDS_OctetsDataReader_return_loan(octetsReader, &dataSeq, &infoSeq);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h"
using namespace DDS;
...
OctetsSeq dataSeq;
SampleInfoSeq infoSeq;
OctetsDataReader * octetsReader = ... ;

/* Take and print the data */
ReturnCode_t retCode = octetsReader->take(dataSeq, infoSeq,
    LENGTH_UNLIMITED, ANY_SAMPLE_STATE, ANY_VIEW_STATE, ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length(); ++i) {
    if (infoSeq[i].valid_data) {
        OctetsTypeSupport::print_data(&dataSeq[i]);
    }
}

/* Return loan */
retCode = octetsReader->return_loan(dataSeq, infoSeq);
C++/CLI Example:
using namespace System;
using namespace DDS;
...
BytesSeq^ dataSeq = gcnew BytesSeq();
SampleInfoSeq^ infoSeq = gcnew SampleInfoSeq();
BytesDataReader^ octetsReader = ... ;

/* Take and print the data */
octetsReader->take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy::LENGTH_UNLIMITED,
    SampleStateKind::ANY_SAMPLE_STATE,
    ViewStateKind::ANY_VIEW_STATE,
    InstanceStateKind::ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq->length; ++i) {
    if (infoSeq->get_at(i)->valid_data) {
        BytesTypeSupport::print_data(dataSeq->get_at(i));
    }
}

/* Return loan */
octetsReader->return_loan(dataSeq, infoSeq);
C# Example:
using System;
using DDS;
...
BytesSeq dataSeq = new BytesSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
BytesDataReader octetsReader = ... ;

/* Take and print the data */
octetsReader.take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
    SampleStateKind.ANY_SAMPLE_STATE,
    ViewStateKind.ANY_VIEW_STATE,
    InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length; ++i) {
    if (infoSeq.get_at(i).valid_data) {
        BytesTypeSupport.print_data(dataSeq.get_at(i));
    }
}

/* Return loan */
octetsReader.return_loan(dataSeq, infoSeq);
Java Example:
import com.rti.dds.infrastructure.*;
import com.rti.dds.subscription.*;
import com.rti.dds.type.builtin.*;
...
BytesSeq dataSeq = new BytesSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
BytesDataReader octetsReader = ... ;

/* Take and print the data */
octetsReader.take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
    SampleStateKind.ANY_SAMPLE_STATE,
    ViewStateKind.ANY_VIEW_STATE,
    InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.size(); ++i) {
    if (((SampleInfo)infoSeq.get(i)).valid_data) {
        System.out.println(((Bytes)dataSeq.get(i)).toString());
    }
}

/* Return loan */
octetsReader.return_loan(dataSeq, infoSeq);
3.2.7 KeyedOctets
The keyed octets built-in type is used to publish and subscribe to sequences of bytes keyed by a string. It is represented as follows:
C/C++ Representation (without Namespaces):
struct DDS_KeyedOctets {
    char * key;
    int length;
    unsigned char * value;
};
C++/CLI Representation:
namespace DDS {
    public ref struct KeyedBytes {
    public:
        System::String^ key;
        System::Int32 length;
        System::Int32 offset;
        array<System::Byte>^ value;
        ...
    };
};
C# Representation:
namespace DDS {
    public class KeyedBytes {
        public System.String key;
        public System.Int32 length;
        public System.Int32 offset;
        public System.Byte[] value;
        ...
    };
};
Java Representation:
package com.rti.dds.type.builtin;
public class KeyedBytes {
    public String key;
    public int length;
    public int offset;
    public byte[] value;
    ...
};
3.2.7.1 Creating and Deleting KeyedOctets
Connext provides a set of constructors/destructors to create and destroy KeyedOctets objects. For details, see the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Topic Module, KeyedOctets).
To manipulate the memory of the value field in the KeyedOctets struct in C/C++: use
DDS::OctetBuffer_alloc(), DDS::OctetBuffer_dup(), and DDS::OctetBuffer_free(). See the API Reference HTML documentation (select Modules, DDS API Reference, Infrastructure Module, Octet Buffer Support).
To manipulate the memory of the key field in the KeyedOctets struct in C/C++: use
DDS::String_alloc(), DDS::String_dup(), and DDS::String_free(). See the API Reference HTML documentation (select Modules, DDS API Reference, Infrastructure Module, String Support).
3.2.7.2 Keyed Octets DataWriter
In addition to the standard DataWriter methods, the keyed octets DataWriter API is extended with the following operations:
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::dispose( const char* key,
const DDS::InstanceHandle_t & instance_handle);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::dispose_w_timestamp( const char* key,
const DDS::InstanceHandle_t & instance_handle, const DDS::Time_t & source_timestamp);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::get_key_value( char * key,
const DDS::InstanceHandle_t& handle);
DDS::InstanceHandle_t DDS::KeyedOctetsDataWriter::lookup_instance(
const char * key);
DDS::InstanceHandle_t DDS::KeyedOctetsDataWriter::register_instance(
const char* key);
DDS::InstanceHandle_t
DDS::KeyedOctetsDataWriter::register_instance_w_timestamp(
const char * key,
const DDS::Time_t & source_timestamp);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::unregister_instance( const char * key,
const DDS::InstanceHandle_t & handle);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::unregister_instance_w_timestamp(
const char* key,
const DDS::InstanceHandle_t & handle, const DDS::Time_t & source_timestamp);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::write( const char * key,
const unsigned char * octets, int length,
const DDS::InstanceHandle_t& handle);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::write( const char * key,
const DDS::OctetSeq & octets,
const DDS::InstanceHandle_t & handle);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::write_w_timestamp( const char * key,
const unsigned char * octets, int length,
const DDS::InstanceHandle_t& handle, const DDS::Time_t& source_timestamp);
DDS::ReturnCode_t DDS::KeyedOctetsDataWriter::write_w_timestamp( const char * key,
const DDS::OctetSeq & octets,
const DDS::InstanceHandle_t & handle, const DDS::Time_t & source_timestamp);
These methods are introduced to provide maximum flexibility in the format of the input parameters for the write and instance management operations. For more information and a complete description of these operations in all supported languages, see the API Reference HTML documentation.
The following examples show how to write keyed octets using a keyed octets DataWriter.
C Example:
DDS_KeyedOctetsDataWriter * octetsWriter = ... ;
DDS_ReturnCode_t retCode;
struct DDS_KeyedOctets * octets = NULL;
unsigned char * octetArray = NULL;

/* Write some data using the KeyedOctets structure */
octets = DDS_KeyedOctets_new_w_size(128, 1024);
strcpy(octets->key, "Key 1");
octets->length = 2;
octets->value[0] = 46;
octets->value[1] = 47;
retCode = DDS_KeyedOctetsDataWriter_write(
    octetsWriter, octets, &DDS_HANDLE_NIL);
DDS_KeyedOctets_delete(octets);

/* Write some data using an octet array */
octetArray = (unsigned char *)malloc(1024);
octetArray[0] = 46;
octetArray[1] = 47;
retCode = DDS_KeyedOctetsDataWriter_write_octets_w_key(
    octetsWriter, "Key 1", octetArray, 2, &DDS_HANDLE_NIL);
free(octetArray);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h"
using namespace DDS;
...
KeyedOctetsDataWriter * octetsWriter = ... ;

/* Write some data using the KeyedOctets structure */
KeyedOctets * octets = new KeyedOctets(128, 1024);
strcpy(octets->key, "Key 1");
octets->length = 2;
octets->value[0] = 46;
octets->value[1] = 47;
ReturnCode_t retCode = octetsWriter->write(*octets, HANDLE_NIL);
delete octets;

/* Write some data using an octet array */
unsigned char * octetArray = new unsigned char[1024];
octetArray[0] = 46;
octetArray[1] = 47;
retCode = octetsWriter->write("Key 1", octetArray, 2, HANDLE_NIL);
delete []octetArray;
C++/CLI Example:
using namespace System;
using namespace DDS;
...
KeyedBytesDataWriter^ octetsWriter = ... ;

/* Write some data using KeyedBytes */
KeyedBytes^ octets = gcnew KeyedBytes(1024);
octets->key = "Key 1";
octets->value[0] = 46;
octets->value[1] = 47;
octets->length = 2;
octets->offset = 0;
octetsWriter->write(octets, InstanceHandle_t::HANDLE_NIL);

/* Write some data using a byte array */
array<Byte>^ octetArray = gcnew array<Byte>(1024);
octetArray[0] = 46;
octetArray[1] = 47;
octetsWriter->write(
    "Key 1", octetArray, 0, 2, InstanceHandle_t::HANDLE_NIL);
C# Example:
using System;
using DDS;
...
KeyedBytesDataWriter octetsWriter = ... ;

/* Write some data using the KeyedBytes */
KeyedBytes octets = new KeyedBytes(1024);
octets.key = "Key 1";
octets.value[0] = 46;
octets.value[1] = 47;
octets.length = 2;
octets.offset = 0;
octetsWriter.write(octets, InstanceHandle_t.HANDLE_NIL);

/* Write some data using a byte array */
byte[] octetArray = new byte[1024];
octetArray[0] = 46;
octetArray[1] = 47;
octetsWriter.write(
    "Key 1", octetArray, 0, 2, InstanceHandle_t.HANDLE_NIL);
Java Example:
import com.rti.dds.publication.*;
import com.rti.dds.type.builtin.*;
import com.rti.dds.infrastructure.*;
...
KeyedBytesDataWriter octetsWriter = ... ;

/* Write some data using the KeyedBytes class */
KeyedBytes octets = new KeyedBytes(1024);
octets.key = "Key 1";
octets.length = 2;
octets.offset = 0;
octets.value[0] = 46;
octets.value[1] = 47;
octetsWriter.write(octets, InstanceHandle_t.HANDLE_NIL);

/* Write some data using a byte array */
byte[] octetArray = new byte[1024];
octetArray[0] = 46;
octetArray[1] = 47;
octetsWriter.write(
    "Key 1", octetArray, 0, 2, InstanceHandle_t.HANDLE_NIL);
3.2.7.3 Keyed Octets DataReader
The keyed octets DataReader API is extended with the following methods (in addition to the standard DataReader methods):
DDS::ReturnCode_t DDS::KeyedOctetsDataReader::get_key_value( char * key,
const DDS::InstanceHandle_t* handle);
DDS::InstanceHandle_t DDS::KeyedOctetsDataReader::lookup_instance(
const char * key);
For more information and a complete description of these operations in all supported languages, see the API Reference HTML documentation.
Memory considerations in copy operations:
For read/take operations with copy semantics, such as read_next_sample() and take_next_sample(), Connext allocates memory for the fields 'value' and 'key' if they are initialized to NULL.
If the fields are not initialized to NULL, the behavior depends on the language:
•In Java and .NET, the memory of the field 'value' will be reallocated if the current size is not large enough to hold the received data. The memory associated with the field 'key' will be reallocated with every sample (the key is an immutable object).
•In C and C++, the memory associated with the fields 'value' and 'key' must be large enough to hold the received data. Insufficient memory may result in crashes.
The following examples show how to read keyed octets with a keyed octets DataReader.
C Example:
struct DDS_KeyedOctetsSeq dataSeq = DDS_SEQUENCE_INITIALIZER;
struct DDS_SampleInfoSeq infoSeq = DDS_SEQUENCE_INITIALIZER;
DDS_KeyedOctetsDataReader * octetsReader = ... ;
DDS_ReturnCode_t retCode;
int i;

/* Take and print the data */
retCode = DDS_KeyedOctetsDataReader_take(octetsReader,
    &dataSeq, &infoSeq, DDS_LENGTH_UNLIMITED, DDS_ANY_SAMPLE_STATE,
    DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);
for (i = 0; i < DDS_KeyedOctetsSeq_get_length(&dataSeq); ++i) {
    if (DDS_SampleInfoSeq_get_reference(&infoSeq, i)->valid_data) {
        DDS_KeyedOctetsTypeSupport_print_data(
            DDS_KeyedOctetsSeq_get_reference(&dataSeq, i));
    }
}

/* Return loan */
retCode = DDS_KeyedOctetsDataReader_return_loan(
    octetsReader, &dataSeq, &infoSeq);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h"
using namespace DDS;
...
KeyedOctetsSeq dataSeq;
SampleInfoSeq infoSeq;
KeyedOctetsDataReader * octetsReader = ... ;

/* Take and print the data */
ReturnCode_t retCode = octetsReader->take(dataSeq, infoSeq,
    LENGTH_UNLIMITED, ANY_SAMPLE_STATE, ANY_VIEW_STATE, ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length(); ++i) {
    if (infoSeq[i].valid_data) {
        KeyedOctetsTypeSupport::print_data(&dataSeq[i]);
    }
}

/* Return loan */
retCode = octetsReader->return_loan(dataSeq, infoSeq);
C++/CLI Example:
using namespace System;
using namespace DDS;
...
KeyedBytesSeq^ dataSeq = gcnew KeyedBytesSeq();
SampleInfoSeq^ infoSeq = gcnew SampleInfoSeq();
KeyedBytesDataReader^ octetsReader = ... ;

/* Take and print the data */
octetsReader->take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy::LENGTH_UNLIMITED,
    SampleStateKind::ANY_SAMPLE_STATE,
    ViewStateKind::ANY_VIEW_STATE,
    InstanceStateKind::ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq->length; ++i) {
    if (infoSeq->get_at(i)->valid_data) {
        KeyedBytesTypeSupport::print_data(dataSeq->get_at(i));
    }
}

/* Return loan */
octetsReader->return_loan(dataSeq, infoSeq);
C# Example:
using System;
using DDS;
...
KeyedBytesSeq dataSeq = new KeyedBytesSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
KeyedBytesDataReader octetsReader = ... ;

/* Take and print the data */
octetsReader.take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
    SampleStateKind.ANY_SAMPLE_STATE,
    ViewStateKind.ANY_VIEW_STATE,
    InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.length; ++i) {
    if (infoSeq.get_at(i).valid_data) {
        KeyedBytesTypeSupport.print_data(dataSeq.get_at(i));
    }
}

/* Return loan */
octetsReader.return_loan(dataSeq, infoSeq);
Java Example:
import com.rti.dds.infrastructure.*;
import com.rti.dds.subscription.*;
import com.rti.dds.type.builtin.*;
...
KeyedBytesSeq dataSeq = new KeyedBytesSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
KeyedBytesDataReader octetsReader = ... ;

/* Take and print the data */
octetsReader.take(dataSeq, infoSeq,
    ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
    SampleStateKind.ANY_SAMPLE_STATE,
    ViewStateKind.ANY_VIEW_STATE,
    InstanceStateKind.ANY_INSTANCE_STATE);
for (int i = 0; i < dataSeq.size(); ++i) {
    if (((SampleInfo)infoSeq.get(i)).valid_data) {
        System.out.println(((KeyedBytes)dataSeq.get(i)).toString());
    }
}

/* Return loan */
octetsReader.return_loan(dataSeq, infoSeq);
3.2.8 Managing Memory for Built-in Types
When a sample is written, the DataWriter serializes it and stores the result in a buffer obtained from a pool of preallocated buffers. In the same way, when a sample is received, the DataReader deserializes it and stores the result in a sample coming from a pool of preallocated samples.
For data types generated by rtiddsgen, the size of the buffers and samples in both pools is known based on the IDL or XML description of the type.
For example:
struct MyString {
    string<128> value;
};
This type has a bounded size, so the maximum memory needed for a serialized sample is known in advance.
However, for the built-in types, the maximum size of the buffers and samples is unknown and depends on how the application uses them.
For example, a video surveillance application that is using the keyed octets built-in type to publish a stream of images will require bigger buffers than a market-data application that uses the same type to publish market-data values.
To accommodate both kinds of applications and optimize memory usage, you can configure the maximum size of the built-in types on a per-DataWriter and per-DataReader basis, using the properties in Table 3.1.
Note: These properties must be set consistently with respect to the corresponding *.max_size properties in the DomainParticipant (see Table 3.14).
Section 3.2.8.1 includes examples of how to set the maximum size of a string programmatically. These properties can also be set in an XML QoS profile:
<dds>
    <qos_library name="BuiltinExampleLibrary">
        <qos_profile name="BuiltinExampleProfile">
            <datawriter_qos>
                <property>
                    <value>
                        <element>
                            <name>dds.builtin_type.string.alloc_size</name>
                            <value>2048</value>
                        </element>
                    </value>
                </property>
            </datawriter_qos>
            <datareader_qos>
                <property>
                    <value>
                        <element>
                            <name>dds.builtin_type.string.alloc_size</name>
                            <value>2048</value>
                        </element>
                    </value>
                </property>
            </datareader_qos>
        </qos_profile>
    </qos_library>
</dds>
Table 3.1 Properties for Allocating Size of Built-in Types, per DataWriter and DataReader

| Built-in Type | Property | Description |
|---|---|---|
| string | dds.builtin_type.string.alloc_size | Maximum size of the strings published by the DataWriter or received by the DataReader (includes the NULL-terminated character). Default: dds.builtin_type.string.max_size if defined (see Table 3.14). |
| keyedstring | dds.builtin_type.keyed_string.alloc_key_size | Maximum size of the keys used by the DataWriter or DataReader (includes the NULL-terminated character). Default: dds.builtin_type.keyed_string.max_key_size if defined (see Table 3.14). |
| keyedstring | dds.builtin_type.keyed_string.alloc_size | Maximum size of the strings published by the DataWriter or received by the DataReader (includes the NULL-terminated character). Default: dds.builtin_type.keyed_string.max_size if defined (see Table 3.14). |
| octets | dds.builtin_type.octets.alloc_size | Maximum size of the octet sequences published by the DataWriter or DataReader. Default: dds.builtin_type.octets.max_size if defined (see Table 3.14). |
| keyedoctets | dds.builtin_type.keyed_octets.alloc_key_size | Maximum size of the key published by the DataWriter or received by the DataReader (includes the NULL-terminated character). Default: dds.builtin_type.keyed_octets.max_key_size if defined (see Table 3.14). |
| keyedoctets | dds.builtin_type.keyed_octets.alloc_size | Maximum size of the octet sequences published by the DataWriter or DataReader. Default: dds.builtin_type.keyed_octets.max_size if defined (see Table 3.14). |
3.2.8.1 Examples: Setting the Maximum Size of a String
For simplicity, error handling is not shown in the following examples.
C Example:
DDS_DomainParticipant * participant = ... ;
DDS_DataWriter * writer = NULL;
DDS_StringDataWriter * stringWriter = NULL;
DDS_Publisher * publisher = ... ;
DDS_Topic * stringTopic = ... ;
struct DDS_DataWriterQos writerQos = DDS_DataWriterQos_INITIALIZER;
DDS_ReturnCode_t retCode;

retCode = DDS_DomainParticipant_get_default_datawriter_qos(
    participant, &writerQos);
retCode = DDS_PropertyQosPolicyHelper_add_property(
    &writerQos.property, "dds.builtin_type.string.alloc_size",
    "1000", DDS_BOOLEAN_FALSE);
writer = DDS_Publisher_create_datawriter(
    publisher, stringTopic, &writerQos, NULL, DDS_STATUS_MASK_NONE);
stringWriter = DDS_StringDataWriter_narrow(writer);
DDS_DataWriterQos_finalize(&writerQos);
C++ Example with Namespaces:
#include "ndds/ndds_namespace_cpp.h"
using namespace DDS;
...
DomainParticipant * participant = ... ;
Publisher * publisher = ... ;
Topic * stringTopic = ... ;
DataWriterQos writerQos;

ReturnCode_t retCode = participant->get_default_datawriter_qos(writerQos);
retCode = PropertyQosPolicyHelper::add_property(
    &writerQos.property, "dds.builtin_type.string.alloc_size",
    "1000", BOOLEAN_FALSE);
DataWriter * writer = publisher->create_datawriter(
    stringTopic, writerQos, NULL, STATUS_MASK_NONE);
StringDataWriter * stringWriter = StringDataWriter::narrow(writer);
C++/CLI Example:
using namespace DDS;
...
DomainParticipant^ participant = ... ;
Topic^ stringTopic = ... ;
Publisher^ publisher = ... ;
DataWriterQos^ writerQos = gcnew DataWriterQos();

participant->get_default_datawriter_qos(writerQos);
PropertyQosPolicyHelper::add_property(writerQos->property_qos,
    "dds.builtin_type.string.alloc_size", "1000", false);
DataWriter^ writer = publisher->create_datawriter(
    stringTopic, writerQos, nullptr, StatusMask::STATUS_MASK_NONE);
StringDataWriter^ stringWriter = safe_cast<StringDataWriter^>(writer);
C# Example:
using DDS;
...
DomainParticipant participant = ... ;
Topic stringTopic = ... ;
Publisher publisher = ... ;
DataWriterQos writerQos = new DataWriterQos();

participant.get_default_datawriter_qos(writerQos);
PropertyQosPolicyHelper.add_property(writerQos.property_qos,
    "dds.builtin_type.string.alloc_size", "1000", false);
StringDataWriter stringWriter =
    (StringDataWriter) publisher.create_datawriter(
        stringTopic, writerQos, null, StatusMask.STATUS_MASK_NONE);
Java Example:
import com.rti.dds.publication.*;
import com.rti.dds.type.builtin.*;
import com.rti.dds.infrastructure.*;
...
DomainParticipant participant = ... ;
Topic stringTopic = ... ;
Publisher publisher = ... ;
DataWriterQos writerQos = new DataWriterQos();

participant.get_default_datawriter_qos(writerQos);
PropertyQosPolicyHelper.add_property(writerQos.property,
    "dds.builtin_type.string.alloc_size", "1000", false);
StringDataWriter stringWriter =
    (StringDataWriter) publisher.create_datawriter(
        stringTopic, writerQos, null, StatusKind.STATUS_MASK_NONE);
3.2.9 Type Codes for Built-in Types
The type codes associated with the built-in types are defined as follows:
module DDS {
    /* String */
    struct String {
        string<max_size> value;
    };
    /* KeyedString */
    struct KeyedString {
        string<max_size> key; //@key
        string<max_size> value;
    };
    /* Octets */
    struct Octets {
        sequence<octet, max_size> value;
    };
    /* KeyedOctets */
    struct KeyedOctets {
        string<max_size> key; //@key
        sequence<octet, max_size> value;
    };
};
The maximum size (max_size) of the strings and sequences that will be included in the type code definitions can be configured on a per-DomainParticipant basis, using the properties in Table 3.2.
Table 3.2 Properties for Allocating Size of Built-in Types, per DomainParticipant

Built-in Type | Property | Description
String | dds.builtin_type.string.max_size | Maximum size of the strings published by the DataWriters and received by the DataReaders belonging to a DomainParticipant (includes the NULL-terminating character). Default: 1024
KeyedString | dds.builtin_type.keyed_string.max_key_size | Maximum size of the keys used by the DataWriters and DataReaders belonging to a DomainParticipant (includes the NULL-terminating character). Default: 1024
KeyedString | dds.builtin_type.keyed_string.max_size | Maximum size of the strings published by the DataWriters and received by the DataReaders belonging to a DomainParticipant using the type (includes the NULL-terminating character). Default: 1024
Octets | dds.builtin_type.octets.max_size | Maximum size of the octet sequences published by the DataWriters and DataReaders belonging to a DomainParticipant. Default: 2048
KeyedOctets | dds.builtin_type.keyed_octets.max_key_size | Maximum size of the key published by the DataWriter and received by the DataReaders belonging to the DomainParticipant (includes the NULL-terminating character). Default: 1024
KeyedOctets | dds.builtin_type.keyed_octets.max_size | Maximum size of the octet sequences published by the DataWriters and DataReaders belonging to a DomainParticipant. Default: 2048
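These properties are set on the DomainParticipant's PropertyQosPolicy. A sketch of what that might look like in an XML QoS profile (element layout follows the usual RTI profile schema; the surrounding profile tags are omitted, and the value 2048 is only an example):

```xml
<participant_qos>
  <property>
    <value>
      <!-- Raise the maximum built-in string size from the 1024 default -->
      <element>
        <name>dds.builtin_type.string.max_size</name>
        <value>2048</value>
      </element>
    </value>
  </property>
</participant_qos>
```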
3.3 Creating User Data Types with IDL
You can create user data types in a text file using IDL (Interface Description Language). IDL is a language-independent way to describe data types.
Connext only uses a subset of the IDL syntax. IDL was originally defined by the OMG for the use of CORBA client/server applications in an enterprise setting. Not all of the constructs that can be described by the language are as useful in the context of high-performance, data-centric communication.
The rtiddsgen utility will parse any file that follows version 3.0.3 of the IDL specification. It will quietly ignore all syntax that is not recognized by Connext. In addition, even though “anonymous sequences” (sequences of sequences with no intervening typedef) are currently
legal in IDL, they have been deprecated by the specification, and thus rtiddsgen does not support them.
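In practice, a sequence of sequences must therefore be named through a typedef before it can be used as a member. A minimal sketch (the type and member names here are illustrative):

```idl
typedef sequence<short, 4> ShortSeq4;   /* name the inner sequence first */

struct Matrix {
    sequence<ShortSeq4, 2> rows;        /* OK: no anonymous sequence */
};
```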
Certain keywords are considered reserved by the IDL specification; see Table 3.3.
Table 3.3 Reserved IDL Keywords
abstract    emits      local       pseudo       typeid
alias       enum       long        public       typename
any         eventtype  mirrorport  publishes    typeprefix
attribute   exception  module      raises       union
boolean     factory    multiple    readonly     unsigned
case        FALSE      native      sequence     uses
char        finder     object      setraises    valuebase
component   fixed      octet       short        valuetype
connector   float      oneway      string       void
const       getraises  out         struct       wchar
consumes    home       port        supports     wstring
context     import     porttype    switch
custom      in         primarykey  TRUE
default     inout      private     truncatable
double      interface  provides    typedef
The IDL constructs supported by rtiddsgen are described in Table 3.5, “Specifying Data Types in IDL for C and C++.”
For C and C++, rtiddsgen uses typedefs instead of the language keywords for primitive types. For example, DDS_Long instead of long or DDS_Double instead of double. This ensures that the types are of the same size regardless of the platform.1
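The point of those typedefs can be sketched with fixed-width stand-ins (these are illustrative definitions, not the actual RTI headers): a raw C++ `long` may be 32 or 64 bits depending on the platform, while the generated typedef pins each IDL primitive to one width everywhere.

```cpp
#include <cstdint>

// Illustrative stand-ins (not the actual RTI headers) for the generated
// fixed-width typedefs. "long" varies by platform; DDS_Long must not.
typedef int16_t DDS_Short;   // IDL short: always 16 bits
typedef int32_t DDS_Long;    // IDL long: always 32 bits
typedef double  DDS_Double;  // IDL double: IEEE-754 64-bit

// Compile-time checks that the widths hold on this platform.
static_assert(sizeof(DDS_Short) == 2,  "IDL short must map to 16 bits");
static_assert(sizeof(DDS_Long) == 4,   "IDL long must map to 32 bits");
static_assert(sizeof(DDS_Double) == 8, "IDL double must map to 64 bits");
```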
The remainder of this section includes:
❏Variable-Length Types (Section 3.3.1)
❏Value Types (Section 3.3.2)
❏TypeCode and rtiddsgen (Section 3.3.3)
❏rtiddsgen Translations for IDL Types (Section 3.3.4)
❏Escaped Identifiers (Section 3.3.5)
❏Referring to Other IDL Files (Section 3.3.6)
❏Preprocessor Directives (Section 3.3.7)
❏Using Custom Directives (Section 3.3.8)
1. The number of bytes sent on the wire for each data type is determined by the Common Data Representation (CDR) standard. For details on CDR, please see the Common Object Request Broker Architecture (CORBA) Specification, Version 3.1, Part 2: CORBA Interoperability, Section 9.3, CDR Transfer Syntax (http://www.omg.org/ technology/documents/corba_spec_catalog.htm).
3.3.1 Variable-Length Types
When rtiddsgen generates code for data structures with variable-length types (strings and sequences), it also generates functions that create, initialize, and finalize those structures, managing the memory that the variable-length members require.
For each language, the conventions governing that memory are described below.
3.3.1.1 Sequences
C, C++, C++/CLI, and C# users can allocate memory from a number of sources: from the heap, the stack, or from a custom allocator of some kind. In those languages, sequences provide the concept of memory "ownership." A sequence may own the memory allocated to it or be loaned memory from another source. If a sequence owns its memory, it will manage its underlying memory storage buffer itself. When a sequence's maximum size is changed, the sequence will free and reallocate its buffer as needed. However, if a sequence was created with loaned memory by user code, then its memory is not its own to free or reallocate. Therefore, you cannot set the maximum size of a sequence whose memory is loaned. See the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Infrastructure Module, Sequence Support) for more information about how to loan and unloan memory for a sequence.
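The ownership distinction can be sketched as follows. This is a toy model, not the RTI sequence API (see Sequence Support in the API Reference for the real operations); the names `set_maximum`, `loan_contiguous`, and `unloan` here are illustrative:

```cpp
#include <cstdlib>

// Toy sketch of sequence memory "ownership": a sequence either owns its
// buffer (and may reallocate it) or holds a loan (and must never resize
// or free it).
struct ShortSeq {
    short* buffer = nullptr;
    int maximum = 0;
    int length = 0;
    bool owned = true;   // does the sequence own `buffer`?

    bool set_maximum(int new_max) {
        if (!owned) return false;   // loaned memory: resizing is refused
        short* b = static_cast<short*>(
            std::realloc(buffer, new_max * sizeof(short)));
        if (b == nullptr && new_max > 0) return false;
        buffer = b;
        maximum = new_max;
        if (length > new_max) length = new_max;
        return true;
    }

    bool loan_contiguous(short* user_buf, int len, int max) {
        if (owned && buffer != nullptr) return false; // must be empty first
        buffer = user_buf; length = len; maximum = max; owned = false;
        return true;
    }

    bool unloan() {
        if (owned) return false;
        buffer = nullptr; length = 0; maximum = 0; owned = true;
        return true;
    }

    ~ShortSeq() { if (owned) std::free(buffer); } // never frees a loan
};
```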
In IDL, as described above, a sequence may be declared as bounded or unbounded. A sequence's "bound" is the greatest value its maximum may take. If you use the initializer functions rtiddsgen provides for your types, all sequences will have their maximums set to their declared bounds. However, the amount of data transmitted on the wire when the sample is written will vary.
3.3.1.2 Strings and Wide Strings
The initialization functions that rtiddsgen provides for your types will allocate all of the memory for strings in a type to their declared bounds. Take this pre-allocation into account when choosing the declared bounds for your strings.
To Java and .NET users, an IDL string is a String object: it is immutable and knows its own length. C and C++ users must take care, however, as there is no way to determine how much memory is allocated to a character pointer "string"; all that can be determined is the string's current logical length. In some cases, Connext may need to copy a string into a structure that user code has provided. Connext does not free the memory of the string provided to it, as it cannot know from where that memory was allocated.
In the C and C++ APIs, Connext therefore uses the following conventions:
❏A string's memory is "owned" by the structure that contains that string. Calling the finalization function provided for a type will free all recursively contained strings. If you have allocated a contained string in a special way, you must be careful to clean up your own memory and assign the pointer to NULL before calling the type’s finalize() method, so that Connext will skip over that string.
❏You must provide a non-NULL, adequately sized string for every string member that Connext will copy into; Connext will not allocate string memory on your behalf.
❏When you provide a string to Connext, Connext copies its contents; it does not take ownership of your pointer and will not free it.
Connext provides a small set of C functions for dealing with strings. These functions simplify common tasks, avoid some common memory-management errors, and ensure that string memory is allocated and freed consistently.
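The conventions above can be sketched with toy stand-ins for those helpers (the bodies below are simplified illustrations written for this manual, not the library's implementation; `String_alloc`, `String_dup`, and `String_free` mirror the kind of allocate/duplicate/free helpers the C API provides):

```cpp
#include <cstring>
#include <cstdlib>

// Toy stand-ins for the C string helpers: allocate, duplicate, free.
char* String_alloc(size_t len) {
    // Allocate room for len characters plus the NULL terminator.
    char* s = static_cast<char*>(std::malloc(len + 1));
    if (s != NULL) s[0] = '\0';
    return s;
}

char* String_dup(const char* src) {
    char* s = String_alloc(std::strlen(src));
    if (s != NULL) std::strcpy(s, src);
    return s;
}

void String_free(char* s) { std::free(s); }

// A structure "owns" its contained strings: finalizing it frees them.
struct KeyedString {
    char* key;
    char* value;
};

void KeyedString_finalize(KeyedString* sample) {
    String_free(sample->key);   sample->key = NULL;
    String_free(sample->value); sample->value = NULL;
}
```

If you assigned a contained string from memory you manage yourself, free it your own way and set the pointer to NULL before finalizing, so the finalize step skips it (the convention from the bullet list above).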
3.3.2 Value Types
A value type is like a structure, but with support for additional object-oriented features, such as inheritance.
Readers familiar with value types in the context of CORBA should consult Table 3.4 to see which value type constructs are supported by Connext.
Table 3.4 Value Type Support

Aspect | Level of Support in rtiddsgen
Inheritance | Single inheritance from other value types
Public state members | Supported
Private state members | Become public when code is generated
Custom keyword | Ignored (the value type is parsed without the keyword and code is generated to work with it)
Abstract value types | No code generated (the value type is parsed, but no code is generated)
Operations | No code generated (the value type is parsed, but no code is generated)
Truncatable keyword | Ignored (the value type is parsed without the keyword and code is generated to work with it)
3.3.3 TypeCode and rtiddsgen
Type codes are enabled by default when you run rtiddsgen. The -notypecode option disables generation of type code information.
Locally, your application can access the type code for a generated type "Foo" by calling the Foo::get_typecode() operation in the code generated by rtiddsgen (unless type code support is disabled with the -notypecode option).
3.3.4 rtiddsgen Translations for IDL Types
This section describes how to specify your data types in an IDL file. The rtiddsgen utility supports all the types listed in the following tables:
❏Table 3.5, “Specifying Data Types in IDL for C and C++”
❏Table 3.6, “Specifying Data Types in IDL for C++/CLI”
❏Table 3.7, “Specifying Data Types in IDL for Java”
Each table shows the syntax for an IDL data type as it appears in the IDL file, along with the corresponding language mapping created by rtiddsgen.
Table 3.5 Specifying Data Types in IDL for C and C++

char (see Note 1 below)
  IDL:       struct PrimitiveStruct { char char_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Char char_member; } PrimitiveStruct;

wchar
  IDL:       struct PrimitiveStruct { wchar wchar_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Wchar wchar_member; } PrimitiveStruct;

octet
  IDL:       struct PrimitiveStruct { octet octet_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Octet octet_member; } PrimitiveStruct;

short
  IDL:       struct PrimitiveStruct { short short_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Short short_member; } PrimitiveStruct;

unsigned short
  IDL:       struct PrimitiveStruct { unsigned short unsigned_short_member; };
  Generated: typedef struct PrimitiveStruct { DDS_UnsignedShort unsigned_short_member; } PrimitiveStruct;

long
  IDL:       struct PrimitiveStruct { long long_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Long long_member; } PrimitiveStruct;

unsigned long
  IDL:       struct PrimitiveStruct { unsigned long unsigned_long_member; };
  Generated: typedef struct PrimitiveStruct { DDS_UnsignedLong unsigned_long_member; } PrimitiveStruct;

long long
  IDL:       struct PrimitiveStruct { long long long_long_member; };
  Generated: typedef struct PrimitiveStruct { DDS_LongLong long_long_member; } PrimitiveStruct;

unsigned long long
  IDL:       struct PrimitiveStruct { unsigned long long unsigned_long_long_member; };
  Generated: typedef struct PrimitiveStruct { DDS_UnsignedLongLong unsigned_long_long_member; } PrimitiveStruct;

float
  IDL:       struct PrimitiveStruct { float float_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Float float_member; } PrimitiveStruct;

double
  IDL:       struct PrimitiveStruct { double double_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Double double_member; } PrimitiveStruct;

long double (see Note 2 below)
  IDL:       struct PrimitiveStruct { long double long_double_member; };
  Generated: typedef struct PrimitiveStruct { DDS_LongDouble long_double_member; } PrimitiveStruct;

pointer (see Note 9 below)
  IDL:       struct MyStruct { long * member; };
  Generated: typedef struct MyStruct { DDS_Long * member; } MyStruct;

boolean
  IDL:       struct PrimitiveStruct { boolean boolean_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Boolean boolean_member; } PrimitiveStruct;

enum
  IDL:       enum PrimitiveEnum { ENUM1, ENUM2, ENUM3 };
  Generated: typedef enum PrimitiveEnum { ENUM1, ENUM2, ENUM3 } PrimitiveEnum;

  IDL:       enum PrimitiveEnum { ENUM1 = 10, ENUM2 = 20, ENUM3 = 30 };
  Generated: typedef enum PrimitiveEnum { ENUM1 = 10, ENUM2 = 20, ENUM3 = 30 } PrimitiveEnum;

constant
  IDL:       const short SIZE = 5;
  Generated: C:   #define SIZE 5
             C++: static const DDS_Short size = 5;

bitfield (see Note 12 below)
  IDL:
    struct BitfieldType {
        short myShort_1 : 1;
        unsigned short myUnsignedShort_1 : 1;
        long myLong_1 : 1;
        unsigned long myUnsignedLong_1 : 1;
        char myChar_1 : 1;
        wchar myWChar_1 : 1;
        octet myOctet_1 : 1;
        short : 0;
        long myLong_5 : 5;
        long myLong_30 : 30;
        short myShort_6 : 6;
        short myShort_3and4 : 3+4;
        short myShort;
        short myShort_8 : 8;
        long myLong_32 : 32;
    };
  Generated:
    typedef struct BitfieldType {
        DDS_Short myShort_1 : 1;
        DDS_UnsignedShort myUnsignedShort_1 : 1;
        DDS_Long myLong_1 : 1;
        DDS_UnsignedLong myUnsignedLong_1 : 1;
        DDS_Char myChar_1 : 1;
        DDS_Wchar myWChar_1 : 1;
        DDS_Octet myOctet_1 : 1;
        DDS_Short : 0;
        DDS_Long myLong_5 : 5;
        DDS_Long myLong_30 : 30;
        DDS_Short myShort_6 : 6;
        DDS_Short myShort_3and4 : 3+4;
        DDS_Short myShort;
        DDS_Short myShort_8 : 8;
        DDS_Long myLong_32 : 32;
    } BitfieldType;

struct (see Note 10 below)
  IDL:       struct PrimitiveStruct { char char_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Char char_member; } PrimitiveStruct;

union (see Note 3 and Note 10 below)
  IDL:
    union PrimitiveUnion switch (long) {
        case 1:
            short short_member;
        default:
            long long_member;
    };
  Generated:
    typedef struct PrimitiveUnion {
        DDS_Long _d;
        struct {
            DDS_Short short_member;
            DDS_Long long_member;
        } _u;
    } PrimitiveUnion;

typedef
  IDL:       typedef short TypedefShort;
  Generated: typedef DDS_Short TypedefShort;

array of above types
  IDL:       struct OneDArrayStruct { short short_array[2]; };
  Generated: typedef struct OneDArrayStruct { DDS_Short short_array[2]; } OneDArrayStruct;

  IDL:       struct TwoDArrayStruct { short short_array[1][2]; };
  Generated: typedef struct TwoDArrayStruct { DDS_Short short_array[1][2]; } TwoDArrayStruct;

bounded sequence of above types (see Note 11 below)
  IDL:       struct SequenceStruct { sequence<short,4> short_sequence; };
  Generated: typedef struct SequenceStruct { DDS_ShortSeq short_sequence; } SequenceStruct;
  Note: Sequences of primitive types have been predefined by Connext.

unbounded sequence of above types (see Note 11 below)
  IDL:       struct SequenceStruct { sequence<short> short_sequence; };
  Generated: typedef struct SequenceStruct { DDS_ShortSeq short_sequence; } SequenceStruct;
  Note: rtiddsgen will supply a default bound. You can specify that bound with the “-sequenceSize” command-line option.

array of sequences
  IDL:       struct ArraysOfSequences { sequence<short,4> sequences_array[2]; };
  Generated: typedef struct ArraysOfSequences { DDS_ShortSeq sequences_array[2]; } ArraysOfSequences;

sequence of arrays (see Note 11 below)
  IDL:
    typedef short ShortArray[2];
    struct SequenceOfArrays {
        sequence<ShortArray,2> arrays_sequence;
    };
  Generated:
    typedef DDS_Short ShortArray[2];
    DDS_SEQUENCE_NO_GET(ShortArraySeq, ShortArray);
    typedef struct SequenceOfArrays {
        ShortArraySeq arrays_sequence;
    } SequenceOfArrays;
  DDS_SEQUENCE_NO_GET is a Connext macro that defines a new sequence type for a user data type. In this case, the user data type is ShortArray.

sequence of sequences (see Note 4 and Note 11 below)
  IDL:
    typedef sequence<short,4> ShortSequence;
    struct SequencesOfSequences {
        sequence<ShortSequence,2> sequences_sequence;
    };
  Generated:
    typedef DDS_ShortSeq ShortSequence;
    DDS_SEQUENCE(ShortSequenceSeq, ShortSequence);
    typedef struct SequencesOfSequences {
        ShortSequenceSeq sequences_sequence;
    } SequencesOfSequences;

bounded string
  IDL:       struct PrimitiveStruct { string<20> string_member; };
  Generated: typedef struct PrimitiveStruct { char* string_member; /* maximum length = (20) */ } PrimitiveStruct;

unbounded string
  IDL:       struct PrimitiveStruct { string string_member; };
  Generated: typedef struct PrimitiveStruct { char* string_member; /* maximum length = (255) */ } PrimitiveStruct;
  Note: rtiddsgen will supply a default bound. You can specify that bound with the -stringSize command-line option.

bounded wstring
  IDL:       struct PrimitiveStruct { wstring<20> wstring_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Wchar * wstring_member; /* maximum length = (20) */ } PrimitiveStruct;

unbounded wstring
  IDL:       struct PrimitiveStruct { wstring wstring_member; };
  Generated: typedef struct PrimitiveStruct { DDS_Wchar * wstring_member; /* maximum length = (255) */ } PrimitiveStruct;
  Note: rtiddsgen will supply a default bound. You can specify that bound with the -stringSize command-line option.

module
  IDL:
    module PackageName {
        struct Foo {
            long field;
        };
    };
  Generated (with the -namespace option, only available for C++):
    namespace PackageName {
        typedef struct Foo {
            DDS_Long field;
        } Foo;
    };
  Generated (without the -namespace option):
    typedef struct PackageName_Foo {
        DDS_Long field;
    } PackageName_Foo;

valuetype (see Note 9 and Note 10 below)
  IDL:
    valuetype MyValueType {
        public MyValueType2 * member;
    };

    valuetype MyValueType {
        public MyValueType2 member;
    };

    valuetype MyValueType: MyBaseValueType {
        public MyValueType2 * member;
    };
  Generated (C++):
    class MyValueType {
      public:
        MyValueType2 * member;
    };

    class MyValueType {
      public:
        MyValueType2 member;
    };

    class MyValueType : public MyBaseValueType {
      public:
        MyValueType2 * member;
    };
  Generated (C):
    typedef struct MyValueType {
        MyValueType2 * member;
    } MyValueType;

    typedef struct MyValueType {
        MyValueType2 member;
    } MyValueType;

    typedef struct MyValueType {
        MyBaseValueType parent;
        MyValueType2 * member;
    } MyValueType;
Table 3.6 Specifying Data Types in IDL for C++/CLI

char (see Note 1 below)
  IDL:       struct PrimitiveStruct { char char_member; };
  Generated: public ref class PrimitiveStruct { System::Char char_member; };

wchar
  IDL:       struct PrimitiveStruct { wchar wchar_member; };
  Generated: public ref class PrimitiveStruct { System::Char wchar_member; };

octet
  IDL:       struct PrimitiveStruct { octet octet_member; };
  Generated: public ref class PrimitiveStruct { System::Byte octet_member; };

short
  IDL:       struct PrimitiveStruct { short short_member; };
  Generated: public ref class PrimitiveStruct { System::Int16 short_member; };

unsigned short
  IDL:       struct PrimitiveStruct { unsigned short unsigned_short_member; };
  Generated: public ref class PrimitiveStruct { System::UInt16 unsigned_short_member; };

long
  IDL:       struct PrimitiveStruct { long long_member; };
  Generated: public ref class PrimitiveStruct { System::Int32 long_member; };

unsigned long
  IDL:       struct PrimitiveStruct { unsigned long unsigned_long_member; };
  Generated: public ref class PrimitiveStruct { System::UInt32 unsigned_long_member; };

long long
  IDL:       struct PrimitiveStruct { long long long_long_member; };
  Generated: public ref class PrimitiveStruct { System::Int64 long_long_member; };

unsigned long long
  IDL:       struct PrimitiveStruct { unsigned long long unsigned_long_long_member; };
  Generated: public ref class PrimitiveStruct { System::UInt64 unsigned_long_long_member; };

float
  IDL:       struct PrimitiveStruct { float float_member; };
  Generated: public ref class PrimitiveStruct { System::Single float_member; };

double
  IDL:       struct PrimitiveStruct { double double_member; };
  Generated: public ref class PrimitiveStruct { System::Double double_member; };

long double (see Note 2 below)
  IDL:       struct PrimitiveStruct { long double long_double_member; };
  Generated: public ref class PrimitiveStruct { DDS::LongDouble long_double_member; };

boolean
  IDL:       struct PrimitiveStruct { boolean boolean_member; };
  Generated: public ref class PrimitiveStruct { System::Boolean boolean_member; };

enum
  IDL:       enum PrimitiveEnum { ENUM1, ENUM2, ENUM3 };
  Generated: public enum class PrimitiveEnum : System::Int32 { ENUM1, ENUM2, ENUM3 };

  IDL:       enum PrimitiveEnum { ENUM1 = 10, ENUM2 = 20, ENUM3 = 30 };
  Generated: public enum class PrimitiveEnum : System::Int32 { ENUM1 = 10, ENUM2 = 20, ENUM3 = 30 };

constant
  IDL:       const short SIZE = 5;
  Generated: public ref class SIZE {
             public:
                 static System::Int16 VALUE = 5;
             };

struct (see Note 10 below)
  IDL:       struct PrimitiveStruct { char char_member; };
  Generated: public ref class PrimitiveStruct { System::Char char_member; };

union (see Note 3 and Note 10 below)
  IDL:
    union PrimitiveUnion switch (long) {
        case 1:
            short short_member;
        default:
            long long_member;
    };
  Generated:
    public ref class PrimitiveUnion {
        System::Int32 _d;
        struct PrimitiveUnion_u {
            System::Int16 short_member;
            System::Int32 long_member;
        } _u;
    };

array of above types
  IDL:       struct OneDArrayStruct { short short_array[2]; };
  Generated: public ref class OneDArrayStruct {
                 array<System::Int16>^ short_array; /*length == 2*/
             };

bounded sequence of above types (see Note 11 below)
  IDL:       struct SequenceStruct { sequence<short,4> short_sequence; };
  Generated: public ref class SequenceStruct {
                 ShortSeq^ short_sequence; /*max = 4*/
             };
  Note: Sequences of primitive types have been predefined by Connext.

unbounded sequence of above types (see Note 11 below)
  IDL:       struct SequenceStruct { sequence<short> short_sequence; };
  Generated: public ref class SequenceStruct {
                 ShortSeq^ short_sequence; /*max = <default bound>*/
             };
  Note: rtiddsgen will supply a default bound. You can specify that bound with the -sequenceSize command-line option.

array of sequences
  IDL:       struct ArraysOfSequences { sequence<short,4> sequences_array[2]; };
  Generated: public ref class ArraysOfSequences {
                 array<DDS::ShortSeq^>^ sequences_array; // maximum length = (2)
             };

bounded string
  IDL:       struct PrimitiveStruct { string<20> string_member; };
  Generated: public ref class PrimitiveStruct {
                 System::String^ string_member; // maximum length = (20)
             };

unbounded string
  IDL:       struct PrimitiveStruct { string string_member; };
  Generated: public ref class PrimitiveStruct {
                 System::String^ string_member; // maximum length = (255)
             };
  Note: rtiddsgen will supply a default bound. You can specify that bound with the -stringSize command-line option.

bounded wstring
  IDL:       struct PrimitiveStruct { wstring<20> wstring_member; };
  Generated: public ref class PrimitiveStruct {
                 System::String^ wstring_member; // maximum length = (20)
             };

unbounded wstring
  IDL:       struct PrimitiveStruct { wstring wstring_member; };
  Generated: public ref class PrimitiveStruct {
                 System::String^ wstring_member; // maximum length = (255)
             };
  Note: rtiddsgen will supply a default bound. You can specify that bound with the -stringSize command-line option.

module
  IDL:
    module PackageName {
        struct Foo {
            long field;
        };
    };
  Generated:
    namespace PackageName {
        public ref class Foo {
            System::Int32 field;
        };
    };
Table 3.7 Specifying Data Types in IDL for Java

char (see Note 1 below)
  IDL:       struct PrimitiveStruct { char char_member; };
  Generated: public class PrimitiveStruct {
                 public char char_member;
                 ...
             }

wchar (see Note 1 below)
  IDL:       struct PrimitiveStruct { wchar wchar_member; };
  Generated: public class PrimitiveStruct {
                 public char wchar_member;
                 ...
             }

octet
  IDL:       struct PrimitiveStruct { octet octet_member; };
  Generated: public class PrimitiveStruct {
                 public byte byte_member;
                 ...
             }

short
  IDL:       struct PrimitiveStruct { short short_member; };
  Generated: public class PrimitiveStruct {
                 public short short_member;
                 ...
             }

unsigned short (see Note below)
  IDL:       struct PrimitiveStruct { unsigned short unsigned_short_member; };
  Generated: public class PrimitiveStruct {
                 public short unsigned_short_member;
                 ...
             }

long
  IDL:       struct PrimitiveStruct { long long_member; };
  Generated: public class PrimitiveStruct {
                 public int long_member;
                 ...
             }

unsigned long (see Note below)
  IDL:       struct PrimitiveStruct { unsigned long unsigned_long_member; };
  Generated: public class PrimitiveStruct {
                 public int unsigned_long_member;
                 ...
             }

long long
  IDL:       struct PrimitiveStruct { long long long_long_member; };
  Generated: public class PrimitiveStruct {
                 public long long_long_member;
                 ...
             }

unsigned long long (see Note below)
  IDL:       struct PrimitiveStruct { unsigned long long unsigned_long_long_member; };
  Generated: public class PrimitiveStruct {
                 public long unsigned_long_long_member;
                 ...
             }

float
  IDL:       struct PrimitiveStruct { float float_member; };
  Generated: public class PrimitiveStruct {
                 public float float_member;
                 ...
             }

double
  IDL:       struct PrimitiveStruct { double double_member; };
  Generated: public class PrimitiveStruct {
                 public double double_member;
                 ...
             }

long double (see Note 2 below)
  IDL:       struct PrimitiveStruct { long double long_double_member; };
  Generated: public class PrimitiveStruct {
                 public double long_double_member;
                 ...
             }

pointer (see Note 9 below)
  IDL:       struct MyStruct { long * member; };
  Generated: public class MyStruct {
                 public int member;
                 ...
             }

boolean
  IDL:       struct PrimitiveStruct { boolean boolean_member; };
  Generated: public class PrimitiveStruct {
                 public boolean boolean_member;
                 ...
             }

enum
  IDL:       enum PrimitiveEnum { ENUM1, ENUM2, ENUM3 };
  Generated: public class PrimitiveEnum extends Enum {
                 public static PrimitiveEnum ENUM1 = new PrimitiveEnum("ENUM1", 0);
                 public static PrimitiveEnum ENUM2 = new PrimitiveEnum("ENUM2", 1);
                 public static PrimitiveEnum ENUM3 = new PrimitiveEnum("ENUM3", 2);
                 public static PrimitiveEnum valueOf(int ordinal);
                 ...
             }

  IDL:       enum PrimitiveEnum { ENUM1 = 10, ENUM2 = 20, ENUM3 = 30 };
  Generated: public class PrimitiveEnum extends Enum {
                 public static PrimitiveEnum ENUM1 = new PrimitiveEnum("ENUM1", 10);
                 public static PrimitiveEnum ENUM2 = new PrimitiveEnum("ENUM2", 20);
                 public static PrimitiveEnum ENUM3 = new PrimitiveEnum("ENUM3", 30);
                 public static PrimitiveEnum valueOf(int ordinal);
                 ...
             }

constant
  IDL:       const short SIZE = 5;
  Generated: public class SIZE {
                 public static final short VALUE = 5;
             }

bitfield (see Note 12 below)
  IDL:
    struct BitfieldType {
        short myShort_1 : 1;
        long myLong_1 : 1;
        char myChar_1 : 1;
        wchar myWChar_1 : 1;
        octet myOctet_1 : 1;
        short : 0;
        long myLong_5 : 5;
        long myLong_30 : 30;
        short myShort_6 : 6;
        short myShort_3and4 : 3+4;
        short myShort;
        short myShort_8 : 8;
        long myLong_32 : 32;
    };
  Generated:
    public class BitfieldType {
        public short myShort_1;
        public int myLong_1;
        public byte myChar_1;
        public char myWChar_1;
        public byte myOctet_1;
        public int myLong_5;
        public int myLong_30;
        public short myShort_6;
        public short myShort_3and4;
        public short myShort;
        public short myShort_8;
        public int myLong_32;
        ...
    }

struct (see Note 10 below)
  IDL:       struct PrimitiveStruct { char char_member; };
  Generated: public class PrimitiveStruct {
                 public char char_member;
                 ...
             }

union (see Note 10 below)
  IDL:
    union PrimitiveUnion switch (long) {
        case 1:
            short short_member;
        default:
            long long_member;
    };
  Generated:
    public class PrimitiveUnion {
        public int _d;
        public short short_member;
        public int long_member;
        ...
    }

typedef of primitives, enums, strings (see Note below)
  IDL:       typedef short ShortType;
             struct PrimitiveStruct { ShortType short_member; };
  Generated: /* typedefs are unwound to the original type when used */
             public class PrimitiveStruct {
                 public short short_member;
                 ...
             }

typedef of sequences or arrays (see Note below)
  IDL:       typedef short ShortArray[2];
  Generated: /* Wrapper class */
             public class ShortArray {
                 public short[] userData = new short[2];
                 ...
             }

array
  IDL:       struct OneDArrayStruct { short short_array[2]; };
  Generated: public class OneDArrayStruct {
                 public short[] short_array = new short[2];
                 ...
             }

  IDL:       struct TwoDArrayStruct { short short_array[1][2]; };
  Generated: public class TwoDArrayStruct {
                 public short[][] short_array = new short[1][2];
                 ...
             }

bounded sequence (see Note 11 below)
  IDL:       struct SequenceStruct { sequence<short,4> short_sequence; };
  Generated: public class SequenceStruct {
                 public ShortSeq short_sequence = new ShortSeq((4));
                 ...
             }
  Note: Sequences of primitive types have been predefined by Connext.

unbounded sequence (see Note 11 below)
  IDL:       struct SequenceStruct { sequence<short> short_sequence; };
  Generated: public class SequenceStruct {
                 public ShortSeq short_sequence = new ShortSeq((100));
                 ...
             }
  Note: rtiddsgen will supply a default bound. You can specify that bound with the -sequenceSize command-line option.

array of sequences
  IDL:       struct ArraysOfSequences { sequence<short,4> sequences_array[2]; };
  Generated: public class ArraysOfSequences {
                 public ShortSeq[] sequences_array = new ShortSeq[2];
                 ...
             }

sequence of arrays
  Generated: /* Wrapper class */
             public class ShortArray {
                 public short[] userData = new short[2];
|
|
|
|
... |
|
|
|
|
} |
|
sequence |
of |
typedef short ShortArray[2]; |
/* Sequence of wrapper class objects */ |
|
arrays |
|
|
||
|
struct SequenceOfArrays{ |
public final class ShortArraySeq |
||
|
|
|||
|
|
extends ArraySequence |
||
|
|
sequence<ShortArray,2> |
||
(see Note 11 |
{ |
|||
arrays_sequence; |
||||
... |
||||
below) |
|
}; |
||
|
} |
|||
|
|
|
||
|
|
|
public class SequenceOfArrays |
|
|
|
|
{ |
|
|
|
|
public ShortArraySeq arrays_sequence |
|
|
|
|
= new ShortArraySeq((2)); |
|
|
|
|
... |
|
|
|
|
} |
|
|
|
|
|
Table 3.7 Specifying Data Types in IDL for Java
IDL Type |
Sample Entry in IDL file |
Sample Java Output Generated by |
||
rtiddsgen |
||||
|
|
|
||
|
|
|
|
|
|
|
|
/* Wrapper class */ |
|
|
|
|
public class ShortSequence |
|
|
|
|
{ |
|
|
|
|
public ShortSeq userData = new |
|
|
|
|
ShortSeq((4)); |
|
|
|
|
... |
|
|
|
|
} |
|
sequence |
of |
typedef sequence<short,4> |
/* Sequence of wrapper class objects */ |
|
sequences |
|
ShortSequence; |
||
|
public final class ShortSequenceSeq |
|||
|
|
|
||
|
|
struct SequencesOfSequences{ |
extends ArraySequence |
|
(see Note |
{ |
|||
sequence<ShortSequence,2> |
||||
... |
||||
and Note 11 |
sequences_sequence; |
|||
} |
||||
below) |
|
}; |
||
|
|
|||
|
|
|
public class SequencesOfSequences |
|
|
|
|
{ |
|
|
|
|
public ShortSequenceSeq |
|
|
|
|
sequences_sequence = new |
|
|
|
|
ShortSequenceSeq((2)); |
|
|
|
|
... |
|
|
|
|
} |
|
|
|
|
|
|
|
|
|
public class PrimitiveStruct |
|
|
|
|
{ |
|
bounded |
|
struct PrimitiveStruct { |
public String string_member = new |
|
|
string<20> string_member; |
String(); |
||
string |
|
|||
|
}; |
/* maximum length = (20) */ |
||
|
|
|||
|
|
|
... |
|
|
|
|
} |
|
|
|
|
|
|
|
|
|
public class PrimitiveStruct |
|
|
|
|
{ |
|
|
|
|
public String string_member = new |
|
|
|
struct PrimitiveStruct { |
String(); |
|
unbounded |
/* maximum length = (255) */ |
|||
string |
|
string string_member; |
... |
|
|
}; |
|||
|
|
} |
||
|
|
|
||
|
|
|
Note: rtiddsgen will supply a default bound. You |
|
|
|
|
can specify that bound with the |
|
|
|
|
||
|
|
|
|
|
|
|
|
public class PrimitiveStruct |
|
|
|
|
{ |
|
bounded |
|
struct PrimitiveStruct { |
public String wstring_member = new |
|
|
wstring<20> wstring_member; |
String(); |
||
wstring |
|
|||
|
}; |
/* maximum length = (20) */ |
||
|
|
|||
|
|
|
... |
|
|
|
|
} |
|
|
|
|
|
|
|
|
|
public class PrimitiveStruct |
|
|
|
|
{ |
|
unbounded |
struct PrimitiveStruct { |
public String wstring_member = new |
||
String(); |
||||
wstring |
|
wstring wstring_member; |
/* maximum length = (255) */ |
|
|
}; |
|||
|
|
... |
||
|
|
|
||
|
|
|
} |
|
|
|
|
Note: rtiddsgen will supply a default bound. |
|
|
|
|
|
Table 3.7 Specifying Data Types in IDL for Java
IDL Type |
Sample Entry in IDL file |
Sample Java Output Generated by |
|
rtiddsgen |
|
||
|
|
|
|
|
|
|
|
|
|
package PackageName; |
|
|
module PackageName { |
|
|
|
struct Foo { |
public class Foo |
|
module |
long field; |
{ |
|
|
}; |
public int field; |
|
|
}; |
… |
|
|
|
} |
|
|
|
|
|
|
|
public class MyValueType |
{ |
|
|
public MyValueType2 member; |
|
|
valuetype MyValueType { |
…. |
|
|
public MyValueType2 * member; |
}; |
|
|
}; |
|
|
valuetype |
|
public class MyValueType |
{ |
|
valuetype MyValueType { |
public MyValueType2 member; |
|
(see Note 9 |
public MyValueType2 member; |
…. |
|
}; |
}; |
|
|
and Note 10 |
|
|
|
below) |
valuetype MyValueType: |
public class MyValueType extends |
|
|
MyBaseValueType { |
MyBaseValueType |
|
|
public MyValueType2 * member; |
{ |
|
|
}; |
public MyValueType2 member; |
|
|
|
…. |
|
|
|
} |
|
|
|
|
|
Notes for Table 3.5 through Table 3.7:
1. Note that in C and C++, primitive types are not represented as native language types (e.g., long, char) but as custom types in the DDS namespace (DDS_Long, DDS_Char, etc.). These typedefs are used to ensure that a field’s size is the same across platforms.
2. Some platforms do not support long double, or use a size for that type different from the one defined by IDL (16 bytes). On such platforms, DDS_LongDouble (as well as the unsigned version) is mapped to a character array that matches the expected size of that type by default. If you are using a platform whose native mapping has exactly the expected size, you can instruct Connext to use the native type instead. That is, if sizeof(long double) == 16, you can tell Connext to map DDS_LongDouble to long double by defining the appropriate macro either in code or on the compile line.
3. Unions in IDL are mapped to structs in C and C++, so that Connext will not have to dynamically allocate memory for unions containing variable-length fields.
4. Sequences of sequences are not supported directly. This is not supported:
       sequence<sequence<short,4>,4> MySequence;
   Sequences of typedef’ed types, where the typedef is really a sequence, are supported. For example, this is supported:
       typedef sequence<short,4> MyShortSequence;
       sequence<MyShortSequence,4> MySequence;
5. IDL wchar and char are mapped to Java char.
6. There are no unsigned types in Java. The unsigned version of an integer type is mapped to its signed version as specified in the standard OMG IDL to Java mapping.
7. There is no current support in Java for the IDL long double type. This type is mapped to double as specified in the standard OMG IDL to Java mapping.
8. Java does not have a typedef construct, nor does C++/CLI. Typedefs for types that are neither arrays nor sequences (structs, unions, strings, wstrings, primitive types, and enums) are "unwound" to their original type until a simple IDL type or user-defined IDL type is encountered.
9. In C and C++, all the members in a value type, structure, or union that are declared with the pointer symbol (‘*’) will be mapped to references (pointers). In C++/CLI and Java, the pointer symbol is ignored because these members are always mapped as references.
10. Structures, unions, and value types may contain nested in-line type declarations, for example:
        struct Outer {
            short outer_short;
            struct Inner {
                char inner_char;
                short inner_short;
            } outer_nested_inner;
        };
11. The sequence <Type>Seq is implicitly declared in the IDL file and therefore cannot be declared explicitly by the user. For example, this is not supported:
        typedef sequence<Foo> FooSeq; //error
12. Data types containing bitfield members are not supported by DynamicData (Section 3.8).
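As a practical aside on Note 6: an IDL unsigned long arrives in Java as a signed int, so an application that needs the full unsigned range can widen the value itself. The following is a plain-Java sketch (the class and method names are illustrative, not an RTI API):

```java
public class UnsignedDemo {
    // Reinterpret the signed int generated for an IDL 'unsigned long'
    // as its unsigned value, widened into a Java long.
    static long toUnsigned(int rawValue) {
        return rawValue & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        int raw = -1; // bit pattern 0xFFFFFFFF as received off the wire
        System.out.println(toUnsigned(raw)); // prints 4294967295
    }
}
```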
3.3.5 Escaped Identifiers
To use an IDL keyword as an identifier, the keyword must be “escaped” by prepending an underscore, ‘_’. In addition, you must run rtiddsgen with the command-line option that enables escaped identifiers.
struct MyStruct {
    octet _octet; // octet is a keyword. To use the type
                  // as a member name we add ‘_’
};
The use of ‘_’ is a purely lexical convention that turns off keyword checking. The generated code will not contain ‘_’. For example, the mapping to C would be as follows:
struct MyStruct {
    unsigned char octet;
};
Note: If you generate code from an IDL file to a language ‘X’ (for example, C++), the keywords of this language cannot be used as IDL identifiers, even if they are escaped. For example:
struct MyStruct {
    long int;  // error
    long _int; // error
};
3.3.6 Referring to Other IDL Files
IDL files may refer to other IDL files using a syntax borrowed from C, C++, and C++/CLI preprocessors:
#include “Bar.idl”
If such a statement is encountered by rtiddsgen and you are generating code for C, C++, or C++/CLI, rtiddsgen will assume that code has been generated for Bar.idl with corresponding header files, Bar.h and BarPlugin.h.
The generated code will automatically have:
#include “Bar.h”
#include “BarPlugin.h”
added where needed to compile correctly.
Because Java types do not refer to one another in the same way, it is not possible for rtiddsgen to automatically generate Java import statements based on an IDL #include statement. Any #include statements will be ignored when Java code is generated. To add imports to your generated Java code, you should use the //@copy directive (see Section 3.3.8.2).
3.3.7 Preprocessor Directives
rtiddsgen supports the standard preprocessor directives defined by the IDL specification, such as #if, #endif, #include, and #define.
To support these directives, rtiddsgen calls an external C preprocessor before parsing the IDL file. On Windows systems, the preprocessor is ‘cl.exe.’ On other architectures, the preprocessor is ‘cpp.’ You can change which preprocessor is invoked via an rtiddsgen command-line option.
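Because a real C preprocessor runs first, the familiar guard and macro idioms from C can be used in IDL as well. A sketch (the type and constant names are illustrative):

```idl
#ifndef SENSOR_TYPES_IDL
#define SENSOR_TYPES_IDL

#define MAX_READINGS 100

struct ReadingList {
    sequence<short, MAX_READINGS> readings;
};

#endif
```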
3.3.8 Using Custom Directives
The following custom directives can be used in your IDL file:
//@key (see Section 3.3.8.1)
//@copy (see Section 3.3.8.2)
Custom directives start with “//@”. Note: Do not put a space between the slashes and the @, or the directive will not be recognized by rtiddsgen.
The directives are also case-sensitive. For example, use //@key, not //@Key.
3.3.8.1 The @key Directive
To declare a key for your data type, insert the @key directive in the IDL file after one or more fields of the data type.
With each key, Connext associates an internal 16-byte representation.
If the maximum size of the serialized key is greater than 16 bytes, Connext generates the internal representation by computing an MD5 hash of the serialized key.
Only struct definitions in IDL may have key fields. When rtiddsgen encounters //@key, it considers the previously declared field in the enclosing structure to be part of the key. Table 3.8 shows some examples of keys.
Table 3.8 Example Keys
Type |
Key Fields |
|
|
|
|
|
|
|
struct NoKey { |
|
|
long member1; |
|
|
long member2; |
|
|
} |
|
|
|
|
|
struct SimpleKey { |
|
|
long member1; //@key |
member1 |
|
long member2; |
||
|
||
} |
|
|
|
|
|
struct NestedNoKey { |
|
|
SimpleKey member1; |
|
|
long member2; |
|
|
} |
|
|
|
|
|
struct NestedKey { |
|
|
SimpleKey member1; //@key |
member1.member1 |
|
long member2; |
||
|
||
} |
|
|
|
|
|
struct NestedKey2 { |
|
|
NoKey member1; //@key |
member1.member1 |
|
long member2; |
member1.member2 |
|
} |
|
|
|
|
|
valuetype BaseValueKey { |
|
|
public long member1; //@key |
member1 |
|
} |
|
|
|
|
|
valuetype DerivedValueKey :BaseValueKey { |
member1 |
|
public long member2; //@key |
||
member2 |
||
} |
||
|
||
|
|
|
valuetype DerivedValue : BaseValueKey { |
|
|
public long member2; |
member1 |
|
} |
|
|
|
|
|
struct ArrayKey { |
member1[0] |
|
long member1[3]; //@key |
member1[1] |
|
} |
member1[2] |
|
|
|
3.3.8.2 The @copy and Related Directives
To copy a line of text verbatim into the generated code files, use the @copy directive in the IDL file. This feature is particularly useful when you want your generated code to contain text that is valid in the target programming language but is not valid IDL. It is often used to add user comments or headers or preprocessor commands into the generated code.
//@copy // Modification History
//@copy //
//@copy // 17Jul05aaa, Created.
//@copy
//@copy // #include “MyTypes.h”
These variations allow you to use the same IDL file for multiple languages:

//@copy-c        Copies code if the language is C or C++.
//@copy-cppcli   Copies code if the language is C++/CLI.
//@copy-java     Copies code if the language is Java.
//@copy-ada      Copies code if the language is Ada.

For example, to add import statements to generated Java code:
//@copy-java import java.util.*;
The above line would be ignored if the same IDL file was used to generate code for another language.
In C, C++, and C++/CLI, the lines are copied into all of the “foo*.[h, c, cxx, cpp]” files generated from “foo.idl”. For Java, the lines are copied into all of the “*.java” files that were generated from the original “.idl” file. The lines will not be copied into any additional example files generated by rtiddsgen.
If you want rtiddsgen to copy lines only into the files that declare the data types, use the “declaration” forms of the directives:

//@copy-declaration          Copies the text into the file where the type is declared
                             (<type>.h for C and C++, or <type>.java for Java).
//@copy-c-declaration        Same, but only when the language is C or C++.
//@copy-cppcli-declaration   Same, but only when the language is C++/CLI.
//@copy-java-declaration     Same, but only when the language is Java.
//@copy-ada-declaration      Same, but only when the language is Ada.

Note that the first whitespace character to follow “//@copy” is considered a delimiter and will not be copied into generated files. All subsequent text found on the line, including any leading whitespace, will be copied.
3.3.8.3 The @resolve-name Directive
In IDL, the “module” keyword is used to create namespaces for the declaration of types and classes defined within the file. Here is an example IDL definition:
module PackageName {
    struct Foo {
        long field;
    };
};
For C++ and C++/CLI, you can instruct rtiddsgen to map IDL modules to namespaces:
namespace PackageName {
    typedef struct Foo {
        DDS_Long field;
    } Foo;
} /* PackageName */
When generating C++/CLI, namespaces are always used.
For C, or when namespaces are not used, the name of the module is prepended to the name of the structure:
typedef struct PackageName_Foo {
    DDS_Long field;
} PackageName_Foo;
In Java, a Foo.java file will be created in a directory called PackageName, following the equivalent package concept defined by Java. The file PackageName/Foo.java will contain a declaration of the Foo class:
public class Foo {
    public int field;
    ...
};
In a more complicated example, consider the following IDL definition:
module PackageName {
    struct Bar {
        long field;
    };
    struct Foo {
        Bar barField;
    };
};
When rtiddsgen generates code for the above definition, it will resolve the “Bar” type to be within the scope of the PackageName module and automatically generate fully qualified references to it.
In C, or in C++ without namespaces:
typedef struct PackageName_Bar {
    DDS_Long field;
} PackageName_Bar;
typedef struct PackageName_Foo {
    PackageName_Bar barField;
} PackageName_Foo;
In C++ with namespaces:
namespace PackageName {
    typedef struct Bar {
        DDS_Long field;
    } Bar;
    typedef struct Foo {
        PackageName::Bar barField;
    } Foo;
}
And in Java, PackageName/Bar.java and PackageName/Foo.java would be created with the following code, respectively:
public class Bar {
    public int field;
    ...
};
and
public class Foo {
    public PackageName.Bar barField = PackageName.Bar.create();
    ...
};
However, sometimes you may not want rtiddsgen to resolve the types of variables when modules are used. In the example above, instead of referring to Bar as defined by the same package, you may want the barField in Foo to use Bar directly, without prepending a module name. To specify that rtiddsgen should not resolve the scope of a type, use the @resolve-name directive.
For example:
module PackageName {
    struct Bar {
        long field;
    };
    struct Foo {
        Bar barField; //@resolve-name false
    };
};
When this directive is used, then for the field preceding the directive, rtiddsgen respects the resolution of its type name indicated in the IDL file. It will use the type unmodified in the generated code. In C and C++:
typedef struct PackageName_Bar {
    DDS_Long field;
} PackageName_Bar;
typedef struct PackageName_Foo {
    Bar barField;
} PackageName_Foo;
And in Java, in PackageName/Bar.java and PackageName/Foo.java respectively:
public class Bar {
    public int field;
    ...
};
and
public class Foo {
    public Bar barField = Bar.create();
    ...
};
It is up to you to include the correct header files (or if using Java, to import the correct packages) so that the compiler resolves the ‘Bar’ type correctly.
When used at the end of the declaration of a structure in IDL, the directive applies to all types within the structure:
struct MyStructure {
    Foo member1;
    Bar member2;
}; //@resolve-name false
By default, without using the directive, rtiddsgen will try to resolve the type of a field and to use the fully qualified name in the generated code. If the type is not found to be defined within the same scope as the structure in which it is used or in a parent scope, then rtiddsgen will generate code with just the type name itself, assuming that the name will be resolved by the compiler through other means available to the user (header files or import statements). A type is in the same scope as the structure if both the type and the structure in which it is used are defined within the same module.
3.3.8.4 The @top-level Directive
By default, rtiddsgen generates code for every type in the IDL file and treats each one as a top-level type.
We use the term top-level type for a type for which rtiddsgen generates type-specific DataWriter and DataReader code; types used only as members of other types do not need that code.
You can mark a type as not top-level by appending //@top-level false to its declaration; rtiddsgen then omits the DataWriter/DataReader code for that type.
In this example, rtiddsgen will generate DataWriter/DataReader code for TopLevelStruct only:
struct EmbeddedStruct{
    short member;
}; //@top-level false

struct TopLevelStruct{
    EmbeddedStruct member;
};
3.4 Creating User Data Types with Extensible Markup Language (XML)
You can describe user data types with Extensible Markup Language (XML) notation. Connext provides DTD and XSD files that describe the XML format; see <NDDSHOME>/resource/qos_profiles_5.0.x/rtiddsgen/schema/rti_dds_topic_types.dtd and <NDDSHOME>/resource/qos_profiles_5.0.x/rtiddsgen/schema/rti_dds_topic_types.xsd, respectively (in 5.0.x, the x stands for the revision number of the current release).
The XML validation performed by rtiddsgen always uses the DTD definition. If the <!DOCTYPE> tag is not in the XML file, rtiddsgen will look for the default DTD document in <NDDSHOME>/resource/rtiddsgen/schema. Otherwise, it will use the location specified in <!DOCTYPE>.
We recommend including a reference to the XSD/DTD files in the XML documents. This provides helpful features in code editors such as Visual Studio® and Eclipse™, including validation and auto-completion.
To include a reference to the XSD document in your XML file, use the attribute xsi:noNamespaceSchemaLocation in the <types> tag. For example1:
<?xml version="1.0" encoding="UTF-8"?>
<types xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation=
           "<same as NDDSHOME>/resource/rtiddsgen/schema/rti_dds_topic_types.xsd">
    ...
</types>
To include a reference to the DTD document in your XML file, use the <!DOCTYPE> tag. For example1:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE types SYSTEM
    "<same as NDDSHOME>/resource/rtiddsgen/schema/rti_dds_topic_types.dtd">
<types>
    ...
</types>
Table 3.9 shows how to map the type system constructs into XML.
Table 3.9 Mapping Type System Constructs to XML
Each entry shows the IDL construct, the corresponding XML construct, and an example of each.

char → char
    IDL:  struct PrimitiveStruct { char char_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="char_member" type="char"/>
          </struct>

wchar → wchar
    IDL:  struct PrimitiveStruct { wchar wchar_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="wchar_member" type="wchar"/>
          </struct>
1. Replace <same as NDDSHOME> with the full path to the Connext installation directory.
octet → octet
    IDL:  struct PrimitiveStruct { octet octet_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="octet_member" type="octet"/>
          </struct>

short → short
    IDL:  struct PrimitiveStruct { short short_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="short_member" type="short"/>
          </struct>

unsigned short → unsignedShort
    IDL:  struct PrimitiveStruct { unsigned short unsigned_short_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="unsigned_short_member" type="unsignedShort"/>
          </struct>

long → long
    IDL:  struct PrimitiveStruct { long long_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="long_member" type="long"/>
          </struct>

unsigned long → unsignedLong
    IDL:  struct PrimitiveStruct { unsigned long unsigned_long_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="unsigned_long_member" type="unsignedLong"/>
          </struct>

long long → longLong
    IDL:  struct PrimitiveStruct { long long long_long_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="long_long_member" type="longLong"/>
          </struct>

unsigned long long → unsignedLongLong
    IDL:  struct PrimitiveStruct { unsigned long long unsigned_long_long_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="unsigned_long_long_member" type="unsignedLongLong"/>
          </struct>

float → float
    IDL:  struct PrimitiveStruct { float float_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="float_member" type="float"/>
          </struct>

double → double
    IDL:  struct PrimitiveStruct { double double_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="double_member" type="double"/>
          </struct>

long double → longDouble
    IDL:  struct PrimitiveStruct { long double long_double_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="long_double_member" type="longDouble"/>
          </struct>

boolean → boolean
    IDL:  struct PrimitiveStruct { boolean boolean_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="boolean_member" type="boolean"/>
          </struct>

unbounded string → string, with the stringMaxLength attribute absent or set to -1
    IDL:  struct PrimitiveStruct { string string_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="string_member" type="string"/>
          </struct>
      or
          <struct name="PrimitiveStruct">
              <member name="string_member" type="string" stringMaxLength="-1"/>
          </struct>

bounded string → string with the stringMaxLength attribute
    IDL:  struct PrimitiveStruct { string<20> string_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="string_member" type="string" stringMaxLength="20"/>
          </struct>

unbounded wstring → wstring, with the stringMaxLength attribute absent or set to -1
    IDL:  struct PrimitiveStruct { wstring wstring_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="wstring_member" type="wstring"/>
          </struct>
      or
          <struct name="PrimitiveStruct">
              <member name="wstring_member" type="wstring" stringMaxLength="-1"/>
          </struct>

bounded wstring → wstring with the stringMaxLength attribute
    IDL:  struct PrimitiveStruct { wstring<20> wstring_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="wstring_member" type="wstring" stringMaxLength="20"/>
          </struct>

pointer → pointer attribute with values true, false, 0 or 1; default (if not present): 0
    IDL:  struct PointerStruct { long * long_member; };
    XML:  <struct name="PointerStruct">
              <member name="long_member" type="long" pointer="true"/>
          </struct>

bitfielda → bitField attribute with the bitfield length
    IDL:  struct BitfieldStruct {
              short short_member: 1;
              unsigned short unsignedShort_member: 1;
              short short_nmember_2: 0;
              long long_member : 5;
          };
    XML:  <struct name="BitfieldStruct">
              <member name="short_member" type="short" bitField="1"/>
              <member name="unsignedShort_member" type="unsignedShort" bitField="1"/>
              <member type="short" bitField="0"/>
              <member name="long_member" type="long" bitField="5"/>
          </struct>

key directiveb → key attribute with values true, false, 0 or 1; default (if not present): 0
    IDL:  struct KeyedPrimitiveStruct {
              short short_member; //@key
          };
    XML:  <struct name="KeyedPrimitiveStruct">
              <member name="short_member" type="short" key="true"/>
          </struct>

resolve-name directiveb → resolveName attribute with values true, false, 0 or 1; default (if not present): 1
    IDL:  struct UnresolvedPrimitiveStruct {
              PrimitiveStruct primitive_member; //@resolve-name false
          };
    XML:  <struct name="UnresolvedPrimitiveStruct">
              <member name="primitive_member" type="nonBasic"
                      nonBasicTypeName="PrimitiveStruct" resolveName="false"/>
          </struct>

top-level directiveb → topLevel attribute with values true, false, 0 or 1; default (if not present): 1
    IDL:  struct TopLevelPrimitiveStruct {
              short short_member;
          }; //@top-level false
    XML:  <struct name="TopLevelPrimitiveStruct" topLevel="false">
              <member name="short_member" type="short"/>
          </struct>

Other directives → directive tag
    IDL:  //@copy This text will be copied in the generated files
    XML:  <directive kind="copy">
          This text will be copied in the generated files
          </directive>

enum → enum tag
    IDL:  enum PrimitiveEnum {
              ENUM1,
              ENUM2,
              ENUM3
          };
    XML:  <enum name="PrimitiveEnum">
              <enumerator name="ENUM1"/>
              <enumerator name="ENUM2"/>
              <enumerator name="ENUM3"/>
          </enum>
    IDL:  enum PrimitiveEnum {
              ENUM1 = 10,
              ENUM2 = 20,
              ENUM3 = 30
          };
    XML:  <enum name="PrimitiveEnum">
              <enumerator name="ENUM1" value="10"/>
              <enumerator name="ENUM2" value="20"/>
              <enumerator name="ENUM3" value="30"/>
          </enum>

constant → const tag
    IDL:  const double PI = 3.1415;
    XML:  <const name="PI" type="double" value="3.1415"/>

struct → struct tag
    IDL:  struct PrimitiveStruct { short short_member; };
    XML:  <struct name="PrimitiveStruct">
              <member name="short_member" type="short"/>
          </struct>

union → union tag
    IDL:  union PrimitiveUnion switch (long) {
              case 1:
                  short short_member;
              case 2:
              case 3:
                  float float_member;
              default:
                  long long_member;
          };
    XML:  <union name="PrimitiveUnion">
              <discriminator type="long"/>
              <case>
                  <caseDiscriminator value="1"/>
                  <member name="short_member" type="short"/>
              </case>
              <case>
                  <caseDiscriminator value="2"/>
                  <caseDiscriminator value="3"/>
                  <member name="float_member" type="float"/>
              </case>
              <case>
                  <caseDiscriminator value="default"/>
                  <member name="long_member" type="long"/>
              </case>
          </union>

valuetype → valuetype tag
    IDL:  valuetype BaseValueType {
              public long long_member;
          };
          valuetype DerivedValueType: BaseValueType {
              public long long_member_2;
          };
    XML:  <valuetype name="BaseValueType">
              <member name="long_member" type="long" visibility="public"/>
          </valuetype>
          <valuetype name="DerivedValueType" baseClass="BaseValueType">
              <member name="long_member_2" type="long" visibility="public"/>
          </valuetype>

typedef → typedef tag
    IDL:  typedef short ShortType;
          struct PrimitiveStruct {
              short short_member;
          };
    XML:  <typedef name="ShortType" type="short"/>
          <struct name="PrimitiveStruct">
              <member name="short_member" type="short"/>
          </struct>
    IDL:  typedef PrimitiveStruct PrimitiveStructType;
    XML:  <typedef name="PrimitiveStructType" type="nonBasic"
                   nonBasicTypeName="PrimitiveStruct"/>

arrays → arrayDimensions attribute
    IDL:  struct OneArrayStruct {
              short short_array[2];
          };
    XML:  <struct name="OneArrayStruct">
              <member name="short_array" type="short" arrayDimensions="2"/>
          </struct>
    IDL:  struct TwoArrayStruct {
              short short_array[1][2];
          };
    XML:  <struct name="TwoArrayStruct">
              <member name="short_array" type="short" arrayDimensions="1,2"/>
          </struct>

bounded sequence → sequenceMaxLength attribute > 0
    IDL:  struct SequenceStruct {
              sequence<short,4> short_sequence;
          };
    XML:  <struct name="SequenceStruct">
              <member name="short_sequence" type="short" sequenceMaxLength="4"/>
          </struct>

unbounded sequence → sequenceMaxLength attribute set to -1
    IDL:  struct SequenceStruct {
              sequence<short> short_sequence;
          };
    XML:  <struct name="SequenceStruct">
              <member name="short_sequence" type="short" sequenceMaxLength="-1"/>
          </struct>

array of sequences → sequenceMaxLength and arrayDimensions attributes
    IDL:  struct ArrayOfSequencesStruct {
              sequence<short,4> short_sequence_array[2];
          };
    XML:  <struct name="ArrayOfSequencesStruct">
              <member name="short_sequence_array" type="short"
                      arrayDimensions="2" sequenceMaxLength="4"/>
          </struct>

sequence of arrays → must be implemented with a typedef tag
    IDL:  typedef short ShortArray[2];
          struct SequenceOfArraysStruct {
              sequence<ShortArray,2> short_array_sequence;
          };
    XML:  <typedef name="ShortArray" type="short" dimensions="2"/>
          <struct name="SequenceOfArraysStruct">
              <member name="short_array_sequence" type="nonBasic"
                      nonBasicTypeName="ShortArray" sequenceMaxLength="2"/>
          </struct>

sequence of sequences → must be implemented with a typedef tag
    IDL:  typedef sequence<short,4> ShortSequence;
          struct SequenceOfSequencesStruct {
              sequence<ShortSequence,2> short_sequence_sequence;
          };
    XML:  <typedef name="ShortSequence" type="short" sequenceMaxLength="4"/>
          <struct name="SequenceOfSequencesStruct">
              <member name="short_sequence_sequence" type="nonBasic"
                      nonBasicTypeName="ShortSequence" sequenceMaxLength="2"/>
          </struct>

module → module tag
    IDL:  module PackageName {
              struct PrimitiveStruct {
                  long long_member;
              };
          };
    XML:  <module name="PackageName">
              <struct name="PrimitiveStruct">
                  <member name="long_member" type="long"/>
              </struct>
          </module>

include → include tag
    IDL:  #include "PrimitiveTypes.idl"
    XML:  <include file="PrimitiveTypes.xml"/>
a. Data types containing bitfield members are not supported by DynamicData (Section 3.8).
b. Directives are RTI extensions to the standard IDL grammar. For additional information about directives, see Using Custom Directives (Section 3.3.8).
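Putting several of the constructs from Table 3.9 together, a complete type file might look like the following sketch (the module, type, and member names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<types>
    <module name="Sensors">
        <const name="MAX_LABEL_LENGTH" type="long" value="64"/>
        <struct name="Reading">
            <member name="sensor_id" type="long" key="true"/>
            <member name="label" type="string" stringMaxLength="64"/>
            <member name="values" type="double" sequenceMaxLength="100"/>
        </struct>
    </module>
</types>
```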
3.5 Creating User Data Types with XML Schemas (XSD)

You can describe data types with XML schemas (XSD), either independent of or embedded in a Web Services Description Language (WSDL) file. The format is based on the standard IDL-to-WSDL mapping.
Example Header for XSD:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:dds="http://www.omg.org/dds">
        ...
    </xsd:schema>

Example Header for WSDL:

    <?xml version="1.0" encoding="UTF-8"?>
    <definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:dds="http://www.omg.org/dds"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
        <types>
            <xsd:schema>
                <xsd:import namespace="http://www.omg.org/dds"
                            schemaLocation="rti_dds_topic_types_common.xsd"/>
                ...
            </xsd:schema>
        </types>
    </definitions>
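The import of rti_dds_topic_types_common.xsd in the WSDL header above can be located programmatically with any namespace-aware XML parser. A small sketch using Python's standard library (illustrative only, not part of the product):

```python
import xml.etree.ElementTree as ET

# A WSDL header like the example above, with the dds schema import.
wsdl_src = """<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:dds="http://www.omg.org/dds"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>
    <xsd:schema>
      <xsd:import namespace="http://www.omg.org/dds"
                  schemaLocation="rti_dds_topic_types_common.xsd"/>
    </xsd:schema>
  </types>
</definitions>
"""

# Map the prefixes used in the search path to the namespace URIs.
ns = {"wsdl": "http://schemas.xmlsoap.org/wsdl/",
      "xsd": "http://www.w3.org/2001/XMLSchema"}
root = ET.fromstring(wsdl_src)
imp = root.find("wsdl:types/xsd:schema/xsd:import", ns)
print(imp.get("schemaLocation"))  # the common-types schema file
```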
Table 3.10 describes how to map IDL types to XSD. The Connext code generator, rtiddsgen, will only accept XSD or WSDL files that follow this mapping.
Table 3.10 Mapping Type System Constructs to XSD

char  ->  dds:char (a)

    IDL:
        struct PrimitiveStruct {
            char char_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="char_member"
                             minOccurs="1" maxOccurs="1"
                             type="dds:char"/>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

wchar  ->  dds:wchar (a)

    IDL:
        struct PrimitiveStruct {
            wchar wchar_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="wchar_member"
                             minOccurs="1" maxOccurs="1"
                             type="dds:wchar"/>
            </xsd:sequence>
        </xsd:complexType>

octet  ->  xsd:unsignedByte

    IDL:
        struct PrimitiveStruct {
            octet octet_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="octet_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:unsignedByte"/>
            </xsd:sequence>
        </xsd:complexType>

short  ->  xsd:short

    IDL:
        struct PrimitiveStruct {
            short short_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="short_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

unsigned short  ->  xsd:unsignedShort

    IDL:
        struct PrimitiveStruct {
            unsigned short unsigned_short_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="unsigned_short_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:unsignedShort"/>
            </xsd:sequence>
        </xsd:complexType>

long  ->  xsd:int

    IDL:
        struct PrimitiveStruct {
            long long_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:int"/>
            </xsd:sequence>
        </xsd:complexType>

unsigned long  ->  xsd:unsignedInt

    IDL:
        struct PrimitiveStruct {
            unsigned long unsigned_long_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="unsigned_long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:unsignedInt"/>
            </xsd:sequence>
        </xsd:complexType>

long long  ->  xsd:long

    IDL:
        struct PrimitiveStruct {
            long long long_long_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="long_long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:long"/>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

unsigned long long  ->  xsd:unsignedLong

    IDL:
        struct PrimitiveStruct {
            unsigned long long unsigned_long_long_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="unsigned_long_long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:unsignedLong"/>
            </xsd:sequence>
        </xsd:complexType>

float  ->  xsd:float

    IDL:
        struct PrimitiveStruct {
            float float_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="float_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:float"/>
            </xsd:sequence>
        </xsd:complexType>

double  ->  xsd:double

    IDL:
        struct PrimitiveStruct {
            double double_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="double_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:double"/>
            </xsd:sequence>
        </xsd:complexType>

long double  ->  dds:longDouble (a)

    IDL:
        struct PrimitiveStruct {
            long double long_double_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="long_double_member"
                             minOccurs="1" maxOccurs="1"
                             type="dds:longDouble"/>
            </xsd:sequence>
        </xsd:complexType>

boolean  ->  xsd:boolean

    IDL:
        struct PrimitiveStruct {
            boolean boolean_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="boolean_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:boolean"/>
            </xsd:sequence>
        </xsd:complexType>

unbounded string  ->  xsd:string

    IDL:
        struct PrimitiveStruct {
            string string_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="string_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:string"/>
            </xsd:sequence>
        </xsd:complexType>

bounded string  ->  xsd:string with a restriction to specify the maximum length

    IDL:
        struct PrimitiveStruct {
            string<20> string_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="string_member"
                             minOccurs="1" maxOccurs="1">
                    <xsd:simpleType>
                        <xsd:restriction base="xsd:string">
                            <xsd:maxLength value="20"
                                           fixed="true"/>
                        </xsd:restriction>
                    </xsd:simpleType>
                </xsd:element>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

unbounded wstring  ->  dds:wstring (a)

    IDL:
        struct PrimitiveStruct {
            wstring wstring_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="wstring_member"
                             minOccurs="1" maxOccurs="1"
                             type="dds:wstring"/>
            </xsd:sequence>
        </xsd:complexType>

bounded wstring  ->  dds:wstring with a restriction to specify the maximum length

    IDL:
        struct PrimitiveStruct {
            wstring<20> wstring_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="wstring_member"
                             minOccurs="1" maxOccurs="1">
                    <xsd:simpleType>
                        <xsd:restriction base="dds:wstring">
                            <xsd:maxLength value="20"
                                           fixed="true"/>
                        </xsd:restriction>
                    </xsd:simpleType>
                </xsd:element>
            </xsd:sequence>
        </xsd:complexType>

pointer  ->  @pointer <true|false|1|0>; default (if not specified): false

    IDL:
        struct PrimitiveStruct {
            long * long_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:int"/>
                <!-- @pointer true -->
            </xsd:sequence>
        </xsd:complexType>

bitfield (b)  ->  <bitfield length>

    IDL:
        struct BitfieldStruct {
            short short_member: 1;
            unsigned short unsignedShort_member: 1;
            short: 0;
            long long_member: 5;
        };

    XSD:
        <xsd:complexType name="BitfieldStruct">
            <xsd:sequence>
                <xsd:element name="short_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:short"/>
                <xsd:element name="unsignedShort_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:unsignedShort"/>
                <xsd:element name="_ANONYMOUS_3"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:short"/>
                <xsd:element name="long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:int"/>
            </xsd:sequence>
        </xsd:complexType>

key directive (c)  ->  @key <true|false|1|0>; default (if not specified): false

    IDL:
        struct KeyedPrimitiveStruct {
            long long_member; //@key
        };

    XSD:
        <xsd:complexType name="KeyedPrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:int"/>
                <!-- @key true -->
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

resolve-name directive (c)  ->  @resolveName <true|false|1|0>; default (if not specified): true

    IDL:
        struct UnresolvedPrimitiveStruct {
            PrimitiveStruct primitive_member;
        };

    XSD:
        <xsd:complexType name=
                "UnresolvedPrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="primitive_member"
                             minOccurs="1" maxOccurs="1"
                             type="PrimitiveStruct"/>
                <!-- @resolveName false -->
            </xsd:sequence>
        </xsd:complexType>

top-level directive (c)  ->  @topLevel <true|false|1|0>; default (if not specified): true

    IDL:
        struct TopLevelPrimitiveStruct {
            short short_member;
        };

    XSD:
        <xsd:complexType
                name="TopLevelPrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="short_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>
        <!-- @topLevel false -->

other directives  ->  @<directive kind> <value>

    IDL:
        //@copy This text will be copied in the generated files

    XSD:
        <!-- @copy This text will be copied in the generated files -->

enum  ->  xsd:simpleType with enumeration

    IDL:
        enum PrimitiveEnum {
            ENUM1,
            ENUM2,
            ENUM3
        };

    XSD:
        <xsd:simpleType name="PrimitiveEnum">
            <xsd:restriction base="xsd:string">
                <xsd:enumeration value="ENUM1"/>
                <xsd:enumeration value="ENUM2"/>
                <xsd:enumeration value="ENUM3"/>
            </xsd:restriction>
        </xsd:simpleType>

    IDL:
        enum PrimitiveEnum {
            ENUM1 = 10,
            ENUM2 = 20,
            ENUM3 = 30
        };

    XSD:
        <xsd:simpleType name="PrimitiveEnum">
            <xsd:restriction base="xsd:string">
                <xsd:enumeration value="ENUM1">
                    <xsd:annotation>
                        <xsd:appinfo>
                            <ordinal>10</ordinal>
                        </xsd:appinfo>
                    </xsd:annotation>
                </xsd:enumeration>
                <xsd:enumeration value="ENUM2">
                    <xsd:annotation>
                        <xsd:appinfo>
                            <ordinal>20</ordinal>
                        </xsd:appinfo>
                    </xsd:annotation>
                </xsd:enumeration>
                <xsd:enumeration value="ENUM3">
                    <xsd:annotation>
                        <xsd:appinfo>
                            <ordinal>30</ordinal>
                        </xsd:appinfo>
                    </xsd:annotation>
                </xsd:enumeration>
            </xsd:restriction>
        </xsd:simpleType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

constant  ->  IDL constants are mapped by substituting their value directly in the generated file

struct  ->  xsd:complexType with xsd:sequence

    IDL:
        struct PrimitiveStruct {
            short short_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="short_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

union (d)  ->  xsd:complexType with xsd:choice

    IDL:
        union PrimitiveUnion switch (long) {
            case 1:
                short short_member;
            default:
                long long_member;
        };

    XSD:
        <xsd:complexType name="PrimitiveUnion">
            <xsd:sequence>
                <xsd:element name="discriminator"
                             type="xsd:int"/>
                <xsd:choice>
                    <xsd:element name="short_member"
                                 minOccurs="0" maxOccurs="1"
                                 type="xsd:short">
                        <xsd:annotation>
                            <xsd:appinfo>
                                <case>1</case>
                            </xsd:appinfo>
                        </xsd:annotation>
                    </xsd:element>
                    <xsd:element name="long_member"
                                 minOccurs="0" maxOccurs="1"
                                 type="xsd:int">
                        <xsd:annotation>
                            <xsd:appinfo>
                                <case>default</case>
                            </xsd:appinfo>
                        </xsd:annotation>
                    </xsd:element>
                </xsd:choice>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

valuetype  ->  xsd:complexType with @valuetype directive

    IDL:
        valuetype BaseValueType {
            public long long_member;
        };

        valuetype DerivedValueType: BaseValueType {
            public long long_member2;
            public long long_member3;
        };

    XSD:
        <xsd:complexType name="BaseValueType">
            <xsd:sequence>
                <xsd:element name="long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:int"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="DerivedValueType">
            <xsd:complexContent>
                <xsd:extension base="BaseValueType">
                    <xsd:sequence>
                        <xsd:element name="long_member2"
                                     minOccurs="1" maxOccurs="1"
                                     type="xsd:int"/>
                        <xsd:element name="long_member3"
                                     minOccurs="1" maxOccurs="1"
                                     type="xsd:int"/>
                    </xsd:sequence>
                </xsd:extension>
            </xsd:complexContent>
        </xsd:complexType>

typedef  ->  Type definitions are mapped to XML schema type restrictions

    IDL:
        typedef short ShortType;

        struct PrimitiveStruct {
            short short_member;
        };

        typedef PrimitiveStruct PrimitiveStructType;

    XSD:
        <xsd:simpleType name="ShortType">
            <xsd:restriction base="xsd:short"/>
        </xsd:simpleType>

        <xsd:complexType name="PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="short_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="PrimitiveStructType">
            <xsd:complexContent>
                <xsd:restriction base="PrimitiveStruct">
                    <xsd:sequence>
                        <xsd:element name="short_member"
                                     minOccurs="1" maxOccurs="1"
                                     type="xsd:short"/>
                    </xsd:sequence>
                </xsd:restriction>
            </xsd:complexContent>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

arrays  ->  n xsd:complexType, each with a sequence containing one element with min & max occurs; there is one xsd:complexType per array dimension

    IDL:
        struct OneArrayStruct {
            short short_array[2];
        };

    XSD:
        <xsd:complexType name=
                "OneArrayStruct_short_array_ArrayOfShort">
            <xsd:sequence>
                <xsd:element name="item" minOccurs="2"
                             maxOccurs="2" type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="OneArrayStruct">
            <xsd:sequence>
                <xsd:element name="short_array"
                             minOccurs="1" maxOccurs="1"
                             type=
                "OneArrayStruct_short_array_ArrayOfShort"/>
            </xsd:sequence>
        </xsd:complexType>

arrays (cont'd)  ->  one xsd:complexType per array dimension, plus one for the member

    IDL:
        struct TwoArrayStruct {
            short short_array[2][1];
        };

    XSD:
        <xsd:complexType name=
                "TwoArrayStruct_short_array_ArrayOfShort">
            <xsd:sequence>
                <xsd:element name="item" minOccurs="2"
                             maxOccurs="2" type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name=
                "TwoArrayStruct_short_array_ArrayOfArrayOfShort">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="1" maxOccurs="1"
                             type=
                "TwoArrayStruct_short_array_ArrayOfShort"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="TwoArrayStruct">
            <xsd:sequence>
                <xsd:element name="short_array"
                             minOccurs="1" maxOccurs="1"
                             type=
                "TwoArrayStruct_short_array_ArrayOfArrayOfShort"/>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

bounded sequence member  ->  xsd:complexType with a sequence containing one element with min & max occurs

    IDL:
        struct SequenceStruct {
            sequence<short,4> short_sequence;
        };

    XSD:
        <xsd:complexType name=
                "SequenceStruct_short_sequence_SequenceOfShort">
            <xsd:sequence>
                <xsd:element name="item" minOccurs="0"
                             maxOccurs="4" type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="SequenceStruct">
            <xsd:sequence>
                <xsd:element name="short_sequence"
                             minOccurs="1" maxOccurs="1"
                             type=
                "SequenceStruct_short_sequence_SequenceOfShort"/>
            </xsd:sequence>
        </xsd:complexType>

unbounded sequence member  ->  xsd:complexType with a sequence containing one element with min & max occurs

    IDL:
        struct SequenceStruct {
            sequence<short> short_sequence;
        };

    XSD:
        <xsd:complexType name=
                "SequenceStruct_short_sequence_SequenceOfShort">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="0" maxOccurs="unbounded"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="SequenceStruct">
            <xsd:sequence>
                <xsd:element name="short_sequence"
                             minOccurs="1" maxOccurs="1"
                             type=
                "SequenceStruct_short_sequence_SequenceOfShort"/>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

array of sequences  ->  n + 1 xsd:complexType, each with a sequence containing one element with min & max occurs; there is one xsd:complexType per array dimension and one xsd:complexType for the sequence

    IDL:
        struct ArrayOfSequencesStruct {
            sequence<short,4> sequence_array[2];
        };

    XSD:
        <xsd:complexType name=
                "ArrayOfSequencesStruct_sequence_array_SequenceOfShort">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="0" maxOccurs="4"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name=
                "ArrayOfSequencesStruct_sequence_array_ArrayOfSequenceOfShort">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="2" maxOccurs="2"
                             type=
                "ArrayOfSequencesStruct_sequence_array_SequenceOfShort"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="ArrayOfSequencesStruct">
            <xsd:sequence>
                <xsd:element name="sequence_array"
                             minOccurs="1" maxOccurs="1"
                             type=
                "ArrayOfSequencesStruct_sequence_array_ArrayOfSequenceOfShort"/>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

sequence of arrays  ->  Sequences of arrays must be implemented using an explicit type definition (typedef) for the array

    IDL:
        typedef short ShortArray[2];

        struct SequenceOfArraysStruct {
            sequence<ShortArray,2> arrays_sequence;
        };

    XSD:
        <xsd:complexType name="ShortArray">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="2" maxOccurs="2"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name=
                "SequenceOfArraysStruct_arrays_sequence_SequenceOfShortArray">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="0" maxOccurs="2"
                             type="ShortArray"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="SequenceOfArraysStruct">
            <xsd:sequence>
                <xsd:element name="arrays_sequence"
                             minOccurs="1" maxOccurs="1"
                             type=
                "SequenceOfArraysStruct_arrays_sequence_SequenceOfShortArray"/>
            </xsd:sequence>
        </xsd:complexType>
Table 3.10 Mapping Type System Constructs to XSD (cont'd)

sequence of sequences  ->  Sequences of sequences must be implemented using an explicit type definition (typedef) for the second sequence

    IDL:
        typedef sequence<short,4> ShortSequence;

        struct SequenceOfSequences {
            sequence<ShortSequence,2> sequences_sequence;
        };

    XSD:
        <xsd:complexType name="ShortSequence">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="0" maxOccurs="4"
                             type="xsd:short"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name=
                "SequenceOfSequences_sequences_sequence_SequenceOfShortSequence">
            <xsd:sequence>
                <xsd:element name="item"
                             minOccurs="0" maxOccurs="2"
                             type="ShortSequence"/>
            </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="SequenceOfSequences">
            <xsd:sequence>
                <xsd:element name="sequences_sequence"
                             minOccurs="1" maxOccurs="1"
                             type=
                "SequenceOfSequences_sequences_sequence_SequenceOfShortSequence"/>
            </xsd:sequence>
        </xsd:complexType>

module  ->  Modules are mapped by adding the name of the module before the name of each type inside the module

    IDL:
        module PackageName {
            struct PrimitiveStruct {
                long long_member;
            };
        };

    XSD:
        <xsd:complexType name=
                "PackageName.PrimitiveStruct">
            <xsd:sequence>
                <xsd:element name="long_member"
                             minOccurs="1" maxOccurs="1"
                             type="xsd:int"/>
            </xsd:sequence>
        </xsd:complexType>

include  ->  xsd:include

    IDL:
        #include "PrimitiveType.idl"

    XSD:
        <xsd:include schemaLocation=
                "PrimitiveType.xsd"/>
a. All files that use the primitive types char, wchar, long double and wstring must reference rti_dds_topic_types_common.xsd. See Primitive Types (Section 3.5.1).
b. Data types containing bitfield members are not supported by DynamicData (Section 3.8).
c. Directives are RTI extensions to the standard IDL grammar. For additional information about directives, see Using Custom Directives (Section 3.3.8).
d. The discriminant values can be described using comments (as specified by the standard) or xsd:annotation tags. We recommend using annotations because comments may be removed by XSD/XML parsers.
3.5.1 Primitive Types

The primitive types char, wchar, long double, and wstring are not supported natively in XSD. Connext provides definitions for these types in the file <NDDSHOME>/resource/rtiddsgen/schema/rti_dds_topic_types_common.xsd. All files that use the primitive types char, wchar, long double, and wstring must reference rti_dds_topic_types_common.xsd. For example:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:dds="http://www.omg.org/dds">
        <xsd:import namespace="http://www.omg.org/dds"
                    schemaLocation="rti_dds_topic_types_common.xsd"/>
        <xsd:complexType name="Foo">
            <xsd:sequence>
                <xsd:element name="myChar" minOccurs="1" maxOccurs="1"
                             type="dds:char"/>
            </xsd:sequence>
        </xsd:complexType>
    </xsd:schema>
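When a schema like the one above is read back with a generic XML parser, the dds-qualified type shows up as the literal attribute value "dds:char"; the prefix is resolved against the import only by schema-aware tools. A small sketch using Python's standard library (illustrative only):

```python
import xml.etree.ElementTree as ET

# A schema fragment like the Foo example above.
xsd_src = """<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:dds="http://www.omg.org/dds">
  <xsd:complexType name="Foo">
    <xsd:sequence>
      <xsd:element name="myChar" minOccurs="1" maxOccurs="1"
                   type="dds:char"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>
"""

ns = {"xsd": "http://www.w3.org/2001/XMLSchema"}
root = ET.fromstring(xsd_src)
elem = root.find(".//xsd:element", ns)
# Attribute values are not namespace-expanded by ElementTree,
# so the dds prefix is visible as written in the file.
print(elem.get("name"), elem.get("type"))
```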
3.6 Using rtiddsgen

The rtiddsgen utility provided by Connext creates the code needed to define and register a user data type with Connext. Using this tool is optional if:

❏ You are using dynamic types.

❏ You are using one of the built-in types.

To use rtiddsgen, you must supply a description of the type in an IDL, XML, XSD, or WSDL file. The supported syntax for each of the notations is described in Section 3.3 (IDL), Section 3.4 (XML), and Section 3.5 (XSD and WSDL). You can define multiple data types in the same file.

Table 3.11 and Table 3.12 describe the files that rtiddsgen creates.

On Windows systems: Before running rtiddsgen, run VCVARS32.BAT in the same command prompt that you will use to run rtiddsgen.
Table 3.11 Files Created by rtiddsgen for C, C++, C++/CLI, C# for Example "Hello.idl"

Required files for the user data type. The source files should be compiled and linked with the user application. The header files are required to use the data type in source. You should not modify these files unless you intend to customize the generated code supporting your type.

Hello.[c, cxx, cpp]
HelloSupport.[c, cxx, cpp]
HelloPlugin.[c, cxx, cpp]
    Generated code for the data types. These files contain the implementation for your data types.

Hello.h
HelloSupport.h
HelloPlugin.h
    Header files that contain declarations used in the implementation of your data types.

Optional files generated when you use the -example <arch> command-line option. You may modify and use these files as a way to create simple applications that publish or subscribe to the user data type.

Hello_publisher.[c, cxx, cpp, cs]
    Example code for an application that publishes the user data type. This example shows the basic steps to create all of the Connext objects needed to send data. You will need to modify the code to set and change the values being sent in the data structure. Otherwise, just compile and run.

Hello_subscriber.[c, cxx, cpp, cs]
    Example code for an application that subscribes to the user data type. This example shows the basic steps to create all of the Connext objects needed to receive data using a "listener" function. No modification of this file is required. It is ready for you to compile and run.

Hello.dsw or Hello.sln,
Hello_publisher.dsp or Hello_publisher.vcproj,
Hello_subscriber.dsp or Hello_subscriber.vcproj
    Microsoft Visual C++ or Visual Studio .NET project workspace and project files, generated only for "i86Win32" architectures. To compile the generated source code, open the workspace file and build the two projects.

makefile_Hello_<architecture>
    Makefile for the specified architecture. For example, <architecture> would be linux2.4gcc3.2.2.
Table 3.12 Files Created by rtiddsgen for Java for Example "Hello.idl"

Since the Java language requires individual files to be created for each class, rtiddsgen will generate a source file for every IDL construct that translates into a class in Java.

Constants
    <Name>.java -- Class associated with the constant

Enums
    <Name>.java -- Class associated with the enum type

Structures/Unions
    <Name>.java -- Structure/Union class
    <Name>Seq.java -- Sequence class
    <Name>DataReader.java, <Name>DataWriter.java -- Connext DataReader and DataWriter classes
    <Name>TypeSupport.java -- Support (serialize, deserialize, etc.) class

Typedefs of sequences or arrays
    <Name>.java -- Wrapper class
    <Name>Seq.java -- Sequence class
    <Name>TypeSupport.java -- Support (serialize, deserialize, etc.) class

Optional files generated when you use the -example <arch> command-line option:

Structures/Unions
    <Name>Publisher.java, <Name>Subscriber.java -- Example code for applications that publish or subscribe to the user data type. You should modify the code in the publisher application to set and change the value of the published data. Otherwise, both files should be ready to compile and run.
    makefile_Hello_<architecture> -- Makefile for the specified architecture. For example, <architecture> is linux2.4gcc3.2.2.

Structures/Unions/Typedefs/Enums
    <Name>TypeCode.java -- Type code class associated with the IDL type given by <Name>. (Note: this is not generated if you use -notypecode.)
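The naming pattern in Table 3.12 is regular enough to summarize as a small helper. The sketch below is only a restatement of the table (the authoritative list is the table itself and rtiddsgen's own output); the function name and its arguments are the author's own:

```python
# Rough summary of Table 3.12: which Java files rtiddsgen emits
# for a given IDL construct. Illustrative only, not an RTI tool.
def java_files_for(name, kind, typecode=True):
    files = [f"{name}.java"]
    if kind in ("struct", "union"):
        files += [f"{name}Seq.java",
                  f"{name}DataReader.java",
                  f"{name}DataWriter.java",
                  f"{name}TypeSupport.java"]
    elif kind == "typedef":  # typedef of a sequence or array
        files += [f"{name}Seq.java", f"{name}TypeSupport.java"]
    if typecode and kind in ("struct", "union", "typedef", "enum"):
        files.append(f"{name}TypeCode.java")  # omitted with -notypecode
    return files

print(java_files_for("Hello", "struct"))
```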
NOTE: Before using an rtiddsgen-generated makefile, make sure the NDDSHOME environment variable is set.

3.6.1 rtiddsgen Command-Line Arguments

There are several command-line options you can pass to rtiddsgen.

Note: CORBA support requires the RTI CORBA Compatibility Kit.

Table 3.13 describes the options (in alphabetical order).
Table 3.13 Options for rtiddsgen

-convertToCcl
    Converts the input type description file into CCL format. This option creates a new file with the same name as the input file and a .ccl extension.

-convertToCcs
    Converts the input type description file into CCS format. This option creates a new file with the same name as the input file and a .ccs extension.

-convertToIdl
    Converts the input type description file into IDL format. This option creates a new file with the same name as the input file and a .idl extension.

-convertToWsdl
    Converts the input type description file into WSDL format. This option creates a new file with the same name as the input file and a .wsdl extension.

-convertToXml
    Converts the input type description file into XML format. This option creates a new file with the same name as the input file and a .xml extension.

-convertToXsd
    Converts the input type description file into XSD format. This option creates a new file with the same name as the input file and a .xsd extension.

-corba (a)
    Generates CORBA-compatible code. This option is only available when using the RTI CORBA Compatibility Kit for Connext (available for purchase as a separate product). Please see Part 7: RTI CORBA Compatibility Kit.

-D <name>[=<value>]
    Defines preprocessor macros.
    Note: On Windows systems, enclose the argument in quotation marks: -D "<name>[=<value>]"

-d <outdir>
    Generates the output in the specified directory. By default, rtiddsgen will generate files in the directory where the input file is found.

-dataReaderSuffix <suffix>
    Assigns a suffix to the name of a DataReader interface. Only applies if -corba is also specified. By default, the suffix is 'DataReader'. Therefore, given the type 'Foo', the name of the DataReader interface will be 'FooDataReader'.

-dataWriterSuffix <suffix>
    Assigns a suffix to the name of a DataWriter interface. Only applies if -corba is also specified. By default, the suffix is 'DataWriter'. Therefore, given the type 'Foo', the name of the DataWriter interface will be 'FooDataWriter'.

-debug
    Creates XML files for debugging rtiddsgen only. Use this option only at the direction of RTI support; it is unlikely to be useful to you otherwise.

-enableEscapeChar
    Enables use of the escape character '_' in IDL identifiers. When -corba is used, this option is always enabled.

-example <arch>
    Generates example application code and makefiles (for UNIX-based systems) or workspace and project files (for Windows systems) based on the input type description file. Valid options for <arch> are listed in the Platform Notes.

-expandOctetSeq
    When converting to CCS or CCL files, expand octet sequences. The default is to use a blob type.
-expandCharSeq
    When converting to CCS or CCL files, expand char sequences. The default is to use a string type.

-I <directory>
    Adds to the list of directories to be searched for type description files (IDL, XML, XSD or WSDL files). Note: A file in one format cannot include a file in another format.

-inputIdl
    Indicates that the input file is an IDL file, regardless of the file extension.

-inputWsdl
    Indicates that the input file is a WSDL file, regardless of the file extension.

-inputXml
    Indicates that the input file is an XML file, regardless of the file extension.

-inputXsd
    Indicates that the input file is an XSD file, regardless of the file extension.

IDLInputFile.idl
    File containing IDL descriptions of your data types. If -inputIdl is not used, the file must have a '.idl' extension.

-help
    Prints out the command line options for rtiddsgen.

-language <C|C++|Java|C++/CLI|C#|Ada>
    Specifies the language to use for the generated files. The default language is C++; you can also choose C, C++/CLI, C#, Java, or Ada.

-metp
    Generates code for the Multi-Encapsulation Type Support (METP) library. The METP library requires a special version of Connext; please contact support@rti.com for more information.

-namespace
    Specifies the use of C++ namespaces. (For C++ only. For C++/CLI and C#, it is implied; namespaces are always used.)

-noCopyable
    Forces rtiddsgen to put 'copy' logic into the corresponding TypeSupport class rather than the type itself. This option is only used for Java code generation. This option is not compatible with the use of ndds_standalone_type.jar (see Section 3.7). Note that when generating code for Java, the -corba option implies the -noCopyable option.

-notypecode
    Disables the generation of type code information, so that the generated code can be used in a standalone manner (see Section 3.7).
    Note: If you are using a large data type (more than 64 K) and type code support, you will see a warning when type code information is sent. Connext has a type code size limit of 64K. To avoid the warning when working with data types with type codes larger than 64K, turn off type code support by using -notypecode.

-replace
    Allows rtiddsgen to overwrite any existing generated files. If it is not present and existing files are found, rtiddsgen will print a warning but will not overwrite them.

-optimization <level>
    See Optimizing Typedefs (Section 3.6.1.1).

-orb <CORBA ORB>
    Specifies the CORBA ORB. The majority of code generated is independent of the ORB. However, for some IDL features the code generated depends on the ORB. rtiddsgen generates code compatible with ACE-TAO; to select an ACE_TAO version, use the -orb parameter. The default is ACE_TAO1.6. This option can only be used with the -corba option.

-package <packagePrefix>
    Specifies the root package into which generated classes will be placed. It applies to Java only. If the type is defined within a module, those modules will be considered subpackages of the package specified here.
-ppDisable
    Disables the preprocessor.

-ppOption <option>
    Specifies a preprocessor option. This parameter can be used multiple times to provide the command-line options for the preprocessor. See -ppPath.

-ppPath <preprocessor executable>
    Specifies the preprocessor. If you only specify the name of an executable (not a complete path to that executable), the executable must be found in your Path. The default value is "cpp" for non-Windows architectures and "cl.exe" for Windows architectures. If you use -ppPath to provide the full path and filename for cl.exe or the cpp preprocessor, you must also use -ppOption (described above) to set the preprocessor options that rtiddsgen otherwise supplies by default for that preprocessor.

-sequenceSize <unsigned integer>
    Sets the size assigned to unbounded sequences. The default value is 100 elements.

-stringSize <unsigned integer>
    Sets the size assigned to unbounded strings, not counting a terminating NULL character. The default value is 255 bytes.

-sequenceSuffix <suffix>
    Assigns a suffix to the names of the implicit sequences defined for IDL types. Only applies if -corba is also specified. By default, the suffix is 'Seq'. Therefore, given the type 'Foo', the name of the implicit sequence will be 'FooSeq'.

-U <name>
    Cancels any previous definition of <name>.

-use42eAlignment
    Makes the generated code compatible with RTI Data Distribution Service 4.2e. This option should be used when compatibility with 4.2e is required and the topic data types contain double, long long, unsigned long long, or long double members.

-verbosity <1|2|3>
    Sets the rtiddsgen verbosity:
    1: exceptions
    2: exceptions and warnings
    3: exceptions, warnings and information (Default)

-version
    Displays the version of rtiddsgen being used, such as 5.0.x. (Note: To see 'patch' revision information (such as 5.0.x rev. n), see What Version am I Running?)

WSDLInputFile.wsdl
    WSDL file containing XSD descriptions of your data types. If -inputWsdl is not used, the file must have a .wsdl extension.

XMLInputFile.xml
    File containing XML descriptions of your data types. If -inputXml is not used, the file must have an .xml extension.

XSDInputFile.xsd
    File containing XSD descriptions of your data types. If -inputXsd is not used, the file must have an .xsd extension.
a. CORBA support is only available when using the RTI CORBA Compatibility Kit (available for purchase as a separate product). See Part 7: RTI CORBA Compatibility Kit.
3.6.1.1 Optimizing Typedefs
The -optimization option specifies how support for typedefs is generated in C and C++ code. This option only applies to C and C++ because the Java language does not contain the typedef construct. In other words, rtiddsgen always resolves typedef'ed names to their most basic types when generating Java code (except for typedefs of arrays and sequences, which are converted to wrapper classes).
The optimization levels are:
❏0 (default): No optimization. Typedef'ed types are treated as full types, and complete code is generated and used at run time.
❏1: The compiler generates code for the typedef but resolves the typedef'ed name to its base type wherever it is used. This will save at least one function call for serialization, deserialization, and other manipulation of the parent structure. This optimization level is always safe to use unless the user intends to modify the generated code.
❏2: Same as level 1, with the addition that the code for the typedef itself is omitted when it is not needed. This typedef optimization level is only recommended if you have a single IDL file that contains the definitions of all of the user data types passed by Connext on the network. If you have multiple IDL files, and types defined in one file use typedefs that are defined in another, then rtiddsgen will generate code assuming that the code for the typedefs was generated from the other IDL file.
For example, consider this declaration:
typedef short MyShort;
struct MyStructure {
    MyShort member;
};
With optimization 0: The code generated to serialize MyStructure calls the serialization code generated for MyShort.
With optimization 1: The type MyShort is resolved to short, so the code for MyStructure manipulates the member directly as a short, saving a function call; the code for MyShort is still generated.
With optimization 2: The member is likewise manipulated directly as a short, and the code for MyShort itself is not generated.
3.7 Using Generated Types without Connext (Standalone)
You can use the type-specific source and header files generated by rtiddsgen without the Connext libraries, that is, in a standalone manner.
The directory <NDDSHOME>/resource/rtiddsgen/standalone contains the required helper files:
❏include: header and template files for C and C++.
❏src: source files for C and C++.
❏class: Java jar file.
Note: You must use rtiddsgen’s -notypecode option to generate code that can be used standalone.
3.7.1 Using Standalone Types in C
The generated files that can be used standalone are:
❏<idl file name>.c: types source file
❏<idl file name>.h: types header file
The type code files (<idl file name>TypeCode.[c, h]) cannot be used standalone.
To use the generated types in a standalone manner:
1. Make sure you use rtiddsgen’s -notypecode option when generating the code.
2. Include the directory <NDDSHOME>/resource/rtiddsgen/standalone/include in the list of directories to be searched for header files.
3. Add the source files, ndds_standalone_type.c and <idl file name>.c, to your project.
4. Include the file <idl file name>.h in the source files that will use the generated types in a standalone manner.
5. Compile the project using the following two preprocessor definitions:
   a. NDDS_STANDALONE_TYPE
   b. The definition for your platform (RTI_VXWORKS, RTI_QNX, RTI_WIN32, RTI_INTY, RTI_LYNX or RTI_UNIX)
3.7.2 Using Standalone Types in C++
The generated files that can be used standalone are:
❏<idl file name>.cxx: types source file
❏<idl file name>.h: types header file
The type code files (<idl file name>TypeCode.[cxx, h]) cannot be used standalone.
To use the generated types in a standalone manner:
1. Make sure you use rtiddsgen’s -notypecode option when generating the code.
2. Include the directory <NDDSHOME>/resource/rtiddsgen/standalone/include in the list of directories to be searched for header files.
3. Add the source files, ndds_standalone_type.cxx and <idl file name>.cxx, to your project.
4. Include the file <idl file name>.h in the source files that will use the rtiddsgen types in a standalone manner.
5. Compile the project using the following two preprocessor definitions:
   a. NDDS_STANDALONE_TYPE
   b. The definition for your platform (such as RTI_VXWORKS, RTI_QNX, RTI_WIN32, RTI_INTY, RTI_LYNX or RTI_UNIX)
3.7.3 Using Standalone Types in Java
The generated files that can be used standalone are:
❏<idl type>.java
❏<idl type>Seq.java
The type code (<idl file>TypeCode.java), DataReader code (<idl file>DataReader.java) and DataWriter code (<idl file>DataWriter.java) cannot be used standalone.
To use the generated types in a standalone manner:
1. Make sure you use rtiddsgen’s -notypecode option when generating the code.
2. Include the file ndds_standalone_type.jar in the classpath of your project.
3. Compile the project using the standalone type files (<idl type>.java and <idl type>Seq.java).
3.8 Interacting Dynamically with User Data Types
3.8.1 Introduction to TypeCode
Type schemas (the names and definitions of a type and its fields) are represented by TypeCode objects. A type code value consists of a type code kind (see the TCKind enumeration below) and a list of members (fields). For compound types like structs and arrays, this list will recursively include one or more type code values.

enum TCKind {
    TK_NULL, TK_SHORT, TK_LONG, TK_USHORT, TK_ULONG,
    TK_FLOAT, TK_DOUBLE, TK_BOOLEAN, TK_CHAR, TK_OCTET,
    TK_STRUCT, TK_UNION, TK_ENUM, TK_STRING, TK_SEQUENCE,
    TK_ARRAY, TK_ALIAS, TK_LONGLONG, TK_ULONGLONG,
    TK_LONGDOUBLE, TK_WCHAR, TK_WSTRING, TK_VALUE, TK_SPARSE
}
Type codes unambiguously match type representations and provide a more reliable test than comparing the string type names.
The TypeCode class, modeled after the corresponding CORBA API, provides access to type- code information. For details on the available operations for the TypeCode class, see the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Topic Module, Type Code Support).
Type codes are enabled by default when you run rtiddsgen. The
Note:
3.8.2 Defining New Types
Note: This section does not apply when using the separate
Locally, your application can access the type code for a generated type "Foo" by calling the Foo_get_typecode() operation in the code for the type generated by rtiddsgen (unless the -notypecode option was used).
Creating a TypeCode is parallel to the way you would define the type statically: you define the type itself with some name, then you add members to it, each with its own name and type.
For example, consider the following statically defined type. It might be in C, C++, or IDL; the syntax is largely the same.
struct MyType {
    long my_integer;
    float my_float;
    bool my_bool;
    string<128> my_string; // @key
};
This is how you would define the same type at run time in C++:
DDS_ExceptionCode_t ex = DDS_NO_EXCEPTION_CODE;
DDS_StructMemberSeq structMembers; // ignore for now
DDS_TypeCodeFactory* factory = DDS_TypeCodeFactory::get_instance();
DDS_TypeCode* structTc = factory->create_struct_tc("MyType",
    structMembers, ex);
// If structTc is NULL, check 'ex' for more information.
More detailed documentation for the methods and constants you see above, including example code, can be found in the API Reference HTML documentation, which is available for all supported programming languages.
If, as in the example above, you know all of the fields that will exist in the type at the time of its construction, you can use the StructMemberSeq to simplify the code:
DDS_ExceptionCode_t ex = DDS_NO_EXCEPTION_CODE;
DDS_StructMemberSeq structMembers;
structMembers.ensure_length(4, 4);
DDS_TypeCodeFactory* factory = DDS_TypeCodeFactory::get_instance();

structMembers[0].name = DDS_String_dup("my_integer");
structMembers[0].type = factory->get_primitive_tc(DDS_TK_LONG);
structMembers[1].name = DDS_String_dup("my_float");
structMembers[1].type = factory->get_primitive_tc(DDS_TK_FLOAT);
structMembers[2].name = DDS_String_dup("my_bool");
structMembers[2].type = factory->get_primitive_tc(DDS_TK_BOOLEAN);
structMembers[3].name = DDS_String_dup("my_string");
structMembers[3].type = factory->create_string_tc(128, ex);

DDS_TypeCode* structTc = factory->create_struct_tc("MyType",
    structMembers, ex);
After you have defined the TypeCode, you will register it with a DomainParticipant using a logical name. You will use this logical name later when you create a Topic.
DDSDynamicDataTypeSupport* type_support = new DDSDynamicDataTypeSupport(
    structTc, DDS_DYNAMIC_DATA_TYPE_PROPERTY_DEFAULT);
DDS_ReturnCode_t retcode = type_support->register_type(
    participant, "MyType");
Now that you have created a type, you will need to know how to interact with objects of that type. Continue reading Section 3.8.3 below for more information.
3.8.3 Sending Only a Few Fields
In some cases, your data model may contain a large number of potential fields, but it may not be desirable or appropriate to include a value for every one of them with every data sample.
❏It may use too much bandwidth. You may have a very large data structure, parts of which are updated very frequently. Rather than resending the entire data structure with every change, you may wish to send only those fields that have changed and rely on the recipients to reassemble the complete state themselves.
❏It may not make sense. Some fields may only have meaning in the presence of other fields. For example, you may have an event stream in which certain fields are only relevant for certain kinds of events.
To support these and similar cases, Connext supports sparse value types. A sample of such a type only contains the field values that were explicitly set by the sender. A recipient of that sample will receive an error when trying to look up the value of any other field.
An endpoint (DataWriter or DataReader) using a sparse value type will not communicate with another endpoint that is using a non-sparse type.
Because direct programming language representations of data types typically have no way to express the concept of sparse fields (there is no way, for example, for a C structure to omit some of its fields), using sparse types requires use of the dynamic type API described in Defining New Types (Section 3.8.2). You will use the Dynamic Data API to work with sparse samples, just as you would with samples of any other dynamically defined type. For more information about working with sparse samples, see Objects of Dynamically Defined Types (Section 3.9.2) or the API Reference HTML documentation (select Modules, DDS API Reference, Topic Module, Dynamic Data).
A sparse version of the "MyType" type described above would be defined like this:
DDS_ExceptionCode_t ex = DDS_NO_EXCEPTION_CODE;
DDS_TypeCodeFactory* factory = DDS_TypeCodeFactory::get_instance();
DDS_TypeCode* sparseTc = factory->create_sparse_tc(
    "MySparseType", DDS_VM_NONE, NULL, ex);
// add members
Detailed descriptions of the methods and constants you see above can be found in the API Reference HTML documentation.
Integral to the definition of a sparse type are the member IDs of its fields. An ID is an integer that uniquely identifies a field within its type; IDs are assigned by the type's designer. (In the code example above, ID_MY_INTEGER, ID_MY_FLOAT, and ID_MY_BOOL are examples of member IDs.)
Although member IDs are a relatively efficient way to describe a sample's contents, they do use network bandwidth. This can be an important issue if you are considering using sparse types to decrease the size of your data samples on the network. Although the relative cost of adding member IDs to your packets will vary depending on the sizes and layout of your fields, the following is a good rule of thumb: if you expect a given data sample to contain less than half of the fields that are legal for its type, sparse types will probably save you on bandwidth. If, on the other hand, most samples contain most fields, you will probably be better off using a plain structure type and simply ignoring irrelevant fields on the receiving side.
3.8.4 Type Extension and Versioning
As your system evolves, you may find that your data types need to change. And unless your system is relatively small, you may not be able to bring it all down at once in order to modify them. Instead, you may need to upgrade your types one component at a time.
You can use the sparse types described above to efficiently version your data types:
❏You can add new fields to a type at any time. Because the type is sparse, existing publishers of the type that have not been updated will simply omit the new field in any data samples they send. If you anticipate changing your types in future versions of your system, make sure that you ignore fields that you do not recognize, so that your application will be robust to future type changes.
❏You cannot remove fields from an existing type. Doing so would break older applications and invalidate historical samples that might already be in the caches of upgraded applications. Instead, simply stop sending values for the fields you wish to deprecate.
3.8.5 Sending Type Codes on the Network
In addition to being used locally, serialized type codes are typically published automatically during discovery as part of the information describing a Topic. This allows generic tools and dynamically typed applications to discover types at run time (in the API Reference HTML documentation, select Modules, Programming Tools).
Note: Type codes are not cached by Connext upon receipt and are therefore not available from the
DataReader's get_matched_publication_data() operation.
If your data type has an especially complex type code, you may need to increase the value of the type_code_max_serialized_length field in the DomainParticipant's
DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4). Or, to prevent the propagation of type codes altogether, you can set this value to zero (0). Be aware
that some features of monitoring tools, as well as some features of the middleware itself (such as ContentFilteredTopics) will not work correctly if you disable TypeCode propagation.
3.8.5.1 Type Codes for Built-in Types
The type codes associated with the built-in types are generated from the following IDL type definitions:
module DDS {
    /* String */
    struct String {
        string<max_size> value;
    };
    /* KeyedString */
    struct KeyedString {
        string<max_size> key; //@key
        string<max_size> value;
    };
    /* Octets */
    struct Octets {
        sequence<octet, max_size> value;
    };
    /* KeyedOctets */
    struct KeyedOctets {
        string<max_size> key; //@key
        sequence<octet, max_size> value;
    };
};
The maximum size (max_size) of the strings and sequences that will be included in the type code definitions can be configured on a per-DomainParticipant basis, using the properties in Table 3.14.
Table 3.14 Properties for Allocating Size of Built-in Types, per DomainParticipant

Built-in Type: String
    Property: dds.builtin_type.string.max_size
        Maximum size of the strings published by the DataWriters and received by the DataReaders belonging to a DomainParticipant (includes the NULL-terminating character). Default: 1024

Built-in Type: KeyedString
    Property: dds.builtin_type.keyed_string.max_key_size
        Maximum size of the keys used by the DataWriters and DataReaders belonging to a DomainParticipant (includes the NULL-terminating character). Default: 1024
    Property: dds.builtin_type.keyed_string.max_size
        Maximum size of the strings published by the DataWriters and received by the DataReaders belonging to a DomainParticipant using the built-in type (includes the NULL-terminating character). Default: 1024

Built-in Type: Octets
    Property: dds.builtin_type.octets.max_size
        Maximum size of the octet sequences published by the DataWriters and DataReaders belonging to a DomainParticipant. Default: 2048

Built-in Type: KeyedOctets
    Property: dds.builtin_type.keyed_octets.max_key_size
        Maximum size of the key published by the DataWriters and received by the DataReaders belonging to the DomainParticipant (includes the NULL-terminating character). Default: 1024
    Property: dds.builtin_type.keyed_octets.max_size
        Maximum size of the octet sequences published by the DataWriters and DataReaders belonging to a DomainParticipant. Default: 2048
3.9 Working with Data Samples
You should now understand how to define and work with data types, whether you're using the simple data types built into the middleware, types generated from your own type description files with rtiddsgen, or types defined dynamically at run time.
Now that you have chosen one or more data types to work with, this section will help you understand how to create and manipulate objects of those types.
3.9.1 Objects of Concrete Types
If you use one of the built-in types, or a type generated by rtiddsgen, you will work with objects of a concrete type, using type-specific operations.
In C and C++, you create and delete your own objects from factories, just as you create Connext objects from factories. In the case of user data types, the factory is a singleton object called the type support. Objects allocated from these factories are deeply allocated and fully initialized.
/* In the generated header file: */
struct MyData {
    char* myString;
};

/* In your code: */
MyData* sample = MyDataTypeSupport_create_data();
char* str = sample->myString; /* empty, non-NULL string */
/* ... */
MyDataTypeSupport_delete_data(sample);
In C++, as in C, you create and delete objects using the TypeSupport factories.
MyData* sample = MyDataTypeSupport::create_data();
char* str = sample->myString; // empty, non-NULL string
// ...
MyDataTypeSupport::delete_data(sample);
In C# and C++/CLI, you can use a default constructor to create objects; members such as strings are initialized to empty, non-null values.
// In the generated code (C++/CLI):
public ref struct MyData {
public:
    System::String^ myString;
};

// In your code, if you are using C#:
MyData sample = new MyData();
System.String str = sample.myString; // empty, non-null string

// In your code, if you are using C++/CLI:
MyData^ sample = gcnew MyData();
System::String^ str = sample->myString; // empty, non-null string
In Java, you can use a default constructor to create objects; the generated classes initialize their own fields (for example, strings are initialized to empty).
// In the generated code:
public class MyData {
    public String myString = "";
}

// In your code:
MyData sample = new MyData();
String str = sample.myString; // empty, non-null string
3.9.2 Objects of Dynamically Defined Types
If you are working with a data type that was discovered or defined at run time, you will use the reflective API provided by the DynamicData class to get and set the fields of your object.
Consider the following type definition:
struct MyData { long myInteger;
};
As with a statically defined type, you will create objects from a TypeSupport factory. How to create or otherwise obtain a TypeCode, and how to subsequently create from it a DynamicDataTypeSupport, is described in Defining New Types (Section 3.8.2).
For more information about the DynamicData and DynamicDataTypeSupport classes, consult the API Reference HTML documentation, which is available for all supported programming languages (select Modules, DDS API Reference, Topic Module, Dynamic Data).
In C:
DDS_DynamicDataTypeSupport* support = ...;
DDS_DynamicData* sample = DDS_DynamicDataTypeSupport_create_data(support);
DDS_Long theInteger = 0;
DDS_ReturnCode_t success = DDS_DynamicData_set_long(sample,
    "myInteger", DDS_DYNAMIC_DATA_MEMBER_ID_UNSPECIFIED, 5);
/* Error handling omitted. */
success = DDS_DynamicData_get_long(sample, &theInteger,
    "myInteger", DDS_DYNAMIC_DATA_MEMBER_ID_UNSPECIFIED);
/* Error handling omitted.
   "theInteger" now contains the value 5 if no error occurred. */
In C++:
DDSDynamicDataTypeSupport* support = ...;
DDS_DynamicData* sample = support->create_data();
DDS_ReturnCode_t success = sample->set_long("myInteger",
    DDS_DYNAMIC_DATA_MEMBER_ID_UNSPECIFIED, 5);
// Error handling omitted.
DDS_Long theInteger = 0;
success = sample->get_long(theInteger, "myInteger",
    DDS_DYNAMIC_DATA_MEMBER_ID_UNSPECIFIED);
// Error handling omitted.
// "theInteger" now contains the value 5 if no error occurred.
In C++/CLI:
using namespace DDS;
DynamicDataTypeSupport^ support = ...;
DynamicData^ sample = support->create_data();
sample->set_long("myInteger", DynamicData::MEMBER_ID_UNSPECIFIED, 5);
int theInteger = sample->get_long("myInteger",
    0 /*redundant w/ field name*/);
/* Exception handling omitted.
 * "theInteger" now contains the value 5 if no error occurred. */
In C#:
using DDS;
DynamicDataTypeSupport support = ...;
DynamicData sample = support.create_data();
sample.set_long("myInteger", DynamicData.MEMBER_ID_UNSPECIFIED, 5);
int theInteger = sample.get_long("myInteger",
    DynamicData.MEMBER_ID_UNSPECIFIED);
/* Exception handling omitted.
 * "theInteger" now contains the value 5 if no error occurred. */
In Java:
import com.rti.dds.dynamicdata.*;
DynamicDataTypeSupport support = ...;
DynamicData sample = (DynamicData) support.create_data();
sample.set_int("myInteger", DynamicData.MEMBER_ID_UNSPECIFIED, 5);
int theInteger = sample.get_int("myInteger",
    DynamicData.MEMBER_ID_UNSPECIFIED);
/* Exception handling omitted.
 * "theInteger" now contains the value 5 if no error occurred. */
Chapter 4 Entities
The main classes extend an abstract base class called an Entity. Every Entity has a set of associated events known as statuses and a set of associated Quality of Service Policies (QosPolicies). In addition, a Listener may be registered with the Entity to be called when status changes occur. Entities may also have attached Conditions, which provide a way to wait for status changes.
This chapter describes the common operations and general design patterns shared by all
Entities including DomainParticipants, Topics, Publishers, DataWriters, Subscribers, and
DataReaders. In subsequent chapters, the specific statuses, Listeners, Conditions, and QosPolicies for each class will be discussed in detail.
4.1 Common Operations for All Entities
All Entities (DomainParticipants, Topics, Publishers, DataWriters, Subscribers, and DataReaders) provide operations for:
❏Creating and Deleting Entities (Section 4.1.1)
❏Enabling Entities (Section 4.1.2)
❏Getting an Entity’s Instance Handle (Section 4.1.3)
❏Getting Status and Status Changes (Section 4.1.4)
❏Getting and Setting Listeners (Section 4.1.5)
❏Getting the StatusCondition (Section 4.1.6)
❏Getting and Setting QosPolicies (Section 4.1.7)
4.1.1 Creating and Deleting Entities
The factory design pattern is used in creating and deleting Entities. Instead of declaring and constructing or destructing Entities directly, a factory object is used to create an Entity. Almost all entity factories are objects that are also entities. The only exception is the factory for a
DomainParticipant. See Table 4.1.
Table 4.1 Entity Factories

Entity              Created by
DomainParticipant   DomainParticipantFactory (a static singleton object provided by Connext)
Topic               DomainParticipant
Publisher           DomainParticipant
Subscriber          DomainParticipant
DataWriter (a)      Publisher
DataReader (a)      Subscriber
a. DataWriters may be created by a DomainParticipant or a Publisher. Similarly, DataReaders may be created by a
DomainParticipant or a Subscriber.
All entities that are factories have:
❏Operations to create and delete child entities. For example:
DDSPublisher::create_datawriter, DDSDomainParticipant::delete_topic
❏Operations to get and set the default QoS values used when creating child entities. For example:
DDSSubscriber::get_default_datareader_qos, DDSDomainParticipantFactory::set_default_participant_qos
❏An ENTITYFACTORY QosPolicy (Section 6.4.2) to specify whether or not the newly created child entity should be automatically enabled upon creation.
An entity that is a factory cannot be deleted until all the child entities created by it have been deleted.
Each Entity obtained through create_<entity>() must eventually be deleted by calling delete_<entity>, or by calling delete_contained_entities().
4.1.2 Enabling Entities
The enable() operation changes an Entity from a non-operational to an operational state.
By default, all Entities are automatically created in the enabled state. This means that as soon as the Entity is created, it is ready to be used. In some cases, you may want to create the Entity in a ‘disabled’ state. For example, by default, as soon as you create a DataReader, the DataReader will start receiving new samples for its Topic if they are being sent. However, your application may still be initializing other components and may not be ready to process the data at that time. In that case, you can tell the Subscriber to create the DataReader in a disabled state. After all of the other parts of the application have been created and initialized, then the DataReader can be enabled to actually receive messages.
To create a particular entity in a disabled state, modify the EntityFactory QosPolicy of its corresponding factory entity before calling create_<entity>(). For example, to create a disabled DataReader, modify the Subscriber’s QoS as follows:
DDS_SubscriberQos subscriber_qos;
subscriber->get_qos(subscriber_qos);
subscriber_qos.entity_factory.autoenable_created_entities =
    DDS_BOOLEAN_FALSE;
subscriber->set_qos(subscriber_qos);
DDSDataReader* datareader = subscriber->create_datareader(
    topic, DDS_DATAREADER_QOS_DEFAULT, listener, DDS_STATUS_MASK_ALL);
When the application is ready to process received data, it can enable the DataReader:
datareader->enable();
4.1.2.1 Rules for Calling enable()
In the following, a ‘Factory’ refers to a DomainParticipant, Publisher, or Subscriber; a ‘child’ refers to an entity created by the factory:
❏If the factory is disabled, its children are always created disabled, regardless of the setting in the factory's EntityFactoryQoS.
❏If the factory is enabled, its children will be created either enabled or disabled, according to the setting in the factory's EntityFactory Qos.
❏Calling enable() on a child whose factory object is still disabled will fail and return
DDS_RETCODE_PRECONDITION_NOT_MET.
❏Calling enable() on a factory with EntityFactoryQoS set to DDS_BOOLEAN_TRUE will recursively enable all of the factory’s children. If the factory’s EntityFactoryQoS is set to DDS_BOOLEAN_FALSE, only the factory itself will be enabled.
❏Calling enable() on an entity that is already enabled returns DDS_RETCODE_OK and has no effect.
❏There is no complementary “disable” operation. You cannot disable an entity after it is enabled. Disabled entities must have been created in that state.
❏An entity’s Listener will only be invoked if the entity is enabled.
❏The existence of an entity is not propagated to other DomainParticipants until the entity is enabled (see Chapter 14: Discovery).
❏If a DataWriter/DataReader is to be created in an enabled state, then the associated Topic must already be enabled. The enabled state of the Topic does not matter if the Publisher/Subscriber has its EntityFactory QosPolicy set to create children in a disabled state.
❏When calling enable() for a DataWriter/DataReader, both the Publisher/Subscriber and the
Topic must be enabled, or the operation will fail and return
DDS_RETCODE_PRECONDITION_NOT_MET.
The following operations may be invoked on disabled Entities:
❏get_qos() and set_qos() Some QosPolicy values may not be changed after the Entity is enabled, so while the Entity is still disabled you can use get_qos() and set_qos() to read and adjust its QosPolicies.
Finally, there are extended QosPolicies that are not a part of the DDS specification but offered by Connext to control extended features for an Entity. Some of those extended QosPolicies cannot be changed after the Entity has been created.
Into which exact categories a QosPolicy falls is noted in the documentation for that policy.
❏get_status_changes() and get_*_status() The status of an Entity can be retrieved at any time (but the status of a disabled Entity never changes).
❏get_statuscondition() An Entity’s StatusCondition can be checked at any time (although the status of a disabled Entity never changes).
❏get_listener() and set_listener() An Entity’s Listener can be changed at any time.
❏create_*() and delete_*() A factory Entity can still be used to create or delete any child Entity that it can produce. Note: following the rules discussed previously, a disabled Entity will always create its children in a disabled state, no matter what the value of the EntityFactory QosPolicy is.
❏lookup_*() An Entity can always look up children it has previously created.
Most other operations are not allowed on disabled Entities. Executing one of those operations when an Entity is disabled will result in a return code of DDS_RETCODE_NOT_ENABLED. The documentation for a particular operation will explicitly state if it is not allowed to be used if the Entity is disabled.
Note: The builtin transports are implicitly registered when (a) the DomainParticipant is enabled, (b) the first DataWriter/DataReader is created, or (c) you look up a builtin DataReader, whichever happens first. Any changes to the builtin transport properties made after the builtin transports have been registered will have no effect on any DataWriters/DataReaders.
4.1.3 Getting an Entity’s Instance Handle
The Entity class provides an operation to retrieve an instance handle for the object. The operation is simply:
InstanceHandle_t get_instance_handle()
An instance handle is a global ID for the entity that can be used in methods that allow user applications to determine if the entity was locally created, if an entity is owned (created) by another entity, etc.
4.1.4 Getting Status and Status Changes
The get_status_changes() operation retrieves the set of events, also known in DDS terminology as communication statuses, in the Entity that have changed since the last time get_status_changes() was called. This method actually returns a value that must be bitwise AND’ed with an enumerated bit mask to test whether or not a specific status has changed. The operation can be used in a polling mechanism to see if any statuses related to the Entity have changed. If an entity is disabled, all communication statuses are in the “unchanged” state so the list returned by the get_status_changes() operation will be empty.
A set of statuses is defined for each class of Entities. For each status, there is a corresponding get_<status_name>_status() operation that retrieves the current value of the status. For example, a DataWriter has a DDS_OFFERED_DEADLINE_MISSED status; it also has a get_offered_deadline_missed_status() operation:
DDS_StatusMask statuses;
DDS_OfferedDeadlineMissedStatus deadline_stat;
statuses = datawriter->get_status_changes();
if (statuses & DDS_OFFERED_DEADLINE_MISSED_STATUS) {
    // a deadline was missed since the status was last checked
    datawriter->get_offered_deadline_missed_status(deadline_stat);
    printf("Deadlines missed: %d\n", deadline_stat.total_count);
}
See Section 4.3 for more information about statuses.
4.1.5 Getting and Setting Listeners
Each type of Entity has an associated Listener; see Listeners (Section 4.4). A Listener represents a set of functions that users may install to be called asynchronously when the Entity’s communication statuses change.
The get_listener() operation returns the current Listener attached to the Entity.
The set_listener() operation installs a Listener on an Entity. The Listener will only be invoked on the changes of statuses specified by the accompanying mask. Only one listener can be attached to each Entity. If a Listener was already attached, set_listener() will replace it with the new one.
The get_listener() and set_listener() operations are directly provided by the DomainParticipant,
Topic, Publisher, DataWriter, Subscriber, and DataReader classes so that listeners and masks used in the argument list are specific to each Entity.
Note: The set_listener() operation is not synchronized with the listener callbacks, so it is possible to set a new listener on a participant while the old listener is in a callback. Therefore you should be careful not to delete any listener that has been set on an enabled participant unless the application can guarantee that the old listener’s callbacks can no longer be invoked.
See Section 4.4 for more information about Listeners.
4.1.6 Getting the StatusCondition
Each type of Entity may have an attached StatusCondition, which can be accessed through the get_statuscondition() operation. You can attach the StatusCondition to a WaitSet, to cause your application to wait for specific status changes that affect the Entity.
See Section 4.6 for more information about StatusConditions and WaitSets.
4.1.7 Getting and Setting QosPolicies
Each type of Entity has an associated set of QosPolicies (see Section 4.2). QosPolicies allow you to configure and set properties for the Entity.
While most QosPolicies are defined by the DDS specification, some are offered by Connext as extensions to control parameters specific to the implementation.
There are two ways to specify a QoS policy:
❏Programmatically, as described in this section.
❏From XML resources (files or strings); see Chapter 17: Configuring QoS with XML.
The get_qos() operation retrieves the current values for the set of QosPolicies defined for the Entity.
QosPolicies can be set programmatically when an Entity is created, or modified with the Entity's set_qos() operation.
The set_qos() operation sets the QosPolicies of the entity. Note: not all QosPolicy changes will take effect instantaneously; there may be a delay since some QosPolicies set for one entity, for example, a DataReader, may actually affect the operation of a matched entity in another application, for example, a DataWriter.
The get_qos() and set_qos() operations are passed QoS structures that are specific to each derived entity class, since the set of QosPolicies that affects each class of entities is different.
Each QosPolicy has default values (listed in the API Reference HTML documentation). If you want to use custom values, there are three ways to change QosPolicy settings:
❏Before Entity creation (if custom values should be used for multiple Entities). See Section 4.1.7.1.
❏During Entity creation (if custom values are only needed for a particular Entity). See Section 4.1.7.2.
❏After Entity creation (if the values initially specified for a particular Entity are no longer appropriate). See Section 4.1.7.3.
Regardless of when or how you make QoS changes, there are some rules to follow:
❏Some QosPolicies interact with each other and thus must be set in a consistent manner. For instance, the maximum value of the HISTORY QosPolicy’s depth parameter is limited by values set in the RESOURCE_LIMITS QosPolicy. If the values within a QosPolicy structure are inconsistent, then set_qos() will return the error INCONSISTENT_POLICY, and the operation will have no effect.
❏Some policies can only be set when the Entity is created, or before the Entity is enabled. Others can be changed at any time. In general, all standard DDS QosPolicies can be changed before the Entity is enabled. A subset can be changed after the Entity is enabled.
4.1.7.1 Changing the QoS Defaults Used to Create Entities: set_default_*_qos()
Each parent factory has a set of default QoS settings that are used when the child entity is created. The DomainParticipantFactory has default QoS values for creating DomainParticipants. A DomainParticipant has a set of default QoS for each type of entity that can be created from the
DomainParticipant (Topic, Publisher, Subscriber, DataWriter, and DataReader). Likewise, a Publisher
has a set of default QoS values used when creating DataWriters, and a Subscriber has a set of default QoS values used when creating DataReaders.
An entity’s QoS settings are determined when it is created; after creation, they can only be modified through set_qos(), and only for those policies that are still mutable (see Section 4.1.7.3).
You can change these default values so that they are automatically applied when new child entities are created. For example, suppose you want all DataWriters for a particular Publisher to have their RELIABILITY QosPolicy set to RELIABLE. Instead of making this change for each DataWriter when it is created, you can change the default used when any DataWriter is created from the Publisher by using the Publisher’s set_default_datawriter_qos() operation.
DDS_DataWriterQos default_datawriter_qos;
// get the current default values
publisher->get_default_datawriter_qos(default_datawriter_qos);
// change to the desired default values
default_datawriter_qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;
// set the new default values
publisher->set_default_datawriter_qos(default_datawriter_qos);
// subsequently created DataWriters will use the new default values
datawriter = publisher->create_datawriter(topic, DDS_DATAWRITER_QOS_DEFAULT,
                                          NULL, DDS_STATUS_MASK_NONE);
Note: It is not safe to get or set the default QoS values for an entity while another thread may be simultaneously calling get_default_<entity>_qos(), set_default_<entity>_qos(), or create_<entity>() with DDS_<ENTITY>_QOS_DEFAULT as the qos parameter (for the same entity).
Another way to make QoS changes is by using XML resources (files, strings). For more information, see Chapter 17: Configuring QoS with XML.
4.1.7.2 Setting QoS During Entity Creation
If you only want to change a QosPolicy for a particular entity, you can pass in the desired QosPolicies for an entity in its creation routine.
To customize an entity's QoS before creating it:
1. (C API Only) Initialize the QoS structure with the appropriate INITIALIZER macro (see Section 4.2.2).
2. Call the relevant get_default_<entity>_qos() operation.
3. Modify the QoS values as desired.
4. Create the entity.
For example, to change the RELIABILITY QosPolicy for a DataWriter before creating it:
// Initialize the QoS object
DDS_DataWriterQos datawriter_qos;
// Get the default values
publisher->get_default_datawriter_qos(datawriter_qos);
// Modify the QoS values as desired
datawriter_qos.reliability.kind = DDS_BEST_EFFORT_RELIABILITY_QOS;
// Create the DataWriter with the new values
datawriter = publisher->create_datawriter(topic, datawriter_qos,
                                          NULL, DDS_STATUS_MASK_NONE);
Another way to set QoS during entity creation is by using a QoS profile. For more information, see Chapter 17: Configuring QoS with XML.
4.1.7.3 Changing the QoS for an Existing Entity
Some policies can also be changed after the entity has been created. To change such a policy after the entity has been created, use the entity’s set_qos() operation.
For example, suppose you want to tweak the DEADLINE QoS for an existing DataWriter:
DDS_DataWriterQos datawriter_qos;
// get the current values
datawriter->get_qos(datawriter_qos);
// make the desired changes
datawriter_qos.deadline.period.sec = 3;
datawriter_qos.deadline.period.nanosec = 0;
// set the new values
datawriter->set_qos(datawriter_qos);
Another way to make QoS changes is by using a QoS profile. For more information, see Chapter 17: Configuring QoS with XML.
Note: The code examples presented in this section do not test the return codes of the set_qos() and set_default_*_qos() calls. If the values used in the QosPolicy structures are inconsistent, these functions will fail and return INCONSISTENT_POLICY. In addition, set_qos() may return IMMUTABLE_POLICY if you try to change a QosPolicy on an Entity after that policy has become immutable. User code should test for and address those error conditions.
4.1.7.4 Default Values
Connext provides special constants for each Entity type that can be used in set_qos() and set_default_*_qos() to reset the QosPolicy values to the original DDS default values:
❏DDS_PARTICIPANT_QOS_DEFAULT
❏DDS_PUBLISHER_QOS_DEFAULT
❏DDS_SUBSCRIBER_QOS_DEFAULT
❏DDS_DATAWRITER_QOS_DEFAULT
❏DDS_DATAREADER_QOS_DEFAULT
❏DDS_TOPIC_QOS_DEFAULT
For example, to set a DataWriter’s QoS back to the default values:
datawriter->set_qos(DDS_DATAWRITER_QOS_DEFAULT);
Or to reset the default QosPolicies used by a Publisher to create DataWriters back to their original values:
publisher->set_default_datawriter_qos(DDS_DATAWRITER_QOS_DEFAULT);
Note: These defaults cannot be used to initialize a QoS structure for an entity. For example, the following is NOT allowed:
Not allowed:
DDS_DataWriterQos datawriter_qos = DDS_DATAWRITER_QOS_DEFAULT;
// modify QoS values...
create_datawriter(datawriter_qos);
4.2 QosPolicies
Connext’s behavior is controlled by the Quality of Service (QoS) policies of the data communication entities (DomainParticipant, Topic, Publisher, Subscriber, DataWriter, and
DataReader) used in your applications. This section summarizes each of the QosPolicies that you can set for the various entities.
The QosPolicy class is the abstract base class for all the QosPolicies. It provides the basic mechanism for an application to specify quality of service parameters. Table 4.2 lists each QosPolicy and summarizes its purpose.
The detailed description of a QosPolicy that applies to multiple Entities is provided in the first chapter that discusses an Entity whose behavior the QoS affects. Otherwise, the discussion of a QosPolicy can be found in the chapter for the particular Entity to which the policy applies. As you will see in the detailed descriptions, all QosPolicies have one or more parameters that are used to configure the policy; the hows and whys of tuning those parameters are also discussed in those sections.
As first discussed in Controlling Behavior with Quality of Service (QoS) Policies (Section 2.5.1), QosPolicies may interact with each other, and certain values of QosPolicies can be incompatible with the values set for other policies.
The set_qos() operation will fail if you attempt to specify a set of values that would result in an inconsistent set of policies. To indicate a failure, set_qos() will return INCONSISTENT_POLICY. Section 4.2.1 provides further information on QoS compatibility within an Entity as well as across matching Entities, as does the discussion/reference section for each QosPolicy listed in Table 4.2.
The values of some QosPolicies cannot be changed after the Entity is created or after the Entity is enabled. Others may be changed at any time. The detailed section on each QosPolicy states when each policy can be changed. If you attempt to change a QosPolicy after it becomes immutable (because the associated Entity has been created or enabled, depending on the policy), set_qos() will fail with a return code of IMMUTABLE_POLICY.
Table 4.2 QosPolicies

AsynchronousPublisher: Configures the mechanism that sends user data in an external middleware thread.

Availability: This QosPolicy is used in the context of two features. For a Collaborative DataWriter, it specifies the group of DataWriters expected to collaboratively provide data and the timeouts that control when to allow data to be made available that may skip samples. For a Durable Subscription, it configures a set of Durable Subscriptions on a DataWriter. See Section 6.5.1.

Batch: Specifies and configures the mechanism that allows Connext to collect multiple user data samples to be sent in a single network packet, to take advantage of the efficiency of sending larger packets and thus increase effective throughput. See Section 6.5.2.

Database: Various settings and resource limits used by Connext to control its internal database.

DataReaderProtocol: Configures DataReader-specific aspects of the Connext wire protocol.

DataReaderResourceLimits: Various settings that configure how DataReaders allocate and use physical memory for internal resources. See Section 7.6.2.

DataWriterProtocol: Configures DataWriter-specific aspects of the Connext wire protocol.

DataWriterResourceLimits: Controls how many threads can concurrently block on a write() call of this DataWriter. Also controls the number of batches managed by the DataWriter and the instance-replacement kind used by the DataWriter. See Section 6.5.4.

Deadline: For a DataReader, specifies the maximum expected elapsed time between arriving data samples. For a DataWriter, specifies a commitment to publish samples with no greater elapsed time between them. See Section 6.5.5.

DestinationOrder: Controls how Connext deals with data sent by multiple DataWriters for the same topic. Can be set to "by reception timestamp" or to "by source timestamp."

Discovery: Configures the mechanism used by Connext to automatically discover and connect with new remote applications. See Section 8.5.2.

DiscoveryConfig: Controls the amount of delay in discovering entities in the system and the amount of discovery traffic in the network. See Section 8.5.3.

DomainParticipantResourceLimits: Various settings that configure how DomainParticipants allocate and use physical memory for internal resources, including the maximum sizes of various properties.

Durability: Specifies whether or not Connext will store and deliver data that were previously published to new DataReaders. See Section 6.5.7.

DurabilityService: Various settings to configure the external Persistence Service used by Connext for DataWriters with a Durability QoS setting of Persistent Durability. See Section 6.5.8.

EntityFactory: Controls whether or not child entities are created in the enabled state. See Section 6.4.2.

EntityName: Assigns a name to a DomainParticipant. See Section 8.5.5.

Event: Configures the DomainParticipant’s internal thread that handles timed events.

ExclusiveArea: Configures Connext’s deadlock-prevention capabilities.

GroupData: Along with the TOPIC_DATA QosPolicy (Section 5.2.1) and USER_DATA QosPolicy (Section 6.5.25), this QosPolicy is used to attach a buffer of bytes to Connext’s discovery meta-data.

History: Specifies how much data must be stored by Connext for the DataWriter or DataReader. This QosPolicy affects the RELIABILITY QosPolicy (Section 6.5.19) as well as the DURABILITY QosPolicy.

LatencyBudget: Suggestion to Connext on how much time is allowed to deliver data. See Section 6.5.11.

Lifespan: Specifies how long Connext should consider data sent by a user application to be valid. See Section 6.5.12.

Liveliness: Specifies and configures the mechanism that allows DataReaders to detect when DataWriters become disconnected or "dead." See Section 6.5.13.

Logging: Configures the properties associated with Connext logging. See Section 8.4.1.

MultiChannel: Configures a DataWriter’s ability to send data on different multicast groups (addresses) based on the value of the data. See Section 6.5.14.

Ownership: Along with OwnershipStrength, specifies whether DataReaders for a topic can receive data from multiple DataWriters at the same time. See Section 6.5.15.

OwnershipStrength: Used to arbitrate among multiple DataWriters of the same instance of a Topic when the Ownership QosPolicy is EXCLUSIVE. See Section 6.5.16.

Partition: Adds string identifiers that are used for matching DataReaders and DataWriters for the same Topic. See Section 6.4.5.

Presentation: Controls how Connext presents data received by an application to the DataReaders of the data. See Section 6.4.6.

Profile: Configures the way that XML documents containing QoS profiles are loaded by RTI. See Section 8.4.2.

Property: Stores name/value (string) pairs that can be used to configure certain parameters of Connext that are not exposed through formal QoS policies. It can also be used to store and propagate application-specific name/value pairs that can be retrieved by user code during discovery. See Section 6.5.17.

PublishMode: Specifies how Connext sends application data on the network. By default, data is sent in the user thread that calls the DataWriter’s write() operation. However, this QosPolicy can be used to tell Connext to use its own thread to send the data. See Section 6.5.18.

ReaderDataLifeCycle: Controls how a DataReader manages the lifecycle of the data that it has received.

ReceiverPool: Configures threads used by Connext to receive and process data from transports (for example, UDP sockets). See Section 8.5.6.

Reliability: Specifies whether or not Connext will deliver data reliably. See Section 6.5.19.

ResourceLimits: Controls the amount of physical memory allocated for entities, if dynamic allocations are allowed, and how they occur. Also controls memory usage among different instance values for keyed topics. See Section 6.5.20.

SystemResourceLimits: Configures process-wide resources; in particular, can change the maximum number of DomainParticipants that can be created within a single process (address space). See Section 8.4.3.

TimeBasedFilter: Set by a DataReader to limit the number of new data values received over a period of time. See Section 7.6.4.

TopicData: Along with the Group Data QosPolicy and User Data QosPolicy, used to attach a buffer of bytes to Connext’s discovery meta-data.

TransportBuiltin: Specifies which builtin transport plugins are used.

TransportMulticast: Specifies the multicast address on which a DataReader wants to receive its data. Can specify a port number as well as a subset of the available transports with which to receive the multicast data. See Section 7.6.5.

TransportMulticastMapping: Specifies the automatic mapping between a list of topic expressions and multicast addresses that can be used by a DataReader to receive data for a specific topic.

TransportPriority: Set by a DataWriter to tell Connext that the data being sent is a different "priority" than other data. See Section 6.5.21.

TransportSelection: Allows you to select which physical transports a DataWriter or DataReader may use to send or receive its data. See Section 6.5.22.

TransportUnicast: Specifies a subset of transports and a port number that can be used by an Entity to receive data. See Section 6.5.23.

TypeConsistencyEnforcement: Defines rules that determine whether the type used to publish a given data stream is consistent with that used to subscribe to it. See Section 7.6.6.

TypeSupport: Used to attach application-specific values to a DataWriter or DataReader, which are passed to the serialization or deserialization routine of the associated data type.

UserData: Along with the Topic Data QosPolicy and Group Data QosPolicy, used to attach a buffer of bytes to Connext’s discovery meta-data.

WireProtocol: Specifies IDs used by the RTPS wire protocol to create globally unique identifiers.

WriterDataLifeCycle: Controls how a DataWriter handles the lifecycle of the instances (keys) that the DataWriter is registered to manage. See Section 6.5.26.
4.2.1 QoS Requested vs. Offered
Some QosPolicies that apply to entities on the sending and receiving sides must have their values set in a compatible manner. This is known as the policy’s ‘requested vs. offered’ (RxO) property. Entities on the publishing side ‘offer’ to provide a certain behavior. Entities on the subscribing side ‘request’ certain behavior. For Connext to connect the sending entity to the receiving entity, the offered behavior must satisfy the requested behavior.
For some QosPolicies, the allowed values may be graduated in a way that the offered value will satisfy the requested value if the offered value is either greater than or less than the requested value. For example, if a DataWriter’s DEADLINE QosPolicy specifies a duration less than or equal to a DataReader’s DEADLINE QosPolicy, then the DataWriter is promising to publish data at least as fast or faster than the DataReader requires new data to be received. This is a compatible situation (see Section 6.5.5).
Other QosPolicies require the values on the sending side and the subscribing side to be exactly equal for compatibility to be met. For example, if a DataWriter’s OWNERSHIP QosPolicy is set to SHARED, and the matching DataReader’s value is set to EXCLUSIVE, then this is an incompatible situation since the DataReader and DataWriter have different expectations of what will happen if more than one DataWriter publishes an instance of the Topic (see OWNERSHIP QosPolicy (Section 6.5.15)).
Finally there are QosPolicies that do not require compatibility between the sending entity and the receiving entity, or that only apply to one side or the other. Whether or not related entities on the publishing and subscribing sides must use compatible settings for a QosPolicy is indicated in the policy’s RxO property, which is provided in the detailed section on each QosPolicy.
RxO = YES: The policy is set at both the publishing and subscribing ends, and the values must be set in a compatible manner. What it means to be compatible is defined by the QosPolicy.
RxO = NO: The policy is set only on one end, or is set at both the publishing and subscribing ends but the two settings are independent. The requested vs. offered semantics are not used for these QosPolicies.
For those QosPolicies that follow the RxO semantics, Connext will compare the values of those policies for compatibility. If they are compatible, Connext will connect the sending entity to the receiving entity, allowing data to be sent between them. If they are incompatible, Connext will not interconnect the entities, preventing data from being sent between them.
In addition, Connext will record this event by changing the associated communication status in both the sending and receiving applications, see Types of Communication Status (Section 4.3.1). Also, if you have installed Listeners on the associated Entities, then Connext will invoke the associated callback functions to notify user code that an incompatible QoS combination has been found, see Types of Listeners (Section 4.4.1).
For Publishers and DataWriters, the status corresponding to this situation is
OFFERED_INCOMPATIBLE_QOS_STATUS. For Subscribers and DataReaders, the corresponding status is REQUESTED_INCOMPATIBLE_QOS_STATUS. The question of why a DataReader is not receiving data sent from a matching DataWriter can often be answered if you have instrumented the application with Listeners for the statuses noted previously.
4.2.2 Special QosPolicy Handling Considerations for C
Many QosPolicy structures contain variable-length sequences to store their parameters. In the C language, it is not safe to use an Entity’s QosPolicy structure declared in user code unless it has been initialized first. In addition, user code should always finalize an Entity’s QosPolicy structure to release any memory allocated for the sequences once the structure is no longer needed.
Thus, for a general Entity’s QosPolicy, Connext will provide:
❏DDS_<Entity>Qos_INITIALIZER This is a macro that should be used when a DDS_<Entity>Qos structure is declared in a C application.
struct DDS_<Entity>Qos qos = DDS_<Entity>Qos_INITIALIZER;
❏DDS_<Entity>Qos_initialize() This is a function that can be used to initialize a DDS_<Entity>Qos structure instead of the macro above.
struct DDS_<Entity>Qos qos;
DDS_<Entity>Qos_initialize(&qos);
❏DDS_<Entity>Qos_finalize() This is a function that should be used to finalize a DDS_<Entity>Qos structure when the structure is no longer needed. It will free any memory allocated for sequences contained in the structure.
struct DDS_<Entity>Qos qos = DDS_<Entity>Qos_INITIALIZER;
...
<use qos>
...
// now done with qos
DDS_<Entity>Qos_finalize(&qos);
❏DDS_<Entity>Qos_copy() This is a function that can be used to copy one DDS_<Entity>Qos structure to another. It will copy the sequences contained in the source structure and allocate memory for sequence elements if needed. In the code below, both dstQos and srcQos must have been initialized at some point earlier in the code.
DDS_<Entity>Qos_copy(&dstQos, &srcQos);
4.3 Statuses
This section describes the different statuses that exist for an entity. A status represents a state or an event regarding the entity. For instance, maybe Connext found a matching DataReader for a DataWriter, or new data has arrived for a DataReader.
Your application can retrieve an Entity’s status by:
❏explicitly checking for any status changes with get_status_changes().
❏explicitly checking a specific status with get_<statusname>_status().
❏using a Listener, which provides asynchronous notification when a status changes.
❏using StatusConditions and WaitSets, which provide a way to wait for status changes.
If you want your application to be notified of status changes asynchronously: create and install a Listener for the Entity. Then internal Connext threads will call the listener methods when the status changes. See Listeners (Section 4.4).
If you want your application to wait for status changes: set up StatusConditions to indicate the statuses of interest, attach the StatusConditions to a WaitSet, and then call the WaitSet’s wait() operation. The call to wait() will block until statuses in the attached Conditions changes (or until a timeout period expires). See Conditions and WaitSets (Section 4.6).
This section includes the following:
❏Types of Communication Status (Section 4.3.1)
❏Special
4.3.1 Types of Communication Status
Each Entity is associated with a set of Status objects representing the “communication status” of that Entity. The list of statuses actively monitored by Connext is provided in Table 4.3 on page 4-15. A status structure contains values that give you more information about the status; for example, how many times the event has occurred since the last time the user checked the status, or how many times the event has occurred in total.
Changes to status values cause activation of corresponding StatusCondition objects and trigger invocation of the corresponding Listener functions to asynchronously inform the application that the status has changed. For example, a change in a Topic’s INCONSISTENT_TOPIC_STATUS may trigger the TopicListener’s on_inconsistent_topic() callback routine (if such a Listener is installed).
Statuses can be grouped into two categories:
❏Plain communication status: In addition to a flag that indicates whether or not a status has changed, a plain communication status also contains state and thus has a corresponding structure to hold its current value.
❏Read communication status: A read communication status is more like an event and has no state other than whether or not it has occurred. Only two statuses listed in Table 4.3 are read communications statuses: DATA_AVAILABLE and DATA_ON_READERS.
As mentioned in Section 4.1.4, all entities have a get_status_changes() operation that can be used to explicitly poll for changes in any status related to the entity. For plain statuses, each entity has operations to get the current value of the status; for example, the Topic class has a get_inconsistent_topic_status() operation. For read statuses, your application should use the take() operation on the DataReader to retrieve the newly arrived data that is indicated by DATA_AVAILABLE and DATA_ON_READERS.
Note that the two read communication statuses do not change independently. If data arrives for a DataReader, then its DATA_AVAILABLE status changes. At the same time, the DATA_ON_READERS status changes for the DataReader’s Subscriber.
Both types of status have a StatusChangedFlag. This flag indicates whether that particular communication status has changed since the last time the status was read by the application. The way the StatusChangedFlag is maintained is slightly different for the plain communication status and the read communication status, as described in the following sections:
❏Changes in Plain Communication Status (Section 4.3.1.1)
❏Changes in Read Communication Status (Section 4.3.1.2)
4.3.1.1 Changes in Plain Communication Status
As seen in Figure 4.1, the StatusChangedFlag for a plain communication status becomes TRUE whenever the value of the status changes, and it is reset to FALSE each time the application accesses the current value through the corresponding get_<plain communication status>() operation.
The communication status is also reset to FALSE whenever the associated listener operation is called, as the listener implicitly accesses the status which is passed as a parameter to the operation.
The fact that the status is reset prior to calling the listener means that if the application calls the get_<plain communication status>() operation from inside the listener, it will see the status already reset.
Figure 4.1 Status Changes for Plain Communication Status
[State diagram: StatusChangedFlag = FALSE transitions to StatusChangedFlag = TRUE when the status changes; it transitions back to FALSE when the user calls get_*_status(), or after the listener is invoked.]
Table 4.3 Communication Statuses
Related |
Status (DDS_*_STATUS) |
Description |
Reference |
|
Entity |
||||
|
|
|
||
|
|
|
|
|
|
|
|
|
|
|
|
Another Topic exists with the same name but |
|
|
Topic |
INCONSISTENT_TOPIC |
different |
||
|
|
type. |
|
|
|
|
|
|
|
|
|
This status indicates that a DataWriter has |
|
|
|
|
received an |
|
|
|
APPLICATION_ |
for a sample. The listener provides the identities |
||
|
ACKNOWLEDGMENT |
of the sample and acknowledging DataReader, as |
|
|
|
|
well as |
|
|
|
|
DataReader by the acknowledgment message. |
|
|
|
|
|
|
|
|
DATA_WRITER_CACHE |
The status of the DataWriter’s cache. |
||
|
This status does not have a Listener. |
|||
|
|
|
||
|
|
|
|
|
|
|
The status of a DataWriter’s internal protocol |
|
|
|
|
related metrics (such as the number of samples |
|
|
|
DATA_WRITER_PROTOCOL |
pushed, pulled, filtered) and the status of wire |
||
|
|
protocol traffic. |
|
|
|
|
This status does not have a Listener. |
|
|
|
|
|
|
|
|
|
The liveliness that the DataWriter has committed |
|
|
|
|
to (through its Liveliness QosPolicy) was not |
|
|
|
LIVELINESS_LOST |
respected (assert_liveliness() or write() not called |
||
Data- |
|
in time), thus DataReader entities may consider |
|
|
|
the DataWriter as no longer active. |
|
||
Writer |
|
|
||
|
|
|
||
OFFERED_DEADLINE_ |
The deadline that the DataWriter has committed |
|
||
|
|
|||
|
through its Deadline QosPolicy was not |
|||
|
MISSED |
|||
|
respected for a specific instance of the Topic. |
|
||
|
|
|
||
|
|
|
|
|
|
OFFERED_INCOMPATIBLE_ |
An offered QosPolicy value was incompatible |
|
|
|
with what was requested by a DataReader of the |
|||
|
QOS |
|||
|
same Topic. |
|
||
|
|
|
||
|
|
|
|
|
|
|
The DataWriter found a DataReader that matches |
|
|
|
PUBLICATION_MATCHED |
the Topic, has compatible QoSs and a common |
||
|
partition, or a previously matched DataReader has |
|||
|
|
been deleted. |
|
|
|
|
|
|
|
|
RELIABLE_WRITER_ |
The number of unacknowledged samples in a |
|
|
|
reliable DataWriter's cache has reached one of the |
|||
|
CACHE_CHANGED |
|||
|
predefined trigger points. |
|
||
|
|
|
||
|
|
|
|
|
|
|
One or more reliable DataReaders has either been |
|
|
|
RELIABLE_READER_ |
discovered, deleted, or changed between active |
||
|
ACTIVITY_CHANGED |
and inactive state as specified by the |
|
|
|
|
LivelinessQosPolicy of the DataReader. |
|
|
|
|
|
|
Table 4.3 Communication Statuses (continued)

| Related Entity | Status (DDS_*_STATUS) | Description |
|---|---|---|
| Subscriber | DATA_ON_READERS | New data is available for any of the readers that were created from the Subscriber. |
| DataReader | DATA_AVAILABLE | New data (one or more samples) are available for the specific DataReader. |
| DataReader | DATA_READER_CACHE | The status of the reader's cache. This status does not have a Listener. |
| DataReader | DATA_READER_PROTOCOL | The status of a DataReader’s internal protocol related metrics (such as the number of samples received, filtered, rejected) and the status of wire protocol traffic. This status does not have a Listener. |
| DataReader | LIVELINESS_CHANGED | The liveliness of one or more DataWriters that were writing instances read by the DataReader has either been discovered, deleted, or changed between active and inactive state as specified by the LivelinessQosPolicy of the DataWriter. |
| DataReader | REQUESTED_DEADLINE_MISSED | New data was not received for an instance of the Topic within the time period set by the DataReader’s Deadline QosPolicy. |
| DataReader | REQUESTED_INCOMPATIBLE_QOS | A requested QosPolicy value was incompatible with what was offered by a DataWriter of the same Topic. |
| DataReader | SAMPLE_LOST | A sample sent by Connext has been lost (never received). |
| DataReader | SAMPLE_REJECTED | A received sample has been rejected due to a resource limit (buffers filled). |
| DataReader | SUBSCRIPTION_MATCHED | The DataReader has found a DataWriter that matches the Topic, has compatible QoSs and a common partition, or an existing matched DataWriter has been deleted. |
An exception to this rule is when the associated listener is the 'nil' listener. The 'nil' listener is treated as a NO-OP: invoking it does not reset the communication status.
For example, the value of the StatusChangedFlag associated with the REQUESTED_DEADLINE_MISSED status will become TRUE each time a deadline is missed (which increments the RequestedDeadlineMissed status’ total_count field). The value changes to FALSE when the application accesses the status via the corresponding get_requested_deadline_missed_status() operation on the proper Entity.
4.3.1.2 Changes in Read Communication Status
As seen in Figure 4.2, the StatusChangedFlag for a read communication status becomes TRUE when any of the following occurs:
❏The arrival of new data.
❏A change in the InstanceStateKind of a contained instance. This can be caused by either:
• Notification that an instance has been disposed by:
  • the DataWriter that owns it, if OWNERSHIP = EXCLUSIVE
  • or by any DataWriter, if OWNERSHIP = SHARED
• The loss of liveliness of the DataWriter of an instance for which there is no other DataWriter.
• The arrival of the notification that an instance has been unregistered by the only DataWriter that is known to be writing the instance.
Depending on the kind of StatusChangedFlag, the flag transitions to FALSE again as follows:
❏The DATA_AVAILABLE StatusChangedFlag becomes FALSE when either on_data_available() is called or the read/take operation (or their variants) is called on the associated DataReader.
❏The DATA_ON_READERS StatusChangedFlag becomes FALSE when any of the following occurs:
•on_data_on_readers() is called.
•on_data_available() is called on any DataReader belonging to the Subscriber.
•One of the read/take operations (or their variants) is called on any DataReader belonging to the Subscriber.
Figure 4.2 Status Changes for Read Communication Status
4.3.2 Special Considerations for Status Structures in C
Some status structures contain variable-length sequences. In the C language, it is not safe to use a status structure that has internal sequences declared in user code unless it has been initialized first. In addition, user code should always finalize a status structure to release any memory allocated for the sequences.
Thus, for a general status structure, Connext will provide:
❏DDS_<Status>Status_INITIALIZER This is a macro that should be used when a DDS_<Status>Status structure is declared in a C application.
struct DDS_<Status>Status status = DDS_<Status>Status_INITIALIZER;
❏DDS_<Status>Status_initialize() This is a function that can be used to initialize a DDS_<Status>Status structure instead of the macro above.
struct DDS_<Status>Status status;
DDS_<Status>Status_initialize(&status);
❏DDS_<Status>Status_finalize() This is a function that should be used to finalize a DDS_<Status>Status structure when the structure is no longer needed. It will free any memory allocated for sequences contained in the structure.
struct DDS_<Status>Status status = DDS_<Status>Status_INITIALIZER;
...
<use status>
...
// now done with status
DDS_<Status>Status_finalize(&status);
❏DDS_<Status>Status_copy() This is a function that can be used to copy one DDS_<Status>Status structure to another. It will copy the sequences contained in the source structure and allocate memory for sequence elements if needed. In the code below, both dstStatus and srcStatus must have been initialized at some point earlier in the code.
DDS_<Status>Status_copy(&dstStatus, &srcStatus);
Note that many status structures do not have sequences internally. For those structures, you do not need to use the macro and methods provided above. However, they have still been created for your convenience.
4.4 Listeners
This section describes Listeners and how to use them:
❏Types of Listeners (Section 4.4.1)
❏Creating and Deleting Listeners (Section 4.4.2)
❏Special Considerations for Listeners in C (Section 4.4.3)
❏Hierarchical Processing of Listeners (Section 4.4.4)
❏Operations Allowed within Listener Callbacks (Section 4.4.5)
Listeners are triggered by changes in an entity’s status. For instance, maybe Connext found a matching DataReader for a DataWriter, or new data has arrived for a DataReader.
4.4.1 Types of Listeners
The Listener class is the abstract base class for all listeners. Each entity class (DomainParticipant, Topic, Publisher, DataWriter, Subscriber, and DataReader) has its own derived Listener class that adds methods for handling the statuses relevant to that type of Entity.
Figure 4.3 Listener Class Hierarchy
[Class diagram: DDSListener is the abstract base class. DDSTopicListener, DDSDataWriterListener, and DDSDataReaderListener derive from it; DDSPublisherListener derives from DDSDataWriterListener; DDSSubscriberListener derives from DDSDataReaderListener; DDSDomainParticipantListener derives from DDSTopicListener, DDSPublisherListener, and DDSSubscriberListener.]
You can choose which changes in status will trigger a callback by installing a listener with a bit- mask. Bits in the mask correspond to different statuses. The bits that are true indicate that the listener will be called back when there are changes in the corresponding status.
You can specify a listener and set its status bit-mask in two ways:
During Entity creation:
DDS_StatusMask mask = DDS_REQUESTED_DEADLINE_MISSED_STATUS |
DDS_DATA_AVAILABLE_STATUS;
datareader = subscriber->create_datareader(topic,
    DDS_DATAREADER_QOS_DEFAULT, listener, mask);
or afterwards:
DDS_StatusMask mask = DDS_REQUESTED_DEADLINE_MISSED_STATUS |
DDS_DATA_AVAILABLE_STATUS;
datareader->set_listener(listener, mask);
As you can see in the above examples, there are two components involved when setting up listeners: the listener itself and the mask. Both of these can be null. Table 4.4 describes what happens when a status change occurs. See Hierarchical Processing of Listeners (Section 4.4.4) for more information.
Table 4.4 Effect of Different Combinations of Listeners and Status Bit Masks
|  | No Bits Set in Mask | Some/All Bits Set in Mask |
|---|---|---|
| Listener is Specified | Connext finds the next most relevant listener for the changed status. | For the statuses that are enabled in the mask, the most relevant listener will be called. The 'statusChangedFlag' for the relevant status is reset. |
| Listener is NULL | Connext behaves as if the listener is not installed and finds the next most relevant listener for that status. | Connext behaves as if the listener callback is installed, but the callback is doing nothing. This is called a ‘nil’ listener. |
4.4.2 Creating and Deleting Listeners
There is no factory for creating or deleting a Listener; use the natural means in each language binding (for example, “new” or “delete” in C++ or Java). For example:
class HelloWorldListener : public DDSDataReaderListener {
public:
    virtual void on_data_available(DDSDataReader* reader);
};

void HelloWorldListener::on_data_available(DDSDataReader* reader)
{
    printf("received data\n");
}

// Create a Listener
HelloWorldListener *reader_listener = NULL;
reader_listener = new HelloWorldListener();

// Delete a Listener
delete reader_listener;
A listener cannot be deleted until the entity it is attached to has been deleted. For example, you must delete the DataReader before deleting the DataReader’s listener.
Note: Due to a thread-safety issue, avoid destroying a DomainParticipantListener while the DomainParticipant to which it was attached is still enabled, even if the listener has already been removed from the DomainParticipant.
4.4.3 Special Considerations for Listeners in C
In C, a Listener is a structure with function pointers to the user callback routines. Often, you may only be interested in a subset of the statuses that can be monitored with the Listener. In those cases, you do not need to set all of the function pointers in the listener structure to valid functions. In that situation, we recommend that the unused callback pointers be set to NULL.
To help, in the C language, we provide a macro that can be used to initialize a Listener structure so that all of its callback pointers are set to NULL. For example:
DDS_<Entity>Listener listener = DDS_<Entity>Listener_INITIALIZER;
// now only need to set the listener callback pointers for statuses
// to be monitored
There is no need to do this in languages other than C.
4.4.4 Hierarchical Processing of Listeners
As seen in Figure 4.3, the Listener classes for some Entities derive from the Listener classes of other, related Entities. This means that a Listener installed on a parent Entity can also handle the statuses of its child Entities.
You can install Listeners at all levels of the object hierarchy. At the top is the
DomainParticipantListener; only one can be installed in a DomainParticipant. Then every Subscriber and Publisher can have their own Listener. Finally, each Topic, DataReader and DataWriter can have their own listeners. All are optional.
Suppose, however, that an Entity does not install a Listener, or installs a Listener that does not have a particular communication status selected in the bitmask. In this case, if/when that particular status changes for that Entity, the corresponding Listener for that Entity’s parent is called. Status changes are “propagated” from child Entity to parent Entity until a Listener is found that is registered for that status. If no such Listener is found, Connext will give up and drop the status-change event.
For example, suppose that Connext finds a matching DataWriter for a local DataReader. This event will change the SUBSCRIPTION_MATCHED status. So the local DataReader object is checked to see if the application has installed a listener that handles the SUBSCRIPTION_MATCHED status. If not, the Subscriber that created the DataReader is checked to see if it has a listener installed that handles the same event. If not, the DomainParticipant is checked. The DomainParticipantListener methods are called only if none of the descendent entities of the DomainParticipant have listeners that handle the particular status that has changed. Again, all listeners are optional. Your application does not have to handle any communication statuses.
Table 4.5 lists the callback functions that are available for each Entity’s status listener.
Table 4.5 Listener Callback Functions
| Entity Listener for: | Callback Functions |
|---|---|
| Topics | on_inconsistent_topic() |
| Publishers and DataWriters | on_liveliness_lost(), on_offered_deadline_missed(), on_offered_incompatible_qos(), on_publication_matched(), on_reliable_reader_activity_changed(), on_reliable_writer_cache_changed() |
| Subscribers | on_data_on_readers() |
| Subscribers and DataReaders | on_data_available(), on_liveliness_changed(), on_requested_deadline_missed(), on_requested_incompatible_qos(), on_sample_lost(), on_sample_rejected(), on_subscription_matched() |
| DomainParticipants | All of the callback functions listed above. |
4.4.4.1 Processing Read Communication Statuses
The DATA_ON_READERS and DATA_AVAILABLE read communication statuses are handled slightly differently, since both statuses change simultaneously when new data arrives for a DataReader. However, at most one Listener will be called to handle the event.
If there is a Listener installed to handle the DATA_ON_READERS status in the DataReader’s
Subscriber or in the DomainParticipant, then that Listener’s on_data_on_readers() function will be called back. The DataReaderListener’s on_data_available() function is called only if the DATA_ON_READERS status is not handled by any relevant listener.
This can be useful if you have generic processing to do whenever new data arrives for any DataReader. You can execute the generic code in the on_data_on_readers() method, and then dispatch the processing of the actual data to the specific DataReaderListener’s on_data_available() function by calling the notify_datareaders() method on the Subscriber.
For example:
void on_data_on_readers (DDSSubscriber *subscriber)
{
    // Do some general processing that needs to be done
    // whenever new data arrives, but is independent of
    // any particular DataReader
    < generic processing code here >

    // Now dispatch the actual processing of the data
    // to the specific DataReader for which the data
    // was received
    subscriber->notify_datareaders();
}
4.4.5 Operations Allowed within Listener Callbacks
Due to the potential for deadlock, some Connext APIs should not be invoked within the functions of listener callbacks. Exactly which Connext APIs are restricted depends on the Entity upon which the Listener is installed, as well as the configuration of ‘Exclusive Areas,’ as discussed in Section 4.5.
Please read and understand Exclusive Areas (EAs) (Section 4.5) and Restricted Operations in Listener Callbacks (Section 4.5.1) to ensure that the calls made from your Listeners are allowed and will not cause potential deadlock situations.
4.5 Exclusive Areas (EAs)
Listener callbacks are invoked by internal Connext threads. To prevent undesirable, multi-threaded interaction, the internal threads may take and hold semaphores (mutexes) used for mutual exclusion. In your listener callbacks, you may want to invoke functions provided by the Connext API. Internally, those Connext functions also may take mutexes to prevent errors caused by simultaneous access from multiple threads.
Once there are multiple mutexes to protect different critical regions, the possibility for deadlock exists. Consider Figure 4.4’s scenario, in which there are two threads and two mutexes.
Figure 4.4 Multiple Mutexes Leading to a Deadlock Condition
| Thread1 | Thread2 |
|---|---|
| take(MutexA) | take(MutexB) |
| take(MutexB) (blocks) | take(MutexA) (blocks) |

Deadlock!
Thread1 takes MutexA while simultaneously Thread2 takes MutexB. Then, Thread1 takes MutexB and simultaneously Thread2 takes MutexA. Now both threads are blocked since they hold a mutex that the other thread is trying to take. This is a deadlock condition.
While the probability of entering the deadlock situation in Figure 4.4 depends on execution timing, when there are multiple threads and multiple mutexes, care must be taken in writing code to prevent those situations from existing in the first place. Connext has been carefully created and analyzed so that we know our threads internally are safe from deadlock interactions.
However, when Connext threads that are holding mutexes call user code in listeners, it is possible for user code to inadvertently cause the threads to deadlock if Connext APIs that try to take other mutexes are invoked. To help you avoid this situation, RTI has defined a concept known as Exclusive Areas, some restrictions regarding the use of Connext APIs within user callback code, and a QoS policy that allows you to configure Exclusive Areas.
Connext uses Exclusive Areas (EAs) to encapsulate mutexes and critical regions. Only one thread at a time can be executing code within an EA. The formal definition of EAs and their implementation ensures safety from deadlock and efficient entering and exiting of EAs. While every Entity created by Connext has an associated EA, EAs may be shared among several entities. A thread is automatically in the entity's EA when it is calling the entity’s listener.
Connext allows you to configure all the Entities within an application in a single domain to share a single Exclusive Area. This would greatly restrict the concurrency of thread execution within Connext’s multi-threaded core. However, it would also lift the restrictions on which Connext operations can be invoked from listener callbacks.
You may also have the best of both worlds by configuring a set of Entities to share a global EA and others to have their own. For the Entities that have their own EAs, the types of Connext operations that you can call from the Entity’s callback are restricted.
To understand why the general EA framework limits the operations that can be called in an EA, consider a modification to the example previously presented in Figure 4.4. Suppose we create a rule that is followed when we write our code. “For all situations in which a thread has to take multiple mutexes, we write our code so that the mutexes are always taken in the same order.” Following the rule will ensure us that the code we write cannot enter a deadlock situation due to the taking of the mutexes, see Figure 4.5.
Connext defines an ordering of the mutexes it creates. Generally speaking, there are three ordered levels of Exclusive Areas:
Figure 4.5 Taking Multiple Mutexes in a Specific Order to Eliminate Deadlock
| Thread1 | Thread2 |
|---|---|
| take(MutexA) | take(MutexA) (blocks) |
| take(MutexB) | (waiting for MutexA) |
| give(MutexB) | (waiting for MutexA) |
| give(MutexA) | acquires MutexA |
| | take(MutexB) |
By creating an order in which multiple mutexes are taken, you can guarantee that no deadlock situation will arise. In this case, if a thread must take both MutexA and MutexB, we write our code so that in those cases MutexA is always taken before MutexB.
❏ParticipantEA There is only one ParticipantEA per participant. The creation and deletion of all Entities (create_xxx(), delete_xxx()) take the ParticipantEA. In addition, the enable() method for an Entity and the setting of the Entity’s QoS, set_qos(), also take the ParticipantEA.
❏SubscriberEA This EA is created on a per-Subscriber basis by default. The operations of a Subscriber and of its contained DataReaders take the SubscriberEA.
❏PublisherEA This EA is created on a per-Publisher basis by default. The operations of a Publisher and of its contained DataWriters take the PublisherEA.
In addition, you should also be aware that:
❏The three EA levels are ordered in the following manner: ParticipantEA < SubscriberEA < PublisherEA
❏When executing user code in a listener callback of an Entity, the internal Connext thread is already in the EA of that Entity or used by that Entity.
❏If a thread is in an EA, it can call methods associated with either a higher EA level or that share the same EA. It cannot call methods associated with a lower EA level nor ones that use a different EA at the same level.
4.5.1 Restricted Operations in Listener Callbacks
Based on the background and rules provided in Exclusive Areas (EAs) (Section 4.5), this section describes how EAs restrict you from using various Connext APIs from within the Listener callbacks of different Entities.
Note: These restrictions do not apply to builtin topic listener callbacks.
By default, each Publisher and Subscriber creates and uses its own EA, and shares it with its children DataWriters and DataReaders, respectively. In that case:
Within a DataWriter/DataReader’s Listener callback, do not:
❏create any entities
❏delete any entities
❏enable any entities
❏set QoS’s on any entities
Within a Subscriber/DataReader’s Listener callback, do not call any operations on:
❏Other Subscribers
❏DataReaders that belong to other Subscribers
❏Publishers/DataWriters that have been configured to use the ParticipantEA (see below)
Within a Publisher/DataWriter Listener callback, do not call any operations on:
❏Other Publishers
❏DataWriters that belong to other Publishers
❏Any Subscribers
❏Any DataReaders
Connext will enforce the rules to avoid deadlock, and any attempt to call an illegal method from within a Listener callback will return DDS_RETCODE_ILLEGAL_OPERATION.
However, as previously mentioned, if you are willing to trade off some concurrency, you can remove these restrictions by sharing Exclusive Areas among entities:
Use the EXCLUSIVE_AREA QosPolicy (DDS Extension) (Section 6.4.3) of the Publisher or Subscriber to set whether or not to use a shared exclusive area. By default, Publishers and Subscribers will create and use their own individual EAs. You can configure a subset of the Publishers and Subscribers to share the ParticipantEA if you need the Listeners associated with those entities or children entities to be able to call any of the restricted methods listed above.
Regardless of how the EXCLUSIVE_AREA QosPolicy is set, the following operations are never allowed in any Listener callback:
❏Destruction of the entity to which the Listener is attached. For instance, a DataWriter/
DataReader Listener callback must not destroy its DataWriter/DataReader.
❏Within the TopicListener callback, you cannot call any operations on DataReaders,
DataWriters, Publishers, Subscribers or DomainParticipants.
4.6 Conditions and WaitSets
Conditions and WaitSets provide another way for Connext to communicate status changes (including the arrival of data) to your application. While a Listener is used to provide a callback for asynchronous access, Conditions and WaitSets provide synchronous data access. In other words, Listeners are notification-based (Connext calls your callback), while with Conditions and WaitSets your application thread blocks until a Condition of interest triggers.
A WaitSet allows an application to wait until one or more attached Conditions becomes true (or until a timeout expires).
Briefly, your application can create a WaitSet, attach one or more Conditions to it, then call the WaitSet’s wait() operation. The wait() blocks until one or more of the WaitSet’s attached Conditions becomes TRUE.
A Condition has a trigger_value that can be TRUE or FALSE. You can retrieve the current value by calling the Condition’s only operation, get_trigger_value().
There are three kinds of Conditions. A Condition is a root class for all the conditions that may be attached to a WaitSet. This basic class is specialized in three classes:
❏GuardConditions (Section 4.6.6) are created by your application. Each GuardCondition has a single, user-settable, boolean trigger_value.
❏ReadConditions and QueryConditions (Section 4.6.7) are created by your application, but triggered by Connext. ReadConditions provide a way for you to specify the data samples that you want to wait for, by indicating the desired sample states, view states, and instance states.1
❏StatusConditions (Section 4.6.8) are created automatically by Connext, one for each Entity. A StatusCondition is triggered by Connext when there is a change to any of that Entity’s enabled statuses.
Figure 4.6 illustrates the relationship between Conditions and WaitSets.
A WaitSet can be associated with more than one Entity (including multiple DomainParticipants). It can be used to wait on Conditions associated with different DomainParticipants. A WaitSet can only be in use by one application thread at a time.
4.6.1 Creating and Deleting WaitSets
There is no factory for creating or deleting a WaitSet; use the natural means in each language binding (for example, “new” or “delete” in C++ or Java).
For example, to delete a WaitSet:
delete waitset;
There are two ways to create a WaitSet: with or without specifying WaitSet properties (DDS_WaitSetProperty_t, described in Table 4.6).
❏If properties are not specified when the WaitSet is created, the WaitSet will wake up as soon as a trigger event occurs (that is, when an attached Condition becomes true). This is the default behavior.
This ‘immediate wake-up’ behavior is optimal if you want no additional latency (that is, you want to wake up and process the data or event as soon as possible). However, waking up involves a context switch between the internal Connext thread that detects the trigger event and the application thread
1. These states are described in The SampleInfo Structure (Section 7.4.6).
Figure 4.6 Conditions and WaitSets
Table 4.6 WaitSet Properties (DDS_WaitSetProperty_t)

| Type | Field Name | Description |
|---|---|---|
| long | max_event_count | Maximum number of trigger events to cause a WaitSet to wake up. |
| DDS_Duration_t | max_event_delay | Maximum delay from occurrence of the first trigger event to cause a WaitSet to wake up. This value should reflect the maximum acceptable latency increase (time delay from occurrence of the event to waking up the WaitSet) incurred as a result of waiting for additional events before waking up the WaitSet. |
A context switch consumes significant CPU; therefore, waking up on each data update is not optimal in situations where the application needs to maximize throughput (the number of messages processed per second). This is especially true if the receiver is CPU-limited.
To create a WaitSet with default behavior:
WaitSet* waitset = new WaitSet();
❏If properties are specified when the WaitSet is created, the WaitSet will wait for either (a) up to max_event_count trigger events to occur, (b) up to max_event_delay time from the occurrence of the first trigger event, or (c) up to the timeout maximum wait duration specified in the call to wait().
To create a WaitSet with properties:
DDS_WaitSetProperty_t prop;
prop.max_event_count = 5;
DDSWaitSet* waitset = new DDSWaitSet(prop);
4.6.2 WaitSet Operations
WaitSets have only a few operations, as listed in Table 4.7.
Table 4.7 WaitSet Operations
| Operation | Description |
|---|---|
| attach_condition | Attaches a Condition to this WaitSet. You may attach a Condition to a WaitSet that is currently being waited upon (via the wait() operation). In this case, if the Condition has a trigger_value of TRUE, then attaching the Condition will unblock the WaitSet. Adding a Condition that is already attached to the WaitSet has no effect. If the Condition cannot be attached, Connext will return an OUT_OF_RESOURCES error code. |
| detach_condition | Detaches a Condition from the WaitSet. Attempting to detach a Condition that is not attached to the WaitSet will result in a PRECONDITION_NOT_MET error code. |
| wait | Blocks execution of the thread until one or more attached Conditions becomes true, or until a user-specified timeout expires. |
| get_conditions | Retrieves a list of attached Conditions. |
| get_property | Retrieves the DDS_WaitSetProperty_t structure of the associated WaitSet. |
| set_property | Sets the DDS_WaitSetProperty_t structure, to configure the associated WaitSet to return after one or more trigger events have occurred. |
4.6.3 Waiting for Conditions
The WaitSet’s wait() operation allows an application thread to wait for any of the attached Conditions to trigger (become TRUE).
If any of the attached Conditions are already TRUE when wait() is called, it returns immediately. If none of the attached Conditions are TRUE, wait() blocks, suspending the calling thread until either (a) one or more of the attached Conditions becomes TRUE, or (b) a user-specified timeout period expires.
Note: The resolution of the timeout period is constrained by the resolution of the system clock.
You can also configure the properties of the WaitSet so that it will wait for up to max_event_count trigger events to occur before returning, or for up to max_event_delay time from the occurrence of the first trigger event before returning. See Creating and Deleting WaitSets (Section 4.6.1).
If wait() does not timeout, it returns a list of the attached Conditions that became TRUE and therefore unblocked the wait.
If wait() does timeout, it returns TIMEOUT and an empty list of Conditions.
Only one application thread can be waiting on the same WaitSet. If wait() is called on a WaitSet that already has a thread blocking on it, the operation will immediately return PRECONDITION_NOT_MET.
Note: If you detach a Condition from a Waitset that is currently in a wait state (that is, you are waiting on it), wait() may return OK and an empty sequence of conditions.
4.6.3.1 How WaitSets Block
The blocking behavior of the WaitSet is illustrated in Figure 4.7. The result of a wait() operation depends on the state of the WaitSet, which in turn depends on whether at least one attached
Condition has a trigger_value of TRUE.
If the wait() operation is called on a WaitSet with state BLOCKED, it will block the calling thread. If wait() is called on a WaitSet with state UNBLOCKED, it will return immediately.
When the WaitSet transitions from BLOCKED to UNBLOCKED, it wakes up the thread (if there is one) that had called wait() on it. There is no implied “event queuing” in the awakening of a WaitSet. That is, if several Conditions attached to the WaitSet have their trigger_value transition to true in sequence, Connext will only unblock the WaitSet once.
Figure 4.7 WaitSet Blocking Behavior
4.6.4 Processing Triggered Conditions
When wait() returns, it provides a list of the attached Condition objects that have a trigger_value of true. Your application can use this list to do the following for each Condition in the returned list:
❏If it is a StatusCondition:
•First, call get_status_changes() to see what status changed.
•If the status changes refer to plain communication status: call get_<communication_status>() on the relevant Entity.
•If the status changes refer to DATA_ON_READERS1: call get_datareaders() on the relevant Subscriber.
•If the status changes refer to DATA_AVAILABLE: call read() or take() on the relevant
DataReader.
❏If it is a ReadCondition or a QueryCondition: You may want to call read_w_condition() or take_w_condition() on the DataReader, with the ReadCondition as a parameter (see read_w_condition and take_w_condition (Section 7.4.3.6)).
1. And then read/take on the returned DataReader objects.
Note that this is just a suggestion; you do not have to use the “w_condition” operations (or any read/take operations, for that matter) simply because you used a WaitSet. The “w_condition” operations are just a convenient way to use the same status masks that were set on the ReadCondition or QueryCondition.
❏If it is a GuardCondition: check to see which GuardCondition changed, then react accordingly. Recall that GuardConditions are completely controlled by your application.
See Conditions and WaitSet Example (Section 4.6.5) to see how to determine which of the attached Conditions is in the returned list.
4.6.5 Conditions and WaitSet Example
This example creates a WaitSet and then waits for one or more attached Conditions to become true.
// Create a WaitSet
WaitSet* waitset = new WaitSet();

// Attach Conditions
DDSCondition* cond1 = ...;
DDSCondition* cond2 = reader->create_readcondition(DDS_NOT_READ_SAMPLE_STATE,
                                                   DDS_ANY_VIEW_STATE,
                                                   DDS_ANY_INSTANCE_STATE);
DDSCondition* cond3 = ...;
DDSCondition* cond4 = new DDSGuardCondition();
DDSCondition* cond5 = ...;

DDS_ReturnCode_t retcode;
retcode = waitset->attach_condition(cond1);
if (retcode != DDS_RETCODE_OK) {
    // ... error
}
retcode = waitset->attach_condition(cond2);
if (retcode != DDS_RETCODE_OK) {
    // ... error
}
retcode = waitset->attach_condition(cond3);
if (retcode != DDS_RETCODE_OK) {
    // ... error
}
retcode = waitset->attach_condition(cond4);
if (retcode != DDS_RETCODE_OK) {
    // ... error
}
retcode = waitset->attach_condition(cond5);
if (retcode != DDS_RETCODE_OK) {
    // ... error
}

// Wait for a condition to trigger or timeout
DDS_Duration_t timeout = { 0, 1000000 }; // 1ms
DDSConditionSeq active_conditions;       // holder for active conditions
bool is_cond1_triggered = false;
bool is_cond2_triggered = false;

retcode = waitset->wait(active_conditions, timeout);
if (retcode == DDS_RETCODE_TIMEOUT) {
    // handle timeout
    printf("Wait timed out. No conditions were triggered.\n");
}
else if (retcode != DDS_RETCODE_OK) {
    // ... check for cause of failure
}
else {
    // success
    if (active_conditions.length() == 0) {
        printf("Wait timed out!! No conditions triggered.\n");
    } else {
        // check if "cond1" or "cond2" are triggered:
        for (int i = 0; i < active_conditions.length(); ++i) {
            if (active_conditions[i] == cond1) {
                printf("Cond1 was triggered!\n");
                is_cond1_triggered = true;
            }
            if (active_conditions[i] == cond2) {
                printf("Cond2 was triggered!\n");
                is_cond2_triggered = true;
            }
            if (is_cond1_triggered && is_cond2_triggered) {
                break;
            }
        }
    }
}

if (is_cond1_triggered) {
    // ... do something because "cond1" was triggered ...
}
if (is_cond2_triggered) {
    // ... do something because "cond2" was triggered ...
}

// Delete the waitset
delete waitset;
waitset = NULL;
4.6.6 GuardConditions
GuardConditions are created by your application. GuardConditions provide a way for your application to manually awaken a WaitSet. Like all Conditions, a GuardCondition has a single boolean trigger_value. Your application can manually trigger the GuardCondition by calling set_trigger_value(). Connext does not trigger or clear this type of condition; it is completely controlled by your application.
A GuardCondition has no factory. It is created as an object directly by the natural means in each language binding (e.g., using “new” in C++ or Java). For example:
// Create a Guard Condition
Condition* my_guard_condition = new GuardCondition();

// Delete a Guard Condition
delete my_guard_condition;
When first created, the trigger_value is FALSE.
A GuardCondition has only two operations, get_trigger_value() and set_trigger_value().
When your application calls set_trigger_value(DDS_BOOLEAN_TRUE), Connext will awaken any WaitSet to which the GuardCondition is attached.
4.6.7 ReadConditions and QueryConditions
ReadConditions are created by your application, but triggered by Connext. ReadConditions provide a way for you to specify the data samples that you want to wait for, by indicating the desired sample states, view states, and instance states.
A QueryCondition is a special ReadCondition that allows you to specify a query expression and parameters, so you can filter on the locally available (already received) data. QueryConditions use the same SQL-like filter-expression syntax as ContentFilteredTopics (Section 5.4).
Multiple mask combinations can be associated with a single content filter. This is important because the maximum number of content filters that may be created per DataReader is 32, but more than 32 QueryConditions may be created per DataReader, if they are different mask combinations of the same content filter.
ReadConditions and QueryConditions are created by using the DataReader’s create_readcondition() and create_querycondition() operations. For example:
DDSReadCondition* my_read_condition = reader->create_readcondition(
        DDS_NOT_READ_SAMPLE_STATE,
        DDS_ANY_VIEW_STATE,
        DDS_ANY_INSTANCE_STATE);

DDSQueryCondition* my_query_condition = reader->create_querycondition(
        DDS_NOT_READ_SAMPLE_STATE,
        DDS_ANY_VIEW_STATE,
        DDS_ANY_INSTANCE_STATE,
        query_expression,
        query_parameters);
Note: If you are using a ReadCondition to simply detect the presence of new data, consider using a StatusCondition (Section 4.6.8) with the DATA_AVAILABLE_STATUS instead, which will perform better in this situation.
A DataReader can have multiple attached ReadConditions and QueryConditions. A ReadCondition or QueryCondition may only be attached to one DataReader.
To delete a ReadCondition or QueryCondition, use the DataReader’s delete_readcondition() operation:
DDS_ReturnCode_t delete_readcondition (DDSReadCondition *condition)
After a ReadCondition is triggered, use the FooDataReader’s read/take “with condition” operations (see Section 7.4.3.6) to access the samples.
Table 4.8 lists the operations available on ReadConditions.
4.6.7.1 How ReadConditions are Triggered
A ReadCondition has a trigger_value that determines whether the attached WaitSet is BLOCKED or UNBLOCKED. Unlike the StatusCondition, the trigger_value of the ReadCondition is tied to the presence of at least one sample with a sample state, view state, and instance state that matches those set on the ReadCondition.
1. These states are described in The SampleInfo Structure (Section 7.4.6).
Table 4.8 ReadCondition and QueryCondition Operations

Operation | Description
get_datareader | Returns the DataReader to which the ReadCondition or QueryCondition is attached.
get_instance_state_mask | Returns the instance states that were specified when the ReadCondition or QueryCondition was created. These are the sample’s instance states that Connext checks to determine whether or not to trigger the ReadCondition or QueryCondition.
get_sample_state_mask | Returns the sample states that were specified when the ReadCondition or QueryCondition was created. These are the sample states that Connext checks to determine whether or not to trigger the ReadCondition or QueryCondition.
get_view_state_mask | Returns the view states that were specified when the ReadCondition or QueryCondition was created. These are the view states that Connext checks to determine whether or not to trigger the ReadCondition or QueryCondition.

For a QueryCondition to have trigger_value==TRUE, the data associated with the sample must be such that the query_expression evaluates to TRUE.
The trigger_value of a ReadCondition depends on the presence of samples on the associated DataReader. This implies that a single ‘take’ operation can potentially change the trigger_value of several ReadConditions or QueryConditions. For example, if all samples are taken, any
ReadConditions and QueryConditions associated with the DataReader that had trigger_value==TRUE before will see the trigger_value change to FALSE. Note that this does not guarantee that WaitSet objects that were separately attached to those conditions will not be awakened. Once a condition has trigger_value==TRUE, it may wake up the attached WaitSet; the condition later transitioning to trigger_value==FALSE does not necessarily 'un-wake' the WaitSet, since un-waking may not be possible. The consequence is that an application blocked on a WaitSet may return from wait() with a list of conditions, some of which are no longer “active.” This is unavoidable if multiple threads are concurrently waiting on separate WaitSet objects and taking data associated with the same DataReader.
Consider the following example: a ReadCondition with sample_state_mask = {NOT_READ} will have a trigger_value of TRUE whenever a new sample arrives, and will transition to FALSE as soon as all the newly arrived samples are either read (so their state changes to READ) or taken (so they are no longer managed by Connext). However, if the same ReadCondition had sample_state_mask = {READ, NOT_READ}, then the trigger_value would only become FALSE once all the newly arrived samples are taken; it is not sufficient to just read them, since that would only change the SampleState to READ, which still overlaps the mask on the ReadCondition.
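The mask behavior in this example can be illustrated with a small standalone model. All names below are made up for illustration (this is not the Connext implementation): a condition is triggered while at least one sample in the reader has a state that overlaps the mask.

```cpp
#include <cstdint>
#include <vector>

// Toy sample states (illustrative bit values, not the DDS constants).
enum SampleState : uint32_t { NOT_READ = 0x1, READ = 0x2 };

// Toy model of a DataReader's sample cache (illustrative only).
struct ToyReader {
    std::vector<uint32_t> sample_states;  // state of each cached sample

    // A ReadCondition triggers while ANY sample's state matches the mask.
    bool condition_triggered(uint32_t sample_state_mask) const {
        for (uint32_t s : sample_states)
            if (s & sample_state_mask) return true;
        return false;
    }

    // Reading changes each sample's state from NOT_READ to READ.
    void read_all() {
        for (uint32_t& s : sample_states) s = READ;
    }

    // Taking removes the samples from the reader entirely.
    void take_all() {
        sample_states.clear();
    }
};
```

With one NOT_READ sample, a {NOT_READ} mask is triggered; after read_all() it untriggers, while a {READ, NOT_READ} mask stays triggered until take_all() empties the reader.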
4.6.7.2 QueryConditions
A QueryCondition is a special ReadCondition that allows your application to also specify a filter on the locally available data.
The query expression is similar to a SQL WHERE clause and can be parameterized by arguments that are dynamically changeable by the set_query_parameters() operation.
QueryConditions are triggered in the same manner as ReadConditions, with the additional requirement that the sample must also satisfy the conditions of the content filter associated with the QueryCondition.
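The extra requirement can be sketched with another standalone model (illustrative names only, not the Connext API): a query condition is a read condition whose trigger additionally requires the sample to satisfy a predicate, which here stands in for the query_expression.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Toy model (illustrative only): a sample has a state and a value,
// where the value stands in for the sample's user data.
struct ToySample {
    uint32_t state;  // e.g. 0x1 = NOT_READ, 0x2 = READ
    int value;
};

// A QueryCondition triggers while ANY sample matches BOTH the state
// mask and the query predicate (standing in for the query_expression).
inline bool query_condition_triggered(
        const std::vector<ToySample>& samples,
        uint32_t sample_state_mask,
        const std::function<bool(const ToySample&)>& query) {
    for (const ToySample& s : samples)
        if ((s.state & sample_state_mask) && query(s)) return true;
    return false;
}
```

For example, a NOT_READ sample with value 75 triggers a {NOT_READ} condition whose predicate is "value > 50", but not one whose predicate is "value > 100".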
4.6.8 StatusConditions
StatusConditions are created automatically by Connext, one for each Entity. Connext will trigger the StatusCondition when there is a change to any of that Entity’s enabled statuses.
By default, when Connext creates a StatusCondition, all status bits are turned on, which means it will check for all statuses to determine when to trigger the StatusCondition. If you only want Connext to check for specific statuses, you can use the StatusCondition’s set_enabled_statuses() operation and set just the desired status bits.

Table 4.9 QueryCondition Operations

Operation | Description
get_query_expression | Returns the query expression specified when the QueryCondition was created.
get_query_parameters | Returns the query parameters associated with the QueryCondition. That is, the parameters specified on the last successful call to set_query_parameters(), or if set_query_parameters() was never called, the arguments specified when the QueryCondition was created.
set_query_parameters | Changes the query parameters associated with the QueryCondition.
The trigger_value of the StatusCondition depends on the communication status of the Entity (e.g., arrival of data, loss of information, etc.), ‘filtered’ by the set of enabled statuses on the
StatusCondition.
The set of enabled statuses and its relation to Listeners and WaitSets is detailed in How StatusConditions are Triggered (Section 4.6.8.1).
Table 4.10 lists the operations available on StatusConditions.
Table 4.10 StatusCondition Operations

Operation | Description
set_enabled_statuses | Defines the list of communication statuses that are taken into account to determine the trigger_value of the StatusCondition. This operation may change the trigger_value of the StatusCondition. WaitSet behavior depends on the changes of the trigger_value of attached conditions; therefore, any WaitSet to which the StatusCondition is attached is potentially affected by this operation. If this function is not invoked, the default list of enabled statuses includes all the statuses.
get_enabled_statuses | Retrieves the list of communication statuses that are taken into account to determine the trigger_value of the StatusCondition. This operation returns the statuses that were explicitly set on the last call to set_enabled_statuses() or, if set_enabled_statuses() was never called, the default list (all statuses).
get_entity | Returns the Entity associated with the StatusCondition. Note that there is exactly one Entity associated with each StatusCondition.
Unlike other types of Conditions, StatusConditions are created by Connext, not by your application. To access an Entity’s StatusCondition, use the Entity’s get_statuscondition() operation. For example:
Condition* my_status_condition = entity->get_statuscondition();
After a StatusCondition is triggered, call the Entity’s get_status_changes() operation to see which status(es) changed.
4.6.8.1 How StatusConditions are Triggered
The trigger_value of a StatusCondition is the boolean OR of the ChangedStatusFlag of all the communication statuses to which it is sensitive. That is, trigger_value==FALSE only if all the values of the ChangedStatusFlags are FALSE.
The sensitivity of the StatusCondition to a particular communication status is controlled by the list of enabled_statuses set on the Condition by means of the set_enabled_statuses() operation.
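This rule is effectively a bit-mask computation, sketched below with made-up status bit values (illustrative only, not the actual DDS constants):

```cpp
#include <cstdint>

// Toy status bits (illustrative values, not the DDS status constants).
enum ToyStatus : uint32_t {
    DATA_AVAILABLE     = 0x1,
    LIVELINESS_CHANGED = 0x2,
    DEADLINE_MISSED    = 0x4
};

// trigger_value is TRUE iff any enabled status has its
// ChangedStatusFlag set: a boolean OR filtered by the enabled mask.
inline bool status_condition_trigger(uint32_t changed_status_flags,
                                     uint32_t enabled_statuses) {
    return (changed_status_flags & enabled_statuses) != 0;
}
```

So a DATA_AVAILABLE change triggers the condition only if DATA_AVAILABLE is among the enabled statuses.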
4.6.9 Using Both Listeners and WaitSets
You can use Listeners and WaitSets in the same application. For example, you may want to use WaitSets and Conditions to access the data, and Listeners to be warned asynchronously of erroneous communication statuses.
We recommend that you choose one or the other mechanism for each particular communication status (not both). However, if both are enabled, the Listener mechanism is used first, then the WaitSet objects are signaled.
Chapter 5 Topics
For a DataWriter and DataReader to communicate, they need to use the same Topic. A Topic includes a name and an association with a user data type that has been registered with Connext. Topic names are how different parts of the communication system find each other. Topics are named streams of data of the same data type. DataWriters publish samples into the stream; DataReaders subscribe to data from the stream. More than one Topic can use the same user data type, but each Topic needs a unique name.
Topics, DataWriters, and DataReaders relate to each other as follows:
❏Multiple Topics (each with a unique name) can use the same user data type.
❏Applications may have multiple DataWriters for each Topic.
❏Applications may have multiple DataReaders for each Topic.
❏DataWriters and DataReaders must be associated with the same Topic in order for them to be connected.
❏Topics are created and deleted by a DomainParticipant, and as such, are owned by that DomainParticipant. When two applications (DomainParticipants) want to use the same Topic, they must both create the Topic (even if the applications are on the same node).
This chapter includes the following sections:
❏Topic QosPolicies (Section 5.2)
❏Status Indicator for Topics (Section 5.3)
❏ContentFilteredTopics (Section 5.4)
Builtin Topics: Connext uses ‘Builtin Topics’ to discover and keep track of remote entities, such as new participants in the domain. Builtin Topics are discussed in Chapter 16.
5.1 Topics
Before you can create a Topic, you need a user data type (see Chapter 3) and a DomainParticipant (Section 8.3). The user data type must be registered with the DomainParticipant (as we saw in the User Data Types chapter in Section 3.8.5.1).
Once you have created a Topic, what do you do with it? Topics are primarily used as parameters in other Entities’ operations. For instance, a Topic is required when a Publisher or Subscriber creates a DataWriter or DataReader, respectively. Topics do have a few operations of their own, as
listed in Table 5.1. For details on using these operations, see the reference section or the API Reference HTML documentation.
Figure 5.1 Topic Module
Note: MultiTopics are not supported.
Table 5.1 Topic Operations

Purpose | Operation | Description
Configuring the Topic | enable | Enables the Topic.
Configuring the Topic | get_qos | Gets the Topic’s current QosPolicy settings. This is most often used in preparation for calling set_qos().
Configuring the Topic | set_qos | Sets the Topic’s QoS. You can use this operation to change the values for the Topic’s QosPolicies. Note, however, that not all QosPolicies can be changed after the Topic has been created.
Configuring the Topic | set_qos_with_profile | Sets the Topic’s QoS based on a specified QoS profile.
Configuring the Topic | get_listener | Gets the currently installed Listener.
Configuring the Topic | set_listener | Sets the Topic’s Listener. If you create the Topic without a Listener, you can use this operation to add one later. Setting the listener to NULL will remove the listener from the Topic.
Configuring the Topic | narrow | A type-safe way to cast a pointer. This takes a DDSTopicDescription pointer and ‘narrows’ it to a DDSTopic pointer.
Checking Status | get_inconsistent_topic_status | Allows an application to retrieve a Topic’s INCONSISTENT_TOPIC_STATUS status.
Checking Status | get_status_changes | Gets a list of statuses that have changed since the last time the application read the status or the listeners were called.
Navigating Relationships | get_name | Gets the topic_name string used to create the Topic.
Navigating Relationships | get_type_name | Gets the type_name used to create the Topic.
Navigating Relationships | get_participant | Gets the DomainParticipant to which this Topic belongs.
5.1.1 Creating Topics
Topics are created using the DomainParticipant’s create_topic() or create_topic_with_profile() operation:
DDSTopic * create_topic (const char *topic_name, const char *type_name, const DDS_TopicQos &qos,
DDSTopicListener *listener, DDS_StatusMask mask)
DDSTopic * create_topic_with_profile (
const char *topic_name, const char *type_name, const char *library_name, const char *profile_name,
DDSTopicListener *listener, DDS_StatusMask mask)
A QoS profile is a way to use QoS settings from an XML file or string. With this approach, you can change QoS settings without recompiling the application. For details, see Chapter 17: Configuring QoS with XML.
topic_name Name for the new Topic, must not exceed 255 characters.
type_name Name for the user data type, must not exceed 255 characters. It must be the same name that was used to register the type, and the type must be registered with the same DomainParticipant used to create this Topic. See Section 3.6.
qos If you want to use the default QoS settings (described in the API Reference HTML documentation), use DDS_TOPIC_QOS_DEFAULT for this parameter (see Figure 5.2). If you want to customize any of the QosPolicies, supply a QoS structure (see Section 5.1.3). If you use DDS_TOPIC_QOS_DEFAULT, it is not safe to create the topic while another thread may be simultaneously calling the DomainParticipant’s set_default_topic_qos() operation.
listener Listeners are callback routines. Connext uses them to notify your application of specific events (status changes) that may occur with respect to the Topic. The listener parameter may be set to NULL if you do not want to install a Listener. If you use NULL, the Listener of the DomainParticipant to which the Topic belongs will be used instead (if it is set). For more information, see Setting Up TopicListeners (Section 5.1.5).
mask This bit-mask indicates which status changes will cause the Listener to be invoked. The bits in the mask that are set must have corresponding callbacks implemented in the Listener. If you use NULL for the Listener, use DDS_STATUS_MASK_NONE for this parameter. If the Listener implements all callbacks, use DDS_STATUS_MASK_ALL. For information on statuses, see Listeners (Section 4.4).
library_name A QoS Library is a named set of QoS profiles. See QoS Libraries (Section 17.10). If NULL is used for library_name, the DomainParticipant’s default library is assumed.
profile_name A QoS profile groups a set of related QoS, usually one per entity. See QoS Profiles (Section 17.9). If NULL is used for profile_name, the DomainParticipant’s default profile is assumed and library_name is ignored.
Note: It is not safe to create a topic while another thread is calling lookup_topicdescription() for that same topic (see Section 8.3.7).
Figure 5.2 Creating a Topic with Default QosPolicies
const char *type_name = NULL;
DDS_ReturnCode_t retcode;

// register the type
type_name = FooTypeSupport::get_type_name();
retcode = FooTypeSupport::register_type(participant, type_name);
if (retcode != DDS_RETCODE_OK) {
    // handle error
}

// create the topic
DDSTopic* topic = participant->create_topic("Example Foo",
        type_name, DDS_TOPIC_QOS_DEFAULT,
        NULL /* listener */,
        DDS_STATUS_MASK_NONE);
if (topic == NULL) {
    // process error here
}
For more examples, see Configuring QoS Settings when the Topic is Created (Section 5.1.3.1).
5.1.2 Deleting Topics
To delete a Topic, use the DomainParticipant’s delete_topic() operation:
DDS_ReturnCode_t delete_topic (DDSTopic * topic)
Note, however, that you cannot delete a Topic if there are any existing DataReaders or DataWriters (belonging to the same DomainParticipant) that are still using it. All DataReaders and DataWriters associated with the Topic must be deleted first.
5.1.3 Setting Topic QosPolicies
A Topic’s QosPolicies control its behavior, or more specifically, the behavior of the DataWriters and DataReaders of the Topic. You can think of the policies as the ‘properties’ for the Topic. The DDS_TopicQos structure has the following format:
struct DDS_TopicQos {
    DDS_TopicDataQosPolicy          topic_data;
    DDS_DurabilityQosPolicy         durability;
    DDS_DurabilityServiceQosPolicy  durability_service;
    DDS_DeadlineQosPolicy           deadline;
    DDS_LatencyBudgetQosPolicy      latency_budget;
    DDS_LivelinessQosPolicy         liveliness;
    DDS_ReliabilityQosPolicy        reliability;
    DDS_DestinationOrderQosPolicy   destination_order;
    DDS_HistoryQosPolicy            history;
    DDS_ResourceLimitsQosPolicy     resource_limits;
    DDS_TransportPriorityQosPolicy  transport_priority;
    DDS_LifespanQosPolicy           lifespan;
    DDS_OwnershipQosPolicy          ownership;
};
Table 5.2 summarizes the meaning of each policy (arranged alphabetically). For information on why you would want to change a particular QosPolicy, see the section noted in the Reference column. For defaults and valid ranges, please refer to the API Reference HTML documentation for each policy.
Table 5.2 Topic QosPolicies

QosPolicy | Description
Deadline | For a DataReader, specifies the maximum expected elapsed time between arriving data samples. For a DataWriter, specifies a commitment to publish samples with no greater elapsed time between them. See Section 6.5.5.
DestinationOrder | Controls how Connext will deal with data sent by multiple DataWriters for the same topic. Can be set to "by reception timestamp" or to "by source timestamp". See Section 6.5.6.
Durability | Specifies whether or not Connext will store and deliver data that were previously published to new DataReaders. See Section 6.5.7.
DurabilityService | Various settings to configure the external Persistence Service used by Connext for DataWriters with a Durability QoS setting of Persistent Durability. See Section 6.5.8.
History | Specifies how much data must be stored by Connext for the DataWriter or DataReader. This QosPolicy affects the RELIABILITY QosPolicy (Section 6.5.19) as well as the DURABILITY QosPolicy (Section 6.5.7). See Section 6.5.10.
LatencyBudget | Suggestion to Connext on how much time is allowed to deliver data. See Section 6.5.11.
Lifespan | Specifies how long Connext should consider data sent by a user application to be valid. See Section 6.5.12.
Liveliness | Specifies and configures the mechanism that allows DataReaders to detect when DataWriters become disconnected or "dead." See Section 6.5.13.
Ownership | Along with Ownership Strength, specifies if DataReaders for a topic can receive data from multiple DataWriters at the same time. See Section 6.5.15.
Reliability | Specifies whether or not Connext will deliver data reliably. See Section 6.5.19.
ResourceLimits | Controls the amount of physical memory allocated for entities, if dynamic allocations are allowed, and how they occur. Also controls memory usage among different instance values for keyed topics. See Section 6.5.20.
TopicData | Along with Group Data QosPolicy and User Data QosPolicy, used to attach a buffer of bytes to Connext's discovery meta-data. See Section 5.2.1.
TransportPriority | Set by a DataWriter to tell Connext that the data being sent is a different "priority" than other data. See Section 6.5.21.
5.1.3.1 Configuring QoS Settings when the Topic is Created
As described in Creating Topics (Section 5.1.1), there are different ways to create a Topic, depending on how you want to specify its QoS (with or without a QoS profile).
❏In Figure 5.2, we saw how to create a Topic using the default QosPolicies (DDS_TOPIC_QOS_DEFAULT). Those default values can be configured with the DomainParticipant’s set_default_topic_qos() or set_default_topic_qos_with_profile() operations (see Section 8.3.6.4).
❏To create a Topic with non-default QoS values, without using a QoS profile, use the DomainParticipant’s get_default_topic_qos() operation to initialize a DDS_TopicQos structure. Then change the policies from their default values before passing the QoS structure to create_topic().
❏You can also create a Topic and specify its QoS settings via a QoS profile. To do so, call create_topic_with_profile().
❏If you want to use a QoS profile, but then make some changes to the QoS before creating the Topic, call get_topic_qos_from_profile(), modify the QoS and use the modified QoS when calling create_topic().
5.1.3.2 Changing QoS Settings After the Topic Has Been Created
There are two ways to change an existing Topic’s QoS after it has been created:
❏To change the QoS programmatically (that is, without using a QoS profile), see the example code in Figure 5.3.
❏You can also change a Topic’s (and all other Entities’) QoS by using a QoS profile. For an example, see Figure 5.4.
Figure 5.3 Changing the QoS of an Existing Topic (without a QoS Profile)
DDS_TopicQos topic_qos;

// Get current QoS. topic points to an existing DDSTopic.
if (topic->get_qos(topic_qos) != DDS_RETCODE_OK) {
    // handle error
}

// Next, make changes.
// New ownership kind will be Exclusive
topic_qos.ownership.kind = DDS_EXCLUSIVE_OWNERSHIP_QOS;

// Set the new QoS
if (topic->set_qos(topic_qos) != DDS_RETCODE_OK) {
    // handle error
}
Note: For the C API, you need to use DDS_TopicQos_INITIALIZER or DDS_TopicQos_initialize(). See Special QosPolicy Handling Considerations for C (Section 4.2.2).
Figure 5.4 Changing the QoS of an Existing Topic with a QoS Profile
retcode = topic->set_qos_with_profile("FooProfileLibrary", "FooProfile");
if (retcode != DDS_RETCODE_OK) {
    // handle error
}
5.1.4 Copying QoS From a Topic to a DataWriter or DataReader
Only the TOPIC_DATA QosPolicy strictly applies to Topics; the other QosPolicies in the DDS_TopicQos structure actually apply to the DataWriters and DataReaders that use the Topic.
Because many QosPolicies affect the behavior of matching DataWriters and DataReaders, the DDS_TopicQos structure is provided as a convenient way to set the values for those policies in a single place in the application. Otherwise, you would have to modify the individual QosPolicies within separate DataWriter and DataReader QoS structures. And because some QosPolicies are compared between DataReaders and DataWriters, you will need to make certain that the individual values that you set are compatible (see Section 4.2.1).
Using the DDS_TopicQos structure to set the value of any QosPolicy except TOPIC_DATA has no effect on its own; those values only take effect when they are copied into the QoS of the Topic’s DataWriters and DataReaders.
To cause a DataWriter to use its Topic’s QoS settings, either:
❏Pass DDS_DATAWRITER_QOS_USE_TOPIC_QOS to create_datawriter(), or
❏Call the Publisher’s copy_from_topic_qos() operation
To cause a DataReader to use its Topic’s QoS settings, either:
❏Pass DDS_DATAREADER_QOS_USE_TOPIC_QOS to create_datareader(), or
❏Call the Subscriber’s copy_from_topic_qos() operation
Please refer to the API Reference HTML documentation for the Publisher’s create_datawriter() and Subscriber’s create_datareader() methods for more information about using values from the
Topic QosPolicies when creating DataWriters and DataReaders.
5.1.5 Setting Up TopicListeners
When you create a Topic, you have the option of giving it a Listener. A TopicListener includes just one callback routine, on_inconsistent_topic(). If you create a TopicListener (either as part of the Topic creation call, or later with the set_listener() operation), Connext will invoke the TopicListener’s on_inconsistent_topic() method whenever it detects that another application has created a Topic with same name but associated with a different user data type. For more information, see INCONSISTENT_TOPIC Status (Section 5.3.1).
Note: Some operations cannot be used within a listener callback, see Restricted Operations in Listener Callbacks (Section 4.5.1).
If a Topic’s Listener has not been set and Connext detects an inconsistent Topic, the DomainParticipantListener (if it exists) will be notified instead (see Section 8.3.5). So you only need to set up a TopicListener if you need to perform specific actions when there is an error on that particular Topic. In most cases, you can set the TopicListener to NULL and process inconsistent-topic events in the DomainParticipantListener instead.
5.1.6 Navigating Relationships Among Entities
5.1.6.1 Finding a Topic’s DomainParticipant
To retrieve a handle to the Topic’s DomainParticipant, use the get_participant() operation:
DDSDomainParticipant* DDSTopicDescription::get_participant();
Notice that this method belongs to the DDSTopicDescription class, which is the base class for
DDSTopic.
5.1.6.2 Retrieving a Topic’s Name or Type Name
If you want to retrieve the topic_name or type_name used in the create_topic() operation, use these methods:
const char* DDSTopicDescription::get_type_name();
const char* DDSTopicDescription::get_name();
Notice that these methods belong to the DDSTopicDescription class, which is the base class for
DDSTopic.
5.2 Topic QosPolicies
This section describes the only QosPolicy that strictly applies to Topics (and no other types of Entities): the TOPIC_DATA QosPolicy.
Most of the QosPolicies that can be set on a Topic can also be set on the corresponding DataWriter and/or DataReader. The Topic’s QosPolicy is essentially just a place to store QoS settings that you plan to share with multiple entities that use that Topic (see how in Section 5.1.3); they are not used otherwise and are not propagated on the wire.
5.2.1 TOPIC_DATA QosPolicy
This QosPolicy provides an area where your application can store additional information related to the Topic. This information is passed between applications during discovery (see Chapter 14: Discovery) using builtin topics.
The value of the TOPIC_DATA QosPolicy is sent to remote applications when they are first discovered, as well as when the Topic’s set_qos() method is called after changing the value of the TOPIC_DATA. User code can set listeners on the builtin DataReaders of the builtin Topics used by Connext to propagate discovery information. Methods in the builtin topic listeners will be called whenever new applications, DataReaders, and DataWriters are found. Within the user callback, you will have access to the TOPIC_DATA that was set for the associated Topic.
Currently, TOPIC_DATA of the associated Topic is only propagated with the information that declares a DataWriter or DataReader. Thus, you will need to access the value of TOPIC_DATA through DDS_PublicationBuiltinTopicData or DDS_SubscriptionBuiltinTopicData (see Chapter 16: Built-In Topics).
The structure for the TOPIC_DATA QosPolicy includes just one field, as seen in Table 5.3. The field is a sequence of octets that translates to a contiguous buffer of bytes whose contents and length are set by the user. The maximum size for the data is set in the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4).
Table 5.3 DDS_TopicDataQosPolicy
Type | Field Name | Description
DDS_OctetSeq | value | default: empty
This policy is similar to the GROUP_DATA (Section 6.4.4) and USER_DATA (Section 6.5.25) policies that apply to other types of Entities.
5.2.1.1 Example
One possible use of TOPIC_DATA is to send an associated XML schema that can be used to process the data stored in the associated user data structure of the Topic. The schema, which can be passed as a long sequence of characters, could be used by an XML parser to take samples of the data received for a Topic and convert them for updating some graphical user interface, web application or database.
5.2.1.2 Properties
This QosPolicy can be modified at any time. A change in the QosPolicy will cause Connext to send packets containing the new TOPIC_DATA to all of the other applications in the domain.
Because Topics are created independently by the applications that use the Topic, there may be different instances of the same Topic (same topic name and data type) in different applications. The TOPIC_DATA for different instances of the same Topic may be set differently by different applications.
5.2.1.3 Related QosPolicies
❏GROUP_DATA QosPolicy (Section 6.4.4)
❏USER_DATA QosPolicy (Section 6.5.25)
❏DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4)
5.2.1.4 Applicable Entities
❏Topics (Section 5.1)
5.2.1.5 System Resource Considerations
As mentioned earlier, the maximum size of the TOPIC_DATA is set in the topic_data_max_length field of the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4). Because Connext will allocate memory based on this value, you should only increase this value if you need to. If your system does not use TOPIC_DATA, then you can set this value to 0 to save memory. Setting the value of the TOPIC_DATA QosPolicy to hold data longer than the value set in the topic_data_max_length field will result in failure and an INCONSISTENT_QOS_POLICY return code.
However, should you decide to change the maximum size of TOPIC_DATA, you must make certain that all applications in the domain have changed the value of topic_data_max_length to be the same. If two applications have different limits on the size of TOPIC_DATA, and one application sets the TOPIC_DATA QosPolicy to hold data that is greater than the maximum size set by another application, then the DataWriters and DataReaders of that Topic between the two applications will not connect. This is also true for the GROUP_DATA (Section 6.4.4) and USER_DATA (Section 6.5.25) QosPolicies.
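The matching rule described above reduces to a simple length check, sketched here with an illustrative helper (this is not a Connext API; the function name and the idea of comparing raw lengths are a simplification of the actual discovery-time resource check):

```cpp
#include <cstddef>

// Toy model (illustrative only): whether a remote endpoint's TOPIC_DATA
// can be accepted given the local topic_data_max_length resource limit.
// If the remote data exceeds the local maximum, the endpoints do not match.
inline bool can_match(std::size_t remote_topic_data_length,
                      std::size_t local_topic_data_max_length) {
    return remote_topic_data_length <= local_topic_data_max_length;
}
```

For example, an application with a 256-byte limit can accept 64 bytes of remote TOPIC_DATA but would refuse 512 bytes, which is why all applications in the domain should agree on topic_data_max_length.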
5.3 Status Indicator for Topics
There is only one communication status defined for a Topic, ON_INCONSISTENT_TOPIC. You can use the get_inconsistent_topic_status() operation to access the current value of the status or use a TopicListener to catch the change in the status as it occurs. See Section 4.4 for a general discussion on Listeners and Statuses.
5.3.1 INCONSISTENT_TOPIC Status
In order for a DataReader and a DataWriter with the same Topic to communicate, their types must be consistent; if they are not, the Topic is considered inconsistent.
The status is a structure of type DDS_InconsistentTopicStatus, see Table 5.4. The total_count keeps track of the total number of (DataReader, DataWriter) pairs with topic names that match the Topic to which this status is attached, but whose types are inconsistent. The TopicListener’s on_inconsistent_topic() operation is invoked when this status changes (an inconsistent topic is found). You can also retrieve the current value by calling the Topic’s get_inconsistent_topic_status() operation.
The value of total_count_change reflects the number of inconsistent topics that were found since the last time get_inconsistent_topic_status() was called by user code or on_inconsistent_topic() was invoked by Connext.
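The read-and-reset behavior of total_count_change can be sketched with a small self-contained model (this is an illustration of the semantics described above, not the RTI implementation; the type and function names here are hypothetical):

```cpp
#include <cassert>

// Hypothetical model of DDS_InconsistentTopicStatus bookkeeping:
// total_count only ever grows, while total_count_change is reset each
// time the status is read (by get_inconsistent_topic_status() or when
// on_inconsistent_topic() is invoked).
struct InconsistentTopicStatusModel {
    int total_count = 0;
    int total_count_change = 0;

    // An inconsistent (DataReader, DataWriter) pair was detected.
    void onInconsistentTopic() {
        ++total_count;
        ++total_count_change;
    }

    // Models reading the status: returns a snapshot and resets the change.
    InconsistentTopicStatusModel read() {
        InconsistentTopicStatusModel snapshot = *this;
        total_count_change = 0;
        return snapshot;
    }
};
```

Reading the status twice in a row therefore reports a change only for inconsistencies found between the two reads.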
Table 5.4 DDS_InconsistentTopicStatus Structure
Type | Field Name | Description
DDS_Long | total_count | Total cumulative count of (DataReader, DataWriter) pairs whose topic names match the Topic to which this status is attached, but whose types are inconsistent.
DDS_Long | total_count_change | The change in total_count since the last time this status was read.
5.4 ContentFilteredTopics
A ContentFilteredTopic is a Topic with filtering properties. It makes it possible to subscribe to topics and at the same time specify that you are only interested in a subset of the Topic’s data.
For example, suppose you have a Topic that contains a temperature reading for a boiler, but you are only interested in temperatures outside the normal operating range. A ContentFilteredTopic can be used to limit the number of data samples a DataReader has to process and may also reduce the amount of data sent over the network.
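The effect of such a filter can be modeled with a few lines of plain C++ (the 50–150 degree "normal operating range" is an assumption made up for this sketch; samples inside it are discarded before the application sees them):

```cpp
#include <cassert>
#include <vector>

// Assumed normal operating range for the boiler example: 50-150 degrees.
// Only samples outside this range pass the filter.
bool outsideNormalRange(double temperature) {
    return temperature < 50.0 || temperature > 150.0;
}

// Models reader-side delivery: filtered-out samples are never processed
// by the DataReader, which is how a ContentFilteredTopic reduces load.
std::vector<double> deliveredSamples(const std::vector<double>& published) {
    std::vector<double> delivered;
    for (double t : published) {
        if (outsideNormalRange(t)) {
            delivered.push_back(t);
        }
    }
    return delivered;
}
```

When the DataWriter performs the filtering (see Section 5.4.2), the discarded samples are also never sent over the network.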
This section includes the following:
❏Where Filtering is Applied (Section 5.4.2)
❏Creating ContentFilteredTopics (Section 5.4.3)
❏Deleting ContentFilteredTopics (Section 5.4.4)
❏Using a ContentFilteredTopic (Section 5.4.5)
❏SQL Filter Expression Notation (Section 5.4.6)
❏STRINGMATCH Filter Expression Notation (Section 5.4.7)
❏Custom Content Filters (Section 5.4.8)
5.4.1 Overview
A ContentFilteredTopic creates a relationship between a Topic, also called the related topic, and user-specified filtering properties: a filter expression and expression parameters.
❏The filter expression evaluates a logical expression on the Topic content. The filter expression is similar to the WHERE clause in a SQL expression.
❏The parameters are strings that give values to the 'parameters' in the filter expression. There must be one parameter string for each parameter in the filter expression.
A ContentFilteredTopic is a type of topic description, and can be used to create DataReaders. However, a ContentFilteredTopic is not an Entity, so it does not have its own QosPolicies or Listeners.
A ContentFilteredTopic relates to other entities in Connext as follows:
❏ContentFilteredTopics are used when creating DataReaders, not DataWriters.
❏Multiple DataReaders can be created with the same ContentFilteredTopic.
❏A ContentFilteredTopic belongs to (is created/deleted by) a DomainParticipant.
❏A ContentFilteredTopic and Topic must be in the same DomainParticipant.
❏A ContentFilteredTopic can only be related to a single Topic.
❏A Topic can be related to multiple ContentFilteredTopics.
❏A ContentFilteredTopic can have the same name as a Topic, but ContentFilteredTopics must have unique names within the same DomainParticipant.
❏A DataReader created with a ContentFilteredTopic will use the related Topic's QoS and Listeners.
❏Changing filter parameters on a ContentFilteredTopic causes all DataReaders using the same ContentFilteredTopic to see the change.
❏A Topic cannot be deleted as long as at least one ContentFilteredTopic that has been created with it exists.
❏A ContentFilteredTopic cannot be deleted as long as at least one DataReader that has been created with the ContentFilteredTopic exists.
5.4.2 Where Filtering is Applied
Filtering may be performed on either side of the distributed application. (The DataWriter obtains the filter expression and parameters from the DataReader during discovery.)
Connext also supports network-switch filtering for Multi-channel DataWriters (see the MULTI_CHANNEL QosPolicy (DDS Extension) (Section 6.5.14)).
A DataWriter will automatically filter data samples for a DataReader if all of the following are true; otherwise filtering is performed by the DataReader.
1. The DataWriter is filtering for no more than writer_resource_limits.max_remote_reader_filters DataReaders at the same time.
•There is a limit on the number of DataReaders for which a single DataWriter can perform filtering; it is set in the writer_resource_limits.max_remote_reader_filters field of the DataWriter's resource-limits configuration.
•If a DataWriter is already filtering max_remote_reader_filters DataReaders at the same time and a new filtered DataReader is created, then the newly created DataReader (max_remote_reader_filters + 1) is not filtered. Even if one of the first max_remote_reader_filters DataReaders is deleted, that DataReader (max_remote_reader_filters + 1) will still not be filtered. However, any subsequently created DataReaders will be filtered, as long as the number of DataReaders currently being filtered does not exceed writer_resource_limits.max_remote_reader_filters.
2. The DataReader is not subscribing to data using multicast.
3. There are no more than 4 matching DataReaders in the same locator (see Peer Descriptor Format (Section 14.2.1)).
4. The DataWriter has infinite liveliness. (See LIVELINESS QosPolicy (Section 6.5.13).)
5. The DataWriter is not using an Asynchronous Publisher. (That is, the DataWriter's PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) kind is set to DDS_SYNCHRONOUS_PUBLISHER_MODE_QOS.) See Note below.
6. If you are using a custom filter (not the default one), it must be registered in the DomainParticipant of both the DataWriter and the DataReader.
Notes:
Connext supports limited writer-side filtering when asynchronous publishing is enabled: a sample will not be sent to a destination if it is filtered out by all of the DataReaders at that destination.
In addition to filtering new samples, a DataWriter can also be configured to filter previously written samples stored in the DataWriter’s queue for newly discovered DataReaders. To do so, use the refilter field in the DataWriter’s HISTORY QosPolicy (Section 6.5.10).
5.4.3 Creating ContentFilteredTopics
To create a ContentFilteredTopic that uses the default SQL filter, use the DomainParticipant’s create_contentfilteredtopic() operation:
DDS_ContentFilteredTopic *create_contentfilteredtopic(
    const char           *name,
    const DDS_Topic      *related_topic,
    const char           *filter_expression,
    const DDS_StringSeq  &expression_parameters)
Or, to use a custom filter or the builtin STRINGMATCH filter (see Section 5.4.7), use the create_contentfilteredtopic_with_filter() variation:
DDS_ContentFilteredTopic *create_contentfilteredtopic_with_filter(
    const char           *name,
    DDSTopic             *related_topic,
    const char           *filter_expression,
    const DDS_StringSeq  &expression_parameters,
    const char           *filter_name = DDS_SQLFILTER_NAME)
name Name of the ContentFilteredTopic. Note that it is legal for a ContentFilteredTopic to have the same name as a Topic in the same DomainParticipant, but a ContentFilteredTopic cannot have the same name as another ContentFilteredTopic in the same DomainParticipant. This parameter cannot be NULL.
related_topic The related Topic to be filtered. The related topic must be in the same DomainParticipant as the ContentFilteredTopic. This parameter cannot be NULL. The same related topic can be used in many different ContentFilteredTopics.
filter_expression A logical expression on the contents on the Topic. If the expression evaluates to TRUE, a sample is received; otherwise it is discarded. This parameter cannot be NULL. Once a ContentFilteredTopic is created, its filter_expression cannot be changed. The notation for this expression depends on the filter that you are using (specified by the filter_name parameter). See SQL Filter Expression Notation (Section 5.4.6) and STRINGMATCH Filter Expression Notation (Section 5.4.7).
expression_parameters A string sequence of filter expression parameters. Each parameter corresponds to a positional argument in the filter expression: element 0 corresponds to positional argument 0, element 1 to positional argument 1, and so forth. The expression_parameters can be changed with set_expression_parameters(), append_to_expression_parameter(), and remove_from_expression_parameter() (Section 5.4.5.4).
filter_name Name of the content filter to use for filtering. The filter must have been previously registered with the DomainParticipant (see Registering a Custom Filter (Section 5.4.8.2)). There are two builtin filters, DDS_SQLFILTER_NAME1 (the default filter) and DDS_STRINGMATCHFILTER_NAME; both are automatically registered. To use the STRINGMATCH filter, call create_contentfilteredtopic_with_filter() with "DDS_STRINGMATCHFILTER_NAME" as the filter_name. STRINGMATCH filter expressions have the syntax: <field name> MATCH <string pattern> (see Section 5.4.7).
If you run rtiddsgen with -notypecode, you must use a custom content filter; the builtin SQL and STRINGMATCH filters require type-code information and cannot be used.
To summarize:
❏To use the builtin default SQL filter:
•Do not use the -notypecode option when running rtiddsgen
•Call create_contentfilteredtopic()
•See SQL Filter Expression Notation (Section 5.4.6)
❏To use the builtin STRINGMATCH filter:
•Do not use the -notypecode option when running rtiddsgen
•Call create_contentfilteredtopic_with_filter(), setting the filter_name to
DDS_STRINGMATCHFILTER_NAME
•See STRINGMATCH Filter Expression Notation (Section 5.4.7)
❏To use a custom filter:
•call create_contentfilteredtopic_with_filter(), setting the filter_name to a registered custom filter
❏To use rtiddsgen with -notypecode:
•call create_contentfilteredtopic_with_filter(), setting the filter_name to a registered custom filter
Note: Be careful with memory management of the string sequence in some of the ContentFilteredTopic APIs. See the String Support section in the API Reference HTML documentation (within the Infrastructure module) for details on sequences.
1. In the Java and C# APIs, you can access the names of the builtin filters by using DomainParticipant.SQLFILTER_NAME and DomainParticipant.STRINGMATCHFILTER_NAME.
5.4.4 Deleting ContentFilteredTopics
To delete a ContentFilteredTopic, use the DomainParticipant’s delete_contentfilteredtopic() operation:
1. Make sure no DataReaders are using the ContentFilteredTopic. (If this is not true, the operation returns PRECONDITION_NOT_MET.)
2. Delete the ContentFilteredTopic by using the DomainParticipant's delete_contentfilteredtopic() operation.
DDS_ReturnCode_t delete_contentfilteredtopic (DDSContentFilteredTopic * a_contentfilteredtopic)
5.4.5 Using a ContentFilteredTopic
Once you’ve created a ContentFilteredTopic, you can use the operations listed in Table 5.5.
Table 5.5 ContentFilteredTopic Operations
Operation | Description | Reference
append_to_expression_parameter | Concatenates a string value to the input expression parameter. | Section 5.4.5.3
get_expression_parameters | Gets the expression parameters. | Section 5.4.5.1
get_filter_expression | Gets the expression. | Section 5.4.5.5
get_related_topic | Gets the related Topic. | Section 5.4.5.6
narrow | Casts a DDS_TopicDescription pointer to a ContentFilteredTopic pointer. | Section 5.4.5.7
remove_from_expression_parameter | Removes a string value from the input expression parameter. | Section 5.4.5.4
set_expression_parameters | Changes the expression parameters. | Section 5.4.5.2
5.4.5.1 Getting the Current Expression Parameters
To get the expression parameters, use the ContentFilteredTopic’s get_expression_parameters() operation:
DDS_ReturnCode_t get_expression_parameters
(struct DDS_StringSeq & parameters)
parameters The filter expression parameters.
The memory for the strings in this sequence is managed as described in the String Support section of the API Reference HTML documentation (within the Infrastructure module). In particular, be careful to avoid a situation in which Connext allocates a string on your behalf and you then reuse that string in such a way that Connext believes it to have more memory allocated to it than it actually does. This parameter cannot be NULL.
This operation gives you the expression parameters that were specified on the last successful call to set_expression_parameters() or, if that was never called, the parameters specified when the ContentFilteredTopic was created.
5.4.5.2 Setting Expression Parameters
To change the expression parameters associated with a ContentFilteredTopic:
DDS_ReturnCode_t set_expression_parameters
(const struct DDS_StringSeq & parameters)
parameters The filter expression parameters. Each element in the parameter sequence corresponds to a positional parameter in the filter expression. When using the default DDS_SQLFILTER_NAME, parameter strings are automatically converted to the member type. For example, "4" is converted to the integer 4. This parameter cannot be NULL.
Note: The ContentFilteredTopic’s operations do not manage the sequences; you must ensure that the parameter sequences are valid. Please refer to the String Support section in the API Reference HTML documentation (within the Infrastructure module) for details on sequences.
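The binding of parameter strings to positional arguments can be illustrated with a small stand-alone helper (this models the %0, %1, ... substitution conceptually; it is not part of the Connext API, and the function name is made up for this sketch):

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Expands positional parameters (%0, %1, ...) in a filter expression
// using the given parameter strings, illustrating how each element of
// the expression_parameters sequence binds to one positional argument.
std::string expandParameters(const std::string& expr,
                             const std::vector<std::string>& params) {
    std::string out;
    for (size_t i = 0; i < expr.size(); ++i) {
        if (expr[i] == '%' && i + 1 < expr.size() &&
            std::isdigit(static_cast<unsigned char>(expr[i + 1]))) {
            size_t j = i + 1;
            while (j < expr.size() &&
                   std::isdigit(static_cast<unsigned char>(expr[j]))) {
                ++j;
            }
            size_t idx = std::stoul(expr.substr(i + 1, j - i - 1));
            out += (idx < params.size()) ? params[idx] : "";
            i = j - 1;  // skip past the digits just consumed
        } else {
            out += expr[i];
        }
    }
    return out;
}
```

With parameters {"50", "150"}, the parameterized expression from Section 5.4.6.11 expands to the same predicate as its hard-coded form.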
5.4.5.3 Appending a String to an Expression Parameter
To concatenate a string to an expression parameter, use the ContentFilteredTopic's append_to_expression_parameter() operation:
DDS_ReturnCode_t append_to_expression_parameter( const DDS_Long index, const char* value);
When using the STRINGMATCH filter, index must be 0.
This function is only intended to be used with the builtin SQL and STRINGMATCH filters. This function can be used in expression parameters associated with MATCH operators (see SQL Extension: Regular Expression Matching (Section 5.4.6.4)) to add a pattern to the match pattern list. For example, if filter_expression is:
symbol MATCH 'IBM'
Then append_to_expression_parameter(0, "MSFT") would generate the expression:
symbol MATCH 'IBM,MSFT'
5.4.5.4 Removing a String from an Expression Parameter
To remove a string from an expression parameter use the ContentFilteredTopic's remove_from_expression_parameter() operation:
DDS_ReturnCode_t remove_from_expression_parameter( const DDS_Long index, const char* value)
When using the STRINGMATCH filter, index must be 0.
This function is only intended to be used with the builtin SQL and STRINGMATCH filters. It can be used in expression parameters associated with MATCH operators (see SQL Extension: Regular Expression Matching (Section 5.4.6.4)) to remove a pattern from the match pattern list. For example, if filter_expression is:
symbol MATCH 'IBM,MSFT'
Then remove_from_expression_parameter(0, "IBM") would generate the expression:
symbol MATCH 'MSFT'
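The list editing performed by these two operations can be modeled with plain string manipulation (an illustrative sketch of the comma-separated pattern list semantics; the helper names are invented and this is not the RTI implementation):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Models append_to_expression_parameter(): adds a value to the
// comma-separated pattern list held in a MATCH parameter.
std::string appendPattern(const std::string& param, const std::string& value) {
    return param.empty() ? value : param + "," + value;
}

// Models remove_from_expression_parameter(): drops a value from the
// comma-separated pattern list.
std::string removePattern(const std::string& param, const std::string& value) {
    std::stringstream ss(param);
    std::string token, out;
    while (std::getline(ss, token, ',')) {
        if (token == value) continue;            // drop the matching pattern
        out += out.empty() ? token : "," + token;
    }
    return out;
}
```

Appending "MSFT" to "IBM" yields "IBM,MSFT", and removing "IBM" from that list yields "MSFT", mirroring the examples above.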
5.4.5.5 Getting the Filter Expression
To get the filter expression that was specified when the ContentFilteredTopic was created:
const char* get_filter_expression ()
There is no corresponding set operation. The filter expression can only be set when the ContentFilteredTopic is created.
5.4.5.6 Getting the Related Topic
To get the related topic that was specified when the ContentFilteredTopic was created:
DDS_Topic * get_related_topic ()
5.4.5.7 ‘Narrowing’ a ContentFilteredTopic to a TopicDescription
To safely cast a DDS_TopicDescription pointer to a ContentFilteredTopic pointer, use the ContentFilteredTopic’s narrow() operation:
DDS_TopicDescription* narrow ()
5.4.6 SQL Filter Expression Notation
A SQL filter expression is similar to the WHERE clause in SQL. The SQL expression format provided by Connext also supports the MATCH operator as an extended operator (see Section 5.4.6.4).
The following sections provide more information:
❏SQL Grammar (Section 5.4.6.1)
❏Token Expressions (Section 5.4.6.2)
❏Type Compatibility in the Predicate (Section 5.4.6.3)
❏SQL Extension: Regular Expression Matching (Section 5.4.6.4)
❏Composite Members (Section 5.4.6.5)
❏Strings (Section 5.4.6.6)
❏Enumerations (Section 5.4.6.7)
❏Pointers (Section 5.4.6.8)
❏Arrays (Section 5.4.6.9)
❏Sequences (Section 5.4.6.10)
❏Example SQL Filter Expressions (Section 5.4.6.11)
5.4.6.1 SQL Grammar
This section describes, in BNF notation, the subset of SQL syntax that is supported for filter expressions.
The following notational conventions are used:
❏NonTerminals are typeset in italics.
❏'Terminals' are quoted and typeset in a fixed-width font. They are written in upper case in most cases in this grammar, but they are case-insensitive.
❏TOKENS are typeset in bold.
❏The notation (element // ',') represents a non-empty, comma-separated list of elements.
Expression ::= FilterExpression
             | TopicExpression
             | QueryExpression
             .
FilterExpression ::= Condition
                   .
TopicExpression ::= SelectFrom { Where } ';'
                  .
QueryExpression ::= { Condition } { 'ORDER BY' (FIELDNAME // ',') }
                  .
SelectFrom ::= 'SELECT' Aggregation 'FROM' Selection
             .
Aggregation ::= '*'
              | (SubjectFieldSpec // ',')
              .
SubjectFieldSpec ::= FIELDNAME
                   | FIELDNAME 'AS' IDENTIFIER
                   | FIELDNAME IDENTIFIER
                   .
Selection ::= TOPICNAME
            | TOPICNAME NaturalJoin JoinItem
            .
JoinItem ::= TOPICNAME
           | TOPICNAME NaturalJoin JoinItem
           | '(' TOPICNAME NaturalJoin JoinItem ')'
           .
NaturalJoin ::= 'INNER JOIN'
              | 'INNER NATURAL JOIN'
              | 'NATURAL JOIN'
              | 'NATURAL INNER JOIN'
              .
Where ::= 'WHERE' Condition
        .
Condition ::= Predicate
            | Condition 'AND' Condition
            | Condition 'OR' Condition
            | 'NOT' Condition
            | '(' Condition ')'
            .
Predicate ::= ComparisonPredicate
            | BetweenPredicate
            .
ComparisonPredicate ::= ComparisonTerm RelOp ComparisonTerm
                      .
ComparisonTerm ::= FieldIdentifier
                 | Parameter
                 .
BetweenPredicate ::= FieldIdentifier 'BETWEEN' Range
                   | FieldIdentifier 'NOT BETWEEN' Range
                   .
FieldIdentifier ::= FIELDNAME
                  | IDENTIFIER
                  .
RelOp ::= '=' | '>' | '>=' | '<' | '<=' | '<>' | 'LIKE' | 'MATCH'
        .
Range ::= Parameter 'AND' Parameter
        .
Parameter ::= INTEGERVALUE
            | CHARVALUE
            | FLOATVALUE
            | STRING
            | ENUMERATEDVALUE
            | BOOLEANVALUE
            | PARAMETER
            .
Note: INNER JOIN, INNER NATURAL JOIN, NATURAL JOIN, and NATURAL INNER JOIN are all aliases, in the sense that they have the same semantics. They are all supported because they all are part of the SQL standard.
5.4.6.2 Token Expressions
The syntax and meaning of the tokens used in the SQL grammar are described as follows:
IDENTIFIER: LETTER (PART_LETTER)*
    where LETTER: ["A"-"Z", "_", "a"-"z"]
          PART_LETTER: ["A"-"Z", "_", "a"-"z", "0"-"9"]
FIELDNAME: FieldNamePart ( "." FieldNamePart )*
    where FieldNamePart: IDENTIFIER ( "[" Index "]" )*
          Index: (["0"-"9"])+ | ["0x","0X"] (["0"-"9", "A"-"F", "a"-"f"])+
Primitive IDL types referenced by FIELDNAME are treated as different types in Predicate according to the following table:
Predicate Data Type | IDL Type
BOOLEANVALUE | boolean
INTEGERVALUE | octet, (unsigned) short, (unsigned) long, (unsigned) long long
FLOATVALUE | float, double
CHARVALUE | char, wchar
STRING | string, wstring
ENUMERATEDVALUE | enum
TOPICNAME : IDENTIFIER
INTEGERVALUE : (["+","-"])? (["0"-"9"])+ | (["+","-"])? ["0x","0X"] (["0"-"9", "A"-"F", "a"-"f"])+
CHARVALUE : "'" (~["'"])? "'"
FLOATVALUE : (["+","-"])? (["0"-"9"])* "." (["0"-"9"])+ (EXPONENT)?
where EXPONENT: ["e","E"] (["+","-"])? (["0"-"9"])+
STRING : "'" (~["'"])* "'"
ENUMERATEDVALUE : "'" ["A" - "Z", "a" - "z"] ["A" - "Z", "a" - "z", "_", "0" - "9"]* "'"
BOOLEANVALUE : ["TRUE","FALSE"]
PARAMETER : "%"
5.4.6.3 Type Compatibility in the Predicate
As seen in Table 5.6, only certain combinations of type comparisons are valid in the Predicate.
Table 5.6 Valid Type Comparisons
                | BOOLEANVALUE | INTEGERVALUE | FLOATVALUE | CHARVALUE | STRING | ENUMERATEDVALUE
BOOLEANVALUE    | YES          |              |            |           |        |
INTEGERVALUE    |              | YES          | YES        |           |        |
FLOATVALUE      |              | YES          | YES        |           |        |
CHARVALUE       |              |              |            | YES       | YES    | YES
STRING          |              |              |            | YES       | YES a  | YES
ENUMERATEDVALUE |              | YES          |            | YES b     | YES b  | YES c
a. See Section 5.4.6.4.
b. Because of the formal notation of the enumeration values, they are compatible with string and char literals, but they are not compatible with string or char variables, i.e., "MyEnum='EnumValue'" is correct, but "MyEnum=MyString" is not allowed.
c. Only for enumerations of the same type.
5.4.6.4 SQL Extension: Regular Expression Matching
The relational operator MATCH may only be used with string fields; the right-hand operand is a string pattern.
MATCH is case-sensitive.
The pattern allows limited "wild card" matching under the rules in Table 5.7.
The syntax is similar to the POSIX® fnmatch syntax1. The MATCH syntax is also similar to the 'subject' strings of TIBCO Rendezvous®. Some example expressions include:
"symbol MATCH 'NASDAQ/MSFT'"
"symbol MATCH 'NASDAQ/GOOG,NASDAQ/MSFT'"
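A simplified model of how these patterns behave can be written as a short matcher (this is an illustrative sketch of the wildcard rules in Table 5.7, not the Connext implementation; it covers ',' alternates, '/', '?', '*', and '[s-e]' ranges, and treats '/' as a character that '*' and '?' cannot consume):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Matches a field string against a single pattern (no ',' alternates).
static bool matchOne(const char* p, const char* s) {
    while (*p != '\0') {
        if (*p == '*') {                      // '*': zero or more non-'/' chars
            ++p;
            for (const char* t = s; ; ++t) {
                if (matchOne(p, t)) return true;
                if (*t == '\0' || *t == '/') return false;
            }
        } else if (*p == '?') {               // '?': any single non-'/' char
            if (*s == '\0' || *s == '/') return false;
            ++p; ++s;
        } else if (*p == '[') {               // '[charlist]' with 's-e' ranges
            if (*s == '\0') return false;
            ++p;
            bool hit = false;
            while (*p != '\0' && *p != ']') {
                if (p[1] == '-' && p[2] != ']' && p[2] != '\0') {
                    if (*s >= p[0] && *s <= p[2]) hit = true;
                    p += 3;
                } else {
                    if (*s == *p) hit = true;
                    ++p;
                }
            }
            if (*p == ']') ++p;
            if (!hit) return false;
            ++s;
        } else {                              // literal character (including '/')
            if (*p != *s) return false;
            ++p; ++s;
        }
    }
    return *s == '\0';
}

// Evaluates "<field> MATCH '<pattern list>'": the field matches if it
// matches any of the comma-separated alternate patterns.
bool matches(const std::string& field, const std::string& patternList) {
    std::stringstream ss(patternList);
    std::string alt;
    while (std::getline(ss, alt, ',')) {
        if (matchOne(alt.c_str(), field.c_str())) return true;
    }
    return false;
}
```

With this model, "NASDAQ/GOOG,NASDAQ/MSFT" matches either symbol, and "NASDAQ/[M-Y]*" matches NASDAQ symbols starting with a letter between M and Y.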
5.4.6.5 Composite Members
Any member can be used in the filter expression, with the following exceptions:
1. See http://www.opengroup.org/onlinepubs/000095399/functions/fnmatch.html.
Table 5.7 Wild Card Matching
Character | Meaning
, | A , separates a list of alternate patterns. The field string is matched if it matches one or more of the patterns.
/ | A / in the pattern string matches a / in the field string. It separates a sequence of mandatory substrings.
? | A ? in the pattern string matches any single non-special character in the field string.
* | A * in the pattern string matches 0 or more non-special characters in the field string.
% | This special character is used to designate filter expression parameters.
\ | (Not supported) Escape character for special characters.
[charlist] | Matches any one of the characters in charlist.
[!charlist] or [^charlist] | (Not supported) Matches any one of the characters not in charlist.
[s-e] | Matches any character from s to e, inclusive.
[!s-e] or [^s-e] | (Not supported) Matches any character not in the interval s to e.
❏64-bit enumerations are not supported
❏bitfields are not supported
❏LIKE is not supported
Composite members are accessed using the familiar dot notation, such as "x.y.z > 5". For unions, the notation is special due to the nature of the IDL union type.
On the publishing side, you can access the union discriminator with myunion._d and the actual member with myunion._u.mymember. If you want to use a ContentFilteredTopic on the subscriber side and filter a sample based on a union member, use the same notation to refer to the discriminator (myunion._d) and the members (myunion._u.mymember).
5.4.6.6 Strings
The filter expression and parameters can use IDL strings. String constants must appear between single quotation marks (').
For example:
"fish = 'salmon' "
Strings used as parameter values must contain the enclosing quotation marks (') within the parameter value; do not place the quotation marks within the expression statement. For example, the expression " symbol MATCH %0 " with parameter 0 set to " 'IBM' " is legal, whereas the expression " symbol MATCH '%0' " with parameter 0 set to " IBM " will not compile.
5.4.6.7 Enumerations
A filter expression can use enumeration values, such as GREEN, instead of the numerical value. For example, if x is an enumeration of GREEN, YELLOW and RED, the following expressions are valid:
"x = 'GREEN'" "X < 'RED'"
5.4.6.8 Pointers
Pointers can be used in filter expressions and are automatically dereferenced to the correct value.
For example:
struct Point {
    long x;
    long y;
};
struct Rectangle {
    Point *u_l;
    Point *l_r;
};
The following expression is valid on a Topic of type Rectangle:
"u_l.x > l_r.x"
5.4.6.9 Arrays
Arrays are accessed with the familiar [] notation.
For example:
struct ArrayType {
    long value[255][5];
};
The following expression is valid on a Topic of type ArrayType:
"value[244][2] = 5"
To compare an array of bytes (octets in IDL), instead of comparing each individual element of the array using [] notation, Connext provides a helper function, hex(), which represents the whole array. To use it, write &hex() and pass the byte array as a sequence of hexadecimal values.
For example:
&hex (07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16)
Here each two-digit hexadecimal value represents one octet of the 16-byte array.
Note: If the length of the octet array represented by the hex() function does not match the length of the field being compared, it will result in a compilation error.
For example:
struct ArrayType {
    octet value[2];
};
The following expression is valid:
"value = &hex(12 0A)"
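The hex() comparison, including the length-match requirement noted above, can be modeled in a few lines (an illustrative sketch; the helper names are invented and this is not the Connext parser):

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Parses a whitespace-separated list of hexadecimal byte values,
// e.g. "12 0A" -> {0x12, 0x0A}, modeling the argument of &hex().
std::vector<uint8_t> parseHex(const std::string& text) {
    std::istringstream in(text);
    std::vector<uint8_t> bytes;
    std::string token;
    while (in >> token) {
        bytes.push_back(static_cast<uint8_t>(std::stoul(token, nullptr, 16)));
    }
    return bytes;
}

// Models "field = &hex(...)": equal only if both length and contents match.
bool octetArrayEquals(const std::vector<uint8_t>& field,
                      const std::string& hexList) {
    return field == parseHex(hexList);
}
```

A length mismatch simply never compares equal here; in Connext it is rejected earlier, as a compilation error in the filter expression.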
5.4.6.10 Sequences
Sequence elements can be accessed using the () or [] notation.
For example:
struct SequenceType {
    sequence<long> s;
};
The following expressions are valid on a Topic of type SequenceType:
"s(1) = 5" "s[1] = 5"
5.4.6.11 Example SQL Filter Expressions
Assume that you have a Topic with two floats, X and Y, which are the coordinates of an object moving inside a rectangle measuring 200 x 200 units. This object moves quite a bit, generating lots of samples that you are not interested in. Instead you only want to receive samples outside the middle of the rectangle, as seen in Figure 5.5. That is, you want to filter out data points in the gray box.
Figure 5.5 Filtering Example
The filter expression would look like this (remember the expression is written so that samples that we do want will pass):
"(X < 50 or X > 150) and (Y < 50 or Y > 150)"
While this filter works, it cannot be changed after the ContentFilteredTopic has been created. Suppose you would like the ability to adjust the coordinates that are considered outside the acceptable range (changing the size of the gray box). You can achieve this by using filter parameters. A more flexible way to write the expression is this:
"(X < %0 or X > %1) and (Y < %2 or Y > %3)"
Recall that when you create a ContentFilteredTopic (see Section 5.4.3), you pass an expression_parameters string sequence as one of the parameters. Each element in the string sequence corresponds to one positional argument.
See the String and Sequence Support sections of the API Reference HTML documentation (from the Modules page, select Infrastructure).
In C++, the filter parameters could be assigned like this:
FilterParameter[0] = "50";
FilterParameter[1] = "150";
FilterParameter[2] = "50";
FilterParameter[3] = "150";
With these parameters, the filter expression is identical to the first approach. However, it is now possible to change the parameters by calling set_expression_parameters(). For example, perhaps you decide that you only want to see data points where X < 10 or X > 190. To make this change:
FilterParameter[0] = "10";
FilterParameter[1] = "190";
set_expression_parameters(FilterParameter);
Note: The new filter parameters will affect all DataReaders that have been created with this ContentFilteredTopic.
5.4.7 STRINGMATCH Filter Expression Notation
The STRINGMATCH Filter is a subset of the SQL filter; it only supports the MATCH relational operator on a single string field. It is introduced mainly for the use case of partitioning data according to channels in the DataWriter's MULTI_CHANNEL QosPolicy (DDS Extension) (Section 6.5.14) in Market Data applications.
A STRINGMATCH filter expression has the following syntax:
<field name> MATCH <string pattern>
The STRINGMATCH filter is provided to support the narrow use case of filtering a single string field of the sample against a comma-separated list of string patterns.
The STRINGMATCH filter must contain only one <field name> and a single occurrence of the MATCH operator. The <string pattern> must be either the single parameter %0, or a single, constant string pattern.
During creation of a STRINGMATCH filter, the <string pattern> is automatically parameterized. That is, if the <string pattern> specified in the filter expression is not the parameter %0, then the expression is internally converted to <field name> MATCH %0, with parameter 0 initialized to the specified <string pattern>.
The initial matching string list is converted to an explicit parameter value so that subsequent additions and deletions of string values to and from the list of matching strings may be performed with the append_to_expression_parameter() and remove_from_expression_parameter() operations mentioned above.
5.4.7.1 Example STRINGMATCH Filter Expressions
❏This expression evaluates to TRUE if the value of symbol is equal to NASDAQ/MSFT: symbol MATCH 'NASDAQ/MSFT'
❏This expression evaluates to TRUE if the value of symbol is equal to NASDAQ/IBM or
NASDAQ/MSFT:
symbol MATCH 'NASDAQ/IBM,NASDAQ/MSFT'
❏This expression evaluates to TRUE if the value of symbol corresponds to NASDAQ and starts with a letter between M and Y:
symbol MATCH 'NASDAQ/[M-Y]*'
5.4.7.2 STRINGMATCH Filter Expression Parameters
In the builtin STRINGMATCH filter, there is one, and only one, parameter: parameter 0. (If you want to add more parameters, see Appending a String to an Expression Parameter (Section 5.4.5.3).) The parameter can be specified explicitly using the same syntax as the SQL filter or implicitly by using a constant string pattern. For example:
symbol MATCH %0      (Explicit parameter)
symbol MATCH 'IBM'   (Implicit parameter initialized to IBM)
Strings used as parameter values must contain the enclosing quotation marks (') within the parameter value; do not place the quotation marks within the expression statement. For
example, the expression " symbol MATCH %0 " with parameter 0 set to " 'IBM' " is legal, whereas the expression " symbol MATCH '%0' " with parameter 0 set to " IBM " will not compile.
5.4.8 Custom Content Filters
By default, a ContentFilteredTopic will use the builtin SQL filter, DDS_SQLFILTER_NAME (see SQL Filter Expression Notation (Section 5.4.6)).
If you want to use a different filter, you must register it first, then create the ContentFilteredTopic using create_contentfilteredtopic_with_filter() (see Creating ContentFilteredTopics (Section 5.4.3)).
One reason to use a custom filter is that the default filter can only filter based on relational operations between topic members, not on a computation involving topic members. For example, if you want to filter based on the sum of the members, you must create your own filter.
Notes:
❏The API for using a custom content filter is subject to change in a future release.
❏Custom content filters are not supported when using the .NET APIs.
5.4.8.1 Filtering on the Writer Side with Custom Filters
There are two approaches for performing writer-side filtering with a custom filter. The first approach is to evaluate each written sample separately against the filter of every matching DataReader.
The second approach is to evaluate the written sample once for the writer and then rely on the filter implementation to provide a set of readers whose filter passes the sample. This approach allows the filter implementation to cache the result of filtering, if possible. For example, consider a scenario where the data is described by the struct shown below and x only takes values in the range 10 < x < 20:
struct MyData {
    int x;
    int y;
};
If the filter expression is based only on the x field, the filter implementation can maintain a hash map for all the different values of x and cache the filtering results in the hash map. Then any future evaluations will only be O(1), because it only requires a lookup in the hash map.
But if, in the same example, a reader has a content filter that is based on both x and y, or just y, the filter implementation cannot cache the filtering results based on x alone, because the verdict also depends on y.
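The hash-map caching idea can be sketched as follows (an illustrative model of a custom filter whose expression depends only on x; the class and the 10 < x < 20 expression are assumptions for this sketch, not RTI API):

```cpp
#include <cassert>
#include <unordered_map>

// Sketch of writer-side result caching: when the expression depends
// only on x, a custom filter can memoize the verdict per x value, so
// repeated values cost only an O(1) hash-map lookup.
struct CachingFilter {
    int evaluations = 0;  // counts real expression evaluations, for illustration
    std::unordered_map<int, bool> cache;

    // Stand-in for evaluating a compiled filter expression on x only.
    bool evaluateExpression(int x) {
        ++evaluations;
        return x > 10 && x < 20;   // example expression: 10 < x < 20
    }

    bool filter(int x) {
        auto it = cache.find(x);
        if (it != cache.end()) return it->second;  // cache hit: no evaluation
        bool pass = evaluateExpression(x);
        cache.emplace(x, pass);
        return pass;
    }
};
```

A filter on both x and y could not use this per-x cache, which is exactly the distinction the writer_side_filter_optimization field in Table 5.8 lets the filter implementation report.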
Table 5.8 DDS_ExpressionProperty
Type | Field Name | Description
DDS_Boolean | key_only_filter | Indicates if the filter expression is based only on key fields. In this case, Connext itself can cache the filtering results.
DDS_Boolean | writer_side_filter_optimization | Indicates if the filter implementation can cache the filtering result for the expression provided. If this is true, then Connext will do no caching or explicit filter evaluation for the associated DataReader. It will instead rely on the filter implementation to provide appropriate results.
5.4.8.2 Registering a Custom Filter
To use a custom filter, it must be registered in the following places:
❏Register the custom filter in any subscribing application in which the filter is used to create a ContentFilteredTopic and corresponding DataReader.
❏In each publishing application, you only need to register the custom filter if you want to perform writer-side filtering.
For example, suppose Application A on the subscription side creates a Topic named X and a ContentFilteredTopic named filteredX (and a corresponding DataReader), using a previously registered content filter, myFilter. With only that, you will have filtering on the subscription side. If you also want to perform filtering in any application that publishes Topic X, then you also need to register the same definition of the ContentFilter myFilter in that application.
To register a new filter, use the DomainParticipant’s register_contentfilter() operation1:
DDS_ReturnCode_t register_contentfilter(
    const char * filter_name,
    const DDSContentFilter * contentfilter)
filter_name The name of the filter. The name must be unique within the DomainParticipant. The filter_name cannot have a length of 0. The same filtering functions and handle can be registered under different names.
content_filter This class specifies the functions that will be used to process the filter.
You must derive from the DDSContentFilter base class and implement the virtual compile, evaluate, and finalize functions described below.
Optionally, you can derive from the DDSWriterContentFilter base class instead, to implement additional filtering operations that will be used by the DataWriter when performing writer-side filtering.
•compile
The function that will be used to compile a filter expression and parameters. Connext will call this function when a ContentFilteredTopic is created and when the filter parameters are changed. This parameter cannot be NULL. See Compile Function (Section 5.4.8.5). This is a member of DDSContentFilter and DDSWriterContentFilter.
•evaluate
The function that will be called by Connext each time a sample is received. Its purpose is to evaluate the sample based on the filter. This parameter cannot be NULL. See Evaluate Function (Section 5.4.8.6). This is a member of DDSContentFilter and DDSWriterContentFilter.
•finalize
The function that will be called by Connext when an instance of the custom content filter is no longer needed. This parameter may be NULL. See Finalize Function (Section 5.4.8.7). This is a member of DDSContentFilter and DDSWriterContentFilter.
1. This operation is an extension to the DDS standard.
•writer_attach
The function that will be used to create some state required to perform filtering on the writer side using the operations provided in DDSWriterContentFilter. Connext will call this function for every DataWriter; it will be called only the first time the DataWriter matches a DataReader using the specified filter. This function will not be called for any subsequent DataReaders that match the DataWriter and are using the same filter. See Writer Attach Function (Section 5.4.8.8). This is a member of DDSWriterContentFilter.
•writer_detach
The function that will be used to delete any state created using the writer_attach function. Connext will call this function when the DataWriter is deleted. See Writer Detach Function (Section 5.4.8.9). This is a member of DDSWriterContentFilter.
•writer_compile
The function that will be used by the DataWriter to compile the filter expression and parameters provided by the reader. Connext will call this function when the DataWriter discovers a DataReader with a ContentFilteredTopic or when a DataWriter is notified of a change in a DataReader’s filter parameters. This function will receive as input a DDS_Cookie_t which uniquely identifies the DataReader for which the function was invoked. See Writer Compile Function (Section 5.4.8.10). This is a member of DDSWriterContentFilter.
•writer_evaluate
The function that will be called by Connext every time a DataWriter writes a new sample. Its purpose is to evaluate the sample for all the readers for which the DataWriter is performing writer-side filtering and to return the list of DataReaders whose filters pass the sample. See Writer Evaluate Function (Section 5.4.8.11). This is a member of DDSWriterContentFilter.
•writer_return_loan
The function that will be called by Connext to return the loan on a sequence of DDS_Cookie_t provided by the writer_evaluate function. See Writer Return Loan Function (Section 5.4.8.12). This is a member of DDSWriterContentFilter.
•writer_finalize
The function that will be called by Connext to notify the filter implementation that the DataWriter is no longer matching with a DataReader for which it was previously performing writer-side filtering. See Writer Finalize Function (Section 5.4.8.13). This is a member of DDSWriterContentFilter.
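Putting the pieces above together, a minimal registration might look like the following sketch. The class name MyContentFilter, the variable names, and the abbreviated callback bodies are illustrative assumptions, not code from the product, and this fragment will not compile without the RTI Connext headers and full callback implementations:

```cpp
// Sketch only; implement compile(), evaluate(), and finalize() as
// described in Sections 5.4.8.5 through 5.4.8.7.
class MyContentFilter : public DDSContentFilter {
public:
    // ... virtual function implementations go here ...
};

MyContentFilter my_filter;  // must remain valid while registered
DDS_ReturnCode_t retcode =
    participant->register_contentfilter("myFilter", &my_filter);
if (retcode != DDS_RETCODE_OK) {
    // handle error
}
```

A subscribing application would then pass "myFilter" as the filter name when creating its ContentFilteredTopic.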
5.4.8.3 Unregistering a Custom Filter
To unregister a filter, use the DomainParticipant’s unregister_contentfilter() operation1, which is useful if you want to reuse a particular filter name. (Note: You do not have to unregister the filter before deleting the parent DomainParticipant. If you do not need to reuse the filter name to register another filter, there is no reason to unregister the filter.)
DDS_ReturnCode_t unregister_contentfilter(const char * filter_name)
filter_name The name of the previously registered filter. The name must be unique within the DomainParticipant. The filter_name cannot have a length of 0.
1. This operation is an extension to the DDS standard.
If you attempt to unregister a filter that is still being used by a ContentFilteredTopic, unregister_contentfilter() will return PRECONDITION_NOT_MET.
If there are still existing discovered DataReaders with the same filter_name and the filter's compile function has previously been called on the discovered DataReaders, the filter’s finalize function will be called on those discovered DataReaders before the content filter is unregistered. This means filtering will be performed on the application that is creating the DataReader.
5.4.8.4 Retrieving a ContentFilter
If you know the name of a ContentFilter, you can get a pointer to its structure. If the ContentFilter has not already been registered, this operation will return NULL.
DDS_ContentFilter *lookup_contentfilter (const char * filter_name)
5.4.8.5 Compile Function
The compile function specified in the ContentFilter will be used to compile a filter expression and parameters. Please note that the term ‘compile’ is intentionally defined very broadly. It is entirely up to you, as the user, to decide what this function should do. The only requirement is that the function return OK on successful execution. For example:
DDS_ReturnCode_t sample_compile_function(
    void ** new_compile_data,
    const char * expression,
    const DDS_StringSeq & parameters,
    const DDS_TypeCode * type_code,
    const char * type_class_name,
    void * old_compile_data)
{
    *new_compile_data = (void*)DDS_String_dup(parameters[0]);
    return DDS_RETCODE_OK;
}
new_compile_data A user-specified opaque pointer to the compiled state for this instance of the content filter. Connext passes this value to the evaluate and finalize functions, and to later compile calls as old_compile_data (see below).
expression An ASCIIZ string with the filter expression the ContentFilteredTopic was created with. Note that the memory used by the parameter pointer is owned by Connext. If you want to manipulate this string, you must make a copy of it first. Do not free the memory for this string.

parameters A string sequence of expression parameters used to create the ContentFilteredTopic. The string sequence is equal (but not identical) to the string sequence passed to create_contentfilteredtopic() (see expression_parameters in Section 5.4.3). Important: The sequence passed to the compile function is owned by Connext and must not be referred to outside the compile function.

type_code A pointer to the type code of the related Topic. A type code is a description of the topic members, such as their type (long, octet, etc.), but does not contain any information with respect to the memory layout of the structures. The type code can be used to write filters that can be used with any type. See Using Generated Types without Connext (Standalone) (Section 3.7). [Note: If you are using the Java API, this parameter will always be NULL.]
type_class_name Fully qualified class name of the related Topic.
old_compile_data The new_compile_data value from a previous call to this instance of a content filter. If compile is called more than once for an instance of a ContentFilteredTopic (such as if the expression parameters are changed), then the new_compile_data value returned by the previous invocation is passed in the old_compile_data parameter (which can be NULL). If this is a new instance of the filter, NULL is passed. This parameter is useful for freeing or reusing previously allocated resources.
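The old_compile_data lifecycle can be demonstrated without the Connext API at all. The sketch below (function name, int return code standing in for DDS_ReturnCode_t, and the single-parameter signature are all illustrative assumptions) shows the pattern of freeing the state from the previous compile invocation before installing the new one, mirroring the manual's sample_compile_function:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Sketch of the compile-data lifecycle: on every (re)compile, release the
// state produced by the previous invocation before installing the new one.
int sample_compile(void** new_compile_data,
                   const char* first_parameter,
                   void* old_compile_data) {
    // Free the state returned by the previous compile call, if any.
    std::free(old_compile_data);
    // Cache our own copy of the first filter parameter; the sequence
    // passed in is owned by the caller and must not be referenced later.
    char* copy = (char*)std::malloc(std::strlen(first_parameter) + 1);
    std::strcpy(copy, first_parameter);
    *new_compile_data = copy;
    return 0;  // stands in for DDS_RETCODE_OK
}
```

When the filter parameters change, the second call receives the first call's pointer as old_compile_data and can safely release it.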
5.4.8.6 Evaluate Function
The evaluate function specified in the ContentFilter will be called each time a sample is received. This function’s purpose is to determine if a sample should be filtered out (not put in the receive queue).
For example:
DDS_Boolean sample_evaluate_function(
    void * compile_data,
    const void * sample,
    struct DDS_FilterSampleInfo * meta_data)
{
    char *parameter = (char*)compile_data;
    DDS_Long x;
    Foo *foo_sample = (Foo*)sample;
    sscanf(parameter, "%d", &x);
    /* return TRUE to accept the sample, FALSE to filter it out;
       the comparison below is illustrative */
    return (foo_sample->x > x ? DDS_BOOLEAN_FALSE : DDS_BOOLEAN_TRUE);
}
The function may use the following parameters:
compile_data The last return value from the compile function for this instance of the content filter. Can be NULL.
sample A pointer to a C structure with the data to filter. Note that the evaluate function always receives deserialized data.
meta_data A pointer to the meta data associated with the sample.
Note: Currently the meta_data field only supports related_sample_identity (described in Table 6.15, “DDS_WriteParams_t”).
5.4.8.7 Finalize Function
The finalize function specified in the ContentFilter will be called when an instance of the custom content filter is no longer needed. When this function is called, it is safe to free all resources used by this particular instance of the custom content filter.
For example:
void sample_finalize_function ( void* compile_data) { /* free parameter string from compile function */ DDS_String_free((char *)compile_data);
}
The finalize function may use the following optional parameters:
system_key See Section 5.4.8.5.
handle This is the opaque pointer returned by the last call to the compile function.
5.4.8.8 Writer Attach Function
The writer_attach function specified in the WriterContentFilter will be used to create some state that the filter can use to perform writer-side filtering.
The function has the following parameter:
writer_filter_data A user-specified opaque pointer to the state created by this function. Connext passes this value back to the other writer-side filtering functions.
5.4.8.9 Writer Detach Function
The writer_detach function specified in the WriterContentFilter will be used to free up any state that was created using the writer_attach function.
The function has the following parameter:
writer_filter_dataA pointer to the state created using the writer_attach function.
5.4.8.10 Writer Compile Function
The writer_compile function specified in the WriterContentFilter will be used by a DataWriter to compile a filter expression and parameters associated with a DataReader for which the DataWriter is performing filtering. The function will receive as input a DDS_Cookie_t that uniquely identifies the DataReader for which the function was invoked.
The function has the following parameters:
writer_filter_data A pointer to the state created using the writer_attach function.

prop A pointer to a DDS_ExpressionProperty structure (see Table 5.8). This is an output parameter that allows the filter to indicate to Connext whether the expression is based only on key fields and whether the filtering results can be cached.
expression An ASCIIZ string with the filter expression the ContentFilteredTopic was created with. Note that the memory used by the parameter pointer is owned by Connext. If you want to manipulate this string, you must make a copy of it first. Do not free the memory for this string.

parameters A string sequence of expression parameters used to create the ContentFilteredTopic. The string sequence is equal (but not identical) to the string sequence passed to create_contentfilteredtopic() (see expression_parameters in Creating ContentFilteredTopics (Section 5.4.3)). Important: The sequence passed to the compile function is owned by Connext and must not be referred to outside the writer_compile function.

type_code A pointer to the type code of the related Topic. A type code is a description of the topic members, such as their type (long, octet, etc.), but does not contain any information with respect to the memory layout of the structures. The type code can be used to write filters that can be used with any type. See Using Generated Types without Connext (Standalone) (Section 3.7). [Note: If you are using the Java API, this parameter will always be NULL.]

type_class_name The fully qualified class name of the related Topic.

cookie A DDS_Cookie_t to uniquely identify the DataReader for which the writer_compile function was called.
5.4.8.11 Writer Evaluate Function
The writer_evaluate function specified in the WriterContentFilter will be used by a DataWriter to retrieve the list of DataReaders whose filters pass the sample. The writer_evaluate function returns a sequence of cookies that identifies this set of DataReaders.
The function has the following parameters:
writer_filter_data A pointer to the state created using the writer_attach function.

sample A pointer to the data to be filtered. Note that the writer_evaluate function always receives deserialized data.

meta_data A pointer to the meta data associated with the sample. Note: Currently the meta_data field only supports related_sample_identity (described in Table 6.15, “DDS_WriteParams_t”).
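Conceptually, writer_evaluate maps one written sample to the subset of matched readers whose filters pass it. The plain C++ sketch below models this independently of the Connext API: cookies are modeled as integers, and a per-reader threshold stands in for the compiled expression that writer_compile would have produced (all names and the threshold test are assumptions):

```cpp
#include <cassert>
#include <map>
#include <vector>

struct Sample { int x; };

// Per-writer filter state, as writer_attach might create it: each matched
// reader (identified by a cookie) has a compiled expression, here reduced
// to a simple "x > threshold" test stored by a writer_compile step.
struct WriterFilterState {
    std::map<int, int> threshold_by_cookie;  // cookie -> compiled filter
};

// Sketch of writer_evaluate: return the cookies of every reader whose
// filter passes the sample, so the writer sends it only to those readers.
std::vector<int> writer_evaluate(const WriterFilterState& state,
                                 const Sample& sample) {
    std::vector<int> passing;
    for (const auto& entry : state.threshold_by_cookie) {
        if (sample.x > entry.second) {
            passing.push_back(entry.first);
        }
    }
    return passing;
}
```

In the real API the returned sequence is loaned to Connext and handed back through writer_return_loan; this sketch sidesteps that by returning the vector by value.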
5.4.8.12 Writer Return Loan Function
Connext uses the writer_return_loan function specified in the WriterContentFilter to indicate to the filter implementation that it has finished using the sequence of cookies returned by the filter’s writer_evaluate function. Your filter implementation should not free the memory associated with the cookie sequence before the writer_return_loan function is called.
The function has the following parameters:
writer_filter_data A pointer to the state created using the writer_attach function.
cookies The sequence of cookies for which the writer_return_loan function was called.
5.4.8.13 Writer Finalize Function
The writer_finalize function specified in the WriterContentFilter will be called when the DataWriter no longer matches with a DataReader that was created with a ContentFilteredTopic. This allows the filter implementation to delete any state it was maintaining for that DataReader.
The function has the following parameters:
writer_filter_data A pointer to the state created using the writer_attach function.
cookie A DDS_Cookie_t to uniquely identify the DataReader for which writer_finalize was called.
Chapter 6 Sending Data
This chapter discusses how to create, configure, and use Publishers and DataWriters to send data. It describes how these entities interact, as well as the types of operations that are available for them.
This chapter includes the following sections:
❏Preview: Steps to Sending Data (Section 6.1)
❏Publisher/Subscriber QosPolicies (Section 6.4)
❏DataWriter QosPolicies (Section 6.5)
❏FlowControllers (DDS Extension) (Section 6.6)
The goal of this chapter is to help you become familiar with the Entities you need for sending data.
6.1 Preview: Steps to Sending Data
To send samples of a data instance:
1. Create and configure the required Entities:
a. Create a DomainParticipant (see Section 8.3.1).
b. Register user data types1 with the DomainParticipant. For example, the ‘FooDataType’.
c. Use the DomainParticipant to create a Topic with the registered data type.
d. Optionally2, use the DomainParticipant to create a Publisher.
e. Use the Publisher or DomainParticipant to create a DataWriter for the Topic.
f. Use a type-safe way to cast the generic DataWriter created by the Publisher to a type-specific DataWriter. For example, ‘FooDataWriter’.
1. Type registration is not required for built-in types.
2.You are not required to explicitly create a Publisher; instead, you can use the 'implicit Publisher' created from the DomainParticipant. See Creating Publishers Explicitly vs. Implicitly (Section 6.2.1).
g. Optionally, register data instances with the DataWriter. If the Topic’s user data type contains key fields, then registering a data instance (data with a specific key value) will improve performance when repeatedly sending data with the same key. You may register many different data instances; each registration will return an instance handle corresponding to the specific key value.
2. Every time there is changed data to be published:
a. Store the data in a variable of the correct data type (for instance, variable ‘Foo’ of the type ‘FooDataType’).
b. Call the FooDataWriter’s write() operation, passing it a reference to the variable ‘Foo’. For unkeyed data types, or if the instance has not been registered, pass DDS_HANDLE_NIL as the instance handle.
For keyed data types, you should pass in the instance handle corresponding to the instance stored in ‘Foo’, if you have registered the instance previously. This means that the data stored in ‘Foo’ has the same key value that was used to create the instance handle.
c. The write() function will take a snapshot of the contents of ‘Foo’ and store it in Connext internal buffers, from where the data sample is sent under the criteria set by the Publisher’s and DataWriter’s QosPolicies. If there are matched DataReaders, the data sample will have been passed to the physical transport plug-in by the time write() returns.
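The write sequence above can be sketched as follows for a hypothetical type ‘Foo’. Entity creation and error handling are abbreviated, and the field foo.x is an assumption; this fragment will not compile without the generated type-support code and the Connext headers:

```cpp
// Sketch only; assumes 'writer' was created for the Topic in step 1e.
FooDataWriter* foo_writer = FooDataWriter::narrow(writer);  // step 1f

Foo foo;        // step 2a: store the data in a variable of the right type
foo.x = 42;     // 'x' is an illustrative field of FooDataType

// step 1g (optional, keyed types only): register the instance once
DDS_InstanceHandle_t handle = foo_writer->register_instance(foo);

// step 2b: write; pass DDS_HANDLE_NIL if the instance was not registered
DDS_ReturnCode_t retcode = foo_writer->write(foo, handle);
if (retcode != DDS_RETCODE_OK) {
    // handle error
}
```

Registering the instance up front means the key fields of ‘foo’ are hashed only once, rather than on every write with the same key.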
6.2Publishers
An application that intends to publish information needs the following Entities: DomainParticipant, Topic, Publisher, and DataWriter. All Entities have a corresponding specialized Listener and a set of QosPolicies. A Listener is how Connext notifies your application of status changes relevant to the Entity. The QosPolicies allow your application to configure the behavior and resources of the Entity.
❏A DomainParticipant defines the domain in which the information will be made available.
❏A Topic defines the name under which the data will be published, as well as the type (format) of the data itself.
❏An application writes data using a DataWriter. The DataWriter is bound at creation time to a Topic, thus specifying the name under which the DataWriter will publish the data and the type associated with the data. The application uses the DataWriter’s write() operation to indicate that a new value of the data is available for dissemination.
❏A Publisher manages the activities of several DataWriters. The Publisher determines when the data is actually sent to other applications. Depending on the settings of various QosPolicies of the Publisher and DataWriter, data may be buffered to be sent with the data of other DataWriters or not sent at all. By default, the data is sent as soon as the DataWriter’s write() function is called.
You may have multiple Publishers, each managing a different set of DataWriters, or you may choose to use one Publisher for all your DataWriters.
For more information, see Creating Publishers Explicitly vs. Implicitly (Section 6.2.1).
Figure 6.1 shows how these Entities are related.
Figure 6.1 Publication Module
Publishers are used to perform the operations listed in Table 6.1.
Table 6.1 Publisher Operations

Working with DataWriters:
    begin_coherent_changes: Indicates that the application will begin a coherent set of modifications.
    create_datawriter: Creates a DataWriter that will belong to the Publisher.
    create_datawriter_with_profile: Creates a DataWriter, setting its QoS based on a specified QoS profile.
    copy_from_topic_qos: Copies relevant QosPolicies from a Topic into a DataWriterQoS structure.
    delete_contained_entities: Deletes all of the DataWriters that were created by the Publisher.
    delete_datawriter: Deletes a DataWriter that belongs to the Publisher.
    end_coherent_changes: Ends the coherent set initiated by begin_coherent_changes().
    get_all_datawriters: Retrieves all the DataWriters created from this Publisher.
    get_default_datawriter_qos: Copies the Publisher’s default DataWriterQos values into a DataWriterQos structure.
    get_status_changes: Will always return 0, since there are no Statuses currently defined for Publishers.
    lookup_datawriter: Retrieves a DataWriter previously created for a specific Topic.
    set_default_datawriter_qos: Sets or changes the default DataWriterQos values.
    set_default_datawriter_qos_with_profile: Sets or changes the default DataWriterQos values based on a QoS profile.
    wait_for_acknowledgments: Blocks until all data written by the Publisher’s reliable DataWriters is acknowledged by all matched reliable DataReaders, or until a specified timeout duration, max_wait, elapses.

Libraries and Profiles:
    get_default_library: Gets the Publisher’s default QoS profile library.
    get_default_profile: Gets the Publisher’s default QoS profile.
    get_default_profile_library: Gets the library that contains the Publisher’s default QoS profile.
    set_default_library: Sets the default library for a Publisher.
    set_default_profile: Sets the default profile for a Publisher.

Participants:
    get_participant: Gets the DomainParticipant that was used to create the Publisher.

Publishers:
    enable: Enables the Publisher.
    get_qos: Gets the Publisher’s current QosPolicy settings. This is most often used in preparation for calling set_qos().
    set_qos: Sets the Publisher’s QoS. You can use this operation to change the values for the Publisher’s QosPolicies. Note, however, that not all QosPolicies can be changed after the Publisher has been created.
    set_qos_with_profile: Sets the Publisher’s QoS based on a specified QoS profile.
    get_listener: Gets the currently installed Listener.
    set_listener: Sets the Publisher’s Listener. If you created the Publisher without a Listener, you can use this operation to add one later.
    suspend_publications: Provides a hint that multiple data objects within the Publisher are about to be written. Connext does not currently use this hint.
    resume_publications: Reverses the action of suspend_publications().

Note: Some operations cannot be used within a listener callback; see Restricted Operations in Listener Callbacks (Section 4.5.1).

6.2.1 Creating Publishers Explicitly vs. Implicitly

To send data, your application must have a Publisher. However, you are not required to explicitly create one. If you do not create one, the middleware will implicitly create a Publisher the first time you create a DataWriter using the DomainParticipant’s operations. It will be created with default QoS (DDS_PUBLISHER_QOS_DEFAULT) and no Listener.
A Publisher (implicit or explicit) gets its own default QoS and the default QoS for its child DataWriters from the DomainParticipant. These default QoS are set when the Publisher is created. (This is true for Subscribers and DataReaders, too.)
The 'implicit Publisher' can be accessed using the DomainParticipant’s get_implicit_publisher() operation (see Section 8.3.9). You can use this ‘implicit Publisher’ just like any other Publisher (it has the same operations, QosPolicies, etc.). So you can change the mutable QoS and set a Listener if desired.
DataWriters are created by calling create_datawriter() or create_datawriter_with_profile()— these operations exist for DomainParticipants and Publishers. If you use the DomainParticipant to create a DataWriter, it will belong to the implicit Publisher. If you use a Publisher to create a DataWriter, it will belong to that Publisher.
The middleware will use the same implicit Publisher for all DataWriters that are created using the
DomainParticipant’s operations.
Having the middleware implicitly create a Publisher allows you to skip the step of creating a Publisher. However, having all your DataWriters belong to the same Publisher can reduce the concurrency of the system because all the write operations will be serialized.
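As a sketch (assuming ‘participant’ and ‘topic’ already exist; not compilable without the Connext headers), the two styles differ only in which factory creates the DataWriter:

```cpp
// Implicit: the DataWriter is created directly from the DomainParticipant
// and belongs to the participant's implicit Publisher.
DDSDataWriter* writer = participant->create_datawriter(
    topic, DDS_DATAWRITER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);

// The implicit Publisher can be retrieved and used like any other:
DDSPublisher* implicit_pub = participant->get_implicit_publisher();

// Explicit: create your own Publisher first, then the DataWriter.
DDSPublisher* publisher = participant->create_publisher(
    DDS_PUBLISHER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
DDSDataWriter* writer2 = publisher->create_datawriter(
    topic, DDS_DATAWRITER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
```

Creating separate explicit Publishers, as in the second style, is how you avoid the serialization of write operations mentioned above.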
6.2.2 Creating Publishers
Before you can explicitly create a Publisher, you need a DomainParticipant (see Section 8.3). To create a Publisher, use the DomainParticipant’s create_publisher() or create_publisher_with_profile() operations:
DDSPublisher * create_publisher(
    const DDS_PublisherQos &qos,
    DDSPublisherListener *listener,
    DDS_StatusMask mask)

DDSPublisher * create_publisher_with_profile(
    const char *library_name,
    const char *profile_name,
    DDSPublisherListener *listener,
    DDS_StatusMask mask)
A QoS profile is a way to use QoS settings from an XML file or string. With this approach, you can change QoS settings without recompiling the application. For details, see Chapter 17: Configuring QoS with XML.
qos If you want the default QoS settings (described in the API Reference HTML documentation), use DDS_PUBLISHER_QOS_DEFAULT for this parameter (see Figure 6.2). If you want to customize any of the QosPolicies, supply a QoS structure (see Figure 6.3). The QoS structure for a Publisher is described in Section 6.4.
Note: If you use DDS_PUBLISHER_QOS_DEFAULT, it is not safe to create the Publisher while another thread may be simultaneously calling set_default_publisher_qos().
listener Listeners are callback routines. Connext uses them to notify your application when specific events (status changes) occur with respect to the Publisher or the DataWriters created by the Publisher. The listener parameter may be set to NULL if you do not want to install a Listener. If you use NULL, the Listener of the DomainParticipant to which the Publisher belongs will be used instead (if it is set). For more information on
PublisherListeners, see Section 6.2.5.
mask This bit mask indicates which status changes will be handled by the Publisher’s Listener.
library_name A QoS Library is a named set of QoS profiles. See QoS Libraries (Section 17.10). If NULL is used for library_name, the DomainParticipant’s default library is assumed (see Section 6.2.4.3).
profile_name A QoS profile groups a set of related QoS, usually one per entity. See QoS Profiles (Section 17.9). If NULL is used for profile_name, the DomainParticipant’s default profile is assumed and library_name is ignored.
Figure 6.2 Creating a Publisher with Default QosPolicies
// create the publisher
DDSPublisher* publisher = participant->create_publisher(
    DDS_PUBLISHER_QOS_DEFAULT,
    NULL, DDS_STATUS_MASK_NONE);
if (publisher == NULL) {
    // handle error
}
For more examples, see Configuring QoS Settings when the Publisher is Created (Section 6.2.4.1).
After you create a Publisher, the next step is to use it to create a DataWriter for each Topic (see Section 6.3.1). For a list of operations you can perform with a Publisher, see Table 6.1.
6.2.3 Deleting Publishers
This section applies to both implicitly and explicitly created Publishers. To delete a Publisher:
1. You must first delete all DataWriters that were created with the Publisher. Use the Publisher’s delete_datawriter() operation to delete them one at a time, or use the delete_contained_entities() operation (Section 6.2.3.1) to delete them all at the same time.
DDS_ReturnCode_t delete_datawriter (DDSDataWriter *a_datawriter)
2. Delete the Publisher by using the DomainParticipant’s delete_publisher() operation.
DDS_ReturnCode_t delete_publisher (DDSPublisher *p)
Note: A Publisher cannot be deleted within a Listener callback, see Restricted Operations in Listener Callbacks (Section 4.5.1).
6.2.3.1 Deleting Contained DataWriters
The Publisher’s delete_contained_entities() operation deletes all the DataWriters that were created by the Publisher.
DDS_ReturnCode_t delete_contained_entities ()
After this operation returns successfully, the application may delete the Publisher (see Section 6.2.3).
6.2.4 Setting Publisher QosPolicies
A Publisher’s QosPolicies control its behavior. Think of the policies as the configuration and behavior ‘properties’ of the Publisher. The DDS_PublisherQos structure has the following format:
struct DDS_PublisherQos {
    DDS_PresentationQosPolicy           presentation;
    DDS_PartitionQosPolicy              partition;
    DDS_GroupDataQosPolicy              group_data;
    DDS_EntityFactoryQosPolicy          entity_factory;
    DDS_AsynchronousPublisherQosPolicy  asynchronous_publisher;
    DDS_ExclusiveAreaQosPolicy          exclusive_area;
} DDS_PublisherQos;
Note: set_qos() cannot always be used in a listener callback; see Restricted Operations in Listener Callbacks (Section 4.5.1).
Table 6.2 summarizes the meaning of each policy. (They appear alphabetically in the table.) For information on why you would want to change a particular QosPolicy, see the referenced section. For defaults and valid ranges, please refer to the API Reference HTML documentation for each policy.
Table 6.2 Publisher QosPolicies
QosPolicy |
Description |
|
|
|
|
Configures the mechanism that sends user data in an |
|
external middleware thread. |
|
|
|
Controls whether or not child entities are created in the |
|
enabled state. |
|
|
|
Configures |
|
prevention capabilities. |
|
|
|
Table 6.2 Publisher QosPolicies
QosPolicy |
Description |
Along with TOPIC_DATA QosPolicy (Section 5.2.1) and
GROUP_DATA QosPolicy (Section 6.4.4)
USER_DATA QosPolicy (Section 6.5.25), this QosPolicy is used to attach a buffer of bytes to Connext's discovery meta- data.
PARTITION QosPolicy (Section 6.4.5)
Adds string identifiers that are used for matching
DataReaders and DataWriters for the same Topic.
PRESENTATION QosPolicy (Section 6.4.6)
Controls how Connext presents data received by an application to the DataReaders of the data.
6.2.4.1 Configuring QoS Settings when the Publisher is Created
As described in Creating Publishers (Section 6.2.2), there are different ways to create a Publisher, depending on how you want to specify its QoS (with or without a QoS Profile).
❏In Figure 6.2, we saw how to create a Publisher with default QosPolicies.
❏To create a Publisher with non-default QoS values without using a QoS profile, see Figure 6.3.
❏You can also create a Publisher and specify its QoS settings via a QoS Profile. To do so, call create_publisher_with_profile(), as seen in Figure 6.4.
❏If you want to use a QoS profile, but then make some changes to the QoS before creating the Publisher, call the DomainParticipantFactory’s get_publisher_qos_from_profile(), modify the QoS, and use the modified QoS structure when calling create_publisher(), as seen in Figure 6.5.
For more information, see Creating Publishers (Section 6.2.2) and Chapter 17: Configuring QoS with XML.
Figure 6.3 Creating a Publisher with Non-default QosPolicies (not from a profile)
DDS_PublisherQos publisher_qos;1
// get defaults
if (participant->get_default_publisher_qos(publisher_qos) !=
        DDS_RETCODE_OK) {
    // handle error
}
// make QoS changes here
// for example, this changes the ENTITY_FACTORY QoS
publisher_qos.entity_factory.autoenable_created_entities =
    DDS_BOOLEAN_FALSE;
// create the publisher
DDSPublisher* publisher = participant->create_publisher(publisher_qos,
    NULL, DDS_STATUS_MASK_NONE);
if (publisher == NULL) {
    // handle error
}
1. For the C API, you need to use DDS_PublisherQos_INITIALIZER or DDS_PublisherQos_initialize(). See Section 4.2.2
Figure 6.4 Creating a Publisher with a QoS Profile
// create the publisher with QoS profile
DDSPublisher* publisher = participant->create_publisher_with_profile(
    "MyPublisherLibary",
    "MyPublisherProfile",
    NULL, DDS_STATUS_MASK_NONE);
if (publisher == NULL) {
    // handle error
}
Figure 6.5 Getting QoS Values from a Profile, Changing QoS Values, Creating a Publisher with Modified QoS Values
DDS_PublisherQos publisher_qos;1
// Get publisher QoS from profile
retcode = factory->get_publisher_qos_from_profile(
    publisher_qos, "PublisherLibrary", "PublisherProfile");
if (retcode != DDS_RETCODE_OK) {
    // handle error
}
// Make QoS changes here
// New entity_factory autoenable_created_entities will be true
publisher_qos.entity_factory.autoenable_created_entities =
    DDS_BOOLEAN_TRUE;
// create the publisher with modified QoS
DDSPublisher* publisher = participant->create_publisher(publisher_qos,
    NULL, DDS_STATUS_MASK_NONE);
if (publisher == NULL) {
    // handle error
}
1. For the C API, you need to use DDS_PublisherQos_INITIALIZER or DDS_PublisherQos_initialize(). See Section 4.2.2
6.2.4.2 Changing QoS Settings After the Publisher Has Been Created
There are two ways to change an existing Publisher’s QoS after it has been created:
❏To change an existing Publisher’s QoS programmatically (that is, without using a QoS profile), use get_qos() and set_qos(). See the example code in Figure 6.6. It retrieves the current values by calling the Publisher’s get_qos() operation, modifies the values, and calls set_qos() to apply the new values. Note, however, that some QosPolicies cannot be changed after the Publisher has been created.
❏You can also change a Publisher’s (and all other Entities’) QoS by using a QoS Profile and calling set_qos_with_profile(). For an example, see Figure 6.7. For more information, see Chapter 17: Configuring QoS with XML.
Figure 6.6 Changing the Qos of an Existing Publisher
DDS_PublisherQos publisher_qos;1
// Get current QoS. publisher points to an existing DDSPublisher.
if (publisher->get_qos(publisher_qos) != DDS_RETCODE_OK) {
    // handle error
}
// make changes
// New entity_factory autoenable_created_entities will be true
publisher_qos.entity_factory.autoenable_created_entities =
    DDS_BOOLEAN_TRUE;
// Set the new QoS
if (publisher->set_qos(publisher_qos) != DDS_RETCODE_OK) {
    // handle error
}
1. For the C API, you need to use DDS_PublisherQos_INITIALIZER or DDS_PublisherQos_initialize(). See Section 4.2.2.
Figure 6.7 Changing the QoS of an Existing Publisher with a QoS Profile
retcode = publisher->set_qos_with_profile(
    "PublisherProfileLibrary", "PublisherProfile");
if (retcode != DDS_RETCODE_OK) { // handle error
}
6.2.4.3 Getting and Setting the Publisher’s Default QoS Profile and Library
You can retrieve the default QoS profile used to create Publishers with the get_default_profile() operation.
You can also get the default library for Publishers, as well as the library that contains the Publisher’s default profile (these are not necessarily the same library); these operations are called get_default_library() and get_default_library_profile(), respectively. These operations are for informational purposes only (that is, you do not need to use them as a precursor to setting a library or profile.) For more information, see Chapter 17: Configuring QoS with XML.
const char * get_default_library()
const char * get_default_profile()
const char * get_default_profile_library()
There are also operations for setting the Publisher’s default library and profile:
DDS_ReturnCode_t set_default_library(const char * library_name)
DDS_ReturnCode_t set_default_profile(const char * library_name,
    const char * profile_name)
These operations only affect which library/profile will be used as the default the next time a default Publisher library/profile is needed during a call to one of this Publisher’s operations.
When calling a Publisher operation that requires a profile_name parameter, you can use NULL to refer to the default profile. (This same information applies to setting a default library.) If the default library/profile is not set, the Publisher inherits the default from the DomainParticipant.
set_default_profile() does not set the default QoS for DataWriters created by the Publisher; for this functionality, use the Publisher’s set_default_datawriter_qos_with_profile(), see Section 6.2.4.4 (you may pass in NULL after having called the Publisher’s set_default_profile()).

set_default_profile() does not set the default QoS for newly created Publishers; for this functionality, use the DomainParticipant’s set_default_publisher_qos_with_profile() operation, see Section 8.3.6.4.
6.2.4.4 Getting and Setting Default QoS for DataWriters
These operations set the default QoS that will be used for new DataWriters if create_datawriter() is called with DDS_DATAWRITER_QOS_DEFAULT as the ‘qos’ parameter:
DDS_ReturnCode_t set_default_datawriter_qos (const DDS_DataWriterQos &qos)
DDS_ReturnCode_t set_default_datawriter_qos_with_profile (
const char *library_name, const char *profile_name)
The above operations may potentially allocate memory, depending on the sequences contained in some QoS policies.
To get the default QoS that will be used for creating DataWriters if create_datawriter() is called with DDS_DATAWRITER_QOS_DEFAULT as the ‘qos’ parameter:
DDS_ReturnCode_t get_default_datawriter_qos (DDS_DataWriterQos & qos)
This operation gets the QoS settings that were specified on the last successful call to set_default_datawriter_qos() or set_default_datawriter_qos_with_profile(), or if the call was never made, the default values listed in DDS_DataWriterQos.
Note: It is not safe to set the default DataWriter QoS values while another thread may be simultaneously calling get_default_datawriter_qos(), set_default_datawriter_qos(), or create_datawriter() with DDS_DATAWRITER_QOS_DEFAULT as the qos parameter. It is also not safe to get the default DataWriter QoS values while another thread may be simultaneously calling set_default_datawriter_qos() or set_default_datawriter_qos_with_profile().
6.2.4.5 Other Publisher QoS-Related Operations
❏ Copying a Topic’s QoS into a DataWriter’s QoS: This method is provided as a convenience for setting the values in a DataWriterQos structure before using that structure to create a DataWriter. As explained in Section 5.1.3, most of the policies in a TopicQos structure do not apply directly to the Topic itself, but to the associated DataWriters and DataReaders of that Topic. The TopicQos serves as a single container where the values of QosPolicies that must be set compatibly across matching DataWriters and DataReaders can be stored.
Thus instead of setting the values of the individual QosPolicies that make up a DataWriterQos structure every time you need to create a DataWriter for a Topic, you can use the Publisher’s copy_from_topic_qos() operation to “import” the Topic’s QosPolicies into a DataWriterQos structure. This operation copies the relevant policies in the TopicQos to the corresponding policies in the DataWriterQos.
This copy operation will often be used in combination with the Publisher’s get_default_datawriter_qos() and the Topic’s get_qos() operations. The Topic’s QoS values are merged on top of the Publisher’s default DataWriter QosPolicies with the result used to create a new DataWriter, or to set the QoS of an existing one (see Section 6.3.15).
❏Copying a Publisher’s QoS C API users should use the DDS_PublisherQos_copy() operation rather than using structure assignment when copying between two QoS structures. The copy() operation will perform a deep copy so that policies that allocate heap memory such as sequences are copied correctly. In C++, C++/CLI, C# and Java, a copy constructor is provided to take care of sequences automatically.
❏ Clearing QoS-Related Memory: Some QosPolicies contain sequences that allocate memory dynamically. In the C API, call DDS_PublisherQos_finalize() on DDS_PublisherQos objects before they are freed, or for QoS structures allocated on the stack, before they go out of scope. In C++, C++/CLI, C# and Java, the memory used by sequences is freed in the destructor.
6.2.5 Setting Up PublisherListeners
Like all Entities, Publishers may optionally have Listeners. Listeners are essentially callback routines and provide the means for Connext to notify your application of the occurrence of events (status changes) relevant to the Publisher. For more general information on Listeners, see Listeners (Section 4.4).
Note: Some operations cannot be used within a listener callback, see Restricted Operations in Listener Callbacks (Section 4.5.1).
As illustrated in Figure 6.1, a PublisherListener has no callbacks specific to the Publisher itself; there are no Publisher-specific statuses (see Section 6.2.8).
Instead, the methods of a PublisherListener will be called back for changes in the Statuses of any of the DataWriters that the Publisher has created. This is only true if the DataWriter itself does not have a DataWriterListener installed, see Section 6.3.4. If a DataWriterListener has been installed and has been enabled to handle a Status change for the DataWriter, then Connext will call the method of the DataWriterListener instead.
If you want a Publisher to handle status events for its DataWriters, you can set up a PublisherListener during the Publisher’s creation or use the set_listener() method after the Publisher is created. The last parameter is a bit-mask with which you can select the status changes that will trigger the Listener’s callbacks. For example:
DDS_StatusMask mask = DDS_OFFERED_DEADLINE_MISSED_STATUS | DDS_OFFERED_INCOMPATIBLE_QOS_STATUS;
publisher = participant->create_publisher(DDS_PUBLISHER_QOS_DEFAULT,
    listener, mask);
or
DDS_StatusMask mask = DDS_OFFERED_DEADLINE_MISSED_STATUS |
    DDS_OFFERED_INCOMPATIBLE_QOS_STATUS;
publisher->set_listener(listener, mask);
As previously mentioned, the callbacks in the PublisherListener act as ‘default’ callbacks for all the DataWriters contained within. When Connext wants to notify a DataWriter of a relevant Status change (for example, PUBLICATION_MATCHED), it first checks to see if the DataWriter has the corresponding DataWriterListener callback enabled (such as the on_publication_matched() operation). If so, Connext dispatches the event to the DataWriterListener callback. Otherwise, Connext dispatches the event to the corresponding PublisherListener callback.
A particular callback in a DataWriter is not enabled if either:
❏The application installed a NULL DataWriterListener (meaning there are no callbacks for the DataWriter at all).
❏The application has disabled the callback for a DataWriterListener. This is done by turning off the associated status bit in the mask parameter passed to the set_listener() or create_datawriter() call when installing the DataWriterListener on the DataWriter. For more information on DataWriterListeners, see Section 6.3.4.
Similarly, the callbacks in the DomainParticipantListener act as ‘default’ callbacks for all the Publishers that belong to it. For more information on DomainParticipantListeners, see Section 8.3.5.
For example, Figure 6.8 shows how to create a Publisher with a Listener that simply prints the events it receives.
Figure 6.8 Example Code to Create a Publisher with a Simple Listener
class MyPublisherListener : public DDSPublisherListener {
public:
virtual void on_offered_deadline_missed(DDSDataWriter* writer, const DDS_OfferedDeadlineMissedStatus& status);
virtual void on_liveliness_lost(DDSDataWriter* writer, const DDS_LivelinessLostStatus& status);
virtual void on_offered_incompatible_qos(DDSDataWriter* writer, const DDS_OfferedIncompatibleQosStatus& status);
virtual void on_publication_matched(DDSDataWriter* writer, const DDS_PublicationMatchedStatus& status);
virtual void
on_reliable_writer_cache_changed(DDSDataWriter* writer, const DDS_ReliableWriterCacheChangedStatus& status);
virtual void on_reliable_reader_activity_changed (DDSDataWriter* writer,
const DDS_ReliableReaderActivityChangedStatus& status);
};
void MyPublisherListener::on_offered_deadline_missed( DDSDataWriter* writer,
const DDS_OfferedDeadlineMissedStatus& status)
{
printf("on_offered_deadline_missed\n");
}
// ...Implement all remaining listeners in a similar manner...
DDSPublisherListener *myPubListener = new MyPublisherListener();
DDSPublisher* publisher = participant->create_publisher( DDS_PUBLISHER_QOS_DEFAULT, myPubListener, DDS_STATUS_MASK_ALL);
6.2.6 Finding a Publisher’s Related Entities
These Publisher operations are useful for obtaining a handle to related entities:
❏get_participant(): Gets the DomainParticipant with which a Publisher was created.
❏lookup_datawriter(): Finds a DataWriter created by the Publisher with a Topic of a particular name. Note that in the event that multiple DataWriters were created by the same Publisher with the same Topic, any one of them may be returned by this method.
❏DDS_Publisher_as_Entity(): This method is provided for C applications and is necessary when invoking the parent class Entity methods on Publishers. For example, to call the Entity method get_status_changes() on a Publisher, my_pub, do the following:
DDS_Entity_get_status_changes(DDS_Publisher_as_Entity(my_pub))
DDS_Publisher_as_Entity() is not provided in the C++, C++/CLI, C# and Java APIs because the object-oriented features of those languages make it unnecessary.
6.2.7 Waiting for Acknowledgments in a Publisher
The Publisher’s wait_for_acknowledgments() operation blocks the calling thread until either all data written by the Publisher’s reliable DataWriters is acknowledged or the duration specified by the max_wait parameter elapses, whichever happens first.
Note that if a thread is blocked in the call to wait_for_acknowledgments() on a Publisher and a different thread writes new samples on any of the Publisher’s reliable DataWriters, the new samples must be acknowledged before unblocking the thread that is waiting on wait_for_acknowledgments().
DDS_ReturnCode_t wait_for_acknowledgments(
    const DDS_Duration_t & max_wait)
This operation returns DDS_RETCODE_OK if all the samples were acknowledged, or DDS_RETCODE_TIMEOUT if the max_wait duration expired first.
There is a similar operation available for individual DataWriters, see Section 6.3.11.
The reliability protocol used by Connext is discussed in Chapter 10: Reliable Communications.
6.2.8 Statuses for Publishers
There are no statuses specific to the Publisher itself. The following statuses can be monitored by the PublisherListener for the Publisher’s DataWriters.
❏OFFERED_DEADLINE_MISSED Status (Section 6.3.6.4)
❏LIVELINESS_LOST Status (Section 6.3.6.3)
❏OFFERED_INCOMPATIBLE_QOS Status (Section 6.3.6.5)
❏PUBLICATION_MATCHED Status (Section 6.3.6.6)
❏RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) (Section 6.3.6.7)
❏RELIABLE_READER_ACTIVITY_CHANGED Status (DDS Extension) (Section 6.3.6.8)
6.2.9 Suspending and Resuming Publications

The operations suspend_publications() and resume_publications() provide a hint to Connext that multiple data objects within the Publisher are about to be written. Connext does not currently use this hint.
6.3 DataWriters
To create a DataWriter, you need a DomainParticipant and a Topic.
You need a DataWriter for each Topic that you want to publish. Once you have a DataWriter, you can use it to perform the operations listed in Table 6.3. The most important operation is write(), described in Section 6.3.8. For more details on all operations, see the API Reference HTML documentation.
DataWriters are created by using operations on a DomainParticipant or a Publisher, as described in Section 6.3.1. If you use the DomainParticipant’s operations, the DataWriter will belong to an implicit Publisher that is automatically created by the middleware. If you use a Publisher’s operations, the DataWriter will belong to that Publisher. So either way, the DataWriter belongs to a Publisher.
Note: Some operations cannot be used within a listener callback, see Restricted Operations in Listener Callbacks (Section 4.5.1).
Table 6.3 DataWriter Operations

| Working with | Operation | Description |
| DataWriters | assert_liveliness | Manually asserts the liveliness of the DataWriter. |
| | enable | Enables the DataWriter. |
| | get_qos | Gets the QoS. |
| | lookup_instance | Gets a handle, given an instance. (Useful for keyed data types only.) |
| | set_qos | Modifies the QoS. |
| | set_qos_with_profile | Modifies the QoS based on a QoS profile. |
| | get_listener | Gets the currently installed Listener. |
| | set_listener | Replaces the Listener. |
| FooDataWriter | dispose | States that the instance no longer exists. (Useful for keyed data types only.) |
| | dispose_w_timestamp | Same as dispose, but allows the application to override the automatic source_timestamp. (Useful for keyed data types only.) |
| | flush | Makes the batch available to be sent on the network. |
| | get_key_value | Maps an instance_handle to the corresponding key. |
| | narrow | Takes a DDSDataWriter pointer and ‘narrows’ it to a ‘FooDataWriter’, where ‘Foo’ is the related data type. |
| | register_instance | States the intent of the DataWriter to write values of the instance. Improves the performance of subsequent writes to the instance. (Useful for keyed data types only.) |
| | register_instance_w_timestamp | Like register_instance, but allows the application to override the automatic source_timestamp. (Useful for keyed data types only.) |
| | unregister_instance | Reverses register_instance. Relinquishes the ownership of the instance. (Useful for keyed data types only.) |
| | unregister_instance_w_timestamp | Like unregister_instance, but allows the application to override the automatic source_timestamp. (Useful for keyed data types only.) |
| | write | Writes a new value for a data instance. |
| | write_w_timestamp | Same as write, but allows the application to override the automatic source_timestamp. |
| | write_w_params | Same as write, but allows the application to specify parameters such as source timestamp and instance handle. |
| | dispose_w_params | Same as dispose, but allows the application to specify parameters such as source timestamp and instance handle. |
| | register_w_params | Same as register, but allows the application to specify parameters such as source timestamp and instance handle. |
| | unregister_w_params | Same as unregister, but allows the application to specify parameters such as source timestamp and instance handle. |
| Matched Subscriptions | get_matched_subscriptions | Gets a list of subscriptions that have a matching Topic and compatible QoS. These are the subscriptions currently associated with the DataWriter. |
| | get_matched_subscription_data | Gets information on a subscription with a matching Topic and compatible QoS. |
| | get_matched_subscription_locators | Gets a list of locators for subscriptions that have a matching Topic and compatible QoS. These are the subscriptions currently associated with the DataWriter. |
| Status | get_status_changes | Gets a list of statuses that have changed since the last time the application read the status or the listeners were called. |
| | get_liveliness_lost_status | Gets LIVELINESS_LOST status. |
| | get_offered_deadline_missed_status | Gets OFFERED_DEADLINE_MISSED status. |
| | get_offered_incompatible_qos_status | Gets OFFERED_INCOMPATIBLE_QOS status. |
| | get_publication_match_status | Gets PUBLICATION_MATCHED status. |
| | get_reliable_writer_cache_changed_status | Gets RELIABLE_WRITER_CACHE_CHANGED status. |
| | get_reliable_reader_activity_changed_status | Gets RELIABLE_READER_ACTIVITY_CHANGED status. |
| | get_datawriter_cache_status | Gets DATA_WRITER_CACHE status. |
| | get_datawriter_protocol_status | Gets DATA_WRITER_PROTOCOL status. |
| | get_matched_subscription_datawriter_protocol_status | Gets DATA_WRITER_PROTOCOL status for this DataWriter, per matched subscription identified by the subscription_handle. |
| | get_matched_subscription_datawriter_protocol_status_by_locator | Gets DATA_WRITER_PROTOCOL status for this DataWriter, per matched subscription identified by a locator. |
| Other | get_publisher | Gets the Publisher to which the DataWriter belongs. |
| | get_topic | Gets the Topic associated with the DataWriter. |
| | wait_for_acknowledgments | Blocks the calling thread until either all data written by the DataWriter is acknowledged by all matched reliable DataReaders, or until the specified timeout duration, max_wait, elapses. |
6.3.1 Creating DataWriters
Before you can create a DataWriter, you need a DomainParticipant, a Topic, and optionally, a
Publisher.
DataWriters are created by calling create_datawriter() or create_datawriter_with_profile()— these operations exist for DomainParticipants and Publishers. If you use the DomainParticipant to create a DataWriter, it will belong to the implicit Publisher described in Section 6.2.1. If you use a Publisher’s operations to create a DataWriter, it will belong to that Publisher.
DDSDataWriter* create_datawriter(
    DDSTopic *topic,
    const DDS_DataWriterQos &qos,
    DDSDataWriterListener *listener,
    DDS_StatusMask mask)

DDSDataWriter* create_datawriter_with_profile(
    DDSTopic *topic,
    const char *library_name,
    const char *profile_name,
    DDSDataWriterListener *listener,
    DDS_StatusMask mask)
A QoS profile is a way to use QoS settings from an XML file or string. With this approach, you can change QoS settings without recompiling the application. For details, see Chapter 17: Configuring QoS with XML.
topic The Topic that the DataWriter will publish. This must have been previously created by the same DomainParticipant.
qos If you want the default QoS settings (described in the API Reference HTML documentation), use the constant DDS_DATAWRITER_QOS_DEFAULT for this parameter (see Figure 6.9).
Note: If you use DDS_DATAWRITER_QOS_DEFAULT for the qos parameter, it is not safe to create the DataWriter while another thread may be simultaneously calling the Publisher’s set_default_datawriter_qos() operation.
listener Listeners are callback routines. Connext uses them to notify your application of specific events (status changes) that may occur with respect to the DataWriter. The listener parameter may be set to NULL; in this case, the PublisherListener (or if that is NULL, the DomainParticipantListener) will be used instead. For more information, see Section 6.3.4.
mask This bit-mask indicates which status changes the Listener can handle. For more information, see Section 6.3.4.
library_name A QoS Library is a named set of QoS profiles. See QoS Libraries (Section 17.10).
profile_name A QoS profile groups a set of related QoS, usually one per entity. See QoS Profiles (Section 17.9).
For more examples on how to create a DataWriter, see Configuring QoS Settings when the DataWriter is Created (Section 6.3.15.1).
After you create a DataWriter, you can use it to write data. See Writing Data (Section 6.3.8).
Note: When a DataWriter is created, only those transports already registered are available to the DataWriter. The built-in transports are implicitly registered when the DomainParticipant is enabled, when the first DataWriter is created, or when you look up a built-in DataReader, whichever happens first.
Figure 6.9 Creating a DataWriter with Default QosPolicies and a Listener
// MyWriterListener is user defined, extends DDSDataWriterListener
DDSDataWriterListener* writer_listener = new MyWriterListener();
DDSDataWriter* writer = publisher->create_datawriter(topic,
    DDS_DATAWRITER_QOS_DEFAULT, writer_listener, DDS_STATUS_MASK_ALL);
if (writer == NULL) {
    // ... error
}
// narrow it for your specific data type
FooDataWriter* foo_writer = FooDataWriter::narrow(writer);
6.3.2 Getting All DataWriters
To retrieve all the DataWriters created by the Publisher, use the Publisher’s get_all_datawriters() operation:
DDS_ReturnCode_t get_all_datawriters(DDS_Publisher* self,
struct DDS_DataWriterSeq* writers);
6.3.3 Deleting DataWriters
To delete a single DataWriter, use the Publisher’s delete_datawriter() operation:
DDS_ReturnCode_t delete_datawriter (DDSDataWriter *a_datawriter)
Note: A DataWriter cannot be deleted within its own writer listener callback, see Restricted Operations in Listener Callbacks (Section 4.5.1).
To delete all of a Publisher’s DataWriters, use the Publisher’s delete_contained_entities() operation (see Section 6.2.3.1).
Special instructions for deleting DataWriters if you are using the ‘Timestamp’ APIs and BY_SOURCE_TIMESTAMP Destination Order:
This note only applies when the DataWriter’s DestinationOrderQosPolicy’s kind is BY_SOURCE_TIMESTAMP.
Calls to delete_datawriter() may fail if your application has previously used the “with timestamp” APIs (write_w_timestamp(), register_instance_w_timestamp(), unregister_instance_w_timestamp(), or dispose_w_timestamp()) with a timestamp that is larger than the time at which delete_datawriter() is called.
To prevent delete_datawriter() from failing in this situation, either:
❏ Change the WriterDataLifecycle QoS Policy so that Connext will not automatically dispose unregistered instances:

    writer_qos.writer_data_lifecycle.autodispose_unregistered_instances =
        DDS_BOOLEAN_FALSE;

or
❏Explicitly call unregister_instance_w_timestamp() for all instances modified with the
*_w_timestamp() APIs before calling delete_datawriter().
6.3.4 Setting Up DataWriterListeners
DataWriters may optionally have Listeners. Listeners are essentially callback routines and provide the means for Connext to notify your application of the occurrence of events (status changes) relevant to the DataWriter. For more general information on Listeners, see Listeners (Section 4.4).
Note: Some operations cannot be used within a listener callback, see Restricted Operations in Listener Callbacks (Section 4.5.1).
If you do not implement a DataWriterListener, the associated PublisherListener is used instead. If that Publisher also does not have a Listener, then the DomainParticipant’s Listener is used if one exists (see Section 6.2.5 and Section 8.3.5).
Listeners are typically set up when the DataWriter is created (see Section 6.2). You can also set one up after creation by using the set_listener() operation. Connext will invoke a DataWriter’s
Listener to report the status changes listed in Table 6.4 (if the Listener is set up to handle the particular status, see Section 6.3.4).
Table 6.4 DataWriterListener Callbacks

| This DataWriterListener callback... | ...is triggered by... |
| on_instance_replaced() | A replacement of an existing instance by a new instance |
| on_liveliness_lost | A change to LIVELINESS_LOST Status (Section 6.3.6.3) |
| on_offered_deadline_missed | A change to OFFERED_DEADLINE_MISSED Status (Section 6.3.6.4) |
| on_offered_incompatible_qos | A change to OFFERED_INCOMPATIBLE_QOS Status (Section 6.3.6.5) |
| on_publication_matched | A change to PUBLICATION_MATCHED Status (Section 6.3.6.6) |
| on_reliable_writer_cache_changed | A change to RELIABLE_WRITER_CACHE_CHANGED Status (Section 6.3.6.7) |
| on_reliable_reader_activity_changed | A change to RELIABLE_READER_ACTIVITY_CHANGED Status (Section 6.3.6.8) |
6.3.5 Checking DataWriter Status
You can access an individual communication status for a DataWriter with the operations shown in Table 6.5.
Table 6.5 DataWriter Status Operations

| Use this operation... | ...to retrieve this status: |
| get_datawriter_cache_status | DATA_WRITER_CACHE_STATUS (Section 6.3.6.1) |
| get_datawriter_protocol_status | DATA_WRITER_PROTOCOL_STATUS (Section 6.3.6.2) |
| get_matched_subscription_datawriter_protocol_status | DATA_WRITER_PROTOCOL_STATUS (Section 6.3.6.2) |
| get_matched_subscription_datawriter_protocol_status_by_locator | DATA_WRITER_PROTOCOL_STATUS (Section 6.3.6.2) |
| get_liveliness_lost_status | LIVELINESS_LOST Status (Section 6.3.6.3) |
| get_offered_deadline_missed_status | OFFERED_DEADLINE_MISSED Status (Section 6.3.6.4) |
| get_offered_incompatible_qos_status | OFFERED_INCOMPATIBLE_QOS Status (Section 6.3.6.5) |
| get_publication_match_status | PUBLICATION_MATCHED Status (Section 6.3.6.6) |
| get_reliable_writer_cache_changed_status | RELIABLE_WRITER_CACHE_CHANGED Status (Section 6.3.6.7) |
| get_reliable_reader_activity_changed_status | RELIABLE_READER_ACTIVITY_CHANGED Status (Section 6.3.6.8) |
| get_status_changes | A list of what changed in all of the above. |
These methods are useful in the event that no Listener callback is set to receive notifications of status changes. If a Listener is used, the callback will contain the new status information, in which case calling these methods is unlikely to be necessary.
The get_status_changes() operation provides a list of statuses that have changed since the last time the status changes were ‘reset.’ A status change is reset each time the application calls the
corresponding get_*_status(), as well as each time Connext returns from calling the Listener callback associated with that status.
For more on status, see Setting Up DataWriterListeners (Section 6.3.4), Statuses for DataWriters (Section 6.3.6), and Listeners (Section 4.4).
6.3.6 Statuses for DataWriters

There are several types of statuses available for a DataWriter. You can use the get_*_status() operations (Section 6.3.5) to access them, or use a DataWriterListener (Section 6.3.4) to listen for changes in their values. Each status has an associated data structure and is described in more detail in the following sections.
❏DATA_WRITER_CACHE_STATUS (Section 6.3.6.1)
❏DATA_WRITER_PROTOCOL_STATUS (Section 6.3.6.2)
❏LIVELINESS_LOST Status (Section 6.3.6.3)
❏OFFERED_DEADLINE_MISSED Status (Section 6.3.6.4)
❏OFFERED_INCOMPATIBLE_QOS Status (Section 6.3.6.5)
❏PUBLICATION_MATCHED Status (Section 6.3.6.6)
❏RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) (Section 6.3.6.7)
❏RELIABLE_READER_ACTIVITY_CHANGED Status (DDS Extension) (Section 6.3.6.8)
6.3.6.1 DATA_WRITER_CACHE_STATUS
This status keeps track of the number of samples in the DataWriter’s queue.
This status does not have an associated Listener. You can access this status by calling the DataWriter’s get_datawriter_cache_status() operation, which will return the status structure described in Table 6.6.
Table 6.6 DDS_DataWriterCacheStatus

| Type | Field Name | Description |
| DDS_Long | sample_count_peak | Highest number of samples in the DataWriter’s queue over the lifetime of the DataWriter. |
| DDS_Long | sample_count | Current number of samples in the DataWriter’s queue. |
6.3.6.2 DATA_WRITER_PROTOCOL_STATUS

This status includes internal protocol-related metrics, such as the number of samples pushed, pulled, and filtered by this DataWriter.
❏Pulled samples are samples sent for repairs (that is, samples that had to be resent), for late joiners, and all samples sent by the local DataWriter when push_on_write (in DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) (Section 6.5.3)) is DDS_BOOLEAN_FALSE.
❏Pushed samples are samples sent on write() when push_on_write is DDS_BOOLEAN_TRUE.
❏ Filtered samples are samples that are not sent due to DataWriter filtering (for example, due to ContentFilteredTopics).
This status does not have an associated Listener. You can access this status by calling the following operations on the DataWriter (all of which return the status structure described in Table 6.7):
❏get_datawriter_protocol_status() returns the sum of the protocol status for all the matched subscriptions for the DataWriter.
❏get_matched_subscription_datawriter_protocol_status() returns the protocol status of a particular matched subscription, identified by a subscription_handle.
❏get_matched_subscription_datawriter_protocol_status_by_locator() returns the protocol status of a particular matched subscription, identified by a locator. (See Locator Format (Section 14.2.1.1).)
Note: Status for a remote entity is only kept while the entity is alive. Once a remote entity is no longer alive, its status is deleted. If you try to get the matched subscription status for a remote entity that is no longer alive, the ‘get status’ call will return an error.
Table 6.7 DDS_DataWriterProtocolStatus

Type | Field Name | Description
DDS_LongLong | pushed_sample_count | The number of user samples pushed on write from a local DataWriter to a matching remote DataReader.
DDS_LongLong | pushed_sample_count_change | The incremental change in the number of user samples pushed on write from a local DataWriter to a matching remote DataReader since the last time the status was read.
DDS_LongLong | pushed_sample_bytes | The number of bytes of user samples pushed on write from a local DataWriter to a matching remote DataReader.
DDS_LongLong | pushed_sample_bytes_change | The incremental change in the number of bytes of user samples pushed on write from a local DataWriter to a matching remote DataReader since the last time the status was read.
DDS_LongLong | filtered_sample_count | The number of user samples preemptively filtered by a local DataWriter due to ContentFilteredTopics.
DDS_LongLong | filtered_sample_count_change | The incremental change in the number of user samples preemptively filtered by a local DataWriter due to ContentFilteredTopics since the last time the status was read.
DDS_LongLong | filtered_sample_bytes | The number of bytes of user samples preemptively filtered by a local DataWriter due to ContentFilteredTopics.
DDS_LongLong | filtered_sample_bytes_change | The incremental change in the number of bytes of user samples preemptively filtered by a local DataWriter due to ContentFilteredTopics since the last time the status was read.
DDS_LongLong | sent_heartbeat_count | The number of Heartbeats sent between a local DataWriter and matching remote DataReaders.
DDS_LongLong | sent_heartbeat_count_change | The incremental change in the number of Heartbeats sent between a local DataWriter and matching remote DataReaders since the last time the status was read.
DDS_LongLong | sent_heartbeat_bytes | The number of bytes of Heartbeats sent between a local DataWriter and matching remote DataReaders.
DDS_LongLong | sent_heartbeat_bytes_change | The incremental change in the number of bytes of Heartbeats sent between a local DataWriter and matching remote DataReaders since the last time the status was read.
DDS_LongLong | pulled_sample_count | The number of user samples pulled from a local DataWriter by matching DataReaders.
DDS_LongLong | pulled_sample_count_change | The incremental change in the number of user samples pulled from a local DataWriter by matching DataReaders since the last time the status was read.
DDS_LongLong | pulled_sample_bytes | The number of bytes of user samples pulled from a local DataWriter by matching DataReaders.
DDS_LongLong | pulled_sample_bytes_change | The incremental change in the number of bytes of user samples pulled from a local DataWriter by matching DataReaders since the last time the status was read.
DDS_LongLong | received_ack_count | The number of ACKs from a remote DataReader received by a local DataWriter.
DDS_LongLong | received_ack_count_change | The incremental change in the number of ACKs from a remote DataReader received by a local DataWriter since the last time the status was read.
DDS_LongLong | received_ack_bytes | The number of bytes of ACKs from a remote DataReader received by a local DataWriter.
DDS_LongLong | received_ack_bytes_change | The incremental change in the number of bytes of ACKs from a remote DataReader received by a local DataWriter since the last time the status was read.
DDS_LongLong | received_nack_count | The number of NACKs from a remote DataReader received by a local DataWriter.
DDS_LongLong | received_nack_count_change | The incremental change in the number of NACKs from a remote DataReader received by a local DataWriter since the last time the status was read.
DDS_LongLong | received_nack_bytes | The number of bytes of NACKs from a remote DataReader received by a local DataWriter.
DDS_LongLong | received_nack_bytes_change | The incremental change in the number of bytes of NACKs from a remote DataReader received by a local DataWriter since the last time the status was read.
DDS_LongLong | sent_gap_count | The number of GAPs sent from a local DataWriter to matching remote DataReaders.
DDS_LongLong | sent_gap_count_change | The incremental change in the number of GAPs sent from a local DataWriter to matching remote DataReaders since the last time the status was read.
DDS_LongLong | sent_gap_bytes | The number of bytes of GAPs sent from a local DataWriter to matching remote DataReaders.
DDS_LongLong | sent_gap_bytes_change | The incremental change in the number of bytes of GAPs sent from a local DataWriter to matching remote DataReaders since the last time the status was read.
DDS_LongLong | rejected_sample_count | The number of times a sample is rejected for unanticipated reasons in the send path.
DDS_LongLong | rejected_sample_count_change | The incremental change in the number of times a sample is rejected due to exceptions in the send path since the last time the status was read.
DDS_Long | send_window_size | Current maximum number of outstanding samples allowed in the DataWriter's queue.
DDS_SequenceNumber_t | first_available_sample_sequence_number | Sequence number of the first available sample in the DataWriter's reliability queue.
DDS_SequenceNumber_t | last_available_sample_sequence_number | Sequence number of the last available sample in the DataWriter's reliability queue.
DDS_SequenceNumber_t | first_unacknowledged_sample_sequence_number | Sequence number of the first unacknowledged sample in the DataWriter's reliability queue.
DDS_SequenceNumber_t | first_available_sample_virtual_sequence_number | Virtual sequence number of the first available sample in the DataWriter's reliability queue.
DDS_SequenceNumber_t | last_available_sample_virtual_sequence_number | Virtual sequence number of the last available sample in the DataWriter's reliability queue.
DDS_SequenceNumber_t | first_unacknowledged_sample_virtual_sequence_number | Virtual sequence number of the first unacknowledged sample in the DataWriter's reliability queue.
DDS_InstanceHandle_t | first_unacknowledged_sample_subscription_handle | Instance Handle of the matching remote DataReader for which the DataWriter has kept the first available sample in the reliability queue.
DDS_SequenceNumber_t | first_unelapsed_keep_duration_sample_sequence_number | Sequence number of the first sample kept in the DataWriter's queue whose keep_duration (applied when disable_positive_acks is set) has not yet elapsed.
6.3.6.3LIVELINESS_LOST Status
A change to this status indicates that the DataWriter failed to signal its liveliness within the time specified by the LIVELINESS QosPolicy (Section 6.5.13).
It is different than the RELIABLE_READER_ACTIVITY_CHANGED Status (DDS Extension) (Section 6.3.6.8) status that provides information about the liveliness of a DataWriter’s matched DataReaders; this status reflects the DataWriter’s own liveliness.
The structure for this status appears in Table 6.8.
Table 6.8 DDS_LivelinessLostStatus

Type | Field Name | Description
DDS_Long | total_count | Cumulative number of times the DataWriter failed to explicitly signal its liveliness within the liveliness period.
DDS_Long | total_count_change | The change in total_count since the last time the Listener was called or the status was read.
The DataWriterListener’s on_liveliness_lost() callback is invoked when this status changes. You can also retrieve the value by calling the DataWriter’s get_liveliness_lost_status() operation.
6.3.6.4OFFERED_DEADLINE_MISSED Status
A change to this status indicates that the DataWriter failed to write data within the time period set in its DEADLINE QosPolicy (Section 6.5.5).
The structure for this status appears in Table 6.9.
The DataWriterListener’s on_offered_deadline_missed() operation is invoked when this status changes. You can also retrieve the value by calling the DataWriter’s get_offered_deadline_missed_status() operation.
Table 6.9 DDS_OfferedDeadlineMissedStatus

Type | Field Name | Description
DDS_Long | total_count | Cumulative number of times the DataWriter failed to write within its offered deadline.
DDS_Long | total_count_change | The change in total_count since the last time the Listener was called or the status was read.
DDS_InstanceHandle_t | last_instance_handle | Handle to the last instance in the DataWriter for which an offered deadline was missed.
6.3.6.5OFFERED_INCOMPATIBLE_QOS Status
A change to this status indicates that the DataWriter discovered a DataReader for the same Topic, but that DataReader had requested QoS settings incompatible with this DataWriter’s offered QoS.
The structure for this status appears in Table 6.10.
Table 6.10 DDS_OfferedIncompatibleQoSStatus

Type | Field Name | Description
DDS_Long | total_count | Cumulative number of times the DataWriter discovered a DataReader for the same Topic with a requested QoS that is incompatible with that offered by the DataWriter.
DDS_Long | total_count_change | The change in total_count since the last time the Listener was called or the status was read.
DDS_QosPolicyId_t | last_policy_id | The ID of the QosPolicy that was found to be incompatible the last time an incompatibility was detected. (Note: if there are multiple incompatible policies, only one of them is reported here.)
DDS_QosPolicyCountSeq | policies | A list containing, for each QosPolicy, the total number of times that the DataWriter discovered a DataReader for the same Topic with a requested QoS that is incompatible with that offered by the DataWriter.
The DataWriterListener’s on_offered_incompatible_qos() callback is invoked when this status changes. You can also retrieve the value by calling the DataWriter’s get_offered_incompatible_qos_status() operation.
6.3.6.6PUBLICATION_MATCHED Status
A change to this status indicates that the DataWriter discovered a matching DataReader.
A ‘match’ occurs only if the DataReader and DataWriter have the same Topic, same data type (implied by having the same Topic), and compatible QosPolicies. In addition, if user code has directed Connext to ignore certain DataReaders, then those DataReaders will never be matched. See Section 16.4.2 for more on setting up a DomainParticipant to ignore specific DataReaders.
The structure for this status appears in Table 6.11.
The DataWriterListener’s on_publication_matched() callback is invoked when this status changes. You can also retrieve the value by calling the DataWriter’s get_publication_match_status() operation.
Table 6.11 DDS_PublicationMatchedStatus

Type | Field Name | Description
DDS_Long | total_count | Cumulative number of times the DataWriter discovered a "match" with a DataReader.
DDS_Long | total_count_change | The change in total_count since the last time the Listener was called or the status was read.
DDS_Long | current_count | The number of DataReaders currently matched to the DataWriter.
DDS_Long | current_count_peak | The highest value that current_count has reached until now.
DDS_Long | current_count_change | The change in current_count since the last time the listener was called or the status was read.
DDS_InstanceHandle_t | last_subscription_handle | Handle to the last DataReader that matched the DataWriter, causing the status to change.
6.3.6.7RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension)
A change to this status indicates that the number of unacknowledged samples1 in a reliable DataWriter's cache has reached one of these trigger points:
❏The cache is empty (contains no unacknowledged samples)
❏The cache is full (the number of unacknowledged samples has reached the value specified in DDS_ResourceLimitsQosPolicy::max_samples)
❏The number of unacknowledged samples has reached a high or low watermark. See the high_watermark and low_watermark fields in Table 6.36 of the DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) (Section 6.5.3).
For more about the reliable protocol used by Connext and specifically, what it means for a sample to be ‘unacknowledged,’ see Chapter 10: Reliable Communications.
The structure for this status appears in Table 6.12. The supporting structure, DDS_ReliableWriterCacheEventCount, is described in Table 6.13.

Table 6.12 DDS_ReliableWriterCacheChangedStatus

Type | Field Name | Description
DDS_ReliableWriterCacheEventCount | empty_reliable_writer_cache | How many times the reliable DataWriter's cache of unacknowledged samples has become empty.
DDS_ReliableWriterCacheEventCount | full_reliable_writer_cache | How many times the reliable DataWriter's cache of unacknowledged samples has become full.
DDS_ReliableWriterCacheEventCount | low_watermark_reliable_writer_cache | How many times the reliable DataWriter's cache of unacknowledged samples has fallen to the low watermark.
DDS_ReliableWriterCacheEventCount | high_watermark_reliable_writer_cache | How many times the reliable DataWriter's cache of unacknowledged samples has risen to the high watermark.
DDS_Long | unacknowledged_sample_count | The current number of unacknowledged samples in the DataWriter's cache.
DDS_Long | unacknowledged_sample_count_peak | The highest value that unacknowledged_sample_count has reached until now.
1. If batching is enabled, this still refers to a number of samples, not batches.
Table 6.13 DDS_ReliableWriterCacheEventCount

Type | Field Name | Description
DDS_Long | total_count | The total number of times the event has occurred.
DDS_Long | total_count_change | The number of times the event has occurred since the Listener was last invoked or the status read.
The DataWriterListener’s on_reliable_writer_cache_changed() callback is invoked when this status changes. You can also retrieve the value by calling the DataWriter’s get_reliable_writer_cache_changed_status() operation.
6.3.6.8RELIABLE_READER_ACTIVITY_CHANGED Status (DDS Extension)
This status indicates that one or more reliable DataReaders have become active or inactive.
This status is the reciprocal status to the LIVELINESS_CHANGED Status (Section 7.3.7.4) on the DataReader. It is different than LIVELINESS_LOST Status (Section 6.3.6.3) status on the DataWriter, in that the latter informs the DataWriter about its own liveliness; this status informs the DataWriter about the liveliness of its matched DataReaders.
A reliable DataReader is considered active by a reliable DataWriter with which it is matched if that DataReader acknowledges the samples that it has been sent in a timely fashion. For the definition of "timely" in this context, see DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) (Section 6.5.3).
This status is only used for DataWriters whose RELIABILITY QosPolicy (Section 6.5.19) is set to RELIABLE.
The structure for this status appears in Table 6.14.
Table 6.14 DDS_ReliableReaderActivityChangedStatus

Type | Field Name | Description
DDS_Long | active_count | The current number of reliable DataReaders currently matched with this reliable DataWriter.
DDS_Long | inactive_count | The number of reliable DataReaders that have been dropped by this reliable DataWriter because they failed to send acknowledgements in a timely fashion.
DDS_Long | active_count_change | The change in the number of active reliable DataReaders since the Listener was last invoked or the status read.
DDS_Long | inactive_count_change | The change in the number of inactive reliable DataReaders since the Listener was last invoked or the status read.
DDS_InstanceHandle_t | last_instance_handle | The instance handle of the last reliable DataReader to be determined to be inactive.
The DataWriterListener’s on_reliable_reader_activity_changed() callback is invoked when this status changes. You can also retrieve the value by calling the DataWriter’s get_reliable_reader_activity_changed_status() operation.
6.3.7 Using a Type-Specific DataWriter (FooDataWriter)
Recall that a Topic is bound to a data type that specifies the format of the data associated with the Topic. Data types are either defined dynamically or in code generated from definitions in IDL or XML; see Chapter 3: Data Types and Data Samples. For each of your application's generated data types, such as 'Foo', there will be a FooDataWriter class (or a set of functions in C). This class allows the application to use a type-specific DataWriter that publishes data of type 'Foo'.
You will use the FooDataWriter's write() operation to send data. For dynamically defined data types, you will use the DynamicDataWriter class.
In fact, you will use the FooDataWriter any time you need to perform type-specific operations, such as registering or writing instances.
You may notice that the Publisher’s create_datawriter() operation returns a pointer to an object of type DDSDataWriter; this is because the create_datawriter() method is used to create DataWriters of any data type. However, when executed, the function actually returns a specialization (an object of a derived class) of the DataWriter that is specific for the data type of the associated Topic. For a Topic of type ‘Foo’, the object actually returned by create_datawriter() is a FooDataWriter.
To safely cast a generic DDSDataWriter pointer to a FooDataWriter pointer, you should use the static narrow() method of the FooDataWriter class. The narrow() method will return NULL if the generic DDSDataWriter pointer is not pointing at an object that is really a FooDataWriter.
For instance, if you create a Topic bound to the type ‘Alarm’, all DataWriters created for that Topic will be of type ‘AlarmDataWriter.’ To access the type-specific operations, you must cast the generic DDSDataWriter to an AlarmDataWriter:
DDSDataWriter* writer = publisher->create_datawriter(topic,
    DDS_DATAWRITER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
AlarmDataWriter *alarm_writer = AlarmDataWriter::narrow(writer);
if (alarm_writer == NULL) {
    // ... error
}
In the C API, there is also a way to do the opposite of narrow(). FooDataWriter_as_datawriter() casts a FooDataWriter as a DDSDataWriter, and FooDataReader_as_datareader() casts a FooDataReader as a DDSDataReader.
6.3.8Writing Data
The write() operation informs Connext that there is a new value for a data instance to be published.
When you call write(), Connext automatically attaches a stamp of the current time that is sent with the data sample to the DataReader(s). The timestamp appears in the source_timestamp field of the DDS_SampleInfo structure that is provided along with your data using DataReaders (see The SampleInfo Structure (Section 7.4.6)).
DDS_ReturnCode_t write (const Foo &instance_data,
const DDS_InstanceHandle_t &handle)
You can use an alternate DataWriter operation called write_w_timestamp(). This performs the same action as write(), but allows the application to explicitly set the source_timestamp. This is useful when you want the user application to set the value of the timestamp instead of the default clock used by Connext.
DDS_ReturnCode_t write_w_timestamp (const Foo &instance_data, const DDS_InstanceHandle_t &handle, const DDS_Time_t &source_timestamp)
Note that, in general, the application should not mix these two ways of specifying timestamps. That is, for each DataWriter, the application should either always use the automatic timestamping mechanism (by calling the normal operations) or always specify a timestamp (by calling the “w_timestamp” variants of the operations). Mixing the two methods may result in not receiving sent data.
You can also use an alternate DataWriter operation, write_w_params(), which performs the same action as write(), but allows the application to explicitly set the fields contained in the DDS_WriteParams structure; see Table 6.15.
Table 6.15 DDS_WriteParams_t

Type | Field Name | Description
DDS_Boolean | replace_auto | Allows retrieving the actual value of those fields that were automatic. When this field is set to true, the fields that were configured with an automatic value (for example, DDS_AUTO_SAMPLE_IDENTITY in identity) receive their actual value after write_w_params is called.
DDS_SampleIdentity_t | identity | Identity of the sample being written. The identity consists of a pair (Virtual Writer GUID, Virtual Sequence Number). When the value DDS_AUTO_SAMPLE_IDENTITY is used, the write_w_params() operation will determine the sample identity as follows: the Virtual Writer GUID (writer_guid) is the virtual GUID associated with the DataWriter writing the sample (this virtual GUID is configured using the virtual_guid member of DDS_DataWriterProtocolQosPolicy); the Virtual Sequence Number (sequence_number) is increased by one with respect to the previous value. The virtual sequence numbers for a given virtual GUID must be strictly monotonically increasing. If you try to write a sample with a sequence number smaller than or equal to the last sequence number, the write operation will fail. A DataReader can inspect the identity of a received sample by accessing the fields original_publication_virtual_guid and original_publication_virtual_sequence_number in the DDS_SampleInfo structure.
DDS_SampleIdentity_t | related_sample_identity | The identity of another sample related to this one. The value of this field identifies another sample that is logically related to the one that is written. For example, the DataWriter created by a Replier (see the Request-Reply Communication Pattern) uses this field to associate the identity of the request sample to the response sample. To specify that there is no related sample identity, use the value DDS_UNKNOWN_SAMPLE_IDENTITY. A DataReader can inspect the related sample identity of a received sample by accessing the fields related_original_publication_virtual_guid and related_original_publication_virtual_sequence_number in the DDS_SampleInfo structure.
DDS_Time | source_timestamp | Source timestamp that will be associated to the sample that is written. If source_timestamp is set to DDS_TIME_INVALID, the middleware will assign the value. A DataReader can inspect the source_timestamp value of a received sample by accessing the field source_timestamp in the DDS_SampleInfo structure.
DDS_InstanceHandle_t | handle | The instance handle. This value can be either the handle returned by a previous call to register_instance() or the special value DDS_HANDLE_NIL.
DDS_Long | priority | Positive integer designating the relative priority of the sample, used to determine the transmission order of pending transmissions. To use publication priorities, the DataWriter's PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) must be set for asynchronous publishing and the DataWriter must use a FlowController with a highest-priority-first scheduling_policy. For multi-channel DataWriters, the publication priority of a sample may be used as a filter criteria for determining channel membership. For additional information on priority samples, see Prioritized Samples.
Note: Prioritized samples are not supported when using the Java, Ada, or .NET APIs. Therefore the priority field in DDS_WriteParams_t does not exist when using these APIs.
When using the C API, a newly created variable of type DDS_WriteParams_t should be initialized by setting it to DDS_WRITEPARAMS_DEFAULT.
The write() operation also asserts liveliness on the DataWriter, the associated Publisher, and the associated DomainParticipant. It has the same effect with regards to liveliness as an explicit call to assert_liveliness(), see Section 6.3.17 and the LIVELINESS QosPolicy (Section 6.5.13). Maintaining liveliness is important for DataReaders to know that the DataWriter still exists and for the proper behavior of the OWNERSHIP QosPolicy (Section 6.5.15).
See also: Clock Selection (Section 8.6).
6.3.8.1Blocking During a write()
The write() operation may block if the RELIABILITY QosPolicy (Section 6.5.19) kind is set to Reliable and the modification would cause data to be lost or cause one of the limits specified in
the RESOURCE_LIMITS QosPolicy (Section 6.5.20) to be exceeded. Specifically, write() may block in the following situations (note that the list may not be exhaustive), even if its HISTORY QosPolicy (Section 6.5.10) is KEEP_LAST:
❏If max_samples1 < max_instances, then the DataWriter may block regardless of the depth field in the HISTORY QosPolicy (Section 6.5.10).
❏If max_samples < (max_instances * depth), then in the situation where the max_samples resource limit is exhausted, Connext may discard samples of some other instance, as long as at least one sample remains for such an instance. If it is still not possible to make space available to store the modification, the writer is allowed to block.
❏If min_send_window_size < max_samples, then it is possible for the send_window_size limit to be reached before Connext is allowed to discard samples, in which case the DataWriter will block.
This operation may also block when using BEST_EFFORT Reliability (Section 6.5.20) and ASYNCHRONOUS Publish Mode (Section 6.5.18) QoS settings. In this case, the DataWriter will queue samples until they are sent by the asynchronous publishing thread. The number of samples that can be stored is determined by the HISTORY QosPolicy (Section 6.5.10). If the asynchronous thread does not send samples fast enough (such as when using a slow FlowController (Section 6.6)), the queue may fill up. In that case, subsequent write calls will block.
If this operation does block for any of the above reasons, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter can store the modification without exceeding the limits, the operation will fail and return RETCODE_TIMEOUT.
6.3.9Flushing Batches of Data Samples
The flush() operation makes a batch of data samples available to be sent on the network.
DDS_ReturnCode_t flush ()
If the DataWriter’s PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) kind is not ASYNCHRONOUS, the batch will be sent on the network immediately in the context of the calling thread.
If the DataWriter’s PublishModeQosPolicy kind is ASYNCHRONOUS, the batch will be sent in the context of the asynchronous publishing thread.
The flush() operation may block based on the conditions described in Blocking During a write() (Section 6.3.8.1).
If this operation does block, the max_blocking_time in the RELIABILITY QosPolicy (Section 6.5.19) configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return DDS_RETCODE_TIMEOUT.
For more information on batching, see the BATCH QosPolicy (DDS Extension) (Section 6.5.2).
6.3.10Writing Coherent Sets of Data Samples
A publishing application can request that a set of data-sample changes be propagated in such a way that they are interpreted at the receivers' side as a cohesive set of modifications.
This is useful in cases where the values are inter-related. For example, if there are two data instances representing the 'altitude' and 'velocity vector' of the same aircraft and both are
changed, it may be important to ensure that a reader sees both together (otherwise, it may erroneously interpret that the aircraft is on a collision course).
1. max_samples in DDS_ResourceLimitsQosPolicy
To use this mechanism:
1.Call the Publisher’s begin_coherent_changes() operation to indicate the start of a coherent set.
2.For each sample in the coherent set: call the FooDataWriter’s write() operation.
3.Call the Publisher’s end_coherent_changes() operation to terminate the set.
Calls to begin_coherent_changes() and end_coherent_changes() can be nested.
See also: the coherent_access field in the PRESENTATION QosPolicy (Section 6.4.6).
6.3.11Waiting for Acknowledgments in a DataWriter
The DataWriter’s wait_for_acknowledgments() operation blocks the calling thread until either all data written by the reliable DataWriter is acknowledged by (a) all reliable DataReaders that are matched and alive and (b) by all required subscriptions (see Required Subscriptions (Section 6.3.13)), or until the duration specified by the max_wait parameter elapses, whichever happens first.
Note that if a thread is blocked in the call to wait_for_acknowledgments() on a DataWriter and a different thread writes new samples on the same DataWriter, the new samples must be acknowledged before unblocking the thread waiting on wait_for_acknowledgments().
DDS_ReturnCode_t wait_for_acknowledgments (
const DDS_Duration_t & max_wait)
This operation returns DDS_RETCODE_OK if all the samples were acknowledged, or
DDS_RETCODE_TIMEOUT if the max_wait duration expired first.
If the DataWriter does not have its RELIABILITY QosPolicy (Section 6.5.19) kind set to RELIABLE, the operation will immediately return DDS_RETCODE_OK.
There is a similar operation available at the Publisher level, see Waiting for Acknowledgments in a Publisher (Section 6.2.7).
The reliability protocol used by Connext is discussed in Chapter 10: Reliable Communications. The application acknowledgment mechanism is discussed in Application Acknowledgment (Section 6.3.12) and Chapter 13: Guaranteed Delivery of Data.
6.3.12Application Acknowledgment
The RELIABILITY QosPolicy (Section 6.5.19) determines whether or not data published by a DataWriter will be reliably delivered by Connext to matching DataReaders. The reliability protocol used by Connext is discussed in Chapter 10: Reliable Communications.
With protocol-level reliability alone, the publishing application knows that the subscribing side's middleware has received the data samples, but it does not know whether the subscribing application has actually been able to process them.
The mechanism to let a DataWriter know to keep the sample around, not just until it has been acknowledged by the reliability protocol, but until the application has been able to process the sample is aptly called Application Acknowledgment. A reliable DataWriter will keep the samples until the application acknowledges the samples. When the subscriber application is restarted,
the middleware will know that the application did not acknowledge successfully processing the samples and will resend them.
6.3.12.1Application Acknowledgment Kinds
Connext supports three kinds of application acknowledgment, which are configured in the RELIABILITY QosPolicy (Section 6.5.19):
1.DDS_PROTOCOL_ACKNOWLEDGMENT_MODE (Default): In essence, this mode is identical to using no application-level acknowledgment: samples are acknowledged by the reliability protocol alone, without involvement of the subscribing application.
2.DDS_APPLICATION_AUTO_ACKNOWLEDGMENT_MODE: Samples are automatically acknowledged by the middleware after the subscribing application accesses them, either through calling take() or read() on the sample. The samples are acknowledged after return_loan() is called.
3.DDS_APPLICATION_EXPLICIT_ACKNOWLEDGMENT_MODE: Samples are acknowledged after the subscribing application explicitly calls acknowledge on the sample. This can be done by calling either the DataReader’s acknowledge_sample() or acknowledge_all() operations. When using acknowledge_sample(), the application will provide the DDS_SampleInfo to identify the sample being acknowledged. When using acknowledge_all(), all the samples that have been read or taken by the reader will be acknowledged.
Note: Even in DDS_APPLICATION_EXPLICIT_ACKNOWLEDGMENT_MODE, some samples may be automatically acknowledged. This is the case when samples are filtered out by the reader (for example, by a ContentFilteredTopic or a time-based filter).
6.3.12.2Explicitly Acknowledging a Single Sample (C++)
void MyReaderListener::on_data_available(DDSDataReader *reader)
{
    Foo sample;
    DDS_SampleInfo info;
    FooDataReader* fooReader = FooDataReader::narrow(reader);
    DDS_ReturnCode_t retcode = fooReader->take_next_sample(sample, info);
    if (retcode == DDS_RETCODE_OK) {
        if (info.valid_data) {
            // Process sample
            ...
            retcode = fooReader->acknowledge_sample(info);
            if (retcode != DDS_RETCODE_OK) {
                // Error
            }
        }
    } else {
        // Not OK or NO DATA
    }
}
6.3.12.3 Explicitly Acknowledging All Samples (C++)
void MyReaderListener::on_data_available(DDSDataReader *reader)
{
    Foo sample;
    DDS_SampleInfo info;
    FooDataReader* fooReader = FooDataReader::narrow(reader);
    DDS_ReturnCode_t retcode;
    // Loop while samples available
    for (;;) {
        retcode = fooReader->take_next_sample(sample, info);
        if (retcode == DDS_RETCODE_NO_DATA) {
            // No more samples
            break;
        }
        // Process sample
        ...
    }
    retcode = fooReader->acknowledge_all();
    if (retcode != DDS_RETCODE_OK) {
        // Error
    }
}
6.3.12.4 Notification of Delivery with Application Acknowledgment
A DataWriter can use the wait_for_acknowledgments() operation to be notified when all the samples in the DataWriter’s queue have been acknowledged. See Waiting for Acknowledgments in a DataWriter (Section 6.3.11).
// Wait up to 10 seconds for all written samples to be acknowledged
DDS_Duration_t timeout = {10, 0};
retcode = writer->write(sample, DDS_HANDLE_NIL);
if (retcode != DDS_RETCODE_OK) {
    // Error
}
retcode = writer->wait_for_acknowledgments(timeout);
if (retcode != DDS_RETCODE_OK) {
    if (retcode == DDS_RETCODE_TIMEOUT) {
        // Timeout: Sample not acknowledged yet
    } else {
        // Error
    }
}
Connext does not provide a way to get delivery notifications on a per-DataReader and per-sample basis. If your application requires acknowledgment of message receipt, use the Request/Reply communication pattern with an Acknowledgment type (see Chapter 22: Introduction to the Request-Reply Communication Pattern).
6.3.12.5 Application-Level Acknowledgment Protocol
When the subscribing application confirms it has successfully processed a sample, an AppAck RTPS message is sent to the publishing application. This message will be resent until the
publishing application confirms receipt of the AppAck message by sending an AppAckConf RTPS message. See Figures 6.10 through 6.12.
Figure 6.10 AppAck RTPS Messages Sent when Application Acknowledges a Sample
6.3.12.6 Periodic and Non-Periodic AppAck Messages
You can configure whether AppAck RTPS messages are sent immediately or periodically through the DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1 on page 7-50). The samples_per_app_ack field (in Table 7.20, “DDS_RtpsReliableReaderProtocol_t,” on page 7-52) determines the minimum number of samples acknowledged by one AppAck message.
Figure 6.11 AppAck RTPS Messages Resent Until Acknowledged Through AppAckConf RTPS Message
6.3.12.7 Application Acknowledgment and Persistence Service
Application Acknowledgment is fully supported by RTI Persistence Service; in fact, combining the two is a common configuration. In addition to keeping samples available until they are fully acknowledged, Persistence Service takes advantage of Application Acknowledgment to avoid sending duplicate data: when the subscriber acknowledges the original publisher, Persistence Service is also notified of this event and will not send out duplicate messages. This is illustrated in Figure 6.13.
Figure 6.12 AppAck RTPS Messages Sent as a Sequence of Intervals, Combined to Optimize for Bandwidth
Figure 6.13 Application Acknowledgment and Persistence Service
A single AppAck message notifies both the original DataWriter and Persistence Service. Samples acknowledged to the original DataWriter are not sent by Persistence Service.
6.3.12.8 Application Acknowledgment and Routing Service
Application Acknowledgment is supported by RTI Routing Service: that is, Routing Service will acknowledge each sample once it has processed it. Routing Service is an active participant in the Connext system, not transparent to the publisher or subscriber. As such, Routing Service acknowledges samples to the publisher, and the subscriber acknowledges samples to Routing Service; the publisher does not get a notification from the subscriber directly.
6.3.13 Required Subscriptions
The DURABILITY QosPolicy (Section 6.5.7) specifies whether acknowledged samples need to be kept in the DataWriter’s queue and made available to late-joining DataReaders.
There are scenarios where you know a priori that a particular set of applications will join the system: e.g., a logging service or a known processing application. The Required Subscription feature is designed to keep data until these known late joining applications acknowledge the data.
Another use case is when DataReaders become temporarily inactive, either because they stop responding to heartbeats or because the subscriber becomes temporarily disconnected and is purged from the discovery database. In both cases, the DataWriter would normally no longer keep samples for such a DataReader. The Required Subscription feature keeps the data until these known DataReaders have acknowledged it.
To use Required Subscriptions, the DataReaders and DataWriters must have their RELIABILITY QosPolicy (Section 6.5.19) kind set to RELIABLE.
6.3.13.1 Named, Required and Durable Subscriptions
Before describing the Required Subscriptions, it is important to understand a few concepts:
❏Named Subscription: Through the ENTITY_NAME QosPolicy (DDS Extension) (Section 6.5.9), each DataReader can be given a specific name. This name can be used by tools to identify a specific DataReader. Additionally, the DataReader can be given a role_name. For example: LOG_APP_1 DataReader belongs to the logger applications (role_name = “LOGGER”).
❏Required Subscription: a named subscription to which a DataWriter is configured to deliver data, even if the DataReaders serving that subscription are not yet available. The DataWriter must store each sample until it has been acknowledged by all active reliable DataReaders and by all required subscriptions. The DataWriter is not waiting for a specific DataReader; rather, it is waiting for any DataReaders that belong to the required subscription (those whose role_name is set to the subscription name).
❏Durable Subscription is a required subscription where samples are stored and forwarded by an external service. In this case, the required subscription is served by RTI Persistence Service. See Configuring Durable Subscriptions in Persistence Service (Section 27.9).
6.3.13.2 Durability QoS and Required Subscriptions
The DURABILITY QosPolicy (Section 6.5.7) and the Required Subscriptions feature complement each other.
The DurabilityQosPolicy determines whether or not Connext will store and deliver previously acknowledged samples to new DataReaders that join the network later. You can specify to either not make the samples available (DDS_VOLATILE_DURABILITY_QOS kind), or to make them available and declare you are storing the samples in memory (DDS_TRANSIENT_LOCAL_DURABILITY_QOS or DDS_TRANSIENT_DURABILITY_QOS kind) or in permanent storage (DDS_PERSISTENT_DURABILITY_QOS).
Required Subscriptions determine when a sample is considered acknowledged, which in turn feeds into the DurabilityQosPolicy's decision of whether to keep it. When Required Subscriptions are used, a sample is considered acknowledged by a DataWriter only when both the active DataReaders and a quorum of required subscriptions have acknowledged it. (Acknowledging a sample can be done at either the protocol or the application level; see Application Acknowledgment (Section 6.3.12).)
6.3.13.3 Required Subscriptions Configuration
Each DataReader can be configured to be part of a named subscription, by giving it a role_name using the ENTITY_NAME QosPolicy (DDS Extension) (Section 6.5.9). A DataWriter can then be configured using the AVAILABILITY QosPolicy (DDS Extension) (Section 6.5.1) (required_matched_endpoint_groups) with a list of required named subscriptions identified by the role_name. Additionally, the DataWriter can be configured with a quorum or minimum number of DataReaders from a given named subscription that must receive a sample.
When configured with a list of required subscriptions, a DataWriter will store a sample until the sample is acknowledged by all active reliable DataReaders, as well as all required subscriptions. When a quorum is specified, a minimum number of DataReaders of the required subscription must acknowledge a sample in order for the sample to be considered acknowledged. Specifying a quorum provides a level of redundancy in the system as multiple applications or services acknowledge they have received the sample. Each individual DataReader is identified using its
own virtual GUID (see DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1)).
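As a sketch of this configuration (the subscription name "LOGGER" and the quorum value 2 are illustrative; the field names follow the ENTITY_NAME and AVAILABILITY QoS extensions described above):

```cpp
// Subscriber side: make the DataReader part of the "LOGGER"
// named subscription via its role_name (ENTITY_NAME QoS).
DDS_DataReaderQos reader_qos;
subscriber->get_default_datareader_qos(reader_qos);
reader_qos.subscription_name.role_name = DDS_String_dup("LOGGER");

// Publisher side: require acknowledgment from the "LOGGER"
// subscription, with a quorum of two DataReaders.
DDS_DataWriterQos writer_qos;
publisher->get_default_datawriter_qos(writer_qos);
writer_qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;
writer_qos.availability.required_matched_endpoint_groups.ensure_length(1, 1);
writer_qos.availability.required_matched_endpoint_groups[0].role_name =
        DDS_String_dup("LOGGER");
writer_qos.availability.required_matched_endpoint_groups[0].quorum_count = 2;
```

With this configuration, a sample is not considered fully acknowledged until at least two DataReaders whose role_name is "LOGGER" have acknowledged it, in addition to all active reliable DataReaders.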
6.3.14 Managing Data Instances (Working with Keyed Data Types)
This section applies only to data types that use keys; see Samples, Instances, and Keys (Section 2.2.2). Using the following operations with non-keyed types has no effect.
Topics come in two flavors: those whose associated data type has specified some fields as defining the ‘key,’ and those whose associated data type has not. An example of a data type with a key is shown in Figure 6.14.
Figure 6.14 Data Type with a Key
typedef struct Flight {
    long flightId; //@key
    string departureAirport;
    string arrivalAirport;
    Time_t departureTime;
    Time_t estimatedArrivalTime;
    Location_t currentPosition;
};
If the data type has some fields that act as a ‘key,’ the Topic essentially defines a collection of data-instances, one for each unique value of the key fields.
Since the key fields are contained within the data structure, Connext could examine the key fields each time it needs to determine which data-instance is being modified. However, for performance reasons, it is better for your application to declare up front the instances it intends to modify, by registering them (see Section 6.3.14.1).
The register_instance() operation provides a handle to the instance (of type DDS_InstanceHandle_t) that can be used later to refer to the instance.
6.3.14.1 Registering and Unregistering Instances
If your data type has a key, you may improve performance by registering an instance (data associated with a particular value of the key) before you write data for the instance. You can do this for any number of instances, up to the maximum number of instances configured in the DataWriter’s RESOURCE_LIMITS QosPolicy (Section 6.5.20). Instance registration is completely optional.
Registration tells Connext that you are about to modify (write or dispose of) a specific instance. This allows Connext to pre-allocate resources for the instance and to return a handle that identifies it, so the key fields do not need to be analyzed on subsequent operations.
If you write without registering, you can pass the NIL instance handle as part of the write() call.
If you register the instance first, Connext can look up the instance beforehand and return a handle to that instance. Then when you pass this handle to the write() operation, Connext no longer needs to analyze the data to check what instance it is for. Instead, it can directly update the instance pointed to by the instance handle.
In summary, by registering an instance, all subsequent write() calls to that instance become more efficient. If you only plan to write once to a particular instance, registration does not ‘buy’ you much in performance, but in general, it is good practice.
To register an instance, use the DataWriter’s register_instance() operation. For best performance, it should be invoked prior to calling any operation that modifies the instance, such as write(), write_w_timestamp(), dispose(), or dispose_w_timestamp().
When you are done using that instance, you can unregister it. To unregister an instance, use the DataWriter’s unregister_instance() operation. Unregistering tells Connext that the DataWriter does not intend to modify that data-instance anymore, allowing Connext to recover any resources it allocated for managing the instance.
Figure 6.15 Registering an Instance
Flight myFlight;

// writer is a FlightDataWriter, created elsewhere
myFlight.flightId = 265;
DDS_InstanceHandle_t fl265Handle =
        writer->register_instance(myFlight);
...
// Each time we update the flight, we can pass the handle
myFlight.departureAirport = "SJC";
myFlight.arrivalAirport = "LAX";
myFlight.departureTime = {120000, 0};
myFlight.estimatedArrivalTime = {130200, 0};
myFlight.currentPosition = { {37, 20}, {121, 53} };
if (writer->write(myFlight, fl265Handle) != DDS_RETCODE_OK) {
    // ... handle error
}
...
// Once we are done updating the flight, it can be unregistered
if (writer->unregister_instance(myFlight, fl265Handle) !=
        DDS_RETCODE_OK) {
    // ... handle error
}
Once an instance has been unregistered, and assuming that no other DataWriters are writing values for the instance, the matched DataReaders will eventually get an indication that the instance no longer has any DataWriters. This is communicated to the DataReaders by means of the DDS_SampleInfo that accompanies each data sample.
The unregister_instance() operation may affect the ownership of the data instance (see the OWNERSHIP QosPolicy (Section 6.5.15)). If the DataWriter was the exclusive owner of the instance, then calling unregister_instance() relinquishes that ownership, and another DataWriter can become the exclusive owner of the instance.
The unregister_instance() operation indicates only that a particular DataWriter no longer has anything to say about the instance.
Note that this is different than the dispose() operation discussed in the next section, which informs DataReaders that the data-instance no longer exists.
6.3.14.2 Disposing of Data
The dispose() operation informs DataReaders that, as far as the DataWriter knows, the data- instance no longer exists and can be considered “not alive.” When the dispose() operation is called, the instance state stored in the DDS_SampleInfo structure, accessed through DataReaders, will change to NOT_ALIVE_DISPOSED for that particular instance.
autodispose_unregistered_instances in the WRITER_DATA_LIFECYCLE QoS Policy (Section 6.5.26) controls whether instances are automatically disposed when they are unregistered.
For example, in a flight tracking system, when a flight lands, a DataWriter may dispose the data- instance corresponding to the flight. In that case, all DataReaders who are monitoring the flight will see the instance state change to NOT_ALIVE_DISPOSED, indicating that the flight has landed.
Note that this is different than unregister_instance() (Section 6.3.14.1), which indicates only that a particular DataWriter no longer wishes to modify an instance.
If a particular instance is never disposed, its instance state will eventually change from ALIVE to NOT_ALIVE_NO_WRITERS once all the DataWriters that were writing that instance unregister the instance or lose their liveliness. For more information on DataWriter liveliness, see the LIVELINESS QosPolicy (Section 6.5.13).
See also: Propagating Serialized Keys with Disposed-Instance Notifications.
6.3.14.3 Looking Up an Instance Handle
Some operations, such as write(), require an instance_handle parameter. If you need to get such a handle, you can call the FooDataWriter’s lookup_instance() operation, which takes an instance as a parameter and returns a handle to that instance. This is useful for keyed data types.
DDS_InstanceHandle_t lookup_instance (const Foo & key_holder)
The instance must have already been registered (see Section 6.3.14.1). If the instance is not registered, this operation returns DDS_HANDLE_NIL.
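For illustration, a sketch using the Flight type from Figure 6.14 (writer is assumed to be a FlightDataWriter):

```cpp
// Only the key field(s) of the key-holder sample need to be set
Flight keyHolder;
keyHolder.flightId = 265;
DDS_InstanceHandle_t handle = writer->lookup_instance(keyHolder);
if (DDS_InstanceHandle_is_nil(&handle)) {
    // The instance was never registered by this DataWriter
}
```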
6.3.14.4 Getting the Key Value for an Instance
Once you have an instance handle (using register_instance() or lookup_instance()), you can use the DataWriter’s get_key_value() operation to retrieve the value of the key of the corresponding instance. The key fields of the data structure passed into get_key_value() will be filled out with the original values used to generate the instance handle. The key fields are defined when the data type is defined, see Samples, Instances, and Keys (Section 2.2.2) for more information.
Following our example in Figure 6.15, registering the flight with flightId = 265 returned a DDS_InstanceHandle_t (fl265Handle) that can be used in the call to the FlightDataWriter’s get_key_value() operation. The value of the key is returned in a structure of type Flight with the flightId field filled in with the integer 265.
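A sketch of that call (writer is assumed to be a FlightDataWriter and fl265Handle a previously obtained handle):

```cpp
Flight keyValue;
DDS_ReturnCode_t retcode = writer->get_key_value(keyValue, fl265Handle);
if (retcode == DDS_RETCODE_OK) {
    // keyValue.flightId now holds the key (265 in this example);
    // non-key fields are left unspecified
}
```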
See also: Propagating Serialized Keys with Disposed-Instance Notifications.
6.3.15 Setting DataWriter QosPolicies
The DataWriter’s QosPolicies control its resources and behavior.
The DDS_DataWriterQos structure has the following format:
struct DDS_DataWriterQos {
    DDS_DurabilityQosPolicy          durability;
    DDS_DurabilityServiceQosPolicy   durability_service;
    DDS_DeadlineQosPolicy            deadline;
    DDS_LatencyBudgetQosPolicy       latency_budget;
    DDS_LivelinessQosPolicy          liveliness;
    DDS_ReliabilityQosPolicy         reliability;
    DDS_DestinationOrderQosPolicy    destination_order;
    DDS_HistoryQosPolicy             history;
    DDS_ResourceLimitsQosPolicy      resource_limits;
    DDS_TransportPriorityQosPolicy   transport_priority;
    DDS_LifespanQosPolicy            lifespan;
    DDS_UserDataQosPolicy            user_data;
    DDS_OwnershipQosPolicy           ownership;
    DDS_OwnershipStrengthQosPolicy   ownership_strength;
    DDS_WriterDataLifecycleQosPolicy writer_data_lifecycle;
    // extensions to the DDS standard:
    DDS_DataWriterResourceLimitsQosPolicy writer_resource_limits;
    DDS_DataWriterProtocolQosPolicy  protocol;
    DDS_TransportSelectionQosPolicy  transport_selection;
    DDS_TransportUnicastQosPolicy    unicast;
    DDS_PublishModeQosPolicy         publish_mode;
    DDS_PropertyQosPolicy            property;
    DDS_BatchQosPolicy               batch;
    DDS_MultiChannelQosPolicy        multi_channel;
    DDS_AvailabilityQosPolicy        availability;
    DDS_EntityNameQosPolicy          publication_name;
    DDS_TypeSupportQosPolicy         type_support;
};
Note: set_qos() cannot always be used within a listener callback; see Restricted Operations in Listener Callbacks (Section 4.5.1).
Table 6.16 summarizes the meaning of each policy. (They appear alphabetically in the table.) For information on why you would want to change a particular QosPolicy, see the referenced section. For defaults and valid ranges, please refer to the API Reference HTML documentation.
Table 6.16 DataWriter QosPolicies

Availability: This QoS policy is used in the context of two features: Availability QoS Policy and Collaborative DataWriters (Section 6.5.1.1), and Availability QoS Policy and Required Subscriptions (Section 6.5.1.2). For Collaborative DataWriters, Availability specifies the group of DataWriters expected to collaboratively provide data and the timeouts that control when to allow data to be available that may skip samples. For Required Subscriptions, Availability configures a set of Required Subscriptions on a DataWriter. See Section 6.5.1.

Batch: Specifies and configures the mechanism that allows Connext to collect multiple user data samples to be sent in a single network packet, to take advantage of the efficiency of sending larger packets and thus increase effective throughput. See Section 6.5.2.

DataWriterProtocol: This QosPolicy configures the Connext on-the-network protocol (RTPS) behavior of the DataWriter. See Section 6.5.3.

DataWriterResourceLimits: Controls how many threads can concurrently block on a write() call of this DataWriter. See Section 6.5.4.

Deadline: For a DataReader, it specifies the maximum expected elapsed time between arriving data samples. For a DataWriter, it specifies a commitment to publish samples with no greater elapsed time between them. See Section 6.5.5.

DestinationOrder: Controls how Connext will deal with data sent by multiple DataWriters for the same topic. Can be set to "by reception timestamp" or to "by source timestamp". See Section 6.5.6.

Durability: Specifies whether or not Connext will store and deliver previously published data to new DataReaders. See Section 6.5.7.

DurabilityService: Various settings to configure the external Persistence Service (included with Connext Messaging) used by Connext for DataWriters with a Durability QoS setting of Persistent Durability. See Section 6.5.8.

EntityName: Assigns a name to a DataWriter. See Section 6.5.9.

History: Specifies how much data must be stored by Connext for the DataWriter or DataReader. This QosPolicy affects the RELIABILITY QosPolicy (Section 6.5.19) as well as the DURABILITY QosPolicy (Section 6.5.7). See Section 6.5.10.

LatencyBudget: Suggestion to Connext on how much time is allowed to deliver data. See Section 6.5.11.

Lifespan: Specifies how long Connext should consider data sent by a user application to be valid. See Section 6.5.12.

Liveliness: Specifies and configures the mechanism that allows DataReaders to detect when DataWriters become disconnected or "dead." See Section 6.5.13.

MultiChannel: Configures a DataWriter’s ability to send data on different multicast groups (addresses) based on the value of the data. See Section 6.5.14.

Ownership: Along with OwnershipStrength, specifies if DataReaders for a topic can receive data from multiple DataWriters at the same time. See Section 6.5.15.

OwnershipStrength: Used to arbitrate among multiple DataWriters of the same instance of a Topic when the Ownership QosPolicy is EXCLUSIVE. See Section 6.5.16.

Partition: Adds string identifiers that are used for matching DataReaders and DataWriters for the same Topic. See Section 6.4.5.

Property: Stores name/value (string) pairs that can be used to configure certain parameters of Connext that are not exposed through formal QoS policies. It can also be used to store and propagate application-specific name/value pairs, which can be retrieved by user code during discovery. See Section 6.5.17.

PublishMode: Specifies how Connext sends application data on the network. By default, data is sent in the user thread that calls the DataWriter’s write() operation. However, this QosPolicy can be used to tell Connext to use its own thread to send the data. See Section 6.5.18.

Reliability: Specifies whether or not Connext will deliver data reliably. See Section 6.5.19.

ResourceLimits: Controls the amount of physical memory allocated for entities, if dynamic allocations are allowed, and how they occur. Also controls memory usage among different instance values for keyed topics. See Section 6.5.20.

TransportPriority: Set by a DataWriter to tell Connext that the data being sent is a different "priority" than other data. See Section 6.5.21.

TransportSelection: Allows you to select which physical transports a DataWriter or DataReader may use to send or receive its data. See Section 6.5.22.

TransportUnicast: Specifies a subset of transports and port numbers that can be used by an Entity to receive data. See Section 6.5.23.

TypeSupport: Used to attach application-specific values to a DataWriter or DataReader, which are passed to the serialization or deserialization routine of the associated data type. See Section 6.5.24.

UserData: Along with Topic Data QosPolicy and Group Data QosPolicy, used to attach a buffer of bytes to Connext's discovery meta-data. See Section 6.5.25.

WriterDataLifeCycle: Controls how a DataWriter handles the lifecycle of the instances (keys) that the DataWriter is registered to manage. See Section 6.5.26.
Many of the DataWriter QosPolicies also apply to DataReaders (see Section 7.3). For a DataWriter to communicate with a DataReader, their QosPolicies must be compatible. Generally, for the QosPolicies that apply both to the DataWriter and the DataReader, the setting in the DataWriter is considered an “offer” and the setting in the DataReader is a “request.” Compatibility means that what is offered by the DataWriter equals or surpasses what is requested by the DataReader. Each policy’s description includes compatibility restrictions. For more information on compatibility, see QoS Requested vs. Offered Compatibility, the RxO Property (Section 4.2.1).
Some of the policies may be changed after the DataWriter has been created. This allows the application to modify the behavior of the DataWriter while it is in use. To modify the QoS of an existing DataWriter, use the get_qos() and set_qos() operations; this follows a general pattern for all Entities, described in Section 4.1.7.3.
6.3.15.1 Configuring QoS Settings when the DataWriter is Created
As described in Creating DataWriters (Section 6.3.1), there are different ways to create a DataWriter, depending on how you want to specify its QoS (with or without a QoS Profile).
❏In Figure 6.9, we saw how to create a DataWriter with default QosPolicies. If you change the default DataWriter QoS values, DataWriters subsequently created with the defaults of the Publisher will use the new default values. As described in Section 4.1.7, this is a general pattern that applies to the construction of all Entities.
❏To create a DataWriter with non-default QoS values, without using a QoS Profile, see the example code in Figure 6.16.
❏You can also create a DataWriter and specify its QoS settings via a QoS Profile. To do so, call create_datawriter_with_profile(), as seen in Figure 6.17.
❏If you want to use a QoS profile, but then make some changes to the QoS before creating the DataWriter, call get_datawriter_qos_from_profile() and create_datawriter(), as seen in Figure 6.18.
For more information, see Creating DataWriters (Section 6.3.1) and Chapter 17: Configuring QoS with XML.
Figure 6.16 Creating a DataWriter with Modified QosPolicies (not from a profile)
DDS_DataWriterQos writer_qos;
// initialize writer_qos with default values
publisher->get_default_datawriter_qos(writer_qos);
// make QoS changes
writer_qos.history.depth = 5;
// Create the writer with modified qos
DDSDataWriter * writer = publisher->create_datawriter(
        topic, writer_qos, NULL, DDS_STATUS_MASK_NONE);
if (writer == NULL) {
    // ... error
}
// narrow it for your specific data type
FooDataWriter* foo_writer = FooDataWriter::narrow(writer);
1. Note: In C, you must initialize the QoS structures before they are used, see Section 4.2.2.
Figure 6.17 Creating a DataWriter with a QoS Profile
// Create the datawriter
DDSDataWriter * writer = publisher->create_datawriter_with_profile(
        topic,
        "MyWriterLibrary",
        "MyWriterProfile",
        NULL, DDS_STATUS_MASK_NONE);
if (writer == NULL) {
    // ... error
}
// narrow it for your specific data type
FooDataWriter* foo_writer = FooDataWriter::narrow(writer);
Figure 6.18 Getting QoS Values from a Profile, Changing QoS Values, Creating a DataWriter with Modified QoS Values
DDS_DataWriterQos writer_qos;
// Get writer QoS from profile
retcode = factory->get_datawriter_qos_from_profile(
        writer_qos, "WriterProfileLibrary", "WriterProfile");
if (retcode != DDS_RETCODE_OK) {
    // handle error
}
// Make QoS changes
writer_qos.history.depth = 5;
DDSDataWriter * writer = publisher->create_datawriter(
        topic, writer_qos, NULL, DDS_STATUS_MASK_NONE);
if (writer == NULL) {
    // handle error
}
1. Note: In C, you must initialize the QoS structures before they are used, see Section 4.2.2.
6.3.15.2 Changing QoS Settings After the DataWriter Has Been Created
There are two ways to change an existing DataWriter’s QoS after it has been created:
❏To change QoS programmatically (that is, without using a QoS Profile), use get_qos() and set_qos(). See the example code in Figure 6.19. It retrieves the current values by calling the DataWriter’s get_qos() operation, modifies the values, and calls set_qos() to apply them. Note, however, that some QosPolicies cannot be changed after the DataWriter has been created.
❏You can also change a DataWriter’s (and all other Entities’) QoS by using a QoS Profile and calling set_qos_with_profile(). For an example, see Figure 6.20. For more information, see Chapter 17: Configuring QoS with XML.
Figure 6.19 Changing the QoS of an Existing DataWriter (without a QoS Profile)
DDS_DataWriterQos writer_qos;
// Get current QoS
if (writer->get_qos(writer_qos) != DDS_RETCODE_OK) {
    // handle error
}
// Make QoS changes here
writer_qos.history.depth = 5;
// Set the new QoS
if (writer->set_qos(writer_qos) != DDS_RETCODE_OK) {
    // handle error
}
1. For the C API, you need to use DDS_DataWriterQos_INITIALIZER or DDS_DataWriterQos_initialize(). See Section 4.2.2.
Figure 6.20 Changing the QoS of an Existing DataWriter with a QoS Profile
retcode = writer->set_qos_with_profile(
        "WriterProfileLibrary", "WriterProfile");
if (retcode != DDS_RETCODE_OK) { // handle error
}
6.3.15.3 Using a Topic’s QoS to Initialize a DataWriter’s QoS
Several DataWriter QosPolicies can also be found in the QosPolicies for Topics (see Section 5.1.3). The QosPolicies set in the Topic do not directly affect the DataWriters (or DataReaders) that use that Topic. In many ways, some QosPolicies are a Topic-level concept, even though the DDS standard allows different values to be set on the individual DataWriters and DataReaders of the same Topic.
There are many ways to use the QosPolicies’ values set in the Topic when setting the QosPolicies’ values in a DataWriter. The most straightforward way is to get the values of policies directly from the Topic and use them in the policies for the DataWriter, as shown in Figure 6.21.
Figure 6.21 Copying Selected QoS from a Topic when Creating a DataWriter
DDS_DataWriterQos writer_qos;
DDS_TopicQos topic_qos;
// topic and publisher already created
// get current QoS for the topic, default QoS for the writer
if (topic->get_qos(topic_qos) != DDS_RETCODE_OK) {
    // handle error
}
if (publisher->get_default_datawriter_qos(writer_qos) != DDS_RETCODE_OK) {
    // handle error
}
// Copy specific policies from the topic QoS to the writer QoS
writer_qos.deadline = topic_qos.deadline;
writer_qos.reliability = topic_qos.reliability;
// Create the DataWriter with the modified QoS
DDSDataWriter* writer = publisher->create_datawriter(
        topic, writer_qos, NULL, DDS_STATUS_MASK_NONE);
1. Note in C, you must initialize the QoS structures before they are used, see Section 4.2.2.
You can use the Publisher’s copy_from_topic_qos() operation to copy all of the common policies from the Topic QoS to a DataWriter QoS. This is illustrated in Figure 6.22.
Figure 6.22 Copying all QoS from a Topic when Creating a DataWriter
DDS_DataWriterQos writer_qos;
DDS_TopicQos topic_qos;
// topic, publisher, writer_listener already created
if (topic->get_qos(topic_qos) != DDS_RETCODE_OK) {
    // handle error
}
if (publisher->get_default_datawriter_qos(writer_qos) != DDS_RETCODE_OK) {
    // handle error
}
// copy relevant QosPolicies from topic’s qos into writer’s qos
publisher->copy_from_topic_qos(writer_qos, topic_qos);
// Optionally, modify policies as desired
writer_qos.deadline.duration.sec = 1;
writer_qos.deadline.duration.nanosec = 0;
// Create the DataWriter with the modified QoS
DDSDataWriter* writer = publisher->create_datawriter(
        topic, writer_qos, writer_listener, DDS_STATUS_MASK_ALL);
1. Note in C, you must initialize the QoS structures before they are used, see Section 4.2.2.
In another design pattern, you may want to start with the default QoS values for a DataWriter and override them with the QoS values of the Topic. Figure 6.23 gives an example of how to do this.
Because this is a common pattern, Connext provides a special macro,
DDS_DATAWRITER_QOS_USE_TOPIC_QOS, that can be used to indicate that the
DataWriter should be created with the set of QoS values that results from modifying the default DataWriter QosPolicies with the QoS values specified by the Topic. Figure 6.24 shows how the macro is used.
The code fragments shown in Figure 6.23 and Figure 6.24 result in identical QoS settings for the created DataWriter.
Figure 6.23 Combining Default Topic and DataWriter QoS (Option 1)
DDS_DataWriterQos writer_qos;
DDS_TopicQos topic_qos;
// topic, publisher, writer_listener already created
if (topic->get_qos(topic_qos) != DDS_RETCODE_OK) {
    // handle error
}
if (publisher->get_default_datawriter_qos(writer_qos) != DDS_RETCODE_OK) {
    // handle error
}
if (publisher->copy_from_topic_qos(writer_qos, topic_qos) != DDS_RETCODE_OK) {
    // handle error
}
// Create the DataWriter with the combined QoS
DDSDataWriter* writer = publisher->create_datawriter(
        topic, writer_qos, writer_listener, DDS_STATUS_MASK_ALL);
1. Note in C, you must initialize the QoS structures before they are used, see Section 4.2.2.
Figure 6.24 Combining Default Topic and DataWriter QoS (Option 2)
// topic, publisher, writer_listener already created
DDSDataWriter* writer = publisher->create_datawriter(
        topic, DDS_DATAWRITER_QOS_USE_TOPIC_QOS,
        writer_listener, DDS_STATUS_MASK_ALL);
For more information on the general use and manipulation of QosPolicies, see Section 4.1.7.
6.3.16 Navigating Relationships Among Entities
6.3.16.1 Finding Matching Subscriptions
The following DataWriter operations can be used to get information on the DataReaders that are currently associated with the DataWriter (that is, the DataReaders to which Connext will send the data written by the DataWriter).
❏get_matched_subscriptions()
❏get_matched_subscription_data()
❏get_matched_subscription_locators()
get_matched_subscriptions() will return a sequence of handles to matched DataReaders. You can use these handles in the get_matched_subscription_data() method to get information about the DataReader such as the values of its QosPolicies.
get_matched_subscription_locators() retrieves a list of locators for subscriptions currently "associated" with the DataWriter. Matched subscription locators include locators for all those subscriptions in the same domain that have a matching Topic, compatible QoS, and a common partition that the DomainParticipant has not indicated should be "ignored." These are the locators that Connext uses to communicate with matching DataReaders. (See Locator Format (Section 14.2.1.1).)
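As a sketch of how the first two operations are typically combined (writer is an already-created DDSDataWriter):

```cpp
DDS_InstanceHandleSeq handles;
DDS_ReturnCode_t retcode = writer->get_matched_subscriptions(handles);
if (retcode == DDS_RETCODE_OK) {
    for (int i = 0; i < handles.length(); ++i) {
        DDS_SubscriptionBuiltinTopicData data;
        if (writer->get_matched_subscription_data(data, handles[i]) ==
                DDS_RETCODE_OK) {
            // Inspect the matched DataReader's info, e.g.
            // data.topic_name, data.reliability, data.durability, ...
        }
    }
}
```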
You can also get the DATA_WRITER_PROTOCOL_STATUS for matching subscriptions with these operations (see Section 6.3.6.2):
❏get_matched_subscription_datawriter_protocol_status()
❏get_matched_subscription_datawriter_protocol_status_by_locator()
Notes:
❏Status/data for a matched subscription is only kept while the matched subscription is alive. Once a matched subscription is no longer alive, its status is deleted. If you try to get the status/data for a matched subscription that is no longer alive, the 'get status' or 'get data' call will return an error.
❏DataReaders that have been ignored using the DomainParticipant’s ignore_subscription() operation are not considered to be matched even if the DataReader has the same Topic and compatible QosPolicies. Thus, they will not be included in the list of DataReaders returned by get_matched_subscriptions() or get_matched_subscription_locators(). See Section 16.4.2 for more on ignore_subscription().
❏The get_matched_subscription_data() operation does not retrieve the following information from
6.3.16.2 Finding Related Entities
These operations are useful for obtaining a handle to various related entities:
❏get_publisher()
❏get_topic()
get_publisher() returns the Publisher that created the DataWriter. get_topic() returns the Topic with which the DataWriter is associated.
6.3.17 Asserting Liveliness
The assert_liveliness() operation can be used to manually assert the liveliness of the DataWriter without writing data. This operation is only useful if the kind of LIVELINESS QosPolicy (Section 6.5.13) is MANUAL_BY_PARTICIPANT or MANUAL_BY_TOPIC.
How DataReaders determine if DataWriters are alive is configured using the LIVELINESS QosPolicy (Section 6.5.13). The lease_duration parameter of the LIVELINESS QosPolicy is a contract by the DataWriter to all of its matched DataReaders that it will send a packet within the time value of the lease_duration to state that it is still alive.
There are three ways to assert liveliness. One is to have Connext itself send liveliness packets periodically when the kind of LIVELINESS QosPolicy is set to AUTOMATIC. The other two ways to assert liveliness, used when liveliness is set to MANUAL, are to call write() to send data or to call the assert_liveliness() operation without sending data.
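The lease-based contract described above can be sketched as a small, stand-alone model. This is an illustration only, not Connext internals: the type and member names below are hypothetical, and it simply shows how a reader-side bookkeeping structure could decide whether a matched writer is still alive within its lease_duration.

```cpp
#include <chrono>

// Illustrative reader-side model of the liveliness lease (hypothetical
// names; not Connext internals). Any liveliness assertion -- a sample
// sent by write(), or an explicit assert_liveliness() call -- renews the
// lease; the writer is considered alive until lease_duration elapses.
using LivelinessClock = std::chrono::steady_clock;

struct WriterLivelinessModel {
    LivelinessClock::duration lease_duration;
    LivelinessClock::time_point last_assertion;

    // Called whenever any assertion from the matched writer arrives.
    void on_assertion(LivelinessClock::time_point now) {
        last_assertion = now;
    }

    // True while the most recent assertion is within the lease.
    bool is_alive(LivelinessClock::time_point now) const {
        return now - last_assertion <= lease_duration;
    }
};
```

In this model, both a write() and an assert_liveliness() call map onto on_assertion(); the writer is declared not-alive once the lease expires without either.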
6.4 Publisher/Subscriber QosPolicies
This section provides detailed information on the QosPolicies associated with a Publisher. Note that Subscribers have the exact same set of policies. These QosPolicies are summarized in Table 6.2; they are:
❏ ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1)
❏ENTITYFACTORY QosPolicy (Section 6.4.2)
❏EXCLUSIVE_AREA QosPolicy (DDS Extension) (Section 6.4.3)
❏GROUP_DATA QosPolicy (Section 6.4.4)
❏PARTITION QosPolicy (Section 6.4.5)
❏PRESENTATION QosPolicy (Section 6.4.6)
6.4.1 ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension)
This QosPolicy is used to enable or disable asynchronous publishing and asynchronous batch flushing for the Publisher.
This QosPolicy can be used to reduce the amount of time spent in the user thread to send data. You can use it to send large data reliably. "Large" in this context means that the data cannot be sent as a single packet by a transport. For example, to send data larger than 63K reliably using UDP/IP, you must configure Connext to send the data using asynchronous Publishers.
If so configured, the Publisher will spawn two threads, one for asynchronous publishing and one for asynchronous batch flushing. The asynchronous publisher thread will be shared by all DataWriters (belonging to this Publisher) that have their PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) kind set to ASYNCHRONOUS. The asynchronous publishing thread will then handle the data transmission chores for those DataWriters. This thread will only be spawned when the first of these DataWriters is enabled.
The asynchronous batch flushing thread will be shared by all DataWriters (belonging to this Publisher) that have batching enabled and max_flush_delay different than DURATION_INFINITE in BATCH QosPolicy (DDS Extension) (Section 6.5.2). This thread will only be spawned when the first of these DataWriters is enabled.
This QosPolicy allows you to adjust the asynchronous publishing and asynchronous batch flushing threads independently.
Batching and asynchronous publication are independent of one another. Flushing a batch on an asynchronous DataWriter makes it available for sending to the DataWriter's FlowControllers (DDS Extension) (Section 6.6). From the point of view of the FlowController, a batch is treated like one large sample.
Connext will sometimes coalesce multiple samples into a single network datagram. For example, samples buffered by a FlowController or sent in response to a negative acknowledgement (NACK) may be coalesced. This behavior is distinct from sample batching. Data samples sent by different asynchronous DataWriters belonging to the same Publisher to the same destination will not be coalesced into a single network packet. Instead, two separate network packets will be sent. Only samples written by the same DataWriter and intended for the same destination will be coalesced.
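The coalescing rule above can be illustrated with a small stand-alone sketch: samples are grouped into datagrams keyed by (DataWriter, destination), so samples from different writers never share a datagram. This is an illustrative model only, not Connext's actual transmit path; all type and function names are hypothetical.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative model of the coalescing rule (hypothetical names; not the
// actual Connext transmit path): samples may share a network datagram
// only when they come from the same DataWriter and are addressed to the
// same destination.
struct SampleInfo {
    std::string writer;       // DataWriter that produced the sample
    std::string destination;  // locator the sample is addressed to
    std::string payload;
};

typedef std::map<std::pair<std::string, std::string>,
                 std::vector<std::string> > DatagramMap;

// Groups samples into datagrams keyed by (writer, destination); samples
// from different writers of the same Publisher land in separate datagrams.
DatagramMap coalesce(const std::vector<SampleInfo>& samples) {
    DatagramMap datagrams;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        const SampleInfo& s = samples[i];
        datagrams[std::make_pair(s.writer, s.destination)].push_back(s.payload);
    }
    return datagrams;
}
```

For example, two samples from writer1 and one from writer2, all headed to the same locator, yield two datagrams: one carrying writer1's two samples and one carrying writer2's single sample.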
This QosPolicy includes the members in Table 6.17.
Table 6.17 DDS_AsynchronousPublisherQosPolicy

Type                  Field Name                  Description
DDS_Boolean           disable_asynchronous_write  Disables asynchronous publishing. To write asynchronously, this field must be FALSE (the default).
DDS_ThreadSettings_t  thread                      Settings for the asynchronous publishing thread.
DDS_Boolean           disable_asynchronous_batch  Disables asynchronous batch flushing. To flush asynchronously, this field must be FALSE (the default).
DDS_ThreadSettings_t  asynchronous_batch_thread   Settings for the asynchronous batch flushing thread.
6.4.1.1 Properties
This QosPolicy cannot be modified after the Publisher is created.
Since it is only for Publishers, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.
6.4.1.2 Related QosPolicies
❏If disable_asynchronous_write is TRUE (not the default), then any DataWriters created from this Publisher must have their PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) kind set to SYNCHRONOUS. (Otherwise create_datawriter() will return INCONSISTENT_QOS.)
❏If disable_asynchronous_batch is TRUE (not the default), then any DataWriters created from this Publisher must have max_flush_delay in BATCH QosPolicy (DDS Extension) (Section 6.5.2) set to DURATION_INFINITE. (Otherwise create_datawriter() will return INCONSISTENT_QOS.)
❏DataWriters configured to use the MULTI_CHANNEL QosPolicy (DDS Extension) (Section 6.5.14) do not support asynchronous publishing; an error is returned if a multi- channel DataWriter is configured for asynchronous publishing.
6.4.1.3 Applicable Entities
6.4.1.4 System Resource Considerations
Two threads can potentially be created.
For asynchronous publishing, system resource usage depends on the activity of the asynchronous thread controlled by the FlowController (see FlowControllers (DDS Extension) (Section 6.6)).
For asynchronous batch flushing, system resource usage depends on the activity of the asynchronous thread controlled by max_flush_delay in BATCH QosPolicy (DDS Extension) (Section 6.5.2).
6.4.2 ENTITYFACTORY QosPolicy
This QosPolicy controls whether or not child entities are created in the enabled state.
This QosPolicy applies to the DomainParticipantFactory, DomainParticipants, Publishers, and Subscribers, which act as 'factories' for the creation of subordinate entities. A DomainParticipantFactory is used to create DomainParticipants. A DomainParticipant is used to create both Publishers and Subscribers. A Publisher is used to create DataWriters; similarly, a Subscriber is used to create DataReaders.
Entities can be created either in an 'enabled' or 'disabled' state. An enabled entity can actively participate in communication. A disabled entity cannot be discovered or take part in communication until it is explicitly enabled. For example, Connext will not send data if the write() operation is called on a disabled DataWriter, nor will Connext deliver data to a disabled DataReader. You can only enable a disabled entity; once an entity is enabled, you cannot disable it. See Section 4.1.2 for more about the enable() method.
The ENTITYFACTORY contains only one member, as illustrated in Table 6.18.
Table 6.18 DDS_EntityFactoryQosPolicy

Type         Field Name                   Description
DDS_Boolean  autoenable_created_entities  DDS_BOOLEAN_TRUE: enable entities when they are created.
                                          DDS_BOOLEAN_FALSE: do not enable entities when they are created.
The ENTITYFACTORY QosPolicy controls whether the entities created from the factory are automatically enabled upon creation or are left disabled. For example, if a Publisher is configured to create entities disabled, any DataWriters it creates will start in the disabled state.
Note: if an entity is disabled, then all of the child entities it creates are also created in a disabled state, regardless of the setting of this QosPolicy. However, enabling a disabled entity will enable all of its children if this QosPolicy is set to autoenable child entities.
Note: an entity can only be enabled; it cannot be disabled after it has been enabled.
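The two notes above can be summarized in a small stand-alone model (hypothetical types, not the Connext API): a child created by a disabled factory starts disabled regardless of the policy, and enabling a factory propagates to its existing children only when autoenable_created_entities is true.

```cpp
#include <memory>
#include <vector>

// Illustrative model of the ENTITYFACTORY rules (hypothetical types; not
// the Connext API). A child created by a disabled factory starts disabled
// regardless of the policy; enable() is one-way and, when
// autoenable_created_entities is true, propagates to existing children.
struct Entity {
    bool enabled = false;
    bool autoenable_created_entities = true;
    std::vector<std::unique_ptr<Entity> > children;

    Entity& create_child() {
        std::unique_ptr<Entity> child(new Entity);
        // A child can start enabled only if this factory is itself
        // enabled AND the policy says to autoenable.
        child->enabled = enabled && autoenable_created_entities;
        children.push_back(std::move(child));
        return *children.back();
    }

    void enable() {  // enabling is one-way: there is no disable()
        enabled = true;
        if (autoenable_created_entities)
            for (std::size_t i = 0; i < children.size(); ++i)
                children[i]->enable();
    }
};
```

Creating a child from a disabled factory yields a disabled child; calling enable() on the factory then enables the child, mirroring the recursive-enable behavior described in the notes.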
See Section 6.4.2.1 for an example of how to set this policy.
There are various reasons why you may want to create entities in the disabled state:
❏To get around a “chicken and egg” problem during startup.
For example, if you create a DomainParticipant in the enabled state, it will immediately start sending packets to other nodes trying to discover if other Connext applications exist. However, you may want to configure the DomainParticipant further (for example, modify its QoS or install Listeners) before it starts the discovery process.
❏You may want to create entities without having them automatically start to work. This especially pertains to DataReaders. If you create a DataReader in an enabled state and you are using DataReaderListeners, Connext will immediately search for matching DataWriters and callback the listener as soon as data is published. This may not be what you want to happen if your application is still in the middle of initialization when data arrives.
So typically, you would create all entities in a disabled state; then, when all parts of the application have been initialized, you can enable all entities at the same time using the enable() operation on the DomainParticipant (see Section 4.1.2).
❏An entity’s existence is not advertised to other participants in the network until the entity is enabled. Instead of sending an individual declaration packet to other applications announcing the existence of the entity, Connext can be more efficient in bundling multiple declarations into a single packet when you enable all entities at the same time.
See Section 4.1.2 for more information about enabled/disabled entities.
6.4.2.1 Example
The code in Figure 6.25 illustrates how to use the ENTITYFACTORY QoS.
Figure 6.25 Configuring a Publisher so that New DataWriters are Disabled
DDS_PublisherQos publisher_qos;1
// topic, publisher, writer_listener already created
if (publisher->get_qos(publisher_qos) != DDS_RETCODE_OK) {
    // handle error
}
publisher_qos.entity_factory.autoenable_created_entities = DDS_BOOLEAN_FALSE;
if (publisher->set_qos(publisher_qos) != DDS_RETCODE_OK) {
    // handle error
}
//Subsequently created DataWriters are created disabled and
//must be explicitly enabled by the application
DDSDataWriter* datawriter = publisher->create_datawriter(topic,
    DDS_DATAWRITER_QOS_DEFAULT, writer_listener, DDS_STATUS_MASK_ALL);
... // now do other initialization
//Now explicitly enable the DataWriter, this will allow other
//applications to discover the DataWriter and for this application
//to send data when the DataWriter’s write() method is called
datawriter->enable();
1. Note: in C, you must initialize the QoS structures before they are used; see Section 4.2.2.
6.4.2.2 Properties
This QosPolicy can be modified at any time.
It can be set differently on the publishing and subscribing sides.
6.4.2.3 Related QosPolicies
This QosPolicy does not interact with any other policies.
6.4.2.4 Applicable Entities
❏DomainParticipantFactory (Section 8.2)
6.4.2.5 System Resource Considerations
This QosPolicy does not significantly impact the use of system resources.
6.4.3 EXCLUSIVE_AREA QosPolicy (DDS Extension)
This QosPolicy controls the creation and use of Exclusive Areas. An exclusive area (EA) is a mutex with built-in deadlock protection when multiple EAs are in use.

EAs allow Connext to be multi-threaded while protecting its internal data structures from corruption due to concurrent access and avoiding deadlock.
Within an EA, all calls to the code protected by the EA are single threaded. Each DomainParticipant, Publisher, and Subscriber represents a separate EA. All DataWriters of the same Publisher and all DataReaders of the same Subscriber share the EA of their parent. This means that the DataWriters of the same Publisher and the DataReaders of the same Subscriber are inherently single threaded.
Within an EA, there are limitations on how code protected by a different EA can be accessed. For example, when data is being processed by user code received in the DataReaderListener of a Subscriber EA, the user code may call the write() function of a DataWriter that is protected by the EA of its Publisher. So you can send data in the function called to process received data. However, you cannot create entities or call functions that are protected by the EA of the DomainParticipant. See Exclusive Areas (EAs) (Section 4.5) for the complete documentation on Exclusive Areas.
With this QoS, you can force a Publisher or Subscriber to share the same EA as its DomainParticipant. Using this capability, the restriction of not being able to create entities in a DataReaderListener's on_data_available() callback is lifted. However, the tradeoff is reduced concurrency, since all entities that share an EA execute their protected code serially.
Note that the restrictions on calling methods in a different EA exist only for user code that is called in registered Listeners by internal DomainParticipant threads. User code may call all Connext functions for any Entities from its own threads at any time.
The EXCLUSIVE_AREA includes a single member, as listed in Table 6.19. For the default value, please refer to the API Reference HTML documentation.
Table 6.19 DDS_ExclusiveAreaQosPolicy

Type         Field Name                 Description
DDS_Boolean  use_shared_exclusive_area  DDS_BOOLEAN_FALSE: the Publisher or Subscriber and its subordinates will not use the same EA as the DomainParticipant.
                                        DDS_BOOLEAN_TRUE: the Publisher or Subscriber and its subordinates will use the same EA as the DomainParticipant.
The implications and restrictions of using a private or shared EA are discussed in Section 4.5. The basic tradeoff is between concurrency and resource usage, as described below.
Behavior when the Publisher or Subscriber’s use_shared_exclusive_area is set to FALSE:
❏The creation of the Publisher/Subscriber will create an EA that will be used only by the Publisher/Subscriber and the DataWriters/DataReaders that belong to them.
❏Consequences: This setting maximizes concurrency at the expense of creating a mutex for the Publisher or Subscriber. In addition, using a separate EA may restrict certain Connext operations (see Operations Allowed within Listener Callbacks (Section 4.4.5)) from being called from the callbacks of Listeners attached to those entities and the entities that they create. This limitation results from the need to avoid deadlock between separate EAs.
Behavior when the Publisher or Subscriber’s use_shared_exclusive_area is set to TRUE:
❏The creation of the Publisher/Subscriber does not create a new EA. Instead, the Publisher/ Subscriber, along with the DataWriters/DataReaders that they create, will use a common EA shared with the DomainParticipant.
❏Consequences: By sharing the same EA among multiple entities, you may decrease the amount of concurrency in the application, which can adversely impact performance. However, this setting does use less resources and allows you to call almost any operation on any Entity within a listener callback (see Exclusive Areas (EAs) (Section 4.5) for full details).
6.4.3.1 Example
The code in Figure 6.26 illustrates how to change the EXCLUSIVE_AREA policy.
Figure 6.26 Creating a Publisher with a Shared Exclusive Area
DDS_PublisherQos publisher_qos;1
// domain, publisher_listener have been previously created
if (participant->get_default_publisher_qos(publisher_qos) !=
        DDS_RETCODE_OK) {
    // handle error
}
publisher_qos.exclusive_area.use_shared_exclusive_area = DDS_BOOLEAN_TRUE;
DDSPublisher* publisher = participant->create_publisher(
    publisher_qos, publisher_listener, DDS_STATUS_MASK_ALL);
1. Note: in C, you must initialize the QoS structures before they are used; see Section 4.2.2.
6.4.3.2 Properties
This QosPolicy cannot be modified after the Entity has been created. It can be set differently on the publishing and subscribing sides.
6.4.3.3 Related QosPolicies
This QosPolicy does not interact with any other policies.
6.4.3.4 Applicable Entities
6.4.3.5 System Resource Considerations
This QosPolicy affects the use of operating-system mutexes. When use_shared_exclusive_area is TRUE, fewer mutexes are created, since the Publisher or Subscriber shares the DomainParticipant's EA instead of creating its own.
6.4.4 GROUP_DATA QosPolicy
This QosPolicy provides an area where your application can store additional information related to the Publisher and Subscriber. This information is passed between applications during discovery (see Chapter 14: Discovery) using builtin topics. How the group data is used is entirely up to user code; Connext does not do anything with it.

Use cases are usually application-to-application identification, authentication, authorization, and encryption.

The value of the GROUP_DATA QosPolicy is sent to remote applications when they are first discovered, as well as when the Publisher or Subscriber’s set_qos() method is called after changing the value of the GROUP_DATA. User code can set listeners on the builtin DataReaders of the builtin Topics used by Connext to propagate discovery information, so the application can be notified when the value changes.

Currently, GROUP_DATA of the associated Publisher or Subscriber is only propagated with the information that declares a DataWriter or DataReader. Thus, you will need to access the value of GROUP_DATA through DDS_PublicationBuiltinTopicData or DDS_SubscriptionBuiltinTopicData (see Chapter 16: Built-In Topics).
The structure for the GROUP_DATA QosPolicy includes just one field, as seen in Table 6.20. The field is a sequence of octets that translates to a contiguous buffer of bytes whose contents and length are set by the user. The maximum size for the data is set in the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4).
Table 6.20 DDS_GroupDataQosPolicy

Type          Field Name  Description
DDS_OctetSeq  value       Empty by default
This policy is similar to the USER_DATA QosPolicy (Section 6.5.25) and TOPIC_DATA QosPolicy (Section 5.2.1) that apply to other types of Entities.
6.4.4.1 Example
One possible use of GROUP_DATA is to pass some credential or certificate that your subscriber application can use to accept or reject communication with the DataWriters that belong to the Publisher (or vice versa, where the publisher application can validate the permission of DataReaders of a Subscriber to receive its data). The value of the GROUP_DATA of the Publisher is propagated in the ‘group_data’ field of the DDS_PublicationBuiltinTopicData that is sent with the declaration of each DataWriter. Similarly, the value of the GROUP_DATA of the Subscriber is propagated in the ‘group_data’ field of the DDS_SubscriptionBuiltinTopicData that is sent with the declaration of each DataReader.
When Connext discovers a DataWriter/DataReader, the application can be notified of the discovery of the new entity and retrieve information about the DataWriter/DataReader QoS by reading the DCPSPublication or DCPSSubscription builtin topics (see Chapter 16). The application can then examine the 'group_data' field and, if communication should not be allowed, use the DomainParticipant’s ignore_publication() or ignore_subscription() operation to reject the newly discovered remote entity. See Figure 16.2, “Ignoring Publications,” for an example.
The code in Figure 6.27 illustrates how to change the GROUP_DATA policy.
6.4.4.2 Properties
This QosPolicy can be modified at any time.
It can be set differently on the publishing and subscribing sides.
Figure 6.27 Creating a Publisher with GROUP_DATA
DDS_PublisherQos publisher_qos;1
int i = 0;
//Bytes that will be used for the group data. In this case 8 bytes
//of some information that is meaningful to the user application
char myGroupData[GROUP_DATA_SIZE] =
    { 0x34, 0xaa, 0xfe, 0x31, 0x7a, 0xf2, 0x34, 0xaa};
//assume that domainparticipant and publisher_listener
//are already created
if (participant->get_default_publisher_qos(publisher_qos) !=
        DDS_RETCODE_OK) {
    // handle error
}
// Must set the size of the sequence first
publisher_qos.group_data.value.maximum(GROUP_DATA_SIZE);
publisher_qos.group_data.value.length(GROUP_DATA_SIZE);
for (i = 0; i < GROUP_DATA_SIZE; i++) {
    publisher_qos.group_data.value[i] = myGroupData[i];
}
DDSPublisher* publisher = participant->create_publisher(
    publisher_qos, publisher_listener, DDS_STATUS_MASK_ALL);
1. Note: in C, you must initialize the QoS structures before they are used; see Section 4.2.2.
6.4.4.3 Related QosPolicies
❏TOPIC_DATA QosPolicy (Section 5.2.1)
❏USER_DATA QosPolicy (Section 6.5.25)
❏DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4)
6.4.4.4 Applicable Entities
6.4.4.5 System Resource Considerations
As mentioned earlier, the maximum size of the GROUP_DATA is set in the publisher_group_data_max_length and subscriber_group_data_max_length fields of the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4). Because Connext will allocate memory based on this value, you should only increase this value if you need to. If your system does not use GROUP_DATA, then you can set this value to zero to save memory. Setting the value of the GROUP_DATA QosPolicy to hold data longer than the value set in the [publisher/subscriber]_group_data_max_length fields will result in failure and an INCONSISTENT_QOS_POLICY return code.
However, should you decide to change the maximum size of GROUP_DATA, you must make certain that all applications in the domain have changed the value of [publisher/subscriber]_group_data_max_length to be the same. If two applications have different limits on the size of GROUP_DATA, and one application sets the GROUP_DATA QosPolicy to hold data that is greater than the maximum size set by another application, then the matching DataWriters and DataReaders of the Publisher and Subscriber between the two applications will not connect. This is also true for the TOPIC_DATA (Section 5.2.1) and USER_DATA (Section 6.5.25) QosPolicies.
6.4.5 PARTITION QosPolicy
The PARTITION QoS provides another way to control which DataWriters will communicate with which DataReaders.
The PARTITION QoS applies to Publishers and Subscribers, therefore the DataWriters and DataReaders belong to the partitions as set on the Publishers and Subscribers that created them. The mechanism implementing the PARTITION QoS is relatively lightweight, and membership in a partition can be dynamically changed. Unlike the creation and destruction of DomainParticipants, there is no spawning and killing of threads or allocation and deallocation of memory when Publishers and Subscribers add or remove themselves from partitions.
The PARTITION QoS consists of a set of partition names that identify the partitions of which the Entity is a member. These names are simply strings, and DataWriters and DataReaders are considered to be in the same partition if they have at least one partition name in common in the PARTITION QoS set on their Publishers or Subscribers.
Conceptually, each partition name can be thought of as defining a “visibility plane” within the domain. DataWriters will make their data available on all the visibility planes that correspond to their Publisher’s partition names, and DataReaders will see the data that is placed on any of the visibility planes that correspond to their Subscriber’s partition names.
Figure 6.28 illustrates the concept of PARTITION QoS. In this figure, all DataWriters and DataReaders belong to the same domain and refer to the same Topic. DataWriter1 is configured to belong to three partitions: partition_A, partition_B, and partition_C. DataWriter2 belongs to partition_C and partition_D.
Figure 6.28 Controlling Visibility of Data with the PARTITION QoS
Similarly, DataReader1 is configured to belong to partition_A and partition_B, and DataReader2 belongs only to partition_C. Given this topology, the data written by DataWriter1 is visible in partitions A, B, and C. The oval tagged with the number “1” represents one data sample written by DataWriter1.

Similarly, the data written by DataWriter2 is visible in partitions C and D. The oval tagged with the number “2” represents one data sample written by DataWriter2.

The result is that the data written by DataWriter1 will be received by both DataReader1 and DataReader2, but the data written by DataWriter2 will only be visible to DataReader2.
Publishers and Subscribers always belong to a partition. By default, Publishers and Subscribers belong to a single partition whose name is the empty string, “”. If you set the PARTITION QoS to be an empty set, Connext will assign the Publisher or Subscriber to the default partition, “”. Thus, for the example above, without using the PARTITION QoS, DataReaders 1 and 2 would have received all data samples written by DataWriters 1 and 2.
6.4.5.1 Rules for PARTITION Matching
On the Publisher side, the PARTITION QosPolicy associates a set of strings (partition names) with the Publisher. On the Subscriber side, the application also uses the PARTITION QoS to associate partition names with the Subscriber.
Taking into account the PARTITION QoS, a DataWriter will communicate with a DataReader if and only if the following conditions apply:
1. The DataWriter and DataReader belong to the same domain. That is, their respective DomainParticipants are bound to the same domain ID (see Section 8.3.1).

2. The DataWriter and DataReader have matching Topics. That is, each is associated with a Topic with the same topic_name and data type.

3. The QoS offered by the DataWriter is compatible with the QoS requested by the DataReader.

4. The application has not used the ignore_participant(), ignore_datareader(), or ignore_datawriter() APIs to prevent the association (see Section 16.4).

5. The Publisher to which the DataWriter belongs and the Subscriber to which the DataReader belongs must have at least one matching partition name.
The last condition reflects the visibility of the data introduced by the PARTITION QoS. Matching partition names is done by string comparison, thus partition names are case sensitive.
NOTE: Failure to match partitions is not considered an incompatible QoS and does not trigger any listeners or change any status conditions.
6.4.5.2 Pattern Matching for PARTITION Names
You may also add strings that are regular expressions1 to the PARTITION QosPolicy. A regular expression does not define a set of partitions to which the Publisher or Subscriber belongs; rather, it is used in the partition matching process to see whether a remote entity has a partition name that matches the regular expression. That is, the regular expressions in the PARTITION QoS of a Publisher are never matched against those found in the PARTITION QoS of a Subscriber. Regular expressions are always matched against “concrete” partition names. Thus, a concrete partition name may not contain any reserved characters that are used to define regular expressions, for example ‘*’, ‘.’, ‘+’, etc.
If a PARTITION QoS only contains regular expressions, then the Publisher or Subscriber will be assigned automatically to the default partition with the empty string name (“”). Thus, do not be fooled into thinking that a PARTITION QoS that only contains the string “*” matches another PARTITION QoS that only contains the string “*”. Yes, the Publisher will match the Subscriber, but that is because they both belong to the default “” partition.

1. As defined by the POSIX fnmatch API.
DataWriters and DataReaders are considered to have a partition in common if the sets of partitions defined by their associated Publishers and Subscribers contain:

❏at least one concrete partition name in common, or

❏a regular expression in one Entity that matches a concrete partition name in the other Entity
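Since partition patterns follow the POSIX fnmatch syntax, the pairwise matching rules above can be sketched outside of Connext with the standard fnmatch() call. The helper names below are hypothetical; this is an illustration of the rules only, not Connext's implementation, and it does not model the fallback of an all-pattern set to the default “” partition.

```cpp
#include <fnmatch.h>
#include <cstring>

// Illustrative sketch of the matching rules (hypothetical helper names;
// not Connext's implementation). Two concrete names match by exact,
// case-sensitive string comparison; a pattern is only ever matched
// against a concrete name on the other side; two patterns never match
// each other.
bool partition_is_pattern(const char* name) {
    // Characters reserved by the POSIX fnmatch pattern syntax.
    return std::strpbrk(name, "*?[") != 0;
}

bool partition_names_match(const char* pub_name, const char* sub_name) {
    const bool pub_pat = partition_is_pattern(pub_name);
    const bool sub_pat = partition_is_pattern(sub_name);
    if (pub_pat && sub_pat)
        return false;  // patterns are never compared to each other
    if (pub_pat)
        return fnmatch(pub_name, sub_name, 0) == 0;
    if (sub_pat)
        return fnmatch(sub_name, pub_name, 0) == 0;
    return std::strcmp(pub_name, sub_name) == 0;  // concrete vs. concrete
}
```

For example, the pattern "USA/California/*" matches the concrete name "USA/California/Santa Clara", while "partition_A" and "partition_a" do not match because comparison is case-sensitive.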
The programmatic representation of the PARTITION QoS is shown in Table 6.21. The QosPolicy contains the single string sequence, name. Each element in the sequence can be a concrete name or a regular expression. The Entity will be assigned to the default “” partition if the sequence is empty.
Table 6.21 DDS_PartitionQosPolicy

Type           Field Name  Description
DDS_StringSeq  name        Empty by default.
                           There can be up to 64 names, with a maximum of 256 characters summed across all names.
You can have one long partition string of 256 chars, or multiple shorter strings that add up to 256 or less characters. For example, you can have one string of 4 chars and one string of 252 chars.
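A quick way to check a candidate name set against these limits (64 names, 256 characters summed across all names) is a helper like the following. The function is hypothetical and purely illustrates the arithmetic; Connext enforces these limits itself through the DomainParticipant's resource limits.

```cpp
#include <string>
#include <vector>

// Hypothetical helper illustrating the limits stated above: at most 64
// partition names, and at most 256 characters summed across all names.
bool partition_names_within_limits(const std::vector<std::string>& names) {
    const std::size_t kMaxNames = 64;
    const std::size_t kMaxCumulativeChars = 256;
    if (names.size() > kMaxNames)
        return false;
    std::size_t total = 0;
    for (std::size_t i = 0; i < names.size(); ++i)
        total += names[i].size();  // sum of characters across all names
    return total <= kMaxCumulativeChars;
}
```

Per the example in the text, one 252-character name plus one 4-character name passes the check, while a single 257-character name does not.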
6.4.5.3 Example
Since the set of partitions for a Publisher or Subscriber can be dynamically changed, the Partition QosPolicy is useful to control which DataWriters can send data to which DataReaders and vice versa, even while the application is running.

Note when using Partitions and Durability: If a Publisher changes partitions after startup, it is possible for a reliable, late-joining DataReader to receive data that was written for both the original and the new partition.
The code in Figure 6.29 illustrates how to change the PARTITION policy.
The ability to dynamically control which DataWriters are matched to which DataReaders (of the same Topic) offered by the PARTITION QoS can be used in many different ways. Using partitions, connectivity can be controlled based on, for example, geographic location or group membership:

Example of location-based partitions: see Table 6.22.

Example of group-based partitions: see Table 6.23.
Figure 6.29 Setting Partition Names on a Publisher
DDS_PublisherQos publisher_qos;1
// domain, publisher_listener have been previously created
if (participant->get_default_publisher_qos(publisher_qos) !=
        DDS_RETCODE_OK) {
    // handle error
}
// Set the partition QoS
publisher_qos.partition.name.maximum(3);
publisher_qos.partition.name.length(3);
publisher_qos.partition.name[0] = DDS_String_dup(“partition_A”);
publisher_qos.partition.name[1] = DDS_String_dup(“partition_B”);
publisher_qos.partition.name[2] = DDS_String_dup(“partition_C”);
DDSPublisher* publisher = participant->create_publisher(
    publisher_qos, publisher_listener, DDS_STATUS_MASK_ALL);
1. Note: in C, you must initialize the QoS structures before they are used; see Section 4.2.2.
Table 6.22 Example of Using Location-Based Partitions

Publisher Partitions: Specify a single partition name using the pattern “<country>/<state>/<city>”.
Subscriber Partitions: Specify multiple partition names, one per region of interest.
Result: Limits the visibility of the data to Subscribers that express interest in the geographical region.

Publisher Partitions: “USA/California/Santa Clara” (Subscriber partition is irrelevant here.)
Result: Send only information for Santa Clara, California.

Subscriber Partitions: “USA/California/Santa Clara” (Publisher partition is irrelevant here.)
Result: Receive only information for Santa Clara, California.

Subscriber Partitions: “USA/California/Santa Clara”, “USA/California/Sunnyvale”
Result: Receive information for Santa Clara or Sunnyvale, California.

Subscriber Partitions: “USA/California/*”, “USA/Nevada/*”
Result: Receive information for California or Nevada.

Subscriber Partitions: “USA/California/*”, “USA/Nevada/Reno”, “USA/Nevada/Las Vegas”
Result: Receive information for California and two cities in Nevada.
Partition names can also correspond to groups that are allowed access to the data, such as payroll or financial information. A slight variation of this pattern could be used to confine the information based on security levels.
Table 6.23 Example of Using Group-Based Partitions

Publisher Partitions: Specify several partition names, one per group that is allowed access.
Subscriber Partitions: Specify multiple partition names, one per group to which the Subscriber belongs.
Result: Limits the visibility of the data to Subscribers that belong to the groups allowed access by the Publisher.

Publisher Partitions: “payroll”, “financial” (Subscriber partition is irrelevant here.)
Result: Makes information available only to Subscribers that have access to either financial or payroll information.

Subscriber Partitions: “executives”, “financial” (Publisher partition is irrelevant here.)
Result: Gain access to information that is intended for executives or people with access to the finances.
In general, you can think of a set of partition names in the Publisher that model the worlds to which it belongs and a set of partition names in the Subscriber that model the worlds that it can observe.
6.4.5.4 Properties
This QosPolicy can be modified at any time.
Strictly speaking, this QosPolicy does not have request-offered semantics, although it is matched between DataWriters and DataReaders, and communication is established only if there is a match between partition names.
6.4.5.5 Related QosPolicies
❏DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4).
6.4.5.6 Applicable Entities
6.4.5.7 System Resource Considerations
Partition names are propagated along with the declarations of the DataReaders and the DataWriters and can be examined by user code through builtin topics (see Chapter 16).

The maximum number of partitions and the maximum number of characters that can be used for the sum-total length of all partition names are configured using the max_partitions and max_partition_cumulative_characters fields of the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4).
However, should you decide to change the maximum number of partitions or maximum cumulative length of partition names, then you must make certain that all applications in the domain have changed the values of max_partitions and max_partition_cumulative_characters to be the same. If two applications have different values for those settings, and one application sets the PARTITION QosPolicy to hold more partitions or longer names than set by another application, then the matching DataWriters and DataReaders of the Publisher and Subscriber between the two applications will not connect. This is similar to the restrictions for the GROUP_DATA (Section 6.4.4), USER_DATA (Section 6.5.25), and TOPIC_DATA (Section 5.2.1) QosPolicies.
6.4.6 PRESENTATION QosPolicy
Usually DataReaders will receive data in the order that it was sent by a DataWriter, and data is presented to the DataReader as soon as it is received.
Sometimes, you may want a set of data from the same DataWriter to be presented to the receiving DataReader only after ALL the elements of the set have been received, but not before. You may also want the data to be presented in a different order than it was received. Specifically, for keyed data, you may want Connext to present the data in instance (key) order.
The PRESENTATION QosPolicy allows you to specify different scopes of presentation: within a DataWriter, across instances of a DataWriter, and even across different DataWriters of a Publisher. It also controls whether a set of changes within the scope must be delivered at the same time or may be delivered as soon as each element is received.
There are three components to this QoS: the boolean flag coherent_access, the boolean flag ordered_access, and an enumerated setting, access_scope. The structure used is shown in Table 6.24.
Table 6.24 DDS_PresentationQosPolicy
access_scope (DDS_PresentationQosPolicyAccessScopeKind)
Controls the granularity used when coherent_access and/or ordered_access are TRUE. If both coherent_access and ordered_access are FALSE, access_scope’s setting has no effect.
•DDS_INSTANCE_PRESENTATION_QOS: Queue is ordered/sorted per instance.
•DDS_TOPIC_PRESENTATION_QOS: Queue is ordered/sorted per topic (across all instances).
•DDS_GROUP_PRESENTATION_QOS: Queue is ordered/sorted per topic across all instances belonging to DataWriters (or DataReaders) within the same Publisher (or Subscriber). Not supported for coherent_access = TRUE.
•DDS_HIGHEST_OFFERED_PRESENTATION_QOS: Only applies to Subscribers. With this setting, the Subscriber will use the access scope specified by each remote Publisher.

coherent_access (DDS_Boolean)
Controls whether Connext will preserve the groupings of changes made by the publishing application by means of begin_coherent_changes() and end_coherent_changes().
•DDS_BOOLEAN_FALSE: Coherency is not preserved. The value of access_scope is ignored.
•DDS_BOOLEAN_TRUE: Changes made to instances within each DataWriter will be available to the DataReader as a coherent set, based on the value of access_scope. Not supported for access_scope = GROUP.

ordered_access (DDS_Boolean)
Controls whether Connext will preserve the order of changes.
•DDS_BOOLEAN_FALSE: The order of samples is only preserved for each instance, not across instances. The value of access_scope is ignored.
•DDS_BOOLEAN_TRUE: The order of samples from a DataWriter is preserved, based on the value set in access_scope.
6.4.6.1 Coherent Access
A 'coherent set' is a set of data-sample modifications that must be propagated in such a way that the receiver can only access the data after all the modifications in the set are available at the receiving end.
Coherency enables a publishing application to change the value of several data instances and have those changes be accessed by the subscribing applications as a unit.
Setting coherent_access to TRUE only behaves as described in the DDS specification when the DataWriter and DataReader are configured for reliable delivery.
To send a coherent set of data samples, the publishing application uses the Publisher’s begin_coherent_changes() and end_coherent_changes() operations (see Writing Coherent Sets of Data Samples (Section 6.3.10)).
If coherent_access is TRUE, then the access_scope controls the maximum extent of the coherent changes, as follows:
❏If access_scope is INSTANCE, the use of begin_coherent_changes() and end_coherent_changes() has no effect on how the subscriber can access the data. This is because, with the scope limited to each instance, changes to separate instances are considered independent and thus cannot be grouped by a coherent change.
❏If access_scope is TOPIC, then coherent changes (indicated by their enclosure within calls to begin_coherent_changes() and end_coherent_changes()) will be made available as such to each remote DataReader independently. That is, changes made to instances within the each individual DataWriter will be available as a coherent set with respect to other changes to instances in that same DataWriter, but will not be grouped with changes made to instances belonging to a different DataWriter.
❏If access_scope is GROUP, coherent changes made to instances through a DataWriter attached to a common Publisher are made available as a unit to remote subscribers. Coherent access with GROUP access scope is currently not supported.
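The all-or-nothing behavior of a coherent set can be sketched with a small simulation (plain Python, not the Connext API; the class name and buffering model are simplifications for illustration only):

```python
# Sketch: a reader withholds samples that arrive inside a coherent set and
# only exposes them to the application once the whole set has been received.
class CoherentReader:
    def __init__(self):
        self.available = []   # samples the application can read()
        self.pending = None   # samples of a coherent set still in progress

    def on_begin_coherent_changes(self):
        self.pending = []

    def on_sample(self, sample):
        if self.pending is not None:
            self.pending.append(sample)    # withhold until the set completes
        else:
            self.available.append(sample)  # normal sample: deliver at once

    def on_end_coherent_changes(self):
        self.available.extend(self.pending)  # expose the set as a unit
        self.pending = None

reader = CoherentReader()
reader.on_sample("x0")
reader.on_begin_coherent_changes()
reader.on_sample("altitude=100")
assert reader.available == ["x0"]   # set not complete: still hidden
reader.on_sample("velocity=300")
reader.on_end_coherent_changes()
assert reader.available == ["x0", "altitude=100", "velocity=300"]
```

With TOPIC access_scope, the real middleware applies this withholding per DataWriter; the sketch models a single writer.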
6.4.6.2 Ordered Access
If ordered_access is TRUE, then access_scope controls the scope of the order in which samples are presented to the subscribing application, as follows:
❏If access_scope is INSTANCE, the relative order of samples sent by a DataWriter is only preserved on an instance-by-instance basis.
❏If access_scope is TOPIC, the relative order of samples sent by a DataWriter is preserved for all samples of all instances. The coherent grouping and/or order in which samples appear in the DataReader’s queue is consistent with the grouping/order in which the changes were made by the DataWriter.
❏If access_scope is GROUP, the scope spans all instances belonging to DataWriter entities within the same Publisher.
❏If access_scope is HIGHEST_OFFERED, the Subscriber will use the access scope specified by each remote Publisher.
The data stored in the DataReader is accessed with the DataReader’s read()/take() APIs. The application does not have to access the data samples in the same order as they are stored in the queue. How the application actually gets the data from the DataReader is ultimately under the control of the user code; see Using DataReaders to Access Data (Read & Take) (Section 7.4).
6.4.6.3 Example
Coherency is useful in cases where the values are inter-related. For example, if two instances represent the altitude and velocity of the same aircraft and both are changed, the subscribing application should see both new values together; otherwise it may misinterpret the aircraft's trajectory.
Ordered access is useful when you need to ensure that samples appear on the DataReader’s queue in the order sent by one or multiple DataWriters within the same Publisher.
To illustrate the effect of the PRESENTATION QosPolicy with TOPIC and INSTANCE access scope, assume the following sequence of samples was written by the DataWriter: {A1, B1, C1, A2, B2, C2}. In this example, A, B, and C represent different instances (i.e., different keys). Assume all of these samples have been propagated to the DataReader’s history queue before your application invokes the read() operation. The resulting sequences returned by read() are shown in Table 6.25.
Table 6.25 Effect of ordered_access for access_scope INSTANCE and TOPIC
Sequence retrieved via read(); order sent was {A1, B1, C1, A2, B2, C2}, order received was {A1, A2, B1, B2, C1, C2}.

PRESENTATION QoS                                  | Sequence retrieved via read()
ordered_access = FALSE, access_scope = <any>      | {A1, A2, B1, B2, C1, C2}
ordered_access = TRUE,  access_scope = INSTANCE   | {A1, A2, B1, B2, C1, C2}
ordered_access = TRUE,  access_scope = TOPIC      | {A1, B1, C1, A2, B2, C2}
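The effect of ordered_access and access_scope on read() can be reproduced with a small sketch (plain Python; the queue model is a deliberate simplification of the real middleware, and the function name is hypothetical):

```python
# Sketch: how ordered_access/access_scope affect the order returned by read().
# 'received' is the per-instance order in the DataReader's queue;
# 'sent' is the DataWriter's publication order.
def read_order(sent, received, ordered_access, access_scope):
    if not ordered_access or access_scope == "INSTANCE":
        # Only per-instance order is guaranteed; samples come back
        # grouped by instance, as they sit in the queue.
        return received
    if access_scope == "TOPIC":
        # Publication order across all instances of the writer is preserved.
        return sent
    raise ValueError(access_scope)

sent = ["A1", "B1", "C1", "A2", "B2", "C2"]
received = ["A1", "A2", "B1", "B2", "C1", "C2"]
assert read_order(sent, received, False, "INSTANCE") == received
assert read_order(sent, received, True, "INSTANCE") == received
assert read_order(sent, received, True, "TOPIC") == sent
```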
To illustrate the effect of a PRESENTATION QosPolicy with GROUP access_scope, assume the following sequence of samples was written by two DataWriters, W1 and W2, within the same Publisher: {(W1,A1), (W2,B1), (W1,C1), (W2,A2), (W1,B2), (W2,C2)}. As in the previous example, A, B, and C represent different instances (i.e., different keys). With access_scope set to INSTANCE or TOPIC, the middleware cannot guarantee that the application will receive the samples in the same order they were published by W1 and W2. With access_scope set to GROUP, the middleware is able to provide the samples in order to the application as long as the read()/take() operations are invoked within a begin_access()/end_access() block (see Section 7.2.5).
Table 6.26 Effect of ordered_access for access_scope GROUP
Sequence retrieved via read(); order sent was {(W1,A1), (W2,B1), (W1,C1), (W2,A2), (W1,B2), (W2,C2)}.

ordered_access = FALSE, access_scope = TOPIC or INSTANCE:
The order across DataWriters will not be preserved. Samples may be delivered in multiple orders. For example:
{(W1,A1), (W1,C1), (W1,B2), (W2,B1), (W2,A2), (W2,C2)} or
{(W1,A1), (W2,B1), (W1,B2), (W1,C1), (W2,A2), (W2,C2)}

ordered_access = TRUE, access_scope = GROUP:
Samples are delivered in the same order they were published:
{(W1,A1), (W2,B1), (W1,C1), (W2,A2), (W1,B2), (W2,C2)}
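The GROUP-scope behavior can be sketched as follows (plain Python, not the Connext API; without GROUP scope, only one of several legal delivery orders is shown, and the function name is hypothetical):

```python
# Sketch: with GROUP access_scope (and read/take invoked inside a
# begin_access()/end_access() block), publication order across all
# DataWriters of one Publisher is preserved.
def group_read(published, ordered_access):
    if ordered_access:
        return published  # exact cross-writer publication order
    # Otherwise only each writer's own order is guaranteed; one legal
    # outcome is all of W1's samples followed by all of W2's.
    by_writer = {}
    for writer, sample in published:
        by_writer.setdefault(writer, []).append((writer, sample))
    out = []
    for writer in sorted(by_writer):
        out.extend(by_writer[writer])
    return out

published = [("W1", "A1"), ("W2", "B1"), ("W1", "C1"),
             ("W2", "A2"), ("W1", "B2"), ("W2", "C2")]
assert group_read(published, True) == published
assert group_read(published, False) == [
    ("W1", "A1"), ("W1", "C1"), ("W1", "B2"),
    ("W2", "B1"), ("W2", "A2"), ("W2", "C2")]
```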
6.4.6.4 Properties
This QosPolicy cannot be modified after the Publisher or Subscriber is enabled.
This QoS must be set compatibly between the DataWriter’s Publisher and the DataReader’s Subscriber. The compatible combinations are shown in Table 6.27 and Table 6.28 for ordered_access and Table 6.29 for coherent_access.
Table 6.27 Valid Combinations of ordered_access and access_scope, with Subscriber’s ordered_access = False
Subscriber requests {ordered_access/access_scope}:

Publisher offers: | False/Instance | False/Topic  | False/Group  | False/Highest
False/Instance    | ✔              | incompatible | incompatible | ✔
False/Topic       | ✔              | ✔            | incompatible | ✔
False/Group       | ✔              | ✔            | ✔            | ✔
True/Instance     | ✔              | incompatible | incompatible | ✔
True/Topic        | ✔              | ✔            | incompatible | ✔
True/Group        | ✔              | ✔            | ✔            | ✔
Table 6.28 Valid Combinations of ordered_access and access_scope, with Subscriber’s ordered_access = True
Subscriber requests {ordered_access/access_scope}:

Publisher offers: | True/Instance | True/Topic   | True/Group   | True/Highest
False/Instance    | incompatible  | incompatible | incompatible | incompatible
False/Topic       | incompatible  | incompatible | incompatible | incompatible
False/Group       | incompatible  | incompatible | incompatible | incompatible
True/Instance     | ✔             | incompatible | incompatible | ✔
True/Topic        | ✔             | ✔            | incompatible | ✔
True/Group        | ✔             | ✔            | ✔            | ✔
Table 6.29 Valid Combinations of Presentation Coherent Access and Access Scope

Subscriber requests {coherent_access/access_scope}:

Publisher offers: | False/Instance | False/Topic  | True/Instance | True/Topic
False/Instance    | ✔              | incompatible | incompatible  | incompatible
False/Topic       | ✔              | ✔            | incompatible  | incompatible
True/Instance     | ✔              | incompatible | ✔             | incompatible
True/Topic        | ✔              | ✔            | ✔             | ✔
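The pattern behind Tables 6.27 through 6.29 can be expressed as a single request-offered rule: the offered flag must be at least the requested flag, and the offered access_scope must be at least the requested one (with INSTANCE < TOPIC < GROUP, and a HIGHEST_OFFERED request always matching). The sketch below (plain Python, hypothetical function name) encodes that rule; note it does not model the separate restriction that coherent_access with GROUP scope is unsupported.

```python
# Sketch of the request-offered compatibility rule for the PRESENTATION
# QosPolicy flags (ordered_access or coherent_access) plus access_scope.
RANK = {"INSTANCE": 0, "TOPIC": 1, "GROUP": 2}

def compatible(offered_flag, offered_scope, requested_flag, requested_scope):
    if requested_flag and not offered_flag:
        return False  # a TRUE request needs a TRUE offer
    if requested_scope == "HIGHEST_OFFERED":
        return True   # the Subscriber adopts each Publisher's scope
    return RANK[offered_scope] >= RANK[requested_scope]

# A few cells from Tables 6.27/6.28 (ordered_access):
assert compatible(False, "TOPIC", False, "INSTANCE")
assert not compatible(False, "INSTANCE", False, "TOPIC")
assert not compatible(False, "GROUP", True, "INSTANCE")
assert compatible(True, "INSTANCE", True, "HIGHEST_OFFERED")
# A cell from Table 6.29 (coherent_access):
assert not compatible(True, "INSTANCE", False, "TOPIC")
```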
6.4.6.5 Related QosPolicies
The DESTINATION_ORDER QosPolicy (Section 6.5.6) is closely related and also affects the ordering of data samples on a per-instance basis.
The DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1) may be used to configure the sample ordering process in Subscribers configured with GROUP or HIGHEST_OFFERED access_scope.
6.4.6.6 Applicable Entities
6.4.6.7 System Resource Considerations
The use of this policy does not significantly impact the usage of resources.
6.5 DataWriter QosPolicies
This section provides detailed information about the QosPolicies associated with a DataWriter. Table 6.16 provides a quick reference. The policies are:
❏AVAILABILITY QosPolicy (DDS Extension) (Section 6.5.1)
❏BATCH QosPolicy (DDS Extension) (Section 6.5.2)
❏DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) (Section 6.5.3)
❏DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 6.5.4)
❏DEADLINE QosPolicy (Section 6.5.5)
❏DESTINATION_ORDER QosPolicy (Section 6.5.6)
❏DURABILITY QosPolicy (Section 6.5.7)
❏DURABILITY SERVICE QosPolicy (Section 6.5.8)
❏ENTITY_NAME QosPolicy (DDS Extension) (Section 6.5.9)
❏HISTORY QosPolicy (Section 6.5.10)
❏LATENCYBUDGET QoS Policy (Section 6.5.11)
❏LIFESPAN QoS Policy (Section 6.5.12)
❏LIVELINESS QosPolicy (Section 6.5.13)
❏MULTI_CHANNEL QosPolicy (DDS Extension) (Section 6.5.14)
❏OWNERSHIP QosPolicy (Section 6.5.15)
❏OWNERSHIP_STRENGTH QosPolicy (Section 6.5.16)
❏PROPERTY QosPolicy (DDS Extension) (Section 6.5.17)
❏PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18)
❏RELIABILITY QosPolicy (Section 6.5.19)
❏RESOURCE_LIMITS QosPolicy (Section 6.5.20)
❏TRANSPORT_PRIORITY QosPolicy (Section 6.5.21)
❏TRANSPORT_SELECTION QosPolicy (DDS Extension) (Section 6.5.22)
❏TRANSPORT_UNICAST QosPolicy (DDS Extension) (Section 6.5.23)
❏TYPESUPPORT QosPolicy (DDS Extension) (Section 6.5.24)
❏USER_DATA QosPolicy (Section 6.5.25)
❏WRITER_DATA_LIFECYCLE QoS Policy (Section 6.5.26)
6.5.1 AVAILABILITY QosPolicy (DDS Extension)
This QoS policy configures the availability of data. It is used in the context of two features:
❏Collaborative DataWriters (Section 6.5.1.1)
❏ Required Subscriptions (Section 6.5.1.2)
It contains the members listed in Table 6.30.
Table 6.30 DDS_AvailabilityQosPolicy
enable_required_subscriptions (DDS_Boolean)
Enables support for required subscriptions in a DataWriter.
For Collaborative DataWriters: Not applicable.
For Required Subscriptions: See Table 6.33.

max_data_availability_waiting_time (struct DDS_Duration_t)
Defines how much time to wait before delivering a sample to the application without having received some of the previous samples.
For Collaborative DataWriters: See Table 6.32.
For Required Subscriptions: Not applicable.

max_endpoint_availability_waiting_time (struct DDS_Duration_t)
Defines how much time to wait to discover DataWriters providing samples for the same data source.
For Collaborative DataWriters: See Table 6.32.
For Required Subscriptions: Not applicable.

required_matched_endpoint_groups (struct DDS_EndpointGroupSeq)
A sequence of endpoint groups, described in Table 6.31.
For Collaborative DataWriters: See Table 6.32.
For Required Subscriptions: See Table 6.33.
Table 6.31 struct DDS_EndpointGroup_t

role_name (char *)
Defines the role name of the endpoint group. If used in the AvailabilityQosPolicy on a DataWriter, it specifies the name that identifies a Required Subscription.

quorum_count (int)
Defines the minimum number of members that satisfies the endpoint group. If used in the AvailabilityQosPolicy on a DataWriter, it specifies the number of DataReaders with a specific role name that must acknowledge a sample before the sample is considered to be acknowledged by the Required Subscription.
6.5.1.1 Availability QoS Policy and Collaborative DataWriters
The Collaborative DataWriters feature allows you to have multiple DataWriters publishing samples from a common logical data source. The DataReaders will combine the samples coming from the DataWriters in order to reconstruct the correct order at the source. The Availability QosPolicy allows you to configure the sample combination (synchronization) process in the DataReader.
Each sample published in a DDS domain for a given logical data source is uniquely identified by a pair (virtual GUID, virtual sequence number). Samples from the same data source (same virtual GUID) can be published by different DataWriters.
A DataReader will deliver a sample (VGUIDn, VSNm) to the application if one of the following conditions is satisfied:
❏(VGUIDn, VSNm-1) has already been delivered to the application.
❏All the known DataWriters publishing VGUIDn have announced that they do not have (VGUIDn, VSNm-1).
❏None of the known DataWriters publishing VGUIDn have announced potential availability of (VGUIDn, VSNm-1), and both max_data_availability_waiting_time and max_endpoint_availability_waiting_time have expired.
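The delivery conditions above can be sketched as a predicate (plain Python; writer announcements are modeled as simple sets rather than the real protocol state, and all names are hypothetical):

```python
# Sketch: may a DataReader deliver virtual sequence number vsn to the app?
def can_deliver(vsn, delivered, writers_without_prev,
                writers_announcing_prev, all_writers):
    prev = vsn - 1
    if prev in delivered or prev == 0:
        return True   # consecutive with what was already delivered
    if writers_without_prev == all_writers:
        return True   # every known writer said it does not have vsn-1
    if not writers_announcing_prev:
        return True   # nobody announced availability of vsn-1 (after timeouts)
    return False

# Sample 5 arrives, but 4 was neither delivered nor disclaimed by all writers:
assert not can_deliver(5, {1, 2, 3}, {"W1"}, {"W2"}, {"W1", "W2"})
# Once both writers announce they do not have sample 4, 5 may be delivered:
assert can_deliver(5, {1, 2, 3}, {"W1", "W2"}, set(), {"W1", "W2"})
```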
A DataWriter announces potential availability of samples by using virtual heartbeats. The frequency at which virtual heartbeats are sent is controlled by the protocol parameters virtual_heartbeat_period and samples_per_virtual_heartbeat (see Table 6.36, “DDS_RtpsReliableWriterProtocol_t”).
Table 6.32 describes the fields of this policy when used for a Collaborative DataWriter.
For further information, see Chapter 11: Collaborative DataWriters.
Table 6.32 Configuring Collaborative DataWriters with DDS_AvailabilityQosPolicy
max_data_availability_waiting_time
Defines how much time to wait before delivering a sample to the application without having received some of the previous samples. A sample identified by (VGUIDn, VSNm) will be delivered to the application if this timeout expires for the sample and the following two conditions are satisfied:
•None of the known DataWriters publishing VGUIDn have announced potential availability of (VGUIDn, VSNm-1).
•The DataWriters for all the endpoint groups specified in required_matched_endpoint_groups have been discovered or max_endpoint_availability_waiting_time has expired.

max_endpoint_availability_waiting_time
Defines how much time to wait to discover DataWriters providing samples for the same data source. The set of endpoint groups that are required to provide samples for a data source can be configured using required_matched_endpoint_groups. A non-consecutive sample cannot be delivered to the application unless the DataWriters for all the endpoint groups in required_matched_endpoint_groups are discovered or this timeout expires.

required_matched_endpoint_groups
Specifies the set of endpoint groups that are expected to provide samples for the same data source. The quorum count in a group represents the number of DataWriters that must be discovered for that group before the DataReader is allowed to provide non-consecutive samples to the application. A DataWriter becomes a member of an endpoint group by configuring the role_name in the DataWriter’s ENTITY_NAME QosPolicy (DDS Extension) (Section 6.5.9). The DataWriters created by RTI Persistence Service have a predefined role_name of ‘PERSISTENCE_SERVICE’. For other DataWriters, the role_name is not set by default.
6.5.1.2 Availability QoS Policy and Required Subscriptions
In the context of Required Subscriptions, the Availability QosPolicy can be used to configure a set of required subscriptions on a DataWriter.
Required Subscriptions are preconfigured, named subscriptions that may leave and subsequently rejoin the network from time to time, at the same or different physical locations. Any time a required subscription is disconnected, any samples that would have been delivered to it are stored for delivery if and when the subscription rejoins the network.
Table 6.33 describes the fields of this policy when used for a Required Subscription. For further information, see Required Subscriptions (Section 6.3.13).
Table 6.33 Configuring Required Subscriptions with DDS_AvailabilityQosPolicy
enable_required_subscriptions
Enables support for Required Subscriptions in a DataWriter.

max_data_availability_waiting_time, max_endpoint_availability_waiting_time
Not applicable to Required Subscriptions.

required_matched_endpoint_groups
A sequence of endpoint groups that specify the Required Subscriptions on a DataWriter. Each Required Subscription is specified by a name and a quorum count. The quorum count represents the number of DataReaders that have to acknowledge the sample before it can be considered fully acknowledged for that Required Subscription. A DataReader is associated with a Required Subscription by configuring the role_name in the DataReader’s ENTITY_NAME QosPolicy (DDS Extension) (Section 6.5.9).
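The quorum-count semantics can be sketched as follows (plain Python, hypothetical helper names, not the Connext API):

```python
# Sketch: a sample is fully acknowledged for a Required Subscription once
# quorum_count DataReaders with the matching role_name have acknowledged it.
def fully_acknowledged(required_subscriptions, acks_by_role):
    """required_subscriptions: {role_name: quorum_count};
    acks_by_role: {role_name: set of readers that acked the sample}."""
    return all(len(acks_by_role.get(role, set())) >= quorum
               for role, quorum in required_subscriptions.items())

required = {"LoggerSub": 2}
assert not fully_acknowledged(required, {"LoggerSub": {"readerA"}})
assert fully_acknowledged(required, {"LoggerSub": {"readerA", "readerB"}})
```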
6.5.1.3 Properties
For DataWriters, all the members in this QosPolicy can be changed after the DataWriter is created except for the member enable_required_subscriptions.
For DataReaders, this QosPolicy cannot be changed after the DataReader is created.
There are no compatibility restrictions for how it is set on the publishing and subscribing sides.
6.5.1.4 Related QosPolicies
❏ENTITY_NAME QosPolicy (DDS Extension) (Section 6.5.9)
❏DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4)
❏DURABILITY QosPolicy (Section 6.5.7)
6.5.1.5 Applicable Entities
6.5.1.6 System Resource Considerations
The resource limits for the endpoint groups in required_matched_endpoint_groups are determined by two values in the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4):
❏max_endpoint_groups
❏max_endpoint_group_cumulative_characters
The maximum number of virtual writers (identified by a virtual GUID) that can be managed by a DataReader is determined by the max_remote_virtual_writers in DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 7.6.2). When the
Subscriber’s access_scope is GROUP, max_remote_virtual_writers determines the maximum
number of DataWriter groups supported by the Subscriber. Since the Subscriber may contain more than one DataReader, only the setting of the first applies.
6.5.2 BATCH QosPolicy (DDS Extension)
This QosPolicy can be used to decrease the amount of communication overhead associated with the transmission and (in the case of reliable communication) acknowledgement of small samples, in order to increase throughput.
It specifies and configures the mechanism that allows Connext to collect multiple user data samples to be sent in a single network packet, to take advantage of the efficiency of sending larger packets and thus increase effective throughput.
This QosPolicy can be used to increase effective throughput dramatically for small data samples. Throughput for small samples (size < 2048 bytes) is typically limited by CPU capacity and not by network bandwidth. Batching many smaller samples to be sent in a single large packet will increase network utilization and thus throughput in terms of samples per second.
It contains the members listed in Table 6.34.
Table 6.34 DDS_BatchQosPolicy
enable (DDS_Boolean)
Enables/disables batching.

max_data_bytes (DDS_Long)
Sets the maximum cumulative length of all serialized samples in a batch. Before or when this limit is reached, the batch is automatically flushed. The size does not include the meta-data associated with the batch samples.

max_samples (DDS_Long)
Sets the maximum number of samples in a batch. When this limit is reached, the batch is automatically flushed.

max_flush_delay (struct DDS_Duration_t)
Sets the maximum flush delay. When this duration is reached, the batch is automatically flushed. The delay is measured from the time the first sample in the batch is written by the application.

source_timestamp_resolution (struct DDS_Duration_t)
Sets the batch source timestamp resolution. The value of this field determines how the source timestamp is associated with the samples in a batch. A sample written with timestamp 't' inherits the source timestamp 't2' associated with the previous sample, unless ('t' - 't2') is greater than source_timestamp_resolution. If source_timestamp_resolution is DURATION_INFINITE, every sample in the batch will share the source timestamp associated with the first sample. If source_timestamp_resolution is zero, every sample in the batch will contain its own source timestamp corresponding to the moment when the sample was written. The performance of the batching process is better when source_timestamp_resolution is set to DURATION_INFINITE.

thread_safe_write (DDS_Boolean)
Determines whether or not the write operation is thread-safe. If TRUE, multiple threads can call write on the DataWriter concurrently. A setting of FALSE can be used to increase batching throughput for batches with many small samples.
If batching is enabled (not the default), samples are not immediately sent when they are written. Instead, they get collected into a "batch." A batch always contains a whole number of samples; a sample is never split across batches.
A batch is sent on the network ("flushed") when one of the following things happens:
❏User-configurable flushing conditions:
•A batch size limit (max_data_bytes) is reached.
•A number of samples are in the batch (max_samples).
•A time limit (max_flush_delay) is reached.
•The application explicitly calls a DataWriter's flush() operation.
❏Non-user-configurable flushing conditions:
•A coherent set starts or ends.
•The number of samples in the batch is equal to max_samples in RESOURCE_LIMITS for unkeyed topics or max_samples_per_instance in RESOURCE_LIMITS for keyed topics.
Additional batching configuration takes place in the Publisher’s ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1).
The flush() operation is described in Flushing Batches of Data Samples (Section 6.3.9).
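The size-based flushing conditions can be sketched with a toy batch buffer (plain Python; field names mirror the QosPolicy, but the class and flush mechanics are simplifications, and max_flush_delay is omitted):

```python
# Sketch: collect samples into a batch and flush automatically when
# max_data_bytes or max_samples is reached.
class BatchWriter:
    def __init__(self, max_data_bytes, max_samples):
        self.max_data_bytes = max_data_bytes
        self.max_samples = max_samples
        self.batch, self.batch_bytes, self.sent = [], 0, []

    def write(self, sample):
        self.batch.append(sample)
        self.batch_bytes += len(sample)
        if (self.batch_bytes >= self.max_data_bytes
                or len(self.batch) >= self.max_samples):
            self.flush()

    def flush(self):
        if self.batch:
            self.sent.append(list(self.batch))  # one network packet
            self.batch, self.batch_bytes = [], 0

w = BatchWriter(max_data_bytes=1024, max_samples=3)
for s in ("s1", "s2", "s3", "s4"):
    w.write(s)
assert w.sent == [["s1", "s2", "s3"]]   # flushed when max_samples was reached
w.flush()                               # explicit flush sends the remainder
assert w.sent == [["s1", "s2", "s3"], ["s4"]]
```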
6.5.2.1 Synchronous and Asynchronous Flushing
Usually, a batch is flushed synchronously:
❏When a batch fills up because max_data_bytes or max_samples has been reached, the batch is flushed immediately in the context of the writing thread.
❏When an application manually flushes a batch, the batch is flushed immediately in the context of the calling thread.
❏When the first sample in a coherent set is written, the batch in progress (without including the sample in the coherent set) is immediately flushed in the context of the writing thread.
❏When a coherent set ends, the batch in progress is immediately flushed in the context of the calling thread.
❏When the number of samples in a batch is equal to max_samples in RESOURCE_LIMITS for unkeyed topics or max_samples_per_instance in RESOURCE_LIMITS for keyed topics, the batch is flushed immediately in the context of the writing thread.
However, some behavior is asynchronous:
❏To flush batches based on a time limit (max_flush_delay), enable asynchronous batch flushing in the ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1) of the DataWriter's Publisher. This will cause the Publisher to create an additional thread that will be used to flush batches of that Publisher's DataWriters. This behavior is analogous to the way asynchronous publishing works.
❏You may also use batching alongside asynchronous publication with FlowControllers (DDS Extension) (Section 6.6). These features are independent of one another. Flushing a batch on an asynchronous DataWriter makes it available for sending to the DataWriter's FlowController. From the point of view of the FlowController, a batch is treated like one large sample.
6.5.2.2 Batching vs. Coalescing
Even when batching is disabled, Connext will sometimes coalesce multiple samples into a single network datagram. For example, samples buffered by a FlowController or sent in response to a negative acknowledgement (NACK) may be coalesced. This behavior is distinct from sample batching.
Samples that are sent individually (not part of a batch) are always treated as separate samples by Connext. Each sample is accompanied by a complete RTPS header on the network (although samples may share UDP and IP headers) and (in the case of reliable communication) a unique physical sequence number that must be positively or negatively acknowledged.
In contrast, batched samples share a single RTPS header, and an entire batch is acknowledged, positively or negatively, as a unit, potentially reducing the amount of acknowledgement traffic.
Batching can also improve latency relative to simply coalescing. Consider two use cases:
1.A DataWriter is configured to write asynchronously with a FlowController. Even if the FlowController's rules would allow it to publish a new sample immediately, the send will always happen in the context of the asynchronous publishing thread. This context switch can add latency to the send path.
2.A DataWriter is configured to write synchronously but with batching turned on. When the batch is full, it will be sent on the wire immediately, eliminating a thread context switch from the send path.
6.5.2.3 Batching and ContentFilteredTopics
When batching is enabled, content filtering is always done on the reader side.
6.5.2.4 Performance Considerations
The purpose of batching is to increase throughput when writing small samples at a high rate. In such cases, throughput can be increased substantially by amortizing the per-packet overhead across many samples.
However, collecting samples into a batch implies that they are not sent on the network immediately when the application writes them; this can potentially increase latency. That said, if the application sends data faster than the network can support, an increased proportion of the network's available bandwidth will be spent on acknowledgements and sample resends. In this case, reducing that overhead by turning on batching could actually decrease latency while increasing throughput.
As a general rule, to improve batching throughput:
❏Set thread_safe_write to FALSE when the batch contains a large number of small samples. If you do not use a thread-safe write configuration, the application must ensure that write() is not called concurrently from multiple threads.
❏Set source_timestamp_resolution to DURATION_INFINITE. Note that if you set this value, every sample in the batch will share the same source timestamp.
Batching affects how often piggyback heartbeats are sent; see heartbeats_per_max_samples in Table 6.36, “DDS_RtpsReliableWriterProtocol_t.”
6.5.2.5 Maximum Transport Datagram Size
Batches cannot be fragmented. As a result, the maximum batch size (max_data_bytes) must be set no larger than the maximum transport datagram size. For example, a UDP datagram is limited to 64 KB, so any batches sent over UDP must be less than or equal to that size.
6.5.2.6 Properties
This QosPolicy cannot be modified after the DataWriter is enabled.
Since it is only for DataWriters, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.
All batching configuration occurs on the publishing side. A subscribing application does not configure anything specific to receive batched samples, and in many cases, it will be oblivious to whether the samples it processes were received individually or as part of a batch.
Consistency rules:
❏max_samples must be consistent with max_data_bytes: they cannot both be set to LENGTH_UNLIMITED.
❏If max_flush_delay is not DURATION_INFINITE, disable_asynchronous_batch in the ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1) must be FALSE.
❏If thread_safe_write is FALSE, source_timestamp_resolution must be DURATION_INFINITE.
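The consistency rules above can be expressed as a small validation sketch (plain Python; DURATION_INFINITE and LENGTH_UNLIMITED are modeled as sentinels, and the function name is hypothetical):

```python
# Sketch: check the three consistency rules for DDS_BatchQosPolicy.
LENGTH_UNLIMITED = -1
DURATION_INFINITE = float("inf")

def batch_qos_consistent(max_data_bytes, max_samples, max_flush_delay,
                         disable_asynchronous_batch,
                         thread_safe_write, source_timestamp_resolution):
    if max_data_bytes == LENGTH_UNLIMITED and max_samples == LENGTH_UNLIMITED:
        return False  # they cannot both be unlimited
    if max_flush_delay != DURATION_INFINITE and disable_asynchronous_batch:
        return False  # time-based flushing needs the asynchronous batch thread
    if not thread_safe_write and source_timestamp_resolution != DURATION_INFINITE:
        return False  # non-thread-safe writes require infinite resolution
    return True

assert batch_qos_consistent(1024, LENGTH_UNLIMITED, DURATION_INFINITE,
                            False, True, DURATION_INFINITE)
assert not batch_qos_consistent(LENGTH_UNLIMITED, LENGTH_UNLIMITED,
                                DURATION_INFINITE, False, True,
                                DURATION_INFINITE)
```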
6.5.2.7 Related QosPolicies
❏To flush batches based on a time limit, enable asynchronous batch flushing in the ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1) of the DataWriter's Publisher.
❏Be careful when configuring a DataWriter's LIFESPAN QoS Policy (Section 6.5.12) with a duration shorter than the batch flush period (max_flush_delay). If the batch does not fill up before the flush period elapses, the short duration will cause the samples to be lost without being sent.
❏Do not configure the DataReader’s or DataWriter’s HISTORY QosPolicy (Section 6.5.10) to be shallower than the DataWriter's maximum batch size (max_samples). When the HISTORY QosPolicy is shallower on the DataWriter, some samples may not be sent. When the HISTORY QosPolicy is shallower on the DataReader, samples may be dropped before being provided to the application.
❏The initial and maximum numbers of batches that a DataWriter will manage are set in the DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 6.5.4).
❏The maximum number of samples that a DataWriter can store is determined by the value max_samples in the RESOURCE_LIMITS QosPolicy (Section 6.5.20) and max_batches in the DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 6.5.4). The limit that is reached first is applied.
❏The amount of resources required for batching depends on the configuration of the RESOURCE_LIMITS QosPolicy (Section 6.5.20) and the DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 6.5.4). See Section 6.5.2.9.
6.5.2.8 Applicable Entities
6.5.2.9 System Resource Considerations
❏Batching requires additional resources to store the meta-data associated with the samples in the batch.
•For unkeyed topics, the per-sample meta-data overhead is small.
•For keyed topics, the per-sample meta-data also includes key-related information, so the overhead is larger.
❏Other resource considerations are described in Section 6.5.2.7.
6.5.3 DATA_WRITER_PROTOCOL QosPolicy (DDS Extension)
Connext uses a standard protocol for packet (user and meta data) exchange between applications. The DataWriterProtocol QosPolicy gives you control over configurable portions of the protocol, including the configuration of the reliable data delivery mechanism of the protocol on a per DataWriter basis.
These configuration parameters control timing and timeouts, and give you the ability to trade off the speed of data-loss detection and repair against the network and CPU bandwidth used to maintain reliability.
It is important to tune the reliability protocol on a per-DataWriter basis to meet the requirements of the application. These parameters also control how Connext responds to "slow" reliable DataReaders or ones that disconnect or are otherwise lost.
This policy includes the members presented in Table 6.35, “DDS_DataWriterProtocolQosPolicy.”
For details on the reliability protocol used by Connext, see Chapter 10: Reliable Communications. See the RELIABILITY QosPolicy (Section 6.5.19) for more information on per- DataReader/DataWriter reliability configuration. The HISTORY QosPolicy (Section 6.5.10) and RESOURCE_LIMITS QosPolicy (Section 6.5.20) also play important roles in the DDS reliability protocol.
Table 6.35 DDS_DataWriterProtocolQosPolicy

virtual_guid (DDS_GUID_t)
The virtual GUID (Global Unique Identifier) is used to uniquely identify the same DataWriter across multiple incarnations. In other words, this value allows Connext to remember information about a DataWriter that may be deleted and then recreated.
Connext uses the virtual GUID to associate a durable writer history to a DataWriter.
Persistence Service (a) uses the virtual GUID to send samples on behalf of the original DataWriter.
A DataReader persists its state based on the virtual GUIDs of matching remote DataWriters.
For more information, see Durability and Persistence Based on Virtual GUIDs.
By default, Connext will assign a virtual GUID automatically. If you want to restore the state of the durable writer history after a restart, you can retrieve the value of the writer's virtual GUID using the DataWriter’s get_qos() operation, and set the virtual GUID of the restarted DataWriter to the same value.

rtps_object_id (DDS_UnsignedLong)
Determines the DataWriter’s RTPS object ID, according to the DDS Interoperability Wire Protocol.
Only the last 3 bytes are used; the most significant byte is ignored.
The rtps_host_id, rtps_app_id, and rtps_instance_id in the WIRE_PROTOCOL QosPolicy (DDS Extension) (Section 8.5.9), together with the 3 least significant bytes in rtps_object_id, and another byte assigned by Connext to identify the entity type, form the BuiltinTopicKey in PublicationBuiltinTopicData.

push_on_write (DDS_Boolean)
Controls when a sample is sent after write() is called on a DataWriter. If TRUE, the sample is sent immediately; if FALSE, the sample is put in a queue until an ACK/NACK is received from a reliable DataReader.

disable_positive_acks (DDS_Boolean)
Determines whether matching DataReaders send positive acknowledgements (ACKs) to the DataWriter.
When TRUE, the DataWriter will keep samples in its queue for ACK-disabled readers for a minimum keep duration (see Section 6.5.3.3).
When strict reliability is not required, setting this to TRUE reduces overhead network traffic.

disable_inline_keyhash (DDS_Boolean)
Controls whether or not a keyhash is propagated on the wire with samples.
This field only applies to keyed writers.
Connext associates a keyhash with each key. When FALSE, the keyhash is sent with each sample; when TRUE, it is not sent (readers must compute the value using the received data).
If the reader is CPU bound, sending the keyhash increases performance, because the reader does not have to get the keyhash from the data.
If the writer is CPU bound, sending the keyhash may decrease performance, because it requires more bandwidth (16 more bytes per sample).
Note: Setting disable_inline_keyhash to TRUE is not compatible with using some RTI tools and services.

serialize_key_with_dispose (DDS_Boolean)
Controls whether or not the serialized key is propagated on the wire with dispose notifications.
This field only applies to keyed writers.
RTI recommends setting this field to TRUE if there are DataReaders with propagate_dispose_of_unregistered_instances (in the DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1)) also set to TRUE.
Important: When this field is TRUE, batching will not be compatible with RTI Data Distribution Service 4.3e, 4.4b, or other affected releases; subscribing applications running those versions will receive incorrect data and/or encounter deserialization errors.

rtps_reliable_writer (DDS_RtpsReliableWriterProtocol_t)
This structure includes the fields in Table 6.36.
a. Persistence Service is included with Connext Messaging. It saves data samples so they can be delivered to subscribing applications that join the system at a later time (see Chapter 26: Introduction to RTI Persistence Service).
Table 6.36 DDS_RtpsReliableWriterProtocol_t

low_watermark, high_watermark (DDS_Long)
Queue levels that control when to switch between the regular and fast heartbeat rates (heartbeat_period and fast_heartbeat_period). See Section 6.5.3.1.

heartbeat_period, fast_heartbeat_period, late_joiner_heartbeat_period (DDS_Duration_t)
Rates at which to send heartbeats to DataReaders with unacknowledged samples. See Section 6.5.3.2.

virtual_heartbeat_period (DDS_Duration_t)
The rate at which a reliable DataWriter will send virtual heartbeats. A virtual heartbeat informs the reliable DataReader about the range of samples currently present for each virtual GUID in the reliable writer's queue. See Section 6.5.3.6.

samples_per_virtual_heartbeat (DDS_Long)
The number of samples that a reliable DataWriter must publish before sending a virtual heartbeat. See Section 6.5.3.6.

max_heartbeat_retries (DDS_Long)
Maximum number of periodic heartbeats sent without receiving an ACK/NACK packet before marking a DataReader ‘inactive.’
When a DataReader has not acknowledged all the samples the reliable DataWriter has sent to it, and max_heartbeat_retries number of periodic heartbeats have been sent without receiving any ACK/NACK packets in return, the DataReader will be marked as inactive (not alive) and be ignored until it resumes sending ACK/NACKs.
Note that piggyback heartbeats do not count towards this value. See Section 10.3.4.4.

inactivate_nonprogressing_readers (DDS_Boolean)
Allows the DataWriter to treat DataReaders that send successive non-progressing NACKs as inactive. See Section 10.3.4.5.

heartbeats_per_max_samples (DDS_Long)
For a regular DataWriter:
If batching is disabled: a piggyback heartbeat will be sent every [max_samples/heartbeats_per_max_samples] number of samples; heartbeats_per_max_samples must be <= writer_qos.resource_limits.max_samples.
If batching is enabled: a piggyback heartbeat will be sent every [max_batches/heartbeats_per_max_samples] number of samples; heartbeats_per_max_samples must be <= writer_qos.resource_limits.max_batches.
For a multi-channel DataWriter:
A piggyback heartbeat will be sent on a channel every [max_samples/heartbeats_per_max_samples] number of samples sent on that channel; heartbeats_per_max_samples must be <= writer_qos.resource_limits.max_samples.
See Section 18.6.2 for additional details related to the multi-channel DataWriter reliability protocol.
If max_samples or max_batches is DDS_LENGTH_UNLIMITED, 100 million is assumed as the maximum value in the calculation.

min_nack_response_delay (DDS_Duration_t)
Minimum delay to respond to an ACK/NACK.
When a reliable DataWriter receives an ACK/NACK from a DataReader, the DataWriter can choose to delay a while before it sends repair samples or a heartbeat. This sets the value of the minimum delay. See Section 10.3.4.6.

max_nack_response_delay (DDS_Duration_t)
Maximum delay to respond to an ACK/NACK.
This sets the value of the maximum delay between receiving an ACK/NACK and sending repair samples or a heartbeat.
A longer wait can help prevent storms of repair packets if many DataReaders send NACKs at the same time. However, it delays the repair, and hence increases the latency of the communication. See Section 10.3.4.6.

nack_suppression_duration (DDS_Duration_t)
How long consecutive NACKs are suppressed.
When a reliable DataWriter receives consecutive NACKs within a short duration, this may trigger the DataWriter to send redundant repair messages. This value sets the duration during which consecutive NACKs are ignored, thus preventing redundant repairs from being sent.

max_bytes_per_nack_response (DDS_Long)
Maximum bytes in a repair package.
When a reliable DataWriter resends samples, the total package size is limited to this value. See Section 10.3.4.3.

disable_positive_acks_min_sample_keep_duration (DDS_Duration_t)
Minimum duration that a sample will be kept in the DataWriter’s queue for ACK-disabled readers. See Section 6.5.3.3 and Section 10.3.4.7.

disable_positive_acks_max_sample_keep_duration (DDS_Duration_t)
Maximum duration that a sample will be kept in the DataWriter’s queue for ACK-disabled readers.

disable_positive_acks_enable_adaptive_sample_keep_duration (DDS_Boolean)
Enables automatic dynamic adjustment of the ‘keep duration’ in response to network congestion.

disable_positive_acks_increase_sample_keep_duration_factor (DDS_Long)
When the ‘keep duration’ is dynamically controlled, the lengthening of the ‘keep duration’ is controlled by this factor, which is expressed as a percentage.
When the adaptive algorithm determines that the keep duration should be increased, this factor is multiplied with the current keep duration to get the new longer keep duration. For example, if the current keep duration is 20 milliseconds, using the default factor of 150% would result in a new keep duration of 30 milliseconds.

disable_positive_acks_decrease_sample_keep_duration_factor (DDS_Long)
When the ‘keep duration’ is dynamically controlled, the shortening of the ‘keep duration’ is controlled by this factor, which is expressed as a percentage.
When the adaptive algorithm determines that the keep duration should be decreased, this factor is multiplied with the current keep duration to get the new shorter keep duration. For example, if the current keep duration is 20 milliseconds, using the default factor of 95% would result in a new keep duration of 19 milliseconds.

min_send_window_size, max_send_window_size (DDS_Long)
Minimum and maximum size for the window of outstanding samples. See Section 6.5.3.4.

send_window_decrease_factor (DDS_Long)
Scales the current send-window size down by this percentage to decrease the effective send rate in response to a received negative acknowledgement.

enable_multicast_periodic_heartbeat (DDS_Boolean)
Controls whether or not periodic heartbeat messages are sent over multicast.
When enabled, if a reader has a multicast destination, the writer will send its periodic HEARTBEAT messages to that destination. Otherwise, if not enabled or the reader does not have a multicast destination, the writer will send its periodic HEARTBEATs over unicast.

multicast_resend_threshold (DDS_Long)
Sets the minimum number of requesting readers needed to trigger a multicast resend. See Section 6.5.3.7.

send_window_increase_factor (DDS_Long)
Scales the current send-window size up by this percentage to increase the effective send rate after an update period without any received negative acknowledgements.

send_window_update_period (DDS_Duration_t)
Period in which the DataWriter checks for received negative acknowledgements and conditionally increases the send-window size when none are received.
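The heartbeats_per_max_samples entry above amounts to a spacing calculation: a piggyback heartbeat accompanies roughly every Nth sample (or batch, when batching is enabled). The following is an illustrative Python sketch of that calculation, not the RTI API.

```python
# Illustrative sketch of piggyback-heartbeat spacing from Table 6.36 (not the RTI API).
LENGTH_UNLIMITED = -1
ASSUMED_UNLIMITED = 100_000_000  # value the manual says is assumed for unlimited limits

def piggyback_heartbeat_spacing(heartbeats_per_max_samples,
                                max_samples=LENGTH_UNLIMITED,
                                max_batches=LENGTH_UNLIMITED,
                                batching_enabled=False):
    # The relevant resource limit is max_batches when batching is enabled,
    # max_samples otherwise.
    limit = max_batches if batching_enabled else max_samples
    if limit == LENGTH_UNLIMITED:
        limit = ASSUMED_UNLIMITED
    # A piggyback heartbeat is sent every [limit / heartbeats_per_max_samples] samples.
    return limit // heartbeats_per_max_samples
```

For example, with max_samples = 32 and heartbeats_per_max_samples = 8, a piggyback heartbeat rides along with every 4th sample.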
6.5.3.1 High and Low Watermarks
When the number of unacknowledged samples in the queue of a reliable DataWriter meets or exceeds high_watermark, the RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) (Section 6.3.6.7) will be changed appropriately, a listener callback will be triggered, and the DataWriter will start heartbeating its matched DataReaders at the fast rate (fast_heartbeat_period).
When the number of samples meets or falls below low_watermark, the RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) (Section 6.3.6.7) will be changed appropriately, a listener callback will be triggered, and the heartbeat rate will return to the normal rate (heartbeat_period).
Having both high and low watermarks (instead of one) helps prevent rapid flickering between the rates, which could happen if the number of samples hovers near a single watermark level.
Increasing the high and low watermarks will make the DataWriters less aggressive about seeking acknowledgments for sent data, decreasing the size of traffic spikes but slowing performance.
Decreasing the watermarks will make the DataWriters more aggressive, increasing both network utilization and performance.
If batching is used and the DataWriter is not a multi-channel DataWriter, the watermarks refer to batches rather than individual samples.
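The two-watermark behavior described above is a classic hysteresis scheme. As an illustrative Python model (not the RTI API), the rate selection can be sketched like this:

```python
# Illustrative model of high/low watermark hysteresis (not the RTI API):
# the writer heartbeats at the fast rate once unacknowledged samples meet
# or exceed high_watermark, and stays fast until the count meets or falls
# below low_watermark.
class HeartbeatRateSelector:
    def __init__(self, low_watermark, high_watermark):
        assert low_watermark < high_watermark  # consistency rule from Section 6.5.3.9
        self.low = low_watermark
        self.high = high_watermark
        self.fast = False

    def update(self, unacked_samples):
        if unacked_samples >= self.high:
            self.fast = True           # switch to fast_heartbeat_period
        elif unacked_samples <= self.low:
            self.fast = False          # fall back to heartbeat_period
        return "fast_heartbeat_period" if self.fast else "heartbeat_period"
```

Note how a queue depth between the two watermarks keeps whatever rate is currently active, which is exactly what prevents the rapid flickering mentioned above.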
6.5.3.2 Normal, Fast, and Late-Joiner Heartbeat Periods
The normal heartbeat_period is used until the number of samples in the reliable DataWriter’s queue meets or exceeds high_watermark; then fast_heartbeat_period is used. Once the number of samples meets or drops below low_watermark, heartbeat_period is used again.
❏fast_heartbeat_period must be <= heartbeat_period
Decreasing fast_heartbeat_period increases the speed with which lost samples are discovered and repaired, but results in a larger surge of traffic while the DataWriter is waiting for acknowledgments.
Increasing heartbeat_period decreases the steady-state traffic on the wire, but may increase latency by decreasing the speed of repairs for lost packets when the writer does not have very many outstanding unacknowledged samples.
Having two periodic heartbeat rates, and switching between them based on watermarks:
❏Ensures that all DataReaders receive all their data as quickly as possible (the sooner they receive a heartbeat, the sooner they can send a NACK, and the sooner the DataWriter can send repair samples);
❏Helps prevent the DataWriter from overflowing its resource limits (as its queue starts the fill, the DataWriter sends heartbeats faster, prompting the DataReaders to acknowledge sooner, allowing the DataWriter to purge these acknowledged samples from its queue);
❏Tunes the amount of network traffic. (Heartbeats and NACKs use up network bandwidth like any other traffic; decreasing the heartbeat rates, or increasing the threshold before the fast rate starts, can smooth network traffic at the expense of slower repairs.)
The late_joiner_heartbeat_period is used when a reliable DataReader joins after a reliable DataWriter (with non-VOLATILE durability) has begun publishing samples. Once the late-joining DataReader has received all cached samples, it is serviced at the same rate as other reliable DataReaders.
❏late_joiner_heartbeat_period must be <= heartbeat_period
6.5.3.3 Disabling Positive Acknowledgements
When strict reliable communication is not required, you can configure Connext so that it does not send positive acknowledgements (ACKs). In this case, reliability is maintained solely based on negative acknowledgements (NACKs). The removal of ACK traffic may improve middleware performance. For example, when sending samples over multicast, disabling ACKs can prevent floods of ACKs from many DataReaders.
By default, DataWriters and DataReaders are configured with positive ACKS enabled. To disable ACKs, either:
❏Configure the DataWriter to disable positive ACKs for all matching DataReaders (by setting disable_positive_acks to TRUE in the DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) (Section 6.5.3)).
❏Disable ACKs for individual DataReaders (by setting disable_positive_acks to TRUE in the DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1)).
If ACKs are disabled, instead of the DataWriter holding a sample in its send queue until all of its DataReaders have ACKed it, the DataWriter will hold a sample for a configurable duration. This duration is known as the sample "keep duration."
The length of the keep duration can be static or dynamic:
❏When the length is static, the keep duration is set to the minimum (disable_positive_acks_min_sample_keep_duration).
❏When the length is dynamic, the keep duration is adjusted automatically between the minimum and maximum values (disable_positive_acks_min_sample_keep_duration and disable_positive_acks_max_sample_keep_duration).
Dynamic adjustment maximizes throughput and reliability in response to current network conditions: when the network is congested, durations are increased to decrease the effective send rate and relieve the congestion; when the network is not congested, durations are decreased to increase the send rate and maximize throughput.
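The adaptive adjustment described above pairs with the percentage factors listed in Table 6.36. The following is an illustrative Python sketch of the adjustment step, not the RTI implementation: the current duration is scaled by a factor and clamped between the configured minimum and maximum.

```python
# Illustrative sketch of adaptive 'keep duration' adjustment (not the RTI API).
# factor_percent corresponds to disable_positive_acks_increase/decrease_
# sample_keep_duration_factor, expressed as a percentage.
def adjust_keep_duration(current_ms, factor_percent, min_ms, max_ms):
    new_ms = current_ms * factor_percent / 100.0
    # The duration never leaves the [min, max] keep-duration bounds.
    return max(min_ms, min(max_ms, new_ms))
```

With the defaults quoted in Table 6.36, a 20 ms duration grows to 30 ms under the 150% increase factor and shrinks to 19 ms under the 95% decrease factor.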
You should configure the minimum keep duration to allow at least enough time for a possible NACK to be received and processed.
See also: Section 10.3.4.7 (disable_positive_acks_min_sample_keep_duration).
6.5.3.4 Configuring the Send Window Size
When a reliable DataWriter writes a sample, it keeps the sample in its queue until it has received acknowledgements from all of its subscribing DataReaders. The number of these outstanding samples is referred to as the DataWriter's "send window." Once the number of outstanding samples has reached the send window size, subsequent writes will block until an outstanding sample is acknowledged.
Configuration of the send window sets a minimum and maximum size, which may be unlimited. The min and max send windows can be the same. When set differently, the send window will dynamically change in response to detected network congestion, as signaled by received negative acknowledgements. When NACKs are received, the DataWriter responds to the slowed reader by decreasing the send window by the send_window_decrease_factor to throttle down its effective send rate. The send window will not be decreased to less than the min_send_window_size. After a period (send_window_update_period) during which no NACKs are received, indicating that the reader is catching up, the DataWriter will increase the send window size to increase the effective send rate by the percentage specified by send_window_increase_factor. The send window will increase to no greater than the max_send_window_size.
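The dynamics above can be modeled in a few lines. This is an illustrative Python sketch, not the RTI implementation; the factors are percentages, matching the field descriptions in Table 6.36.

```python
# Illustrative model of dynamic send-window sizing (not the RTI API):
# shrink by send_window_decrease_factor on a NACK; grow by
# send_window_increase_factor after a quiet update period; always clamp
# to [min_send_window_size, max_send_window_size].
class SendWindow:
    def __init__(self, min_size, max_size, decrease_factor, increase_factor):
        self.min, self.max = min_size, max_size
        self.dec, self.inc = decrease_factor, increase_factor
        self.size = max_size

    def on_nack(self):
        # A slowed reader signals congestion: throttle the effective send rate.
        self.size = max(self.min, int(self.size * self.dec / 100))

    def on_quiet_update_period(self):
        # No NACKs during send_window_update_period: the reader is catching up.
        self.size = min(self.max, int(self.size * self.inc / 100))
```

For example, a window of 256 halves to 128 and then 64 under two NACKs with a 50% decrease factor, then recovers slowly at 105% per quiet period.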
6.5.3.5 Propagating Serialized Keys with Disposed Instances
This section describes the interaction between these two fields:
❏serialize_key_with_dispose in DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) (Section 6.5.3)
❏propagate_dispose_of_unregistered_instances in DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1)
RTI recommends setting serialize_key_with_dispose to TRUE if there are DataReaders with propagate_dispose_of_unregistered_instances also set to TRUE. However, it is permissible to set one to TRUE and the other to FALSE. The following examples will help you understand how these fields work.
See also: Disposing of Data (Section 6.3.14.2).
Example 1
1.DataWriter’s serialize_key_with_dispose = FALSE
2.DataReader’s propagate_dispose_of_unregistered_instances = TRUE
3.DataWriter calls dispose() before writing any samples
4.DataReader calls take() and receives a dispose notification.
5.DataReader calls get_key_value(), which returns an error because there is no key associated with the dispose notification.
Example 2
1.DataWriter’s serialize_key_with_dispose = TRUE
2.DataReader’s propagate_dispose_of_unregistered_instances = FALSE
3.DataWriter calls dispose() before writing any samples
4.DataReader calls take(), which does not return any samples because none were written; it does not receive any dispose notifications either.
Example 3
1.DataWriter’s serialize_key_with_dispose = TRUE
2.DataReader’s propagate_dispose_of_unregistered_instances = TRUE
3.DataWriter calls dispose() before writing any samples
4.DataReader calls take() and receives the dispose notification.
5.DataReader calls get_key_value() and receives the key for the disposed instance.
Example 4
1.DataWriter’s serialize_key_with_dispose = TRUE
2.DataReader’s propagate_dispose_of_unregistered_instances = TRUE
3.DataWriter calls write(), which writes a sample with a key
4.DataWriter calls dispose(), which writes a dispose notification.
5.DataReader calls take() and receives a data sample and a dispose notification.
6.DataReader calls get_key_value() with no errors
6.5.3.6 Virtual Heartbeats
Virtual heartbeats announce the availability of samples with the Collaborative DataWriters feature described in Section 7.6.1, where multiple DataWriters publish samples from a common logical data source.
When the PRESENTATION QosPolicy (Section 6.4.6) access_scope is set to TOPIC or INSTANCE on the Publisher, the virtual heartbeat contains information about the samples contained in the DataWriter queue.
When presentation access_scope is set to GROUP on the Publisher, the virtual heartbeat contains information about the samples in the queues of all DataWriters that belong to the Publisher.
6.5.3.7 Resending Over Multicast
Given DataReaders with multicast destinations, when a DataReader sends a NACK to request that samples be resent, the DataWriter can resend them over either unicast or multicast. Though resending over multicast would save bandwidth and processing for the DataWriter, the potential problem is that DataReaders in the multicast group that did not request any resends would still have to process, and drop, the resent samples.
Thus, to make each multicast resend more efficient, the multicast_resend_threshold is set as the minimum number of DataReaders in the same multicast group from which the DataWriter must receive NACKs within a single NACK response-delay period before it resends the samples over multicast; otherwise, the samples are resent over unicast.
The multicast_resend_threshold must be set to a positive value. Note that a threshold of 1 means that all resends will be sent over multicast, provided a multicast destination is available. Also note that a DataWriter with a zero NACK response delay cannot accumulate NACKs from multiple DataReaders, so its resends are effectively unicast.
6.5.3.8 Example
For information on how to use the fields in Table 6.36, see Controlling Heartbeats and Retries with DataWriterProtocol QosPolicy (Section 10.3.4).
The following describes a use case for when to change push_on_write to DDS_BOOLEAN_FALSE. Suppose you have a system in which the data packets being sent are very small, the data must be sent reliably, and the latency between the time data is sent and the time it is received is not an issue; however, the total network bandwidth between the DataWriter and DataReader applications is limited.
If the DataWriter sends a burst of data at a high rate, it is possible that it will overwhelm the limited bandwidth of the network. If you allocate enough space for the DataWriter to store the data burst being sent (see RESOURCE_LIMITS QosPolicy (Section 6.5.20)), then you can use the push_on_write parameter of the DATA_WRITER_PROTOCOL QosPolicy to delay sending the data until the reliable DataReader asks for it.
By setting push_on_write to DDS_BOOLEAN_FALSE, when write() is called on the DataWriter, no data is actually sent. Instead, the data is stored in the DataWriter’s send queue. Periodically, Connext sends heartbeats informing the DataReader about the data that is available. So every heartbeat period, the DataReader realizes that the DataWriter has new data, and it sends an ACK/NACK asking for it.
When the DataWriter receives the ACK/NACK packet, it puts together a package of data, up to the size set by the parameter max_bytes_per_nack_response, to be sent to the DataReader. This method not only bounds each network burst to max_bytes_per_nack_response, but also paces the overall send rate to the DataReader's requests, at the cost of added latency.
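The pull model just described can be sketched as follows. This is an illustrative Python model of the behavior, not the RTI API: write() only queues, and data leaves the writer in bounded packages when a reliable reader NACKs.

```python
# Illustrative model of push_on_write = FALSE (not the RTI API): samples
# accumulate in the send queue and are released only on an ACK/NACK,
# capped at max_bytes_per_nack_response bytes per repair package.
class PullModelWriter:
    def __init__(self, max_bytes_per_nack_response):
        self.queue = []                   # samples awaiting a request
        self.cap = max_bytes_per_nack_response

    def write(self, sample_bytes):
        self.queue.append(sample_bytes)   # nothing is sent yet

    def on_nack(self):
        # Assemble a repair package without exceeding the byte cap.
        package, size = [], 0
        while self.queue and size + len(self.queue[0]) <= self.cap:
            sample = self.queue.pop(0)
            package.append(sample)
            size += len(sample)
        return package                    # the samples actually sent
```

With a 10-byte cap, three 4-byte samples are delivered as a 2-sample package on the first NACK, with the third sample waiting for the next request.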
6.5.3.9 Properties
This QosPolicy cannot be modified after the DataWriter is created.
Since it is only for DataWriters, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.
When setting the fields in this policy, the following rules apply. If any of these are false, Connext returns DDS_RETCODE_INCONSISTENT_POLICY:
❏min_nack_response_delay <= max_nack_response_delay
❏fast_heartbeat_period <= heartbeat_period
❏late_joiner_heartbeat_period <= heartbeat_period
❏low_watermark < high_watermark
❏If batching is disabled or the DataWriter is a multi-channel DataWriter:
•heartbeats_per_max_samples <= writer_qos.resource_limits.max_samples
❏If batching is enabled and the DataWriter is not a multi-channel DataWriter:
•heartbeats_per_max_samples <= writer_qos.resource_limits.max_batches
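The rules above can be captured in a validation sketch. This is an illustrative Python model of the DDS_RETCODE_INCONSISTENT_POLICY checks, not the RTI API.

```python
# Illustrative sketch of the DATA_WRITER_PROTOCOL consistency rules
# (not the RTI API). Durations are plain numbers here for simplicity.
def protocol_qos_is_consistent(min_nack_response_delay, max_nack_response_delay,
                               heartbeat_period, fast_heartbeat_period,
                               late_joiner_heartbeat_period,
                               low_watermark, high_watermark):
    return (min_nack_response_delay <= max_nack_response_delay
            and fast_heartbeat_period <= heartbeat_period
            and late_joiner_heartbeat_period <= heartbeat_period
            and low_watermark < high_watermark)
```

Any violation would cause set_qos() to fail with DDS_RETCODE_INCONSISTENT_POLICY rather than silently accepting the configuration.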
6.5.3.10 Related QosPolicies
❏DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1)
❏HISTORY QosPolicy (Section 6.5.10)
❏RELIABILITY QosPolicy (Section 6.5.19)
6.5.3.11 Applicable Entities
6.5.3.12 System Resource Considerations
A high max_bytes_per_nack_response may increase the instantaneous network bandwidth required to send a single burst of traffic for resending dropped packets.
6.5.4 DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension)
This QosPolicy defines various settings that configure how DataWriters allocate and use physical memory for internal resources.
It includes the members in Table 6.37. For defaults and valid ranges, please refer to the API Reference HTML documentation.
Table 6.37 DDS_DataWriterResourceLimitsQosPolicy

initial_concurrent_blocking_threads (DDS_Long)
Initial number of threads that are allowed to concurrently block on the write() call on the same DataWriter.

max_concurrent_blocking_threads (DDS_Long)
Maximum number of threads that are allowed to concurrently block on the write() call on the same DataWriter.

max_remote_reader_filters (DDS_Long)
Maximum number of remote DataReaders for which this DataWriter will perform writer-side filtering.

initial_batches (DDS_Long)
Initial number of batches that a DataWriter will manage if batching is enabled.

max_batches (DDS_Long)
Maximum number of batches that a DataWriter will manage if batching is enabled.
When batching is enabled, the maximum number of samples that a DataWriter can store is limited by this value and max_samples in the RESOURCE_LIMITS QosPolicy (Section 6.5.20).

instance_replacement (DDS_DataWriterResourceLimitsInstanceReplacementKind)
Sets the kinds of instances allowed to be replaced when a DataWriter reaches instance resource limits (see Configuring DataWriter Instance Replacement).

replace_empty_instances (DDS_Boolean)
Whether to replace empty instances during instance replacement (see Configuring DataWriter Instance Replacement).

autoregister_instances (DDS_Boolean)
Whether to automatically register instances written with a non-NIL handle that are not yet registered, which would otherwise return an error. This can be especially useful if the instance has been replaced.

initial_virtual_writers (DDS_Long)
Initial number of virtual writers supported by a DataWriter.

max_virtual_writers (DDS_Long)
Maximum number of virtual writers supported by a DataWriter.
Sets the maximum number of unique virtual writers supported by a DataWriter, where virtual writers are added when samples are written with the virtual writer GUID.
This field is especially relevant in the configuration of Persistence Service (a) DataWriters, since they publish information on behalf of multiple virtual writers.

max_remote_readers (DDS_Long)
The maximum number of remote readers supported by a DataWriter.

max_app_ack_remote_readers (DDS_Long)
The maximum number of application-level acknowledging remote readers supported by a DataWriter.
a. Persistence Service is included with Connext Messaging. It saves data samples so they can be delivered to subscribing applications that join the system at a later time (see Chapter 26: Introduction to RTI Persistence Service).
DataWriters must allocate internal structures to handle the simultaneous blocking of threads trying to call write() on the same DataWriter, for the storage used to batch small samples, and for the writer-side filtering performed on behalf of remote DataReaders.
Most of these internal structures start at an initial size and by default, will grow as needed by dynamically allocating additional memory. You may set fixed, maximum sizes for these internal structures if you want to bound the amount of memory that a DataWriter can use. By setting the initial size to the maximum size, you will prevent Connext from dynamically allocating any memory after the creation of the DataWriter.
When setting the fields in this policy, the following rule applies. If it is violated, Connext returns DDS_RETCODE_INCONSISTENT_POLICY:
❏ max_concurrent_blocking_threads >= initial_concurrent_blocking_threads
The initial_concurrent_blocking_threads value is used to allocate the necessary system resources initially. If necessary, the allocation will be increased automatically, up to the max_concurrent_blocking_threads limit.
Every user thread calling write() on a DataWriter may use a semaphore that will block the thread when the DataWriter’s send queue is full. Because user code may set a timeout, each thread must use a different semaphore. See the max_blocking_time parameter of the RELIABILITY QosPolicy (Section 6.5.19). This QoS is offered so that the user application can control the dynamic allocation of system resources by Connext.
If you do not mind if Connext dynamically allocates semaphores when needed, then you can set the max_concurrent_blocking_threads parameter to some large value like MAX_INT. However, if you know exactly how many threads will be calling write() on the same DataWriter, and you do not want Connext to allocate any system resources or memory after initialization, then you should set:
max_concurrent_blocking_threads = initial_concurrent_blocking_threads = NUM
(where NUM is the number of threads that could possibly block concurrently).
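The consistency rule can be modeled as a simple validation step. The sketch below is illustrative only (the struct and function names are hypothetical, not the Connext API), but it captures the check that triggers DDS_RETCODE_INCONSISTENT_POLICY:

```c
#include <stdbool.h>

/* Illustrative model of the policy-consistency check: the policy is
 * rejected unless
 * max_concurrent_blocking_threads >= initial_concurrent_blocking_threads. */
typedef struct {
    long initial_concurrent_blocking_threads;
    long max_concurrent_blocking_threads;
} WriterResourceLimitsSketch;

bool writer_resource_limits_consistent(const WriterResourceLimitsSketch *q)
{
    return q->max_concurrent_blocking_threads >=
           q->initial_concurrent_blocking_threads;
}
```

Setting both fields to the same value NUM, as suggested above, trivially satisfies this check while preventing any growth after initialization.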
Each DataWriter can perform
6.5.4.1 Example
If there are multiple threads that can write on the same DataWriter, and the write() operation may block (based on reliability_qos.max_blocking_time and HISTORY settings), you may want to set initial_concurrent_blocking_threads to the most likely number of threads that will block on the same DataWriter at the same time, and set max_concurrent_blocking_threads to the maximum number of threads that could potentially block in the worst case.
6.5.4.2 Properties
This QosPolicy cannot be modified after the DataWriter is created.
Since it is only for DataWriters, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.
6.5.4.3 Related QosPolicies
❏ HISTORY QosPolicy (Section 6.5.10)
6.5.4.4 Applicable Entities
6.5.4.5 System Resource Considerations
Increasing the values in this QosPolicy will cause more memory usage and more system resource usage.
6.5.5 DEADLINE QosPolicy
On a DataWriter, this QosPolicy states the maximum period in which the application expects to call write() on the DataWriter, thus publishing a new sample. The application may call write() faster than the rate set by this QosPolicy.
On a DataReader, this QosPolicy states the maximum period in which the application expects to receive new values for the Topic. The application may receive data faster than the rate set by this QosPolicy.
The DEADLINE QosPolicy has a single member, shown in Table 6.38. For the default and valid range, please refer to the API Reference HTML documentation.
Table 6.38 DDS_DeadlineQosPolicy
Type | Field Name | Description
DDS_Duration_t | period | For DataWriters: maximum time between writing a new value of an instance. For DataReaders: maximum time between receiving new values for an instance.
You can use this QosPolicy during system integration to ensure that applications have been coded to meet design specifications. You can also use it during run time to detect when systems are performing outside of design specifications. Receiving applications can take appropriate actions to prevent total system failure when data is not received in time. For topics on which data is not expected to be periodic, the deadline period should be set to an infinite value.
For keyed topics, the DEADLINE QoS applies on a per-instance basis.
Connext will modify the DDS_OFFERED_DEADLINE_MISSED_STATUS and call the associated method in the DataWriterListener (see OFFERED_DEADLINE_MISSED Status (Section 6.3.6.4)) if the application fails to write() a value for an instance within the period set by the DEADLINE QosPolicy of the DataWriter.
Similarly, Connext will modify the DDS_REQUESTED_DEADLINE_MISSED_STATUS and call the associated method in the DataReaderListener if the DataReader does not receive a new value for an instance within the period set by the DEADLINE QosPolicy of the DataReader.
For DataReaders, the DEADLINE QosPolicy and the TIME_BASED_FILTER QosPolicy (Section 7.6.4) may interact such that even though the DataWriter writes samples fast enough to fulfill its commitment to its own DEADLINE QosPolicy, the DataReader may see violations of its DEADLINE QosPolicy. This happens because Connext will drop any samples received within the minimum_separation set by the TIME_BASED_FILTER QosPolicy, so samples may not arrive within the DataReader's deadline period.
To avoid triggering the DataReader’s deadline even though the matched DataWriter is meeting its own deadline, set your QoS parameters to meet the following relationship:
reader deadline period >= reader minimum_separation + writer deadline period
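This relationship is easy to check mechanically. The following sketch (hypothetical names; periods expressed in seconds) encodes the inequality:

```c
#include <stdbool.h>

/* Returns true when time-based filtering alone cannot trip the
 * DataReader's deadline:
 *   reader deadline >= reader minimum_separation + writer deadline */
bool deadline_filter_relation_holds(double reader_deadline_sec,
                                    double reader_min_separation_sec,
                                    double writer_deadline_sec)
{
    return reader_deadline_sec >=
           reader_min_separation_sec + writer_deadline_sec;
}
```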
Although you can set the DEADLINE QosPolicy on Topics, its value can only be used to initialize the DEADLINE QosPolicies of either a DataWriter or DataReader. It does not directly affect the operation of Connext; see Section 5.1.3.
6.5.5.1 Example
Suppose you have a
Note that in practice, there will be latency and jitter in the time between when data is sent and when data is received. Thus, even if the DataWriter is sending data at exactly 1-second intervals, the DataReader may not receive the data at exactly 1-second intervals. More likely, the DataReader will receive the data at 1 second plus a small, variable quantity of time. You should accommodate this practical reality when choosing the DEADLINE period as well as the actual update period of the DataWriter, or your application may receive false indications of failure.
The DEADLINE QosPolicy also interacts with the OWNERSHIP QosPolicy when OWNERSHIP is set to EXCLUSIVE. If a DataReader fails to receive data from the highest-strength DataWriter within its requested DEADLINE, then the DataReader can start accepting data from lower-strength DataWriters.
6.5.5.2 Properties
This QosPolicy can be changed at any time.
The deadlines on the two sides must be compatible.
The DataWriter’s DEADLINE period must be <= the DataReader’s DEADLINE period.
That is, the DataReader cannot expect to receive samples more often than the DataWriter commits to sending them.
If the DataReader and DataWriter have compatible deadlines, Connext monitors this “contract” and informs the application of any violations. If the deadlines are incompatible, both sides are informed and communication does not occur. The ON_OFFERED_INCOMPATIBLE_QOS and the ON_REQUESTED_INCOMPATIBLE_QOS statuses will be modified and the corresponding Listeners called for the DataWriter and DataReader respectively.
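The offered/requested rule can likewise be expressed as a one-line predicate; as before, the function name is illustrative, not part of the Connext API:

```c
#include <stdbool.h>

/* Offered (writer) deadline must be <= requested (reader) deadline;
 * otherwise the QoS is incompatible and communication does not occur. */
bool deadline_compatible(double writer_deadline_sec,
                         double reader_deadline_sec)
{
    return writer_deadline_sec <= reader_deadline_sec;
}
```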
6.5.5.3 Related QosPolicies
❏ LIVELINESS QosPolicy (Section 6.5.13)
❏ OWNERSHIP QosPolicy (Section 6.5.15)
❏ TIME_BASED_FILTER QosPolicy (Section 7.6.4)
6.5.5.4 Applicable Entities
6.5.5.5 System Resource Considerations
A
6.5.6 DESTINATION_ORDER QosPolicy
When multiple DataWriters send data for the same topic, the order in which data from different DataWriters are received by the applications of different DataReaders may be different. Thus different DataReaders may not receive the same "last" value when DataWriters stop sending data.
This policy controls how each subscriber resolves the final value of a data instance that is written by multiple DataWriters (which may be associated with different Publishers) running on different nodes.
This QosPolicy can be used to create systems that have the property of "eventual consistency." Thus intermediate states across multiple applications may be inconsistent, but when DataWriters stop sending changes to the same topic, all applications will end up having the same state.
Each data sample includes two timestamps: a source timestamp and a destination timestamp. The source timestamp is recorded by the DataWriter application when the data was written. The destination timestamp is recorded by the DataReader application when the data was received.
This QoS includes the member in Table 6.39.
Table 6.39 DDS_DestinationOrderQosPolicy
Type | Field Name | Description
DDS_DestinationOrderQosPolicyKind | kind | Can be either: DDS_BY_RECEPTION_TIMESTAMP_DESTINATIONORDER_QOS or DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS
DDS_Duration_t | source_timestamp_tolerance | Allowed tolerance between source timestamps of consecutive samples. Only applies when kind (above) is DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS.
Each DataReader can set this QoS to:
❏ DDS_BY_RECEPTION_TIMESTAMP_DESTINATIONORDER_QOS
Assuming the OWNERSHIP_STRENGTH allows it, the latest received value for the instance should be the one whose value is kept. Data will be delivered by a DataReader in the order in which it was received (which may lead to inconsistent final values).
❏ DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS
Assuming the OWNERSHIP_STRENGTH allows it, within each instance, the source_timestamp shall be used to determine the most recent information. This is the only setting that, in the case of concurrent writes to the same instance by multiple DataWriters, ensures that all subscribers eventually see the same final value.
Data will be delivered by a DataReader in the order in which it was sent. If data arrives on the network with a source timestamp earlier than the source timestamp of the last data delivered, the new data will be dropped. This ordering therefore works best when system clocks are relatively synchronized among writing machines.
Not all data sent by multiple DataWriters may be delivered to a DataReader and not all DataReaders will see the same data sent by DataWriters. However, all DataReaders will see the same "final" data when DataWriters "stop" sending data.
• For a DataWriter with kind
DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS:
When writing a sample, its timestamp must not be less than the timestamp of the previously written sample. However, if it is less than the timestamp of the previously written sample but the difference is less than this tolerance, the sample will use the previously written sample's timestamp as its timestamp. Otherwise, if the difference is greater than this tolerance, the write will fail.
See also: Special instructions for deleting DataWriters if you are using the ‘Timestamp’ APIs and BY_SOURCE_TIMESTAMP Destination Order: on page
• A DataReader with kind DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS will accept a sample only if the difference between the sample’s source timestamp and the reception timestamp is no greater than source_timestamp_tolerance. Otherwise, the sample is rejected.
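Both timestamp-tolerance rules can be sketched as plain functions. The names below are hypothetical, and the logic is a simplified model of the behavior described above (writer-side timestamp promotion, reader-side acceptance test):

```c
#include <stdbool.h>

/* Writer side: a sample written with a timestamp slightly older than the
 * previous one (within tolerance) is promoted to the previous timestamp;
 * an older timestamp beyond the tolerance makes the write fail. */
typedef enum { WRITE_OK, WRITE_FAILED } WriteResult;

WriteResult apply_source_timestamp_rule(double prev_ts, double new_ts,
                                        double tolerance, double *used_ts)
{
    if (new_ts >= prev_ts) {
        *used_ts = new_ts;          /* monotonic: use as-is */
        return WRITE_OK;
    }
    if (prev_ts - new_ts <= tolerance) {
        *used_ts = prev_ts;         /* within tolerance: reuse previous */
        return WRITE_OK;
    }
    return WRITE_FAILED;            /* too far in the past: reject */
}

/* Reader side: accept only if the difference between source timestamp
 * and reception timestamp is no greater than the tolerance. */
bool reader_accepts_sample(double source_ts, double reception_ts,
                           double tolerance)
{
    double diff = source_ts > reception_ts ? source_ts - reception_ts
                                           : reception_ts - source_ts;
    return diff <= tolerance;
}
```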
Although you can set the DESTINATION_ORDER QosPolicy on Topics, its value can only be used to initialize the DESTINATION_ORDER QosPolicies of either a DataWriter or DataReader. It does not directly affect the operation of Connext; see Section 5.1.3.
6.5.6.1 Properties
This QosPolicy cannot be modified after the Entity is enabled.
This QoS must be set compatibly between the DataWriter and the DataReader. The compatible combinations are shown in Table 6.40.
Table 6.40 Valid Reader/Writer Combinations of DestinationOrder
Destination Order | DataReader requests: BY_SOURCE | DataReader requests: BY_RECEPTION
DataWriter offers: BY_SOURCE | ✔ | ✔
DataWriter offers: BY_RECEPTION | incompatible | ✔
If this QosPolicy is set incompatibly, the ON_OFFERED_INCOMPATIBLE_QOS and ON_REQUESTED_INCOMPATIBLE_QOS statuses will be modified and the corresponding
Listeners called for the DataWriter and DataReader respectively.
6.5.6.2 Related QosPolicies
❏ OWNERSHIP QosPolicy (Section 6.5.15)
❏ HISTORY QosPolicy (Section 6.5.10)
6.5.6.3 Applicable Entities
6.5.6.4 System Resource Considerations
The use of this policy does not significantly impact the use of resources.
6.5.7 DURABILITY QosPolicy
Because the
The DURABILITY QosPolicy controls whether or not, and how, published samples are stored by the DataWriter application for DataReaders that are found after the samples were initially written. DataReaders use this QoS to request samples that were published before they were created. The analogy is a new subscriber to a magazine asking for issues that were published in the past. These are known as ‘historical’ samples.
This QosPolicy can be used to help ensure that DataReaders get all data that was sent by DataWriters, regardless of when it was sent. This QosPolicy can increase system tolerance to failure conditions.
Exactly how many samples are stored by the DataWriter or requested by the DataReader is controlled using the HISTORY QosPolicy (Section 6.5.10).
For more information, please see Chapter 12: Mechanisms for Achieving Information Durability and Persistence.
The possible settings for this QoS are:
❏ DDS_VOLATILE_DURABILITY_QOS Connext is not required to send and will not deliver any data samples to DataReaders that are discovered after the samples were initially published.
❏ DDS_TRANSIENT_LOCAL_DURABILITY_QOS Connext will store and send previously published samples for delivery to newly discovered DataReaders as long as the DataWriter entity still exists. For this setting to be effective, you must also set the RELIABILITY QosPolicy (Section 6.5.19) kind to Reliable (not Best Effort). The HISTORY QosPolicy (Section 6.5.10) of the DataReaders/DataWriters used by Persistence Service (see footnote 1) determines exactly how many samples are saved or delivered by Persistence Service.
❏ DDS_TRANSIENT_DURABILITY_QOS Connext will store previously published samples in memory using Persistence Service, which will send the stored data to newly discovered DataReaders. The HISTORY QosPolicy (Section 6.5.10) of the DataReaders/DataWriters used by Persistence Service determines exactly how many samples are saved or delivered by Persistence Service.
❏ DDS_PERSISTENT_DURABILITY_QOS Connext will store previously published samples in permanent storage, like a disk, using Persistence Service, which will send the stored data to newly discovered DataReaders. The HISTORY QosPolicy (Section 6.5.10) determines exactly how many samples are saved or delivered.
This QosPolicy includes the members in Table 6.41. For default settings, please refer to the API Reference HTML documentation.
With this QoS policy alone, there is no way to specify or characterize the intended consumers of the information. With TRANSIENT_LOCAL, TRANSIENT, or PERSISTENT durability, a DataWriter can be configured to keep samples around for late-joining DataReaders.
1. Persistence Service is included with Connext Messaging. It saves data samples so they can be delivered to subscribing applications that join the system at a later time (see Chapter 26: Introduction to RTI Persistence Service).
Table 6.41 DDS_DurabilityQosPolicy
Type | Field Name | Description
DDS_DurabilityQosPolicyKind | kind | DDS_VOLATILE_DURABILITY_QOS: Do not save or deliver old samples. DDS_TRANSIENT_LOCAL_DURABILITY_QOS: Save and deliver old samples if the DataWriter still exists. DDS_TRANSIENT_DURABILITY_QOS: Save and deliver old samples using Persistence Service (samples stored in memory). DDS_PERSISTENT_DURABILITY_QOS: Save and deliver old samples using Persistence Service (samples stored in permanent storage, such as a disk).
DDS_Boolean | direct_communication | Whether or not a TRANSIENT or PERSISTENT DataReader should receive samples directly from a TRANSIENT or PERSISTENT DataWriter. When TRUE, a TRANSIENT or PERSISTENT DataReader will receive samples directly from the original DataWriter. The DataReader may also receive samples from Persistence Service (see footnote a) but the duplicates will be filtered by the middleware. When FALSE, a TRANSIENT or PERSISTENT DataReader will receive samples only from the DataWriter created by Persistence Service. This ‘relay communication’ pattern provides a way to guarantee eventual consistency. This field only applies to DataReaders.
a. Persistence Service is included with Connext Messaging. See Chapter 26: Introduction to RTI Persistence Service.
Information durability can be combined with required subscriptions in order to guarantee that samples are delivered to a set of required subscriptions. For additional details on required subscriptions see Section 6.3.13 and Section 6.5.1.
6.5.7.1 Example
Suppose you have a DataWriter that sends data sporadically and its DURABILITY kind is set to VOLATILE. If a new DataReader joins the system, it won’t see any data until the next time that write() is called on the DataWriter. If you want the DataReader to receive any data that is valid, old or new, both sides should set their DURABILITY kind to TRANSIENT_LOCAL. This will ensure that the DataReader gets some of the previous samples immediately after it is enabled.
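The example above can be modeled with a toy in-memory cache. This sketch is purely illustrative (none of these names are RTI API identifiers); it only captures the rule that a late joiner receives history under TRANSIENT_LOCAL and nothing under VOLATILE:

```c
#include <stddef.h>

typedef enum { VOLATILE_KIND, TRANSIENT_LOCAL_KIND } DurabilityKindSketch;

/* A late-joining reader receives previously written samples only when the
 * writer's durability is (at least) TRANSIENT_LOCAL; under VOLATILE it
 * starts with nothing and waits for the next write(). */
size_t samples_for_late_joiner(DurabilityKindSketch kind,
                               size_t samples_in_writer_cache)
{
    return kind == TRANSIENT_LOCAL_KIND ? samples_in_writer_cache : 0;
}
```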
6.5.7.2 Properties
This QosPolicy cannot be modified after the Entity has been created.
The DataWriter and DataReader must use compatible settings for this QosPolicy. To be compatible, the DataWriter and DataReader must use one of the valid combinations shown in Table 6.42.
If this QosPolicy is found to be incompatible, the ON_OFFERED_INCOMPATIBLE_QOS and ON_REQUESTED_INCOMPATIBLE_QOS statuses will be modified and the corresponding
Listeners called for the DataWriter and DataReader respectively.
6.5.7.3 Related QosPolicies
❏ HISTORY QosPolicy (Section 6.5.10)
❏ RELIABILITY QosPolicy (Section 6.5.19)
Table 6.42 Valid Combinations of Durability ‘kind’
| DataReader requests: VOLATILE | DataReader requests: TRANSIENT_LOCAL | DataReader requests: TRANSIENT | DataReader requests: PERSISTENT
DataWriter offers: VOLATILE | ✔ | incompatible | incompatible | incompatible
DataWriter offers: TRANSIENT_LOCAL | ✔ | ✔ | incompatible | incompatible
DataWriter offers: TRANSIENT | ✔ | ✔ | ✔ | incompatible
DataWriter offers: PERSISTENT | ✔ | ✔ | ✔ | ✔
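Table 6.42 follows a single ordering rule: the kinds form a strength hierarchy (VOLATILE < TRANSIENT_LOCAL < TRANSIENT < PERSISTENT), and a combination is compatible exactly when the offered kind is at least as strong as the requested kind. A sketch (the enum values below are ordered for illustration and do not correspond to the actual DDS enum values):

```c
#include <stdbool.h>

typedef enum {
    VOLATILE_K = 0, TRANSIENT_LOCAL_K, TRANSIENT_K, PERSISTENT_K
} DurabilityOrder;

/* Compatible exactly when the writer offers a kind that is >= the kind
 * the reader requests, reproducing every cell of Table 6.42. */
bool durability_compatible(DurabilityOrder offered, DurabilityOrder requested)
{
    return offered >= requested;
}
```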
6.5.7.4 Applicable Entities
6.5.7.5 System Resource Considerations
Using this policy with a setting other than VOLATILE will cause Connext to use CPU and network bandwidth to send old samples to matching, newly discovered DataReaders. The actual amount of resources depends on the total size of data that needs to be sent.
The maximum number of samples that will be kept on the DataWriter’s queue for
System Resource Considerations With Required Subscriptions
By default, when TRANSIENT_LOCAL durability is used in combination with required subscriptions, a DataWriter configured with KEEP_ALL in the HISTORY QosPolicy (Section 6.5.10) will keep the samples in its cache until they are acknowledged by all the required subscriptions. After the samples are acknowledged by the required subscriptions, they will be marked as reclaimable, but they will not be purged from the DataWriter’s queue until the DataWriter needs these resources for new samples. This may lead to inefficient resource utilization, especially when max_samples is high or even UNLIMITED.
The DataWriter’s behavior can be changed to purge samples after they have been acknowledged by all the active/matching DataReaders and all the required subscriptions configured on the DataWriter. To do so, set the dds.data_writer.history.purge_samples_after_acknowledgment property to 1 (see PROPERTY QosPolicy (DDS Extension) (Section 6.5.17)).
6.5.8 DURABILITY SERVICE QosPolicy
This QosPolicy is only used if the DURABILITY QosPolicy (Section 6.5.7) is PERSISTENT or TRANSIENT and you are using Persistence Service, which is included with Connext Messaging. Persistence Service is used to store and possibly forward the data sent by the DataWriter to DataReaders who are created after the data was initially sent.
This QosPolicy configures certain parameters of Persistence Service when it operates on the behalf of the DataWriter, such as how much data to store. Specifically, this QosPolicy configures the HISTORY and RESOURCE_LIMITS used by the fictitious DataReader and DataWriter used by
Persistence Service.
Note however, that Persistence Service itself may be configured to ignore these values and instead use values from its own configuration file.
For more information, please see:
❏Chapter 12: Mechanisms for Achieving Information Durability and Persistence
❏Chapter 26: Introduction to RTI Persistence Service
❏Chapter 27: Configuring Persistence Service
This QosPolicy includes the members in Table 6.43. For default values, please refer to the API Reference HTML documentation.
Table 6.43 DDS_DurabilityServiceQosPolicy
Type | Field Name | Description
DDS_Duration_t | service_cleanup_delay | How long to keep all information regarding an instance.
DDS_HistoryQosPolicyKind | history_kind | Settings to use for the HISTORY QosPolicy (Section 6.5.10) when recouping durable data.
DDS_Long | history_depth | Settings to use for the HISTORY QosPolicy (Section 6.5.10) when recouping durable data.
DDS_Long | max_samples, max_instances, max_samples_per_instance | Settings to use for the RESOURCE_LIMITS QosPolicy (Section 6.5.20) when feeding data to a late joiner.
The service_cleanup_delay in this QosPolicy controls when Persistence Service may remove all information regarding a data instance. Information on a data instance is maintained until all of the following conditions are met:
1. The instance has been explicitly disposed (instance_state = NOT_ALIVE_DISPOSED).
2. While in the NOT_ALIVE_DISPOSED state, Connext detects that there are no more 'live' DataWriters writing the instance. That is, all existing writers either unregister the instance (call unregister) or lose their liveliness.
3. A time interval longer than the DurabilityService QosPolicy’s service_cleanup_delay has elapsed since the time that Connext detected that the previous two conditions were met.
The service_cleanup_delay field is useful in the situation where your application disposes an instance and it crashes before it has a chance to complete additional tasks related to the disposition. Upon restart, your application may ask for initial data to regain its state and the delay introduced by service_cleanup_delay will allow your restarted application to receive the information about the disposed instance and complete any interrupted tasks.
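The three conditions can be summarized as a predicate; this is an illustrative model (hypothetical names, times in seconds), not the actual Persistence Service logic:

```c
#include <stdbool.h>

/* Persistence Service may purge all information about an instance only
 * once all three conditions described above hold. */
bool may_cleanup_instance(bool disposed, int live_writers,
                          double elapsed_since_conditions_met_sec,
                          double service_cleanup_delay_sec)
{
    return disposed
        && live_writers == 0
        && elapsed_since_conditions_met_sec > service_cleanup_delay_sec;
}
```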
Although you can set the DURABILITY_SERVICE QosPolicy on a Topic, this is only useful as a means to initialize the DURABILITY_SERVICE QosPolicy of a DataWriter. A Topic’s DURABILITY_SERVICE setting does not directly affect the operation of Connext; see Section 5.1.3.
6.5.8.1 Properties
This QosPolicy cannot be modified after the Entity has been enabled.
It does not apply to DataReaders, so there is no requirement for setting it compatibly on the sending and receiving sides.
6.5.8.2 Related QosPolicies
❏ DURABILITY QosPolicy (Section 6.5.7)
6.5.8.3 Applicable Entities
6.5.8.4 System Resource Considerations
Since this QosPolicy configures the HISTORY and RESOURCE_LIMITS used by the fictitious DataReader and DataWriter used by Persistence Service, it does have some impact on resource usage.
6.5.9 ENTITY_NAME QosPolicy (DDS Extension)
The ENTITY_NAME QosPolicy assigns a name and role name to a DomainParticipant,
DataReader, or DataWriter.
How the name is used is strictly application-dependent.
It is useful to attach names that are meaningful to the user. These names are propagated during discovery so that applications can use them to identify, in a user-friendly way, the entities that they discover.
The role_name identifies the role of the entity. It is used by the Collaborative DataWriter feature (see Availability QoS Policy and Collaborative DataWriters (Section 6.5.1.1)). With Durable Subscriptions, role_name is used to specify to which Durable Subscription the DataReader belongs (see Availability QoS Policy and Required Subscriptions (Section 6.5.1.2)).
This QosPolicy contains the members listed in Table 6.44.
Table 6.44 DDS_EntityNameQoSPolicy
Type | Field Name | Description
char * | name | A null-terminated string, up to 255 characters in length.
char * | role_name | A null-terminated string, up to 255 characters in length. For Collaborative DataWriters, this name is used to specify to which endpoint group the DataWriter belongs. See Availability QoS Policy and Collaborative DataWriters (Section 6.5.1.1). For Required and Durable Subscriptions, this name is used to specify to which Subscription the DataReader belongs. See Availability QoS Policy and Required Subscriptions (Section 6.5.1.2).
These names will appear in the built-in topic data for the entity.
Prior to calling get_qos(), if the name and/or role_name field in this QosPolicy is not NULL, Connext assumes the memory to be valid and big enough, and may write to it. If that is not desired, set name and/or role_name to NULL before calling get_qos(), and Connext will allocate adequate memory for name.
When you call the destructor of entity’s QoS structure (DomainParticipantQos, DataReaderQos, or DataWriterQos) (in C++, C++/CLI, and C#) or <entity>Qos_finalize() (in C), Connext will attempt to free the memory used for name and role_name if it is not NULL. If this behavior is not desired, set name and/or role_name to NULL before you call the destructor of entity’s QoS structure or DomainParticipantQos_finalize().
6.5.9.1 Properties
This QosPolicy cannot be modified after the entity is enabled.
6.5.9.2 Related QosPolicies
❏ None
6.5.9.3 Applicable Entities
❏ DomainParticipants (Section 8.3)
6.5.9.4 System Resource Considerations
If the value of name in this QosPolicy is not NULL, some memory will be consumed to store the information in the database, but this should not significantly impact the use of resources.
6.5.10 HISTORY QosPolicy
This QosPolicy configures the number of samples that Connext will store locally for DataWriters and DataReaders. For keyed Topics, this QosPolicy applies on a per instance basis, so that Connext will attempt to store the configured value of samples for every instance (see Samples, Instances, and Keys (Section 2.2.2) for a discussion of keys and instances).
It includes the members seen in Table 6.45. For defaults and valid ranges, please refer to the API Reference HTML documentation.
Table 6.45 DDS_HistoryQosPolicy
Type | Field Name | Description
DDS_HistoryQosPolicyKind | kind | DDS_KEEP_LAST_HISTORY_QOS: keep the last depth number of samples per instance. DDS_KEEP_ALL_HISTORY_QOS: keep all samples. (See footnote a.)
DDS_Long | depth | If kind = DDS_KEEP_LAST_HISTORY_QOS, this is how many samples to keep per instance. (See footnote b.) If kind = DDS_KEEP_ALL_HISTORY_QOS, this value is ignored.
DDS_RefilterQosPolicyKind | refilter | Specifies how a DataWriter should handle previously written samples for a new DataReader. When a new DataReader matches a DataWriter, the DataWriter can be configured to filter previously written samples stored in the DataWriter queue for the new DataReader. May be: DDS_NONE_REFILTER_QOS Do not filter existing samples for a new DataReader; the DataReader will do the filtering. DDS_ALL_REFILTER_QOS Filter all existing samples for a newly matched DataReader. DDS_ON_DEMAND_REFILTER_QOS Filter existing samples only when they are requested by the DataReader. (An extension to the DDS standard.)
a. Connext will store up to the value of the max_samples_per_instance parameter of the RESOURCE_LIMITS QosPol- icy (Section 6.5.20).
b. depth must be <= max_samples_per_instance parameter of the RESOURCE_LIMITS QosPolicy (Section 6.5.20)
The kind determines whether to keep a configured number of samples or all samples. It can be set to either of the following:
❏ DDS_KEEP_LAST_HISTORY_QOS Connext attempts to keep the latest values of the data instance and discard the oldest ones when the limit as set by the depth parameter is reached; new data will overwrite the oldest data in the queue. Thus the queue acts like a circular buffer of length depth.
• For a DataWriter: Connext attempts to keep the most recent depth samples of each instance (identified by a unique key) managed by the DataWriter.
• For a DataReader: Connext attempts to keep the most recent depth samples received for each instance (identified by a unique key) until the application takes them via the DataReader's take() operation. See Section 7.4.3 for a discussion of the difference between read() and take().
❏ DDS_KEEP_ALL_HISTORY_QOS Connext attempts to keep all of the samples of a Topic.
• For a DataWriter: Connext attempts to keep all samples published by the DataWriter.
• For a DataReader: Connext attempts to keep all samples received by the DataReader for a Topic (both keyed and non-keyed).
• The value of the depth parameter is ignored.
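The KEEP_LAST "circular buffer of length depth" behavior can be modeled for a single instance as follows; this is a toy sketch with hypothetical names, not RTI code:

```c
/* Toy model of KEEP_LAST with depth N for one instance: the queue acts
 * as a circular buffer, so the (depth+1)-th write overwrites the oldest
 * sample. */
#define DEPTH 3

typedef struct {
    int samples[DEPTH];
    int count;      /* total samples ever written */
} KeepLastQueueSketch;

void keep_last_write(KeepLastQueueSketch *q, int sample)
{
    q->samples[q->count % DEPTH] = sample;  /* overwrite oldest slot */
    q->count++;
}

/* Oldest sample still held in the queue. */
int keep_last_oldest(const KeepLastQueueSketch *q)
{
    int held = q->count < DEPTH ? q->count : DEPTH;
    int first = q->count - held;            /* index of oldest write */
    return q->samples[first % DEPTH];
}
```

Writing five samples into a depth-3 queue leaves only the last three; the first two have been overwritten, exactly the "reduced reliability" trade-off described below.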
The above descriptions say “attempts to keep” because the actual number of samples kept is subject to the limitations imposed by the RESOURCE_LIMITS QosPolicy (Section 6.5.20). All of the samples of all instances of a Topic share a single physical queue that is allocated for a DataWriter or DataReader. The size of this queue is configured by the RESOURCE_LIMITS QosPolicy. If there are many different instances for a Topic, it is possible that the physical queue may run out of space before the number of samples reaches the depth for all instances.
In the KEEP_ALL case, Connext can only keep as many samples for a Topic (independent of instances) as the size of the allocated queue. Connext may or may not allocate more memory when the queue is filled, depending on the settings in the RESOURCE_LIMITS QoSPolicy of the
DataWriter or DataReader.
This QosPolicy interacts with the RELIABILITY QosPolicy (Section 6.5.19) by controlling whether or not Connext guarantees that ALL of the data sent is received or if only the last N data values sent are guaranteed to be received (a reduced level of reliability using the KEEP_LAST setting). However, the physical sizes of the send and receive queues are not controlled by the History QosPolicy. The memory allocation for the queues is controlled by the RESOURCE_LIMITS QosPolicy (Section 6.5.20). Also, the amount of data that is sent to new DataReaders who have configured their DURABILITY QosPolicy (Section 6.5.7) to receive previously published data is controlled by the History QosPolicy.
What happens when the physical queue is filled depends both on the setting for the HISTORY QosPolicy as well as the RELIABILITY QosPolicy.
❏ DDS_KEEP_LAST_HISTORY_QOS
• If RELIABILITY is BEST_EFFORT: When the number of samples for an instance in the queue reaches the value of depth, a new sample for the instance will replace the oldest sample for the instance in the queue.
• If RELIABILITY is RELIABLE: When the number of samples for an instance in the queue reaches the value of depth, a new sample for the instance will replace the oldest sample for the instance in the queue.
❏ DDS_KEEP_ALL_HISTORY_QOS
• If RELIABILITY is BEST_EFFORT: If the number of samples for an instance in the queue reaches the value of the RESOURCE_LIMITS QosPolicy (Section 6.5.20)’s max_samples_per_instance field, a new sample for the instance will replace the oldest sample for the instance in the queue.
• If RELIABILITY is RELIABLE: When the number of samples for an instance in the queue reaches the value of the RESOURCE_LIMITS QosPolicy (Section 6.5.20)’s max_samples_per_instance field, then:
a)for a
b)for a
Although you can set the HISTORY QosPolicy on Topics, its value can only be used to initialize the HISTORY QosPolicies of either a DataWriter or DataReader. It does not directly affect the operation of Connext; see Section 5.1.3.
6.5.10.1 Example
To achieve strict reliability, you must (1) set the DataWriter’s and DataReader’s HISTORY QosPolicy to KEEP_ALL, and (2) set the DataWriter’s and DataReader’s RELIABILITY QosPolicy to RELIABLE.
See Chapter 10 for a complete discussion on Connext’s reliable protocol. See Controlling Queue Depth with the History QosPolicy (Section 10.3.3).
6.5.10.2 Properties
This QosPolicy cannot be modified after the Entity has been enabled.
There is no requirement that the publishing and subscribing sides use compatible values.
6.5.10.3 Related QosPolicies
❏ BATCH QosPolicy (DDS Extension) (Section 6.5.2) Do not configure the DataReader’s depth to be shallower than the DataWriter's maximum batch size (batch_max_data_size). Because batches are acknowledged as a group, a DataReader that cannot process an entire batch will lose the remaining samples in it.
❏RELIABILITY QosPolicy (Section 6.5.19)
❏RESOURCE_LIMITS QosPolicy (Section 6.5.20)
6.5.10.4 Applicable Entities
6.5.10.5 System Resource Considerations
While this QosPolicy does not directly affect the system resources used by Connext, the RESOURCE_LIMITS QosPolicy (Section 6.5.20) that must be used in conjunction with the HISTORY QosPolicy (Section 6.5.10) will affect the amount of memory that Connext will allocate for a DataWriter or DataReader.
6.5.11 LATENCYBUDGET QoS Policy
This QosPolicy can be used by a DDS implementation to change how it processes and sends data that has low latency requirements. The DDS specification does not mandate whether or how this parameter is used. Connext uses it to prioritize the sending of asynchronously published data; see ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1).
This QosPolicy also applies to Topics. The Topic’s setting for the policy is ignored unless you explicitly make the DataWriter use it.
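Prioritizing asynchronously published data by latency budget can be thought of as earliest-deadline-first ordering, where each sample's deadline is its write time plus its latency budget. The following sketch is plain, illustrative Python; the function and sample names are invented, and Connext's actual scheduling is internal to the middleware:

```python
def send_order(pending):
    """Order pending samples earliest-deadline-first.

    Each entry is (name, write_time, latency_budget); the deadline is
    write_time + latency_budget. Illustrative model only.
    """
    return [name for _, name in
            sorted((write_time + budget, name)
                   for name, write_time, budget in pending)]

pending = [("telemetry", 0.0, 1.00),   # generous budget
           ("alarm",     0.1, 0.01)]   # tight budget -> sent first
print(send_order(pending))
```

Even though the telemetry sample was written first, the alarm sample's tighter budget gives it the earlier deadline, so it is scheduled ahead.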
It contains the single member listed in Table 6.46.
Table 6.46 DDS_LatencyBudgetQosPolicy

Type | Field Name | Description
DDS_Duration_t | duration | Provides a hint as to the maximum acceptable delay from the time the data is written to the time it is received by the subscribing applications.
6.5.11.1 Applicable Entities
6.5.12 LIFESPAN QoS Policy
The purpose of this QoS is to avoid delivering stale data to the application. Each data sample written by a DataWriter has an associated expiration time, beyond which the data should not be delivered to any application. Once the sample expires, the data will be removed from the DataReader caches, as well as from the transient and persistent information caches.
The middleware attaches timestamps to all data sent and received. The expiration time of each sample is computed by adding the duration specified by this QoS to the source timestamp. To avoid inconsistencies, if you have multiple DataWriters of the same instance, they should all use the same value for this QoS.
When you specify a finite Lifespan for your data, Connext will compare the current time with those timestamps and drop data when your specified Lifespan expires.
The Lifespan QosPolicy can be used to control how much data is stored by Connext. Even if it is configured to store "all" of the data sent or received for a topic (see the HISTORY QosPolicy (Section 6.5.10)), the total amount of data it stores may be limited by the Lifespan QosPolicy.
You may also use the Lifespan QosPolicy to ensure that applications do not receive or act on data, commands or messages that are too old and have "expired."
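The expiration rule described above, source timestamp plus the Lifespan duration, can be sketched as follows. This is illustrative Python rather than the Connext API, and it assumes the publisher and subscriber clocks are synchronized:

```python
def is_expired(source_timestamp, lifespan_duration, now):
    """A sample expires once 'now' exceeds source_timestamp + lifespan_duration."""
    return now > source_timestamp + lifespan_duration

def purge_expired(cache, lifespan_duration, now):
    """Drop expired samples from a reader-cache-like list of samples."""
    return [s for s in cache
            if not is_expired(s["ts"], lifespan_duration, now)]

cache = [{"ts": 0.0, "data": "old"}, {"ts": 9.5, "data": "fresh"}]
# With a 5-second lifespan at t=10, only the fresh sample survives.
print(purge_expired(cache, lifespan_duration=5.0, now=10.0))
```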
It includes the single member listed in Table 6.47. For default and valid range, please refer to the API Reference HTML documentation.
Although you can set the LIFESPAN QosPolicy on Topics, its value can only be used to initialize the LIFESPAN QosPolicies of DataWriters. The Topic’s setting for this QosPolicy does not directly affect the operation of Connext; see Setting Topic QosPolicies (Section 5.1.3).
Table 6.47 DDS_LifespanQosPolicy
Type | Field Name | Description
DDS_Duration_t | duration | Maximum duration for the data's validity.
6.5.12.1 Properties
This QoS policy can be modified after the entity is enabled.
It does not apply to DataReaders, so there is no requirement that the publishing and subscribing sides use compatible values.
6.5.12.2 Related QoS Policies
❏BATCH QosPolicy (DDS Extension) (Section 6.5.2) Be careful when configuring a DataWriter with a Lifespan duration shorter than the batch flush period (batch_flush_delay). If the batch does not fill up before the flush period elapses, the short duration will cause the samples to be lost without being sent.
❏DURABILITY QosPolicy (Section 6.5.7)
6.5.12.3 Applicable Entities
6.5.12.4 System Resource Considerations
The use of this policy does not significantly impact the use of resources.
6.5.13 LIVELINESS QosPolicy
The LIVELINESS QosPolicy specifies how Connext determines whether a DataWriter is “alive.” A DataWriter’s liveliness is used in combination with the OWNERSHIP QosPolicy (Section 6.5.15) to maintain ownership of an instance (note that the DEADLINE QosPolicy (Section 6.5.5) is also used to change ownership when a DataWriter is still alive). That is, for a DataWriter to own an instance, the DataWriter must still be alive as well as honoring its DEADLINE contract.
It includes the members in Table 6.48. For defaults and valid ranges, please refer to the API Reference HTML documentation.
Table 6.48 DDS_LivelinessQosPolicy
Type | Field Name | Description
DDS_LivelinessQosPolicyKind | kind | DDS_AUTOMATIC_LIVELINESS_QOS: Connext will automatically assert liveliness for the DataWriter at least as often as the lease_duration. DDS_MANUAL_BY_PARTICIPANT_LIVELINESS_QOS: The DataWriter is assumed to be alive if any Entity within the same DomainParticipant has asserted its liveliness. DDS_MANUAL_BY_TOPIC_LIVELINESS_QOS: Your application must explicitly assert the liveliness of the DataWriter within the lease_duration.
DDS_Duration_t | lease_duration | The timeout by which liveliness must be asserted for the DataWriter, or the DataWriter will be considered "inactive" or "not alive." Additionally, for DataReaders, the lease_duration also specifies the maximum period at which Connext will check to see if the matching DataWriter is still alive.
Setting a DataWriter’s kind of LIVELINESS specifies the mechanism that will be used to assert liveliness for the DataWriter. The DataWriter’s lease_duration then specifies the maximum period at which packets that indicate that the DataWriter is still alive are sent to matching DataReaders.
The various mechanisms are:
❏DDS_AUTOMATIC_LIVELINESS_QOS — The DomainParticipant is responsible for automatically sending packets to indicate that the DataWriter is alive; this will be done at least as often as required by the lease_duration. This setting is appropriate when the primary failure mode is that the publishing application itself dies. It does not cover the case in which the application is still alive but in an erroneous state.
As long as the internal threads spawned by Connext for a DomainParticipant are running, then the liveliness of the DataWriter will be asserted regardless of the state of the rest of the application.
This setting is certainly the most convenient, if the least accurate, method of asserting liveliness for a DataWriter.
❏DDS_MANUAL_BY_PARTICIPANT_LIVELINESS_QOS — Connext will assume that as long as the user application has asserted the liveliness of at least one DataWriter belonging to the same DomainParticipant or the liveliness of the DomainParticipant itself, then this DataWriter is also alive.
This setting allows the user code to control the assertion of liveliness for an entire group of DataWriters with a single operation on any of the DataWriters or their DomainParticipant. It’s a good balance between control and convenience.
❏DDS_MANUAL_BY_TOPIC_LIVELINESS_QOS — The DataWriter is considered alive only if the user application has explicitly called operations that assert the liveliness for that particular DataWriter.
This setting forces the user application to assert the liveliness for a DataWriter which gives the user application great control over when other applications can consider the DataWriter to be inactive, but at the cost of convenience.
With the MANUAL_BY_[TOPIC,PARTICIPANT] settings, user application code can assert the liveliness of DataWriters either explicitly by calling the assert_liveliness() operation on the DataWriter (or on the DomainParticipant for the MANUAL_BY_PARTICIPANT setting) or implicitly by calling write() on the DataWriter. If the application does not use either of these methods at least once every lease_duration, the subscribing application may assume that the DataWriter is no longer alive. With MANUAL_BY_TOPIC, sending data will also cause an assertion message to be sent between the DataWriter and its matched DataReaders.
Publishing applications will monitor their DataWriters to make sure that they are honoring their LIVELINESS QosPolicy by asserting their liveliness at least at the period set by the lease_duration. If Connext finds that a DataWriter has failed to have its liveliness asserted within its lease_duration, an internal thread will modify the DataWriter’s DDS_LIVELINESS_LOST_STATUS and trigger its on_liveliness_lost() DataWriterListener callback (if a listener exists); see Listeners (Section 4.4).
Setting the DataReader’s kind of LIVELINESS requests a specific mechanism for the publishing application to maintain the liveliness of DataWriters. The subscribing application may want to know that the publishing application is explicitly asserting the liveliness of the matching DataWriter rather than inferring its liveliness through the liveliness of its DomainParticipant or its sibling DataWriters.
The DataReader’s lease_duration specifies the maximum period at which matching DataWriters must have their liveliness asserted. In addition, in the subscribing application Connext uses an internal thread that wakes up at the period set by the DataReader’s lease_duration to see if the DataWriter’s lease_duration has been violated.
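The lease-based mechanism described above can be modeled in a few lines. This is illustrative Python with invented names, not the Connext API:

```python
class LivelinessMonitor:
    """Illustrative model of lease-based liveliness (not the Connext API)."""

    def __init__(self, lease_duration):
        self.lease_duration = lease_duration
        self.last_assert = {}   # writer name -> time of last assertion

    def assert_liveliness(self, writer, now):
        # Called explicitly, or implicitly whenever the writer calls write().
        self.last_assert[writer] = now

    def is_alive(self, writer, now):
        """Alive only if an assertion arrived within the last lease_duration."""
        last = self.last_assert.get(writer)
        return last is not None and (now - last) <= self.lease_duration

m = LivelinessMonitor(lease_duration=2.0)
m.assert_liveliness("dw1", now=0.0)
print(m.is_alive("dw1", now=1.5), m.is_alive("dw1", now=3.0))
```

At t=1.5 the writer is within its lease and considered alive; by t=3.0 the lease has lapsed without a new assertion, so a matching DataReader would see a liveliness change.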
When a matching DataWriter is determined to be dead (inactive), Connext will modify the DDS_LIVELINESS_CHANGED_STATUS of each matching DataReader and trigger that DataReader’s on_liveliness_changed() DataReaderListener callback (if a listener exists).
Although you can set the LIVELINESS QosPolicy on Topics, its value can only be used to initialize the LIVELINESS QosPolicies of either a DataWriter or DataReader. It does not directly affect the operation of Connext; see Setting Topic QosPolicies (Section 5.1.3).
For more information on Liveliness, see Maintaining DataWriter Liveliness for kinds AUTOMATIC and MANUAL_BY_PARTICIPANT (Section 14.3.1.2).
6.5.13.1 Example
You can use LIVELINESS QosPolicy during system integration to ensure that applications have been coded to meet design specifications. You can also use it during run time to detect when systems are performing outside of design specifications. Receiving applications can take appropriate actions in response to disconnected DataWriters.
The LIVELINESS QosPolicy can also be used together with the OWNERSHIP QosPolicy (Section 6.5.15) to manage fail-over among redundant DataWriters.
6.5.13.2 Properties
This QosPolicy cannot be modified after the Entity has been enabled.
The DataWriter and DataReader must use compatible settings for this QosPolicy. To be compatible, both of the following conditions must be true:
1. The DataWriter and DataReader must use one of the valid combinations shown in Table 6.49.
2. The DataWriter’s lease_duration <= the DataReader’s lease_duration.
If this QosPolicy is found to be incompatible, the ON_OFFERED_INCOMPATIBLE_QOS and ON_REQUESTED_INCOMPATIBLE_QOS statuses will be modified and the corresponding Listeners called for the DataWriter and DataReader, respectively.
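The two compatibility conditions can be expressed as a simple predicate. The sketch below is illustrative Python; it encodes Table 6.49 by ordering the kinds from least strict (AUTOMATIC) to most strict (MANUAL_BY_TOPIC):

```python
# Higher level = stricter liveliness guarantee offered/requested.
LEVEL = {"AUTOMATIC": 0, "MANUAL_BY_PARTICIPANT": 1, "MANUAL_BY_TOPIC": 2}

def liveliness_compatible(writer_kind, writer_lease, reader_kind, reader_lease):
    """Both conditions from Section 6.5.13.2: the offered kind must be at
    least as strict as the requested kind (Table 6.49), and the DataWriter's
    lease_duration must not exceed the DataReader's."""
    return (LEVEL[writer_kind] >= LEVEL[reader_kind]
            and writer_lease <= reader_lease)

print(liveliness_compatible("AUTOMATIC", 1.0, "MANUAL_BY_TOPIC", 2.0))   # False
print(liveliness_compatible("MANUAL_BY_TOPIC", 1.0, "AUTOMATIC", 2.0))   # True
```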
Table 6.49 Valid Combinations of Liveliness ‘kind’
                        DataReader requests:
                        MANUAL_BY_TOPIC | MANUAL_BY_PARTICIPANT | AUTOMATIC
DataWriter offers:
MANUAL_BY_TOPIC         | ✔            | ✔                     | ✔
MANUAL_BY_PARTICIPANT   | incompatible | ✔                     | ✔
AUTOMATIC               | incompatible | incompatible          | ✔
6.5.13.3 Related QosPolicies
❏DEADLINE QosPolicy (Section 6.5.5)
6.5.13.4 Applicable Entities
6.5.13.5 System Resource Considerations
An internal thread in Connext will wake up periodically to check the liveliness of all the DataWriters. This happens both in the application that contains the DataWriters, at the lease_duration set on the DataWriters, and in the applications that contain the DataReaders, at the lease_duration set on the DataReaders. Therefore, as lease_duration becomes smaller, more CPU will be used to wake up threads and perform checks. A short lease_duration set on DataWriters may also use more network bandwidth, because liveliness packets are being sent at a higher rate.
6.5.14 MULTI_CHANNEL QosPolicy (DDS Extension)
This QosPolicy is used to partition the data published by a DataWriter across multiple channels. A channel is defined by a filter expression and a sequence of multicast locators.
By using this QosPolicy, a DataWriter can be configured to send data to different multicast groups based on the content of the data. Using syntax similar to that used in ContentFilteredTopics, each channel’s filter expression determines which data samples are sent on that channel.
See Chapter 18 for complete documentation on multi-channel DataWriters.
Note: Durable writer history is not supported for multi-channel DataWriters.
This QosPolicy includes the members presented in Table 6.50, Table 6.51, and Table 6.52. For defaults and valid ranges, please refer to the API Reference HTML documentation.
Table 6.50 DDS_MultiChannelQosPolicy

Type | Field Name | Description
DDS_ChannelSettingsSeq | channels | A sequence of channel settings used to configure the channels’ properties. If the length of the sequence is zero, the QosPolicy will be ignored. See Table 6.51.
char * | filter_name | Name of the filter class used to describe the filter expressions. The following values are supported: DDS_SQLFILTER_NAME (a) (see Section 5.4.6) and DDS_STRINGMATCHFILTER_NAME (a) (see Section 5.4.7).

a. In Java and C#, you can access the names of the built-in filters through the DomainParticipant class.
The format of the filter_expression should correspond to one of the following filter classes:
❏ DDS_SQLFILTER_NAME (see SQL Filter Expression Notation (Section 5.4.6))
Table 6.51 DDS_ChannelSettings_t

Type | Field Name | Description
DDS_MulticastSettingsSeq | multicast_settings | A sequence of multicast settings used to configure the multicast addresses associated with a channel. The sequence cannot be empty. The maximum number of multicast locators in a channel is limited to four. (A locator is defined by a transport alias, a multicast address and a port.) See Table 6.52.
char * | filter_expression | A logical expression used to determine the data that will be published in the channel. This string cannot be NULL. An empty string always evaluates to TRUE. See SQL Filter Expression Notation (Section 5.4.6) and STRINGMATCH Filter Expression Notation (Section 5.4.7) for expression syntax.
DDS_Long | priority | A positive integer designating the relative priority of the channel, used to determine the transmission order of pending transmissions. Larger numbers have higher priority. To use publication priorities, the DataWriter’s PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) must be set for asynchronous publishing and the DataWriter must use a FlowController that is configured for highest-priority-first (HPF) scheduling. Note: Prioritized samples are not supported when using the Java, Ada, or .NET APIs. Therefore the priority field does not exist when using these APIs.

Table 6.52 DDS_MulticastSettings

Type | Field Name | Description
DDS_StringSeq | transports | A sequence of transport aliases that specifies which transport should be used to publish multicast messages for this channel.
char * | receive_address | A multicast group address on which DataReaders subscribing to this channel will receive data.
DDS_Long | receive_port | The multicast port on which DataReaders subscribing to this channel will receive data.
❏DDS_STRINGMATCHFILTER_NAME (see STRINGMATCH Filter Expression Notation (Section 5.4.7))
A DataReader can use the ContentFilteredTopic API (see Using a ContentFilteredTopic (Section 5.4.5)) to subscribe to a subset of the channels used by a DataWriter.
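Channel selection can be sketched as follows. This is illustrative Python in which plain predicates stand in for the SQL/STRINGMATCH filter expressions; all names are invented for the sketch:

```python
def route(sample, channels):
    """Return the multicast addresses of every channel whose filter matches
    the sample. Illustrative model of multi-channel routing, not the
    Connext API."""
    return [c["address"] for c in channels if c["filter"](sample)]

channels = [
    {"address": "239.255.1.1", "filter": lambda s: s["symbol"].startswith("A")},
    {"address": "239.255.1.2", "filter": lambda s: s["symbol"].startswith("B")},
]
print(route({"symbol": "AAPL", "price": 10}, channels))
```

A sample whose content matches no channel filter is simply not published on any multicast address, which is why an empty filter_expression (always TRUE) is a useful catch-all channel.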
6.5.14.1 Example
See Chapter 18: Multi-channel DataWriters.
6.5.14.2 Properties
This QosPolicy cannot be modified after the DataWriter is created.
It does not apply to DataReaders, so there is no requirement that the publishing and subscribing sides use compatible values.
6.5.14.3 Related QosPolicies
❏DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4)
6.5.14.4 Applicable Entities
6.5.14.5 System Resource Considerations
The following fields in the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4) configure the resources associated with the channels stored in the MULTI_CHANNEL QosPolicy:
❏channel_seq_max_length
❏channel_filter_expression_max_length
For information about partitioning topic data across multiple channels, please refer to Chapter 18: Multi-channel DataWriters.
6.5.15 OWNERSHIP QosPolicy
The OWNERSHIP QosPolicy specifies whether a DataReader receives data for an instance of a Topic sent by multiple DataWriters.
For non-keyed Topics, there is only one instance of the Topic.
Table 6.53 DDS_OwnershipQosPolicy
Type | Field Name | Description
DDS_OwnershipQosPolicyKind | kind | DDS_SHARED_OWNERSHIP_QOS or DDS_EXCLUSIVE_OWNERSHIP_QOS
The kind of OWNERSHIP can be set to one of two values:
❏SHARED Ownership
When OWNERSHIP is SHARED and multiple DataWriters for the Topic publish the value of the same instance, all the updates are delivered to subscribing DataReaders. So in effect, there is no “owner”; no single DataWriter is responsible for updating the value of an instance. The subscribing application will receive modifications from all DataWriters.
❏EXCLUSIVE Ownership
When OWNERSHIP is EXCLUSIVE, each instance can only be owned by one DataWriter at a time. This means that a single DataWriter is identified as the exclusive owner whose updates are allowed to modify the value of the instance for matching DataReaders. Other DataWriters may submit modifications for the instance, but only those made by the current owner are passed on to the DataReaders.
This QosPolicy is often used to help users build systems that have redundant elements to safeguard against component or application failures. When systems have active and hot standby components, the Ownership QosPolicy can be used to ensure that data from standby applications are only delivered in the case of the failure of the primary.
The Ownership QosPolicy can also be used to create data channels or topics that are designed to be taken over by external applications for testing or maintenance purposes.
Although you can set the OWNERSHIP QosPolicy on Topics, its value can only be used to initialize the OWNERSHIP QosPolicies of either a DataWriter or DataReader. It does not directly affect the operation of Connext; see Setting Topic QosPolicies (Section 5.1.3).
6.5.15.1 How Connext Selects which DataWriter is the Exclusive Owner
When OWNERSHIP is EXCLUSIVE, the owner of an instance at any given time is the DataWriter with the highest OWNERSHIP_STRENGTH QosPolicy (Section 6.5.16) that is “alive” (as defined by the LIVELINESS QosPolicy (Section 6.5.13)) and has not violated the DEADLINE QosPolicy (Section 6.5.5) of the DataReader. OWNERSHIP_STRENGTH is simply an integer set by the DataWriter.
As mentioned before, if the Topic’s data type is keyed (see Section 2.2.2), then EXCLUSIVE ownership is determined on a per-instance basis; that is, each instance may be owned by a different DataWriter.
If there are multiple DataWriters with the same OWNERSHIP_STRENGTH writing to the same instance, Connext resolves the tie by choosing the DataWriter with the smallest GUID (Globally Unique Identifier, see Section 14.1.1). This means that different DataReaders (in different applications) of the same Topic will all choose the same DataWriter as the owner when there are multiple DataWriters with the same strength.
The owner of an instance can change when:
❏A DataWriter with a higher OWNERSHIP_STRENGTH publishes a value for the instance.
❏The OWNERSHIP_STRENGTH of the owning DataWriter is dynamically changed to be less than the strength of an existing DataWriter of the instance.
❏The owning DataWriter stops asserting its LIVELINESS (the DataWriter dies).
❏The owning DataWriter violates the DEADLINE QosPolicy by not updating the value of the instance within the period set by the DEADLINE.
Note however, the change of ownership is not synchronous across different DataReaders in different participants. That is, DataReaders in different applications may not determine that the ownership of an instance has changed at exactly the same time.
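The owner-selection rules of Section 6.5.15.1 can be condensed into a short model. This is illustrative Python, not the Connext API; the dictionary fields are invented stand-ins for liveliness, deadline status, strength, and GUID:

```python
def select_owner(writers):
    """Pick the exclusive owner: among writers that are alive and honoring
    their deadline, choose the highest strength, breaking ties with the
    smallest GUID. Illustrative model of Section 6.5.15.1."""
    candidates = [w for w in writers if w["alive"] and w["deadline_ok"]]
    if not candidates:
        return None
    # Sort key: highest strength first, then smallest GUID for ties.
    return min(candidates, key=lambda w: (-w["strength"], w["guid"]))["guid"]

writers = [
    {"guid": "aa", "strength": 10, "alive": True,  "deadline_ok": True},
    {"guid": "bb", "strength": 10, "alive": True,  "deadline_ok": True},
    {"guid": "cc", "strength": 20, "alive": False, "deadline_ok": True},
]
print(select_owner(writers))   # "cc" is dead; tie at strength 10 -> smallest GUID
```

Because every DataReader applies the same deterministic rule, all readers agree on the owner, although (as noted above) not necessarily at exactly the same instant.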
6.5.15.2 Example
OWNERSHIP is really a property that is shared between DataReaders and DataWriters of a Topic. However, in a system, some Topics will be exclusively owned and others will be shared. System requirements will determine which are which.
An example of a Topic that may be shared is one that is used by applications to publish alarm messages. If the application detects an anomalous condition, it will use a DataWriter to write a Topic “Alarm.” Another application that records alarms into a system log file will have a DataReader that subscribes to “Alarm.” In this example, any number of applications can publish the “Alarm” message. There is no concept that only one application at a time is allowed to publish the “Alarm” message, so in this case, the OWNERSHIP of the DataWriters and DataReaders should be set to SHARED.
In a different part of the system, EXCLUSIVE OWNERSHIP may be used to implement redundancy in support of fault tolerance. Say, the distributed system controls a traffic system. It monitors traffic and changes the information posted on signs, the operation of metering lights, and the timing of traffic lights. This system must be tolerant to failure of any part of the system including the application that actually issues commands to change the lights at a particular intersection.
One way to implement fault tolerance is to create the system redundantly, both in hardware and software. So if a piece of the running system fails, a backup can take over. In systems where failover from the primary to backup system must be seamless and transparent, the actual mechanics of failover must be fast, and the redundant component must immediately pick up where the failed component left off. For the network connections of the component, Connext can provide redundant DataWriters and DataReaders.
In this case, you would not want the DataReaders to receive redundant messages from the redundant DataWriters. Instead you will want the DataReaders to only receive messages from the primary application and only from a backup application when a failure occurs. To continue our example, if we have redundant applications that all try to control the lights at an intersection, we would want the DataReaders on the light to receive messages only from the primary application. To do so, we should configure the DataWriters and DataReaders to have EXCLUSIVE OWNERSHIP and set the OWNERSHIP_STRENGTH differently on different redundant applications to distinguish between primary and backup systems.
6.5.15.3 Properties
This QosPolicy cannot be modified after the Entity is enabled.
It must be set to the same kind on both the publishing and subscribing sides. If a DataWriter and DataReader of the same topic are found to have different kinds set for the OWNERSHIP QoS, the ON_OFFERED_INCOMPATIBLE_QOS and ON_REQUESTED_INCOMPATIBLE_QOS statuses will be modified and the corresponding Listeners called for the DataWriter and DataReader, respectively.
6.5.15.4 Related QosPolicies
❏DEADLINE QosPolicy (Section 6.5.5)
❏LIVELINESS QosPolicy (Section 6.5.13)
❏OWNERSHIP_STRENGTH QosPolicy (Section 6.5.16)
6.5.15.5 Applicable Entities
6.5.15.6 System Resource Considerations
This QosPolicy does not significantly impact the use of system resources.
6.5.16 OWNERSHIP_STRENGTH QosPolicy
The OWNERSHIP_STRENGTH QosPolicy is used to rank DataWriters of the same instance of a Topic, so that Connext can decide which DataWriter will have ownership of the instance when the OWNERSHIP QosPolicy (Section 6.5.15) is set to EXCLUSIVE.
It includes the member in Table 6.54. For the default and valid range, please refer to the API Reference HTML documentation.
Table 6.54 DDS_OwnershipStrengthQosPolicy
Type | Field Name | Description
DDS_Long | value | The strength value used to arbitrate among multiple DataWriters.
This QosPolicy only applies to DataWriters when EXCLUSIVE OWNERSHIP is used. The strength is simply an integer value, and the DataWriter with the largest value is the owner. A deterministic method is used to decide which DataWriter is the owner when there are multiple DataWriters that have equal strengths. See Section 6.5.15.1 for more details.
6.5.16.1 Example
Suppose there are two DataWriters sending samples of the same Topic instance, one as the main DataWriter and the other as a backup. If you want to make sure the DataReader always receives samples from the main one whenever possible, set the main DataWriter to use a higher ownership_strength value than the one used by the backup DataWriter.
6.5.16.2 Properties
This QosPolicy can be changed at any time.
It does not apply to DataReaders, so there is no requirement that the publishing and subscribing sides use compatible values.
6.5.16.3 Related QosPolicies
❏OWNERSHIP QosPolicy (Section 6.5.15)
6.5.16.4 Applicable Entities
6.5.16.5 System Resource Considerations
The use of this policy does not significantly impact the use of resources.
6.5.17 PROPERTY QosPolicy (DDS Extension)
The PROPERTY QosPolicy stores name/value (string) pairs that can be used to configure certain parameters of Connext that are not exposed through formal QoS policies.
It can also be used to store and propagate application-specific name/value pairs that can be retrieved by user code during discovery. This is similar to the USER_DATA QosPolicy, except that this policy uses (name, value) pairs, and you can select whether or not a particular pair should be propagated (included in the entity’s discovery information).
It includes the member in Table 6.55.
Table 6.55 DDS_PropertyQosPolicy

Type | Field Name | Description
DDS_PropertySeq | value | A sequence of (name, value) pairs and booleans that indicate whether the pair should be propagated (included in the entity’s discovery information).
The Property QoS stores name/value pairs for an Entity. Both the name and value are strings. Certain configurable parameters for Entities that do not have a formal DDS QoS definition may be configured via this QoS by using a pre-defined property name and a value expressed as a string.
You can manipulate the sequence of properties (name, value pairs) with the standard methods available for sequences. You can also use the helper class, DDSPropertyQosPolicyHelper, which provides another way to work with a PropertyQosPolicy object.
The PropertyQosPolicy may be used to configure:
❏Durable writer history (see Section 12.3.2)
❏Durable reader state (see Section 12.4.4)
❏Built-in and extension transport plugins
❏Automatic registration of built-in types
❏Clock Selection (Section 8.6)
In addition, you can add your own name/value pairs to the Property QoS of an Entity. You may also use this QosPolicy to direct Connext to propagate these name/value pairs with the discovery information for the Entity. Applications that discover the Entity can then access the
Reasons for using the PropertyQosPolicy include:
❏Some features can only be configured through the PropertyQosPolicy, not through other QoS policies or APIs. Examples include Durable Reader State and Durable Writer History.
❏Alternative way to configure built-in transports.
•Note: When using the Java or .NET APIs, transport configuration must take place through the PropertyQosPolicy (not through the transport property structures).
❏Alternative way to support multiple instances of built-in transports.
❏Alternative way to dynamically load extension transports (such as RTI Secure WAN Transport1 or RTI TCP Transport2) or user-created transport plugins.
❏Allows full pluggable transport configuration for non-C/C++ language bindings (such as Java or .NET).
The PropertyQosPolicyHelper operations are described in Table 6.56. For more information, see the API Reference HTML documentation.
Table 6.56 PropertyQoSPolicyHelper Operations
Operation | Description
get_number_of_properties | Gets the number of properties in the input policy.
assert_property | Asserts the property identified by name in the input policy. (Either adds it, or replaces an existing one.)
add_property | Adds a new property to the input policy.
lookup_property | Searches for a property in the input policy given its name.
remove_property | Removes a property from the input policy.
get_properties | Retrieves a list of properties whose names match the input prefix.
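The helper operations in Table 6.56 map naturally onto a dictionary. The following sketch is plain Python that mirrors their semantics; it is not the actual DDSPropertyQosPolicyHelper, and the example property name is only a plausible transport-tuning key:

```python
class PropertyPolicy:
    """Dictionary-backed sketch of the PropertyQosPolicyHelper operations
    (names mirror Table 6.56; not the Connext implementation)."""

    def __init__(self):
        self._props = {}          # name -> (value, propagate)

    def get_number_of_properties(self):
        return len(self._props)

    def assert_property(self, name, value, propagate=False):
        self._props[name] = (value, propagate)   # add or replace

    def add_property(self, name, value, propagate=False):
        if name in self._props:                   # add_property never replaces
            raise ValueError("property already exists: " + name)
        self._props[name] = (value, propagate)

    def lookup_property(self, name):
        return self._props.get(name)

    def remove_property(self, name):
        self._props.pop(name, None)

    def get_properties(self, prefix):
        return {n: v for n, v in self._props.items() if n.startswith(prefix)}

p = PropertyPolicy()
p.add_property("dds.transport.example.buffer_size", "65536")     # hypothetical name
p.assert_property("dds.transport.example.buffer_size", "131072")  # replaces value
print(p.get_number_of_properties())
```

The add/assert distinction is the point of the sketch: add_property fails on an existing name, while assert_property silently replaces it.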
6.5.17.1 Properties
This QosPolicy can be changed at any time.
There is no requirement that the publishing and subscribing sides use compatible values.
6.5.17.2 Related QosPolicies
❏DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4)
6.5.17.3 Applicable Entities
❏DomainParticipants (Section 8.3)
1. RTI Secure WAN Transport is an optional package available for separate purchase.
2. RTI TCP Transport is included with your Connext distribution but is not a built-in transport.
6.5.17.4 System Resource Considerations
The DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4) contains several fields for configuring the resources associated with the properties stored in this QosPolicy.
6.5.18 PUBLISH_MODE QosPolicy (DDS Extension)
This QosPolicy determines the DataWriter’s publishing mode, either asynchronous or synchronous.
The publishing mode controls whether data is written synchronously, in the context of the user thread when calling write(), or asynchronously, in the context of a separate thread internal to Connext.
Note: Asynchronous DataWriters do not perform sender-side filtering; any filtering, such as time-based or content-based filtering, takes place on the DataReader side.
Each Publisher spawns a single asynchronous publishing thread (set in its ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1)) to serve all its asynchronous DataWriters.
When data is written asynchronously, a FlowController (Section 6.6), identified by flow_controller_name, can be used to shape the network traffic. The FlowController's properties determine when the asynchronous publishing thread is allowed to send data and how much.
The fastest way for Connext to send data is for the user thread to execute the middleware code that actually sends the data itself. However, there are times when user applications may need or want an internal middleware thread to send the data instead. For instance, for sending large data reliably, an asynchronous thread must be used (see ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1)).
This QosPolicy can select a FlowController to prioritize or shape the data flow sent by a DataWriter to DataReaders. Shaping a data flow usually means limiting the maximum data rates with which the middleware will send data for a DataWriter. The FlowController will buffer data sent faster than the maximum rate by the DataWriter, and then only send the excess data when the user send rate drops below the maximum rate.
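The shaping behavior described above can be sketched as a periodic release of queued samples. This is illustrative Python only; a real FlowController schedules by bytes and time using its token-bucket properties, so this captures just the idea of queuing the excess and draining it later:

```python
def shape(samples, max_per_period, periods):
    """Release at most max_per_period samples each period; the excess stays
    queued for later periods. Illustrative sketch of flow shaping."""
    queue = list(samples)
    sent_per_period = []
    for _ in range(periods):
        burst, queue = queue[:max_per_period], queue[max_per_period:]
        sent_per_period.append(burst)
    return sent_per_period

# A burst of five samples drained at two per period:
print(shape(["s1", "s2", "s3", "s4", "s5"], max_per_period=2, periods=3))
```

A burst written faster than the configured rate is not dropped; it simply drains over subsequent periods, which is exactly the trade of latency for bounded network load that asynchronous publishing offers.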
This QosPolicy includes the members in Table 6.57. For the defaults, please refer to the API Reference HTML documentation.
Table 6.57 DDS_PublishModeQosPolicy

Type | Field Name | Description
DDS_PublishModeQosPolicyKind | kind | Either: DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS or DDS_SYNCHRONOUS_PUBLISH_MODE_QOS
char * | flow_controller_name | Name of the associated flow controller. There are three built-in FlowControllers: DDS_DEFAULT_FLOW_CONTROLLER_NAME, DDS_FIXED_RATE_FLOW_CONTROLLER_NAME, and DDS_ON_DEMAND_FLOW_CONTROLLER_NAME. You may also create your own FlowControllers.
DDS_Long | priority | A positive integer designating the relative priority of the DataWriter, used to determine the transmission order of pending writes. To use publication priorities, this QosPolicy’s kind must be DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS and the DataWriter must use a FlowController with a highest-priority-first (HPF) scheduling_policy. Note: Prioritized samples are not supported when using the Java, Ada, or .NET APIs. Therefore the priority field does not exist when using these APIs.
The maximum number of samples that will be coalesced depends on NDDS_Transport_Property_t::gather_send_buffer_count_max (each sample requires at least 2-4 gather-send buffers). Performance can be improved by increasing NDDS_Transport_Property_t::gather_send_buffer_count_max. Note that the maximum value is operating system dependent.
Connext queues samples until they can be sent by the asynchronous publishing thread (as determined by the corresponding FlowController).
The number of samples that will be queued is determined by the HISTORY QosPolicy (Section 6.5.10): when using KEEP_LAST, the most recent depth samples are kept in the queue.
Once unsent samples are removed from the queue, they are no longer available to the asynchronous publishing thread and will therefore never be sent.
Unless flow_controller_name points to one of the built-in FlowControllers, finalizing the DataWriterQos will also free the string pointed to by flow_controller_name.
Advantages of Asynchronous Publishing:
Asynchronous publishing may increase latency, but offers the following advantages:
❏The write() call does not make any network calls and is therefore faster and more deterministic. This becomes important when the user thread is executing time-critical code.
❏When data is written in bursts or when sending large data types as multiple fragments, a flow controller can throttle the send rate of the asynchronous publishing thread to avoid flooding the network.
❏Asynchronously written samples for the same destination will be coalesced into a single network packet which reduces bandwidth consumption.
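As a sketch, the same settings can be expressed in a QoS XML profile. This example assumes the usual Connext convention that XML tag names mirror the QoS field names in Table 6.57 (the enclosing qos_profile and library elements are omitted), and the choice of the built-in fixed-rate FlowController is purely illustrative:

```xml
<datawriter_qos>
    <publish_mode>
        <!-- Send samples from the asynchronous publishing thread,
             paced by the named FlowController -->
        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
        <flow_controller_name>DDS_FIXED_RATE_FLOW_CONTROLLER_NAME</flow_controller_name>
    </publish_mode>
</datawriter_qos>
```

With this profile, write() returns immediately and the fixed-rate FlowController paces the actual network sends.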
6.5.18.1 Properties
This QosPolicy cannot be modified after the Publisher is created.
Since it is only for DataWriters, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.
6.5.18.2 Related QosPolicies
❏ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) (Section 6.4.1)
❏HISTORY QosPolicy (Section 6.5.10)
6.5.18.3 Applicable Entities
6.5.18.4 System Resource Considerations
See Configuring Resource Limits for Asynchronous DataWriters (Section 6.5.20.1).
System resource usage depends on the settings in the corresponding FlowController (see Section 6.6).
6.5.19 RELIABILITY QosPolicy
This RELIABILITY QosPolicy determines whether or not data published by a DataWriter will be reliably delivered by Connext to matching DataReaders. The reliability protocol used by Connext is discussed in Chapter 10: Reliable Communications.
The reliability of a connection between a DataWriter and DataReader is entirely user configurable, on a per-DataWriter/DataReader-connection basis. A connection may be configured to be "best effort", which means that Connext will not use any resources to monitor or guarantee that the data sent by a DataWriter is received by a DataReader.
For some use cases, such as the periodic update of sensor values to a GUI displaying the value to a person, "best effort" delivery is often good enough. It is certainly the fastest, most efficient, and least memory-intensive method of delivering data, since no sent data needs to be stored for possible retransmission.
However, there are data streams (topics) in which you want an absolute guarantee that all data sent by a DataWriter is received reliably by DataReaders. This means that Connext must check whether or not data was received, and repair any data that was lost by resending a copy of the data as many times as it takes for the DataReader to receive the data.
Connext uses a reliability protocol configured and tuned by these QoS policies:
❏HISTORY QosPolicy (Section 6.5.10),
❏DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) (Section 6.5.3),
❏DATA_READER_PROTOCOL QosPolicy (DDS Extension) (Section 7.6.1),
❏RESOURCE_LIMITS QosPolicy (Section 6.5.20)
The Reliability QoS policy is simply a switch to turn on the reliability protocol for a DataWriter/ DataReader connection. The level of reliability provided by Connext is determined by the configuration of the aforementioned QoS policies.
You can configure Connext to deliver ALL data in the order it was sent (also known as absolute or strict reliability). Or, as a trade-off for less memory, CPU, and network usage, you can choose a reduced level of reliability where only the last N values are guaranteed to be delivered reliably to DataReaders (where N is user-configurable).
It includes the members in Table 6.58. For defaults and valid ranges, please refer to the API Reference HTML documentation.
Table 6.58 DDS_ReliabilityQosPolicy

| Type | Field Name | Description |
|------|------------|-------------|
| DDS_ReliabilityQosPolicyKind | kind | Can be either: • DDS_BEST_EFFORT_RELIABILITY_QOS: Data samples are sent once and missed samples are acceptable. • DDS_RELIABLE_RELIABILITY_QOS: Connext will make sure that data sent is received and missed samples are resent. |
| DDS_Duration_t | max_blocking_time | How long a DataWriter can block on a write() when the send queue is full due to unacknowledged messages. (Has no meaning for DataReaders.) |
| DDS_ReliabilityQosPolicyAcknowledgmentModeKind | acknowledgment_kind | Kind of reliable acknowledgment. Only applies when kind is RELIABLE. Sets the kind of acknowledgments supported by a DataWriter and sent by a DataReader. Possible values: • DDS_PROTOCOL_ACKNOWLEDGMENT_MODE • DDS_APPLICATION_AUTO_ACKNOWLEDGMENT_MODE • DDS_APPLICATION_EXPLICIT_ACKNOWLEDGMENT_MODE |
The kind of RELIABILITY can be either:
❏BEST_EFFORT Connext will send data samples only once to DataReaders. No effort or resources are spent to track whether or not sent samples are received. Minimal resources are used. This is the most deterministic method of sending data since there is no indeterministic delay that can be introduced by buffering or resending data. Data samples may be lost. This setting is good for periodic data.
❏RELIABLE Connext will send samples reliably to DataReaders: sent samples are buffered by the DataWriter until they have been acknowledged, and lost samples are resent. DataReaders can therefore expect to receive every sample sent, within the limits imposed by the HISTORY and RESOURCE_LIMITS QosPolicies.
To send large data reliably, you will also need to set the PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) kind to DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS. Large in this context means that the data cannot be sent as a single packet by a transport (for example, data larger than 63K when using UDP/IP).
While a DataWriter sends data reliably, the HISTORY QosPolicy (Section 6.5.10) and RESOURCE_LIMITS QosPolicy (Section 6.5.20) determine how many samples can be stored while waiting for acknowledgements from DataReaders. A sample that is sent reliably is entered in the DataWriter’s send queue awaiting acknowledgement from DataReaders. How many samples the DataWriter is allowed to store in the send queue for each instance depends on the kind of the HISTORY QosPolicy, as described below.
If the HISTORY kind is KEEP_LAST, then the DataWriter is allowed to have the HISTORY depth number of samples per instance of the Topic in the send queue. Should the number of unacknowledged samples in the send queue for an instance exceed the HISTORY depth, the oldest sample for that instance is replaced by the new sample, even though it has not yet been acknowledged.
However, if the HISTORY kind is KEEP_ALL, then when the send queue is filled with unacknowledged samples (either because the number of unacknowledged samples for an instance has reached the RESOURCE_LIMITS max_samples_per_instance value or because the total number of unacknowledged samples has reached the size of the send queue as specified by RESOURCE_LIMITS max_samples), the next write() operation on the DataWriter will block until either a sample in the queue has been fully acknowledged by DataReaders (and thus can be overwritten) or a timeout of RELIABILITY max_blocking_time has been reached.
If there is still no space in the queue when max_blocking_time is reached, the write() call will return a failure with the error code DDS_RETCODE_TIMEOUT.
Thus, for strict reliability—a guarantee that all data samples sent are received—you should set the RELIABILITY kind to RELIABLE and the HISTORY kind to KEEP_ALL for both the DataWriter and the DataReader.
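The strict-reliability configuration just described can be sketched in a QoS XML profile. Tag names are assumed to mirror the QoS field names, per the usual Connext convention, and the one-second max_blocking_time is an arbitrary example value:

```xml
<datawriter_qos>
    <reliability>
        <kind>RELIABLE_RELIABILITY_QOS</kind>
        <!-- How long write() may block when the send queue is full
             of unacknowledged samples -->
        <max_blocking_time>
            <sec>1</sec>
            <nanosec>0</nanosec>
        </max_blocking_time>
    </reliability>
    <history>
        <kind>KEEP_ALL_HISTORY_QOS</kind>
    </history>
</datawriter_qos>
```

A matching DataReader would request RELIABLE_RELIABILITY_QOS and KEEP_ALL_HISTORY_QOS in its datareader_qos to obtain strict end-to-end reliability.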
Although you can set the RELIABILITY QosPolicy on Topics, its value can only be used to initialize the RELIABILITY QosPolicies of either a DataWriter or DataReader. It does not directly affect the operation of Connext; see Section 5.1.3.
6.5.19.1 Example
This QosPolicy is used to achieve reliable communications, which is discussed in Chapter 10: Reliable Communications and Section 10.3.1.
6.5.19.2 Properties
This QosPolicy cannot be modified after the Entity has been enabled.
The DataWriter and DataReader must use compatible settings for this QosPolicy. To be compatible, the DataWriter and DataReader must use one of the valid combinations for the Reliability kind (see Table 6.59), and one of the valid combinations for the acknowledgment_kind (see Table 6.60):
Table 6.59 Valid Combinations of Reliability ‘kind’

| | DataReader requests: BEST_EFFORT | DataReader requests: RELIABLE |
|---|---|---|
| DataWriter offers: BEST_EFFORT | ✔ | incompatible |
| DataWriter offers: RELIABLE | ✔ | ✔ |
Table 6.60 Valid Combinations of Reliability ‘acknowledgment_kind’

| | DataReader requests: PROTOCOL | DataReader requests: APPLICATION_AUTO | DataReader requests: APPLICATION_EXPLICIT |
|---|---|---|---|
| DataWriter offers: PROTOCOL | ✔ | incompatible | incompatible |
| DataWriter offers: APPLICATION_AUTO | ✔ | ✔ | ✔ |
| DataWriter offers: APPLICATION_EXPLICIT | ✔ | ✔ | ✔ |
If this QosPolicy is found to be incompatible, statuses ON_OFFERED_INCOMPATIBLE_QOS and ON_REQUESTED_INCOMPATIBLE_QOS will be modified and the corresponding
Listeners called for the DataWriter and DataReader, respectively.
There are no compatibility issues regarding the value of max_blocking_time, since it does not apply to DataReaders.
6.5.19.3 Related QosPolicies
❏HISTORY QosPolicy (Section 6.5.10)
❏PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18)
❏RESOURCE_LIMITS QosPolicy (Section 6.5.20)
6.5.19.4 Applicable Entities
6.5.19.5 System Resource Considerations
Setting the kind to RELIABLE will cause Connext to use up more resources to monitor and maintain a reliable connection between a DataWriter and all of its reliable DataReaders. This includes the use of extra CPU and network bandwidth to send and process heartbeat, ACK/ NACK, and repair packets (see Chapter 10: Reliable Communications).
Setting max_blocking_time to a non-zero value means that the thread calling write() may block for up to that amount of time when the send queue is full.
6.5.20 RESOURCE_LIMITS QosPolicy
For the reliability protocol (and the DURABILITY QosPolicy (Section 6.5.7)), this QosPolicy determines the actual maximum queue size when the HISTORY QosPolicy (Section 6.5.10) is set to KEEP_ALL.
In general, this QosPolicy is used to limit the amount of system memory that Connext can allocate. For embedded real-time systems and safety-critical systems, pre-determination of maximum memory usage is often a requirement. In addition, dynamic memory allocation could introduce non-deterministic latencies in time-critical paths.
This QosPolicy can be set such that an entity does not dynamically allocate any more memory after its initialization phase.
It includes the members in Table 6.61. For defaults and valid ranges, please refer to the API Reference HTML documentation.
One of the most important fields is max_samples, which sets the size and causes memory to be allocated for the send or receive queues. For information on how this policy affects reliability, see Tuning Queue Sizes and Other Resource Limits (Section 10.3.2).
Table 6.61 DDS_ResourceLimitsQosPolicy

| Type | Field Name | Description |
|------|------------|-------------|
| DDS_Long | max_samples | Maximum number of live samples that Connext can store for a DataWriter/DataReader. This is a physical limit. |
| DDS_Long | max_instances | Maximum number of instances that can be managed by a DataWriter/DataReader. For DataReaders, max_instances must be <= max_total_instances in the DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 7.6.2). See also: Example (Section 6.5.20.3). |
| DDS_Long | max_samples_per_instance | Maximum number of samples of any one instance that Connext will store for a DataWriter/DataReader. For keyed types and DataReaders, this value only applies to samples with an instance state of DDS_ALIVE_INSTANCE_STATE. If a keyed Topic is not used, then max_samples_per_instance must equal max_samples. |
| DDS_Long | initial_samples | Initial number of samples that Connext will store for a DataWriter/DataReader. (DDS extension) |
| DDS_Long | initial_instances | Initial number of instances that can be managed by a DataWriter/DataReader. (DDS extension) |
| DDS_Long | instance_hash_buckets | Number of hash buckets, which are used by Connext to facilitate instance lookup. (DDS extension) |
When a DataWriter or DataReader is created, the initial_instances and initial_samples parameters determine the amount of memory first allocated for those Entities. As the application executes, if more space is needed in the send/receive queues to store samples or as more instances are created, then Connext will automatically allocate memory until the limits of max_instances and max_samples are reached.
You may set initial_instances = max_instances and initial_samples = max_samples if you do not want Connext to dynamically allocate memory after initialization.
For keyed Topics, the max_samples_per_instance field in this policy represents maximum number of samples with the same key that are allowed to be stored by a DataWriter or DataReader. This is a logical limit. The hard physical limit is determined by max_samples. However, because the theoretical number of instances may be quite large (as set by max_instances), you may not want Connext to allocate the total memory needed to hold the maximum number of samples per instance for all possible instances (max_samples_per_instance * max_instances) because during normal operations, the application will never have to hold that much data for the Entity.
So it is possible that an Entity will hit the physical limit max_samples before it hits the max_samples_per_instance limit for a particular instance. However, Connext must be able to store max_samples_per_instance for at least one instance. Therefore, max_samples_per_instance must be <= max_samples.
Important: If a keyed data type is not used, then there is only a single instance of the Topic, so max_samples_per_instance must equal max_samples.
Once a physical or logical limit is hit, how Connext deals with new data samples being sent or received by a DataWriter or DataReader is described in the HISTORY QosPolicy (Section 6.5.10) discussion of DDS_KEEP_ALL_HISTORY_QOS. It is closely tied to whether or not a reliable connection is being maintained.
Although you can set the RESOURCE_LIMITS QosPolicy on Topics, its value can only be used to initialize the RESOURCE_LIMITS QosPolicies of either a DataWriter or DataReader. It does not directly affect the operation of Connext; see Section 5.1.3.
6.5.20.1 Configuring Resource Limits for Asynchronous DataWriters
When using an asynchronous Publisher, if a call to write() is blocked due to a resource limit, the block will last until the timeout period expires, which will prevent others from freeing the resource. To avoid this situation, make sure that the DomainParticipant’s outstanding_asynchronous_sample_allocation in the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4) is always greater than the sum of all asynchronous DataWriters’ max_samples.
6.5.20.2 Configuring DataWriter Instance Replacement
When the max_instances limit is reached, a DataWriter will try to make space for a new instance by replacing an existing instance according to the instance replacement kind set in instance_replacement. For the sake of instance replacement, an instance is considered to be unregistered, disposed, or alive. The oldest instance of the specified kind, if such an instance exists, would be replaced with the new instance. Also, all samples of a replaced instance must already have been acknowledged, such that removing the instance would not deprive any existing reader from receiving them.
Since an unregistered instance is one that a DataWriter will not update any further, unregistered instances are replaced before any other instance kinds. This applies to all instance_replacement kinds; for example, the ALIVE_THEN_DISPOSED kind would first replace unregistered, then alive, and then disposed instances. The remaining kinds specify one or two instance states (e.g., DISPOSED, or ALIVE_OR_DISPOSED). For a single-state kind, if no unregistered instances are replaceable and no instances of the specified state are replaceable, then the instance replacement will fail. Kinds that specify two states either search in order (e.g., ALIVE_THEN_DISPOSED: if an instance of the first state is found, that instance is replaced) or treat both states equally (e.g., ALIVE_OR_DISPOSED: whichever instance is older is replaced, as determined by the time of instance registering, writing, or disposing).
If an acknowledged instance of the specified kind is found, the DataWriter will reclaim its resources for the new instance. It will also invoke the DataWriterListener’s on_instance_replaced() callback (if installed) and notify the user with the handle of the replaced instance, which can then be used to retrieve the instance key from within the callback. If no replaceable instances are found, the new instance will fail to be registered; the DataWriter may block, if the instance registration was done in the context of a write, or it may return with an out-of-resources return code.
In addition, replace_empty_instances (in the DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 6.5.4)) configures whether instances with no samples are eligible to be replaced. If this is set, then a DataWriter will first try to replace empty instances, even before replacing unregistered instances.
6.5.20.3 Example
If you want to be able to store max_samples_per_instance for every instance, then you should set
max_samples >= max_instances * max_samples_per_instance
But if you want to save memory and you do not expect that the running application will ever reach the case where it will see max_instances of instances, then you may use a smaller value for max_samples to save memory.
In any case, there is a lower limit for max_samples:
max_samples >= max_samples_per_instance
If the HISTORY QosPolicy (Section 6.5.10)’s kind is set to KEEP_LAST, then you should set:
max_samples_per_instance = HISTORY.depth
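The relationships above can be illustrated with a sketch of a QoS XML profile whose values satisfy both constraints (32 = 4 * 8), and in which initial_* equals max_* so that no memory is allocated dynamically after initialization. The numbers are arbitrary example values, and tag names are assumed to mirror the QoS field names:

```xml
<datawriter_qos>
    <resource_limits>
        <!-- Physical limit: max_samples >= max_instances * max_samples_per_instance -->
        <max_samples>32</max_samples>
        <max_instances>4</max_instances>
        <max_samples_per_instance>8</max_samples_per_instance>
        <!-- initial_* = max_* avoids dynamic allocation after initialization -->
        <initial_samples>32</initial_samples>
        <initial_instances>4</initial_instances>
    </resource_limits>
</datawriter_qos>
```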
6.5.20.4 Properties
This QosPolicy cannot be modified after the Entity is enabled.
There are no requirements that the publishing and subscribing sides use compatible values.
6.5.20.5 Related QosPolicies
❏HISTORY QosPolicy (Section 6.5.10)
❏RELIABILITY QosPolicy (Section 6.5.19)
❏For DataReaders, max_instances must be <= max_total_instances in the DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 7.6.2)
6.5.20.6 Applicable Entities
6.5.20.7 System Resource Considerations
Larger initial_* numbers will increase the initial system memory usage. Larger max_* numbers will increase the worst-case system memory usage.
Increasing instance_hash_buckets speeds up instance-lookup time, but also increases memory usage.
6.5.21 TRANSPORT_PRIORITY QosPolicy
The TRANSPORT_PRIORITY QosPolicy is optional and only partially supported on certain OSs and transports by RTI. However, its intention is to allow you to specify, on a per-DataWriter basis, that the data sent by a DataWriter is of a different priority.
DDS does not specify how a DDS implementation shall treat data of different priorities. It is often difficult or impossible for DDS implementations to treat data of higher priority differently than data of lower priority, especially when data is being sent (delivered to a physical transport) directly by the thread that called the DataWriter’s write() operation. Also, many physical network transports themselves do not have an end-user controllable level of data-packet priority.
In Connext, for the UDPv4 transport, the value set in the TRANSPORT_PRIORITY QosPolicy is used to set the TOS (type of service) bits in the IP header of packets sent by the DataWriter. Whether this has any effect is platform and network dependent.
It is incorrect to assume that using the TRANSPORT_PRIORITY QosPolicy will have any effect at all on the end-to-end delivery of data from a DataWriter to a DataReader; the transport and every network element along the path must support and be configured to honor the priority for it to have any effect.
It includes the member in Table 6.62. For the default and valid range, please refer to the API Reference HTML documentation.
Table 6.62 DDS_TransportPriorityQosPolicy

| Type | Field Name | Description |
|------|------------|-------------|
| DDS_Long | value | Hint as to how to set the priority. |
Connext will propagate the value set on a per-DataWriter basis to the transport when the DataWriter publishes data.
Although you can set the TRANSPORT_PRIORITY QosPolicy on Topics, its value can only be used to initialize the TRANSPORT_PRIORITY QosPolicies of a DataWriter. It does not directly affect the operation of Connext; see Section 5.1.3.
6.5.21.1 Example
Should Connext be configured with a transport that can use and will honor the concept of a prioritized message, then you would be able to create a DataWriter of a Topic whose data samples, when published, will be sent at a higher priority than other DataWriters that use the same transport.
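If the transport in use honors the priority hint, the value can be set in a QoS XML profile. This is a sketch only: the value 10 is arbitrary, and the tag names assume the usual convention of mirroring the QoS field names:

```xml
<datawriter_qos>
    <transport_priority>
        <!-- Hint to the transport; interpretation is transport- and OS-dependent -->
        <value>10</value>
    </transport_priority>
</datawriter_qos>
```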
6.5.21.2 Properties
This QosPolicy may be modified after the entity is created.
It does not apply to DataReaders, so there is no requirement that the publishing and subscribing sides use compatible values.
6.5.21.3 Related QosPolicies
This QosPolicy does not interact with any other policies.
6.5.21.4 Applicable Entities
6.5.21.5 System Resource Considerations
The use of this policy does not significantly impact the use of resources. However, if a transport is implemented to use the value set by this policy, then there may be transport-specific resource implications for sending prioritized data.
6.5.22 TRANSPORT_SELECTION QosPolicy (DDS Extension)
The TRANSPORT_SELECTION QosPolicy allows you to select the transports that have been installed with the DomainParticipant to be used by the DataWriter or DataReader.
An application may be simultaneously connected to many different physical transports, e.g., Ethernet, Infiniband, shared memory, VME backplane, and wireless. By default, the middleware will use up to 4 transports to deliver data from a DataWriter to a DataReader.
This QosPolicy can be used to both limit and control which of the application’s available transports may be used by a DataWriter to send data or by a DataReader to receive data.
It includes the member in Table 6.63. For more information, please refer to the API Reference HTML documentation.
Table 6.63 DDS_TransportSelectionQosPolicy

| Type | Field Name | Description |
|------|------------|-------------|
| DDS_StringSeq | enabled_transports | A sequence of aliases for the transports that may be used by the DataWriter or DataReader. |
Connext allows users to configure the transports that it uses to send and receive messages. A number of built-in transports (such as UDPv4 and shared memory) are available, as well as custom ones that the user may implement and install. Each transport will be installed in the DomainParticipant with one or more aliases.
To enable a DataWriter or DataReader to use a particular transport, add the alias to the enabled_transports sequence of this QosPolicy. An empty sequence is a special case, and indicates that all transports installed in the DomainParticipant can be used by the DataWriter or DataReader.
For more information on configuring and installing transports, please see the API Reference HTML documentation (from the Modules page, select “Connext API Reference, Pluggable Transports”).
6.5.22.1 Example
Suppose a DomainParticipant has both UDPv4 and shared memory transports installed. If you want a particular DataWriter to publish its data only over shared memory, then you should use this QosPolicy to specify that restriction.
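That restriction can be sketched in a QoS XML profile as follows, assuming the shared memory transport is installed under its usual built-in alias (builtin.shmem — verify the alias against your transport configuration):

```xml
<datawriter_qos>
    <transport_selection>
        <enabled_transports>
            <!-- Restrict this DataWriter to the shared memory transport only -->
            <element>builtin.shmem</element>
        </enabled_transports>
    </transport_selection>
</datawriter_qos>
```

Leaving enabled_transports empty would instead allow the DataWriter to use every transport installed in the DomainParticipant.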
6.5.22.2 Properties
This QosPolicy cannot be modified after the Entity is created. It can be set differently for the DataWriter and the DataReader.
6.5.22.3 Related QosPolicies
❏TRANSPORT_UNICAST QosPolicy (DDS Extension) (Section 6.5.23)
❏TRANSPORT_MULTICAST QosPolicy (DDS Extension) (Section 7.6.5)
❏TRANSPORT_BUILTIN QosPolicy (DDS Extension) (Section 8.5.7)
6.5.22.4 Applicable Entities
6.5.22.5 System Resource Considerations
By restricting DataWriters from sending or DataReaders from receiving over certain transports, you may decrease the load on those transports.
6.5.23 TRANSPORT_UNICAST QosPolicy (DDS Extension)
The TRANSPORT_UNICAST QosPolicy allows you to specify unicast network addresses to be used by DomainParticipants, DataWriters, and DataReaders for receiving messages.
Connext may send data to a variety of Entities, not just DataReaders. DomainParticipants receive messages to support the discovery process discussed in Chapter 14. DataWriters may receive ACK/NACK messages to support the reliable protocol discussed in Chapter 10: Reliable Communications.
During discovery, each Entity announces to remote applications a list of (up to 4) unicast addresses that the remote application should use to send data (either user-data packets or reliable-protocol meta-messages such as ACK/NACKs and Heartbeats).
By default, the list of addresses is populated automatically with values obtained from the enabled transport plugins allowed to be used by the Entity (see the TRANSPORT_BUILTIN QosPolicy (DDS Extension) (Section 8.5.7) and TRANSPORT_SELECTION QosPolicy (DDS Extension) (Section 6.5.22)). Also, the associated ports are automatically determined (see Inbound Ports for User Traffic (Section 14.5.2)).
Use the TRANSPORT_UNICAST QosPolicy to manually set the receive-address list for an Entity. You may optionally set a port to use a non-default receive port.
The QosPolicy structure includes the members in Table 6.64. For more information and default values, please refer to the API Reference HTML documentation.
Table 6.64 DDS_TransportUnicastQosPolicy

| Type | Field Name | Description |
|------|------------|-------------|
| DDS_TransportUnicastSettingsSeq (see Table 6.65) | value | A sequence of up to 4 unicast settings that should be used by remote entities to address messages to be sent to this Entity. |

Table 6.65 DDS_TransportUnicastSettings_t

| Type | Field Name | Description |
|------|------------|-------------|
| DDS_StringSeq | transports | A sequence of transport aliases that specifies which transports should be used to receive unicast messages for this Entity. |
| DDS_Long | receive_port | The port that should be used in the addressing of unicast messages destined for this Entity. A value of 0 will cause Connext to use a default port number based on domain and participant ids. See Ports Used for Discovery (Section 14.5). |
A message sent to a unicast address will be received by a single node on the network (as opposed to a multicast address where a single message may be received by multiple nodes). This policy sets the unicast addresses and ports that remote entities should use when sending messages to the Entity on which the TRANSPORT_UNICAST QosPolicy is set.
Up to four “return” unicast addresses may be configured for an Entity. Instead of specifying addresses directly, you use the transports field of the DDS_TransportUnicastSetting_t to select the transports (using their aliases) on which remote entities should send messages destined for this Entity. The addresses of the selected transports will be the “return” addresses. See the API Reference HTML documentation about configuring transports and aliases (from the Modules page, select “API Reference, Pluggable Transports”).
Note, a single transport may have more than one unicast address. For example, if a node has multiple network interface cards (NICs), then the UDPv4 transport will have an address for each NIC. When using the TRANSPORT_UNICAST QosPolicy to set the return addresses, a single value for the DDS_TransportUnicastSettingsSeq may provide more than the four return addresses that Connext currently uses.
Whether or not you are able to configure the network interfaces that are allowed to be used by a transport is up to the implementation of the transport. For the built-in transports, the network interfaces to be used can be configured through transport properties (see the API Reference HTML documentation).
For a DomainParticipant, this QoS policy sets the default list of addresses used by other applications to send user data for local DataReaders.
For a reliable DataWriter, if set, the other applications will use the specified list of addresses to send reliable protocol packets (ACKS/NACKS) on behalf of reliable DataReaders. Otherwise, if not set, the other applications will use the addresses set by the DomainParticipant.
For a DataReader, if set, then other applications will use the specified list of addresses to send user data (and reliable protocol packets for reliable DataReaders). Otherwise, if not set, the other applications will use the addresses set by the DomainParticipant.
For a DataReader, if the port number specified by this QoS is the same as a port number specified by a TRANSPORT_MULTICAST QoS, then the transport may choose to process data received both via multicast and unicast with a single thread. Whether or not a transport must use different threads to process data received via multicast or unicast for the same port number depends on the implementation of the transport.
To use this QosPolicy, you also need to specify a port number. A port number of 0 will cause Connext to automatically use a default value. As explained in Ports Used for Discovery (Section 14.5), the default port number for unicast addresses is based on the domain and participant IDs. Should you choose to use a different port number, then for every unique port number used by Entities in your application, depending on the transport, Connext may create a thread to process messages received for that port on that transport. See Chapter 19: Connext Threading Model for more about threads.
Threads are created on a per-port, per-transport basis.
Note: If a DataWriter is using the MULTI_CHANNEL QosPolicy (DDS Extension) (Section 6.5.14), the unicast addresses specified in the TRANSPORT_UNICAST QosPolicy are ignored by that DataWriter. The DataWriter will not publish samples on those locators.
6.5.23.1 Example
You may use this QosPolicy to restrict an Entity from receiving data through a particular transport. For example, on a multi-NIC (network interface card) machine, you could use this QosPolicy to restrict an Entity to receive messages only on the addresses associated with a particular NIC.
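As a sketch, a DataReader that should receive unicast messages only over UDPv4 on the default port might be configured as follows in a QoS XML profile (the builtin.udpv4 alias and the tag layout mirroring Tables 6.64 and 6.65 are assumptions to verify against your installation):

```xml
<datareader_qos>
    <unicast>
        <value>
            <element>
                <!-- Receive unicast messages only via the UDPv4 transport -->
                <transports>
                    <element>builtin.udpv4</element>
                </transports>
                <!-- 0 = let Connext derive the port from domain and participant IDs -->
                <receive_port>0</receive_port>
            </element>
        </value>
    </unicast>
</datareader_qos>
```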
6.5.23.2 Properties
This QosPolicy cannot be modified after the Entity is created.
It can be set differently for the DomainParticipant, the DataWriter and the DataReader.
6.5.23.3 Related QosPolicies
❏MULTI_CHANNEL QosPolicy (DDS Extension) (Section 6.5.14)
❏TRANSPORT_SELECTION QosPolicy (DDS Extension) (Section 6.5.22)
❏TRANSPORT_MULTICAST QosPolicy (DDS Extension) (Section 7.6.5)
❏TRANSPORT_BUILTIN QosPolicy (DDS Extension) (Section 8.5.7)
6.5.23.4 Applicable Entities
❏DomainParticipants (Section 8.3)
6.5.23.5 System Resource Considerations
Because this QosPolicy changes the transports on which messages are received for different Entities, the bandwidth used on the different transports may be affected.
Depending on the implementation of a transport, Connext may need to create threads to receive and process data on a per-port, per-transport basis. A transport’s implementation will determine whether or not the same thread can be used to process both unicast and multicast data. For UDPv4, only one thread is needed per port, regardless of whether the data arrives via unicast or multicast.
6.5.24 TYPESUPPORT QosPolicy (DDS Extension)
This policy can be used to modify the behavior of the rtiddsgen-generated code so that the serialization/deserialization routines act differently depending on the information passed in via the plugin_data pointer.
RTI generally recommends that users treat generated source files as compiler outputs (analogous to object files) and that users not modify them. RTI cannot support user changes to generated source files. Furthermore, such changes would make upgrading to newer versions of Connext more difficult, as this generated code is considered to be a part of the middleware implementation and consequently does change from version to version. This QoS policy should be considered a back door, only to be used after careful design consideration, testing, and consultation with your RTI representative.
It includes the members in Table 6.66.
Table 6.66 DDS_TypeSupportQosPolicy

| Type | Field Name | Description |
|------|------------|-------------|
| void * | plugin_data | Value to pass into the type plugin’s serialization/deserialization function. |
6.5.24.1 Properties
This QoS policy may be modified after the DataWriter or DataReader is enabled. It can be set differently for the DataWriter and DataReader.
6.5.24.2 Related QoS Policies
None.
6.5.24.3 Applicable Entities
❏DomainParticipants (Section 8.3)
6.5.24.4 System Resource Considerations
None.
6.5.25 USER_DATA QosPolicy
This QosPolicy provides an area where your application can store additional information related to a DomainParticipant, DataWriter, or DataReader. This information is passed between applications during discovery (see Chapter 14: Discovery) using built-in topics.
Use cases are usually for application-to-application identification, authentication, authorization, and encryption purposes.
The value of the USER_DATA QosPolicy is sent to remote applications when they are first discovered, as well as when the DomainParticipant, DataWriter or DataReader’s set_qos() methods are called after changing the value of the USER_DATA. User code can set listeners on the built-in DataReaders of the built-in discovery topics to be notified when the USER_DATA of remote Entities is received or changed.
Currently, USER_DATA of the associated Entity is only propagated with the information that declares a DomainParticipant, DataWriter or DataReader. Thus, you will need to access the value of USER_DATA through DDS_ParticipantBuiltinTopicData, DDS_PublicationBuiltinTopicData or DDS_SubscriptionBuiltinTopicData (see Chapter 16: Built-In Topics).
The structure for the USER_DATA QosPolicy includes just one field, as seen in Table 6.67. The field is a sequence of octets that translates to a contiguous buffer of bytes whose contents and length is set by the user. The maximum size for the data are set in the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4).
Table 6.67 DDS_UserDataQosPolicy
Type | Field Name | Description
DDS_OctetSeq | value | A sequence of octets holding user-supplied data. Default: empty.
This policy is similar to the GROUP_DATA QosPolicy (Section 6.4.4) and TOPIC_DATA QosPolicy (Section 5.2.1) that apply to other types of Entities.
6.5.25.1 Example
One possible use of USER_DATA is to pass some credential or certificate that your subscriber application can use to accept or reject communication with the DataWriters (or vice versa, where the publisher application can validate the permission of DataReaders to receive its data). Using the same method, an application (DomainParticipant) can accept or reject all connections from another application. The value of the USER_DATA of the DomainParticipant is propagated in the ‘user_data’ field of the DDS_ParticipantBuiltinTopicData that is sent with the declaration of each DomainParticipant. Similarly, the value of the USER_DATA of the DataWriter is propagated in the ‘user_data’ field of the DDS_PublicationBuiltinTopicData that is sent with the declaration of each DataWriter, and the value of the USER_DATA of the DataReader is propagated in the ‘user_data’ field of the DDS_SubscriptionBuiltinTopicData that is sent with the declaration of each DataReader.
When Connext discovers a DomainParticipant/DataWriter/DataReader, the application can be notified of the discovery of the new entity and retrieve information about the Entity’s QoS by reading the DCPSParticipant, DCPSPublication or DCPSSubscription built-in topics (see Chapter 16). The application can then examine the USER_DATA field and, if appropriate, call the DomainParticipant’s ignore_participant(), ignore_publication() or ignore_subscription() operation to reject the newly discovered remote entity as one with which the application does not allow Connext to communicate. See Figure 16.2 for an example of how to do this.
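The accept/reject pattern described above can be sketched as a small model. This is illustrative only, not the Connext API: the callback name, credential format, and return values are all invented for this example.

```python
# Illustrative model of USER_DATA-based admission control (NOT the RTI API).
# The application compares the user_data bytes carried in a discovery
# announcement against an expected credential and decides whether the
# corresponding ignore_*() operation would be invoked.

EXPECTED_CREDENTIAL = b"example-shared-secret"  # hypothetical credential

def on_participant_discovered(user_data: bytes) -> str:
    """Return the action the application would take for a discovered participant."""
    if user_data == EXPECTED_CREDENTIAL:
        return "allow"            # let discovery complete normally
    return "ignore_participant"   # reject: the app would call ignore_participant()

print(on_participant_discovered(b"example-shared-secret"))  # allow
print(on_participant_discovered(b"wrong"))                  # ignore_participant
```

In a real application, the bytes would come from the `user_data` field of DDS_ParticipantBuiltinTopicData, and rejection would be performed by the DomainParticipant's ignore_participant() operation.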
6.5.25.2 Properties
This QosPolicy can be modified at any time. A change in the QosPolicy will cause Connext to send packets containing the new USER_DATA to all of the other applications in the domain.
It can be set differently on the publishing and subscribing sides.
6.5.25.3 Related QosPolicies
❏DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4)
6.5.25.4 Applicable Entities
❏DomainParticipants (Section 8.3)
❏DataWriters (Section 6.3)
❏DataReaders (Section 7.3)
6.5.25.5 System Resource Considerations
As mentioned earlier, the maximum size of the USER_DATA is set in the participant_user_data_max_length, writer_user_data_max_length, and reader_user_data_max_length fields of the DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) (Section 8.5.4). Because Connext will allocate memory based on this value, you should only increase this value if you need to. If your system does not use USER_DATA, then you can set this value to 0 to save memory. Setting the value of the USER_DATA QosPolicy to hold data longer than the value set in the
[participant,writer,reader]_user_data_max_length field will result in failure and an INCONSISTENT_QOS_POLICY return code.
However, should you decide to change the maximum size of USER_DATA, you must make certain that all applications in the domain have changed the value of
[participant,writer,reader]_user_data_max_length to be the same. If two applications have different limits on the size of USER_DATA, and one application sets the USER_DATA QosPolicy to hold data that is greater than the maximum size set by another application, then the DataWriters and DataReaders between the two applications will not connect. The DomainParticipants may also reject connections from each other entirely. This is also true for the GROUP_DATA (Section 6.4.4) and TOPIC_DATA (Section 5.2.1) QosPolicies.
6.5.26 WRITER_DATA_LIFECYCLE QoS Policy
This QoS policy controls how a DataWriter handles the lifecycle of the instances (keys) that the DataWriter is registered to manage. This QoS policy includes the members in Table 6.68.
Table 6.68 DDS_WriterDataLifecycleQosPolicy
Type | Field Name | Description
DDS_Boolean | autodispose_unregistered_instances | RTI_TRUE (default): instance is disposed when unregistered. RTI_FALSE: instance is not disposed when unregistered.
struct DDS_Duration_t | autopurge_unregistered_instance_delay | Determines how long the DataWriter will maintain information regarding an instance that has been unregistered. After this time elapses, the DataWriter will purge all internal information regarding the instance, including historical samples.
You may use the DataWriter’s unregister() operation to indicate that the DataWriter no longer wants to send data for a Topic. This QoS controls whether or not Connext automatically also calls dispose() on the behalf of the DataWriter for the data.
The behavior controlled by this QoS applies on a per instance (key) basis for keyed Topics, so that when a DataWriter unregisters an instance, Connext can automatically also dispose that instance. This is the default behavior.
In many cases where the ownership of a Topic is EXCLUSIVE (see the OWNERSHIP QosPolicy (Section 6.5.15)), a DataWriter may want to relinquish ownership of a particular instance of the Topic to allow other DataWriters to send updates for the value of that instance. In that case, you may want the DataWriter to only unregister the instance, without disposing it. Unregistering implies only that this DataWriter no longer owns the instance; disposing is the stronger statement that the instance no longer exists.
User applications may be coded to trigger on the disposal of instances, so the ability to unregister without disposing may be needed to properly maintain the semantics of disposal.
When a DataWriter unregisters an instance, it means that this particular DataWriter has no more information/data on this instance. When an instance is disposed, it means that the instance is considered to no longer exist, and no further data for it is expected.
Setting autodispose_unregistered_instances to TRUE (the default) provides the same behavior as explicitly calling one of the dispose() operations (Section 6.3.14.2) on the instance before calling unregister() (Section 6.3.14.1).
When you delete a DataWriter (Section 6.3.1), all of the instances managed by the DataWriter are automatically unregistered. Therefore, this QoS policy determines whether or not instances are disposed when the DataWriter is deleted by calling one of these operations:
❏Publisher’s delete_datawriter() (see Section 6.3.1)
❏Publisher’s delete_contained_entities() (see Section 6.2.3.1)
❏DomainParticipant’s delete_contained_entities() (see Section 8.3.3)
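The interaction between unregister() and autodispose_unregistered_instances can be sketched with a toy state model. This is an illustration of the semantics described above, not the middleware implementation; the class and state names are invented for this example.

```python
# Illustrative model (NOT Connext source): tracks the state an instance ends up
# in after unregister(), depending on autodispose_unregistered_instances.

class WriterInstanceModel:
    def __init__(self, autodispose_unregistered_instances: bool = True):
        self.autodispose = autodispose_unregistered_instances
        self.states = {}  # instance key -> "alive" | "unregistered" | "disposed"

    def write(self, key):
        self.states[key] = "alive"

    def dispose(self, key):
        # A dispose is the stronger statement: the instance no longer exists.
        self.states[key] = "disposed"

    def unregister(self, key):
        # With the default (TRUE), unregistering also disposes the instance.
        self.states[key] = "disposed" if self.autodispose else "unregistered"

w = WriterInstanceModel(autodispose_unregistered_instances=False)
w.write("sensor-1")
w.unregister("sensor-1")
print(w.states["sensor-1"])  # unregistered (ownership relinquished, instance not dead)
```

With the default setting of TRUE, the same unregister() call would leave the instance "disposed", matching the delete-DataWriter behavior described above.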
When autopurge_unregistered_instance_delay is used, the middleware will clean up all the resources associated with an unregistered instance (most notably, the sample history of non-volatile DataWriters) when all the instance’s samples have been acknowledged by all its live DataReaders, including the sample that indicates the unregistration. By default, autopurge_unregistered_instance_delay is disabled (the delay is INFINITE). If the delay is set to zero, the DataWriter will clean up as soon as all the samples are acknowledged after the call to unregister(). A non-zero delay is useful in the following cases:
1. To keep the historical samples available to late-joining DataReaders for a period of time.
2. In the context of discovery, if the applications temporarily lose the connection before the unregistration (which represents the remote entity destruction), to provide the samples that indicate the dispose and unregister actions once the connection is reestablished.
This delay can also be set for discovery data through these fields in the DISCOVERY_CONFIG QosPolicy (DDS Extension) (Section 8.5.3):
❏publication_writer_data_lifecycle.autopurge_unregistered_instances_delay
❏subscription_writer_data_lifecycle.autopurge_unregistered_instances_delay
6.5.26.1 Properties
This QoS policy may be modified after the DataWriter is enabled. It does not apply to DataReaders, so there is no requirement that the publishing and subscribing sides use compatible values.
6.5.26.2 Related QoS Policies
None.
6.5.26.3 Applicable Entities
❏DataWriters (Section 6.3)
6.5.26.4 System Resource Considerations
None.
6.6 FlowControllers (DDS Extension)
Note: This section does not apply when using the separate add-on product, Ada 2005 Language Support for RTI Connext, which does not support FlowControllers.
A FlowController is the object responsible for shaping the network traffic by determining when attached asynchronous DataWriters are allowed to write data.
You can use one of the built-in FlowControllers (and optionally modify their properties) or create your own custom FlowController.
To use a FlowController, you provide its name in the DataWriter’s PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18).
❏DDS_DEFAULT_FLOW_CONTROLLER_NAME
By default, flow control is disabled. That is, the built-in default FlowController does not enforce any flow control; it allows data to be sent as soon as it is written by the asynchronous publishing thread.
❏DDS_FIXED_RATE_FLOW_CONTROLLER_NAME
The FIXED_RATE flow controller shapes the network traffic by allowing data to be sent only once every second. Any accumulated samples destined for the same destination are coalesced into as few network packets as possible.
❏DDS_ON_DEMAND_FLOW_CONTROLLER_NAME
The ON_DEMAND flow controller allows data to be sent only when you call the FlowController’s trigger_flow() operation. With each trigger, all accumulated data since the previous trigger is sent (across all Publishers or DataWriters). In other words, the network traffic shape is fully controlled by the user. Any accumulated samples destined for the same destination are coalesced into as few network packets as possible.
This external trigger source is ideal for users who want to implement some form of custom, application-driven flow control.
The default property settings for the built-in FlowControllers are described in the API Reference HTML documentation.
Samples written by an asynchronous DataWriter are not sent in the context of the write() call. Instead, Connext puts the samples in a queue for future processing. The FlowController associated with each asynchronous DataWriter determines when the samples are actually sent.
Each FlowController maintains a separate FIFO queue for each unique destination (remote application). Samples written by asynchronous DataWriters associated with the FlowController are placed in the queues that correspond to the intended destinations of the sample.
When tokens become available, a FlowController must decide which queue(s) to grant tokens first. This is determined by the FlowController's scheduling_policy property (see Table 6.69). Once a queue has been granted tokens, it is serviced by the asynchronous publishing thread.
The queued up samples will be coalesced and sent to the corresponding destination. The number of samples sent depends on the data size and the number of tokens granted.
Table 6.69 lists the properties for a FlowController.
Table 6.69 DDS_FlowControllerProperty_t
Type | Field Name | Description
DDS_FlowControllerSchedulingPolicy | scheduling_policy | Round robin, earliest deadline first, or highest priority first. See Section 6.6.1.
DDS_FlowControllerTokenBucketProperty_t | token_bucket | See Section 6.6.3.
Table 6.70 lists the operations available for a FlowController.
Table 6.70 FlowController Operations
Operation | Description
get_property, set_property | Get and set the FlowController properties.
trigger_flow | Provides an external trigger to the FlowController.
get_name | Returns the name of the FlowController.
get_participant | Returns the DomainParticipant to which the FlowController belongs.
6.6.1 Flow Controller Scheduling Policies
❏Round Robin (DDS_RR_FLOW_CONTROLLER_SCHED_POLICY) Perform flow control in a round-robin fashion.
Whenever tokens become available, the FlowController distributes the tokens uniformly across all of its (non-empty) destination queues, and samples are sent to the corresponding destinations.
❏Earliest Deadline First (DDS_EDF_FLOW_CONTROLLER_SCHED_POLICY) Perform flow control in an earliest-deadline-first fashion.
A sample's deadline is determined by the time it was written plus the latency budget of the DataWriter at the time of the write call (as specified in the DDS_LatencyBudgetQosPolicy). The relative priority of a flow controller's destination queue is determined by the earliest deadline across all samples it contains.
When tokens become available, the FlowController distributes tokens to the destination queues in order of their priority. In other words, the queue containing the sample with the earliest deadline is serviced first. The number of tokens granted equals the number of tokens required to send the first sample in the queue. Note that the priority of a queue may change as samples are sent (i.e., removed from the queue). If a sample must be sent to multiple destinations or two samples have an equal deadline value, the corresponding destination queues are serviced in a round-robin fashion.
With the default duration of 0 in the LatencyBudgetQosPolicy, using an EDF_FLOW_CONTROLLER_SCHED_POLICY FlowController preserves the order in which you call write() across the DataWriters associated with the FlowController.
Since the LatencyBudgetQosPolicy is mutable, a sample written second may carry an earlier deadline than the sample written first if the DDS_LatencyBudgetQosPolicy’s duration is sufficiently decreased in between writing the two samples. In that case, if the first sample has not yet been sent (it is still in the queue waiting for its turn), it inherits the priority corresponding to the (earlier) deadline of the second sample.
In other words, the priority of a destination queue is always determined by the earliest deadline among all samples contained in the queue. This priority inheritance approach is required in order to both honor the updated duration and to adhere to the DataWriter in-order data delivery guarantee.
❏Highest Priority First (DDS_HPF_FLOW_CONTROLLER_SCHED_POLICY) Perform flow control in a highest-priority-first fashion.
Note: Prioritized samples are not supported when using the Java, Ada, or .NET APIs. Therefore the Highest Priority First scheduling policy is not supported when using these APIs.
The next destination queue to service is determined by the publication priority of the DataWriter, the channel of a MultiChannel DataWriter, or an individual sample.
The relative priority of a flow controller's destination queue is determined by the highest publication priority of all the samples it contains.
When tokens become available, the FlowController distributes tokens to the destination queues in order of their publication priority. The queue containing the sample with the highest publication priority is serviced first. The number of tokens granted equals the number of tokens required to send the first sample in the queue. Note that a queue’s priority may change as samples are sent (i.e., as they are removed from the queue). If a sample must be sent to multiple destinations or two samples have the same publication priority, the corresponding destination queues are serviced in a round-robin fashion.
This priority inheritance approach is required to both honor the designated publication priority and adhere to the DataWriter’s in-order data delivery guarantee.
See also: Prioritized Samples (Section 6.6.4).
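The earliest-deadline-first selection described above can be illustrated with a simplified model. This is a sketch, not RTI code; the data layout (a dict of per-destination FIFO queues of (deadline, sample) pairs) is invented for the example.

```python
# Illustrative model of EDF queue selection with priority inheritance:
# a destination queue's effective deadline is the EARLIEST deadline among
# all samples it holds, even if that sample is not at the head of the queue.

def next_queue_edf(queues):
    """queues: dict destination -> list of (deadline, sample) in FIFO order.
    Returns the destination whose queue should be serviced next."""
    def queue_deadline(dest):
        return min(d for d, _ in queues[dest])  # earliest deadline in the queue
    nonempty = [d for d in queues if queues[d]]
    return min(nonempty, key=queue_deadline)

queues = {
    "dest-A": [(10.0, "a1"), (3.0, "a2")],  # a2's earlier deadline is inherited
    "dest-B": [(5.0, "b1")],
}
print(next_queue_edf(queues))  # dest-A (inherits deadline 3.0 from a2)
```

Note how dest-A is serviced first even though its head sample a1 has the latest deadline: servicing the queue in FIFO order while ranking queues by their earliest contained deadline is exactly the priority inheritance the text describes.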
6.6.2 Managing Fast DataWriters When Using a FlowController
If a DataWriter is writing samples faster than its attached FlowController can throttle, Connext may drop samples on the writer’s side. This happens because the samples may be removed from the queue before the asynchronous publisher’s thread has a chance to send them. To work around this problem, either:
❏Use reliable communication to block the write() call and thereby throttle your application.
❏Do not allow the queue to fill up in the first place.
The queue should be sized large enough to handle expected write bursts, so that no samples are dropped. Then in steady state, the FlowController will smooth out these bursts and the queue will ideally have only one entry.
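A back-of-the-envelope way to size the queue for expected bursts can be sketched as follows. The formula and parameter names are assumptions for illustration, not an RTI-provided calculation.

```python
# Rough sizing sketch: if the application writes a burst of `burst_samples`
# spread over `burst_periods` replenishment periods, while the FlowController
# drains roughly `drain_per_period` samples per period, the send queue must be
# deep enough to hold the backlog remaining at the end of the burst.

def min_queue_depth(burst_samples, burst_periods, drain_per_period):
    """Samples still queued at the end of the burst (minimum queue depth)."""
    drained = burst_periods * drain_per_period
    return max(burst_samples - drained, 1)

# e.g., a 100-sample burst over 4 periods, draining 10 samples per period:
print(min_queue_depth(100, 4, 10))  # 60
```

If the computed depth exceeds the configured queue size, samples may be dropped on the writer's side as described above; in steady state the backlog drains and the queue ideally holds only one entry.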
6.6.3 Token Bucket Properties
FlowControllers use a token-bucket approach to open-loop network flow control. The flow control characteristics are determined by the token bucket properties listed in Table 6.71.
Table 6.71 DDS_FlowControllerTokenBucketProperty_t
Type | Field Name | Description
DDS_Long | max_tokens | Maximum number of tokens that can accumulate in the token bucket. See Section 6.6.3.1.
DDS_Long | tokens_added_per_period | The number of tokens added to the token bucket per specified period. See Section 6.6.3.2.
DDS_Long | tokens_leaked_per_period | The number of tokens removed from the token bucket per specified period. See Section 6.6.3.3.
DDS_Duration_t | period | Period for adding tokens to and removing tokens from the bucket. See Section 6.6.3.4.
DDS_Long | bytes_per_token | Maximum number of bytes allowed to send for each token available. See Section 6.6.3.5.
Asynchronously published samples are queued up and transmitted based on the token bucket flow control scheme. The token bucket contains tokens, each of which represents a number of bytes. Samples can be sent only when there are sufficient tokens in the bucket. As samples are sent, tokens are consumed. The number of tokens consumed is proportional to the size of the data being sent. Tokens are replenished on a periodic basis.
The rate at which tokens become available and other token bucket properties determine the network traffic flow.
Note that if the same sample must be sent to multiple destinations, separate tokens are required for each destination. Only when multiple samples are destined to the same destination will they be coalesced and sent using the same token(s). In other words, each token can only contribute to a single network packet.
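The token bucket mechanics described above can be modeled in a few lines. This is an illustrative simulation, not RTI source code; the class, its methods, and the example numbers are invented for this sketch.

```python
# Illustrative token-bucket model: tokens are replenished each period (capped
# at max_tokens), and sending a sample consumes tokens in proportion to its
# size, ceil(sample_bytes / bytes_per_token).

import math

class TokenBucketModel:
    def __init__(self, max_tokens, tokens_added_per_period, bytes_per_token):
        self.max_tokens = max_tokens
        self.added = tokens_added_per_period
        self.bytes_per_token = bytes_per_token
        self.tokens = 0

    def replenish(self):
        # Periodic replenishment; excess tokens beyond max_tokens are discarded.
        self.tokens = min(self.tokens + self.added, self.max_tokens)

    def try_send(self, sample_bytes):
        needed = math.ceil(sample_bytes / self.bytes_per_token)
        if needed > self.tokens:
            return False          # not enough tokens; sample stays queued
        self.tokens -= needed
        return True

tb = TokenBucketModel(max_tokens=10, tokens_added_per_period=4, bytes_per_token=1024)
tb.replenish()                    # 4 tokens available
print(tb.try_send(8192))          # False: needs 8 tokens, only 4 in the bucket
tb.replenish()                    # now 8 tokens
print(tb.try_send(8192))          # True: consumes all 8 tokens
print(tb.tokens)                  # 0
```

The combination of max_tokens, bytes_per_token, tokens_added_per_period, and period caps both the burst size and the sustained throughput, which is the essence of the flow shaping described in this section.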
6.6.3.1 max_tokens
The maximum number of tokens in the bucket will never exceed this value. Any excess tokens are discarded. This property value, combined with bytes_per_token, determines the maximum allowable data burst.
Use DDS_LENGTH_UNLIMITED to allow accumulation of an unlimited amount of tokens (and therefore potentially an unlimited burst size).
6.6.3.2 tokens_added_per_period
A FlowController transmits data only when tokens are available. Tokens are periodically replenished. This field determines the number of tokens added to the token bucket with each periodic replenishment.
Available tokens are distributed to associated DataWriters based on the scheduling_policy. Use DDS_LENGTH_UNLIMITED to add the maximum number of tokens allowed by max_tokens.
6.6.3.3 tokens_leaked_per_period
When tokens are replenished and there are sufficient tokens to send all samples in the queue, this property determines whether any or all of the leftover tokens remain in the bucket.
Use DDS_LENGTH_UNLIMITED to remove all excess tokens from the token bucket once all samples have been sent. In other words, no token accumulation is allowed. When new samples are written after tokens were purged, the earliest point in time at which they can be sent is at the next periodic replenishment.
6.6.3.4 period
This field determines the period at which tokens are added to or removed from the token bucket.
The special value DDS_DURATION_INFINITE can be used to create an on-demand FlowController: tokens are then no longer replenished periodically and must instead be added by calling the FlowController’s trigger_flow() operation.
Note: Once period is set to DDS_DURATION_INFINITE, it can no longer be reverted to a finite period.
6.6.3.5 bytes_per_token
This field determines the number of bytes that can actually be transmitted based on the number of tokens.
Tokens are always consumed in whole by each DataWriter. That is, in cases where bytes_per_token is greater than the sample size, multiple samples may be sent to the same destination using a single token (regardless of the scheduling_policy).
Where fragmentation is required, the fragment size will be the smaller of (a) bytes_per_token and (b) the minimum of the largest message sizes across all transports installed with the DataWriter.
Use DDS_LENGTH_UNLIMITED to indicate that an unlimited number of bytes can be transmitted per token. In other words, a single token allows the recipient DataWriter to transmit all its queued samples to a single destination. A separate token is required to send to each additional destination.
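The fragmentation rule in Section 6.6.3.5 can be expressed as a one-line calculation. This is an interpretation for illustration; the function name and example transport limits are hypothetical.

```python
# Sketch of the fragment-size rule: the smaller of bytes_per_token and the
# minimum of the largest message sizes across the DataWriter's transports.

def fragment_size(bytes_per_token, transport_max_message_sizes):
    return min(bytes_per_token, min(transport_max_message_sizes))

# e.g., a UDP-like transport limited to 65507 bytes and a 65536-byte transport:
print(fragment_size(1024, [65507, 65536]))    # 1024 (bytes_per_token governs)
print(fragment_size(100000, [65507, 65536]))  # 65507 (transport limit governs)
```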
6.6.4 Prioritized Samples
Note: This feature is not supported when using the Java, Ada, or .NET APIs.
The Prioritized Samples feature allows you to prioritize traffic that is in competition for transmission resources. The granularity of this prioritization may be by DataWriter, by instance, or by individual sample.
Prioritized Samples can improve latency in the following cases:
❏Low-Bandwidth Networks
With limited bandwidth, a backlog of samples may be waiting for transmission; prioritization ensures that the most important samples are sent first.
❏Congested Networks
With network congestion, the same backlog can build up even on otherwise fast links, and high-priority samples should not wait behind lower-priority traffic.
❏Prioritized Topics
With limited bandwidth communication, some topics may be deemed to be of higher priority than others on an ongoing basis, and samples written to some topics should be given precedence over others on transmission.
❏High Priority Events
Due to external rules or content analysis (e.g., perimeter violation or identification as a threat), the priority of samples is dynamically determined, and the priority assigned a given sample will reflect the urgency of its delivery.
To configure a DataWriter to use prioritized samples:
❏Create a FlowController with the scheduling_policy property set to
DDS_HPF_FLOW_CONTROLLER_SCHED_POLICY.
❏Create a DataWriter with the PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18) kind set to ASYNCHRONOUS and flow_controller_name set to the name of the FlowController.
A single FlowController may perform traffic shaping for multiple DataWriters and multiple DataWriter channels. The FlowController’s configuration determines how often publication
resources are scheduled, how much data may be sent per period, and other transmission characteristics that determine the ultimate performance of prioritized samples.
When working with prioritized samples, you should use these operations, which allow you to specify priority:
❏write_w_params() (see Writing Data (Section 6.3.8))
❏unregister_instance_w_params() (see Registering and Unregistering Instances (Section 6.3.14.1))
❏dispose_w_params() (see Disposing of Data (Section 6.3.14.2))
If you use write(), unregister(), or dispose() instead of the _w_params() versions, the affected sample is assigned priority 0 (undefined priority). If you are using a MultiChannel DataWriter, the publication priority of the channel through which a sample is sent applies to that sample.
6.6.4.1 Designating Priorities
For DataWriters and DataWriter channels, valid publication priority values are:
❏DDS_PUBLICATION_PRIORITY_UNDEFINED
❏DDS_PUBLICATION_PRIORITY_AUTOMATIC
❏Positive integers excluding zero
For individual samples, valid publication priority values are 0 and positive integers.
There are three ways to set the publication priority of a DataWriter or DataWriter channel:
1. For a DataWriter, publication priority is set in the priority field of its PUBLISH_MODE QosPolicy (DDS Extension) (Section 6.5.18). For a MultiChannel DataWriter, this value serves as the default publication priority for any channel that does not set its own.
2. For a channel of a MultiChannel DataWriter, publication priority can be set per channel in the priority field of its MULTI_CHANNEL QosPolicy (DDS Extension).
3. If a DataWriter or a channel of a MultiChannel DataWriter is configured with DDS_PUBLICATION_PRIORITY_AUTOMATIC, its publication priority is the highest priority among all the samples it currently holds.
The effective publication priority is determined from the interaction of the DataWriter, channel, and sample publication priorities, as shown in Table 6.72.
6.6.4.2
The configuration methods explained above are sufficient to create multiple DataWriters, each with its own assigned priority, all using the same FlowController configured for publication-priority-based scheduling.
To assign different priorities to data within a DataWriter, you will need to use a MultiChannel DataWriter or set priorities on a per-sample basis using the _w_params() operations.