46.5 PARTITION QosPolicy

The PARTITION QoS provides another way to control which Entities will match—and thus communicate with—which other Entities. It can be used to prevent Entities that would have otherwise matched from talking to each other. Much in the same way that only applications within the same DDS domain will communicate with each other, only Entities that belong to the same partition can talk to each other.

See also 16.3.5 Isolating DomainParticipants and Endpoints from Each Other for an overview of your options for isolating or partitioning data.

The PARTITION QoS applies to Publishers, Subscribers, and DomainParticipants. DataWriters and DataReaders belong to the partitions as set in the QoS of the Publishers and Subscribers that created them. DomainParticipants belong to the partitions as set in the DomainParticipants' QoS.

The mechanism implementing the PARTITION QoS is relatively lightweight compared to the creation and deletion of Entities, and membership in a partition can be dynamically changed.

The PARTITION QoS consists of a set of partition names that identify the partitions of which the Entity is a member. These names can be concrete (e.g., ExamplePartition) or regular expression strings (e.g., Example*), and two Entities are considered to be in the same partition if one of the Entities has a concrete partition name matching one of the concrete or regular expression partition names of the other Entity (see 46.5.2 Pattern Matching for PARTITION Names). By default, DomainParticipants, and DataWriters and DataReaders (through their Publisher/Subscriber parents), belong to a single partition whose name is the empty string, "".

Conceptually, each partition name can be thought of as defining a “visibility plane” within the DDS domain:

  • DataWriters will make their data available on all of the visibility planes that correspond to their Publisher’s partition names, and the DataReaders will see the data that is placed on all of the visibility planes that correspond to their Subscriber’s partition names.
  • DomainParticipants with the same domain ID (see 16.3.4 Choosing a Domain ID and Creating Multiple DDS Domains) and domain tag (see 16.3.5.1 Choosing a Domain Tag) will be visible to each other if they share a common visibility plane defined by the DomainParticipants' partition names.
  • Partitioning at the DomainParticipant level can be particularly useful in large, WAN, distributed systems (with thousands of participants) in which not all participants need to know about each other at any given time. Partitioning at the DomainParticipant level helps reduce network, CPU, and memory utilization, because DomainParticipants without matching partitions will not exchange information about their DataWriters and DataReaders.

DomainParticipant partitions and Publisher/Subscriber partitions are independent of each other. You can use both features independently or in combination to provide the right level of isolation.
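
As a sketch of how the two levels can be combined, the Traditional C++ code below sets one partition at the DomainParticipant level and another at the Publisher level. The partition names and the domain ID are illustrative, and the participant-level partition field assumes a Connext release that supports DomainParticipant partitions; error handling is abbreviated.

DDS_DomainParticipantQos participant_qos;
if (DDSTheParticipantFactory->get_default_participant_qos(participant_qos) !=
        DDS_RETCODE_OK) {
    // handle error
}
// DomainParticipant-level partition: only DomainParticipants that share a
// participant partition exchange endpoint discovery information.
participant_qos.partition.name.maximum(1);
participant_qos.partition.name.length(1);
participant_qos.partition.name[0] = DDS_String_dup("site_A");

DDSDomainParticipant* participant = DDSTheParticipantFactory->create_participant(
        0 /* domain ID */, participant_qos, NULL, DDS_STATUS_MASK_NONE);

// Publisher-level partition: controls DataWriter/DataReader matching among
// the endpoints that the DomainParticipants already know about.
DDS_PublisherQos publisher_qos;
if (participant->get_default_publisher_qos(publisher_qos) != DDS_RETCODE_OK) {
    // handle error
}
publisher_qos.partition.name.maximum(1);
publisher_qos.partition.name.length(1);
publisher_qos.partition.name[0] = DDS_String_dup("partition_A");

DDSPublisher* publisher = participant->create_publisher(
        publisher_qos, NULL, DDS_STATUS_MASK_NONE);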

Figure 46.4: Controlling Visibility of Data with the PARTITION QoS illustrates the concept of PARTITION QoS at the Publisher and Subscriber level. In this figure, all DataWriters and DataReaders belong to the same DDS domain ID, domain tag, and DomainParticipant partition, and they use the same Topic. DataWriter1 is configured to belong to three partitions: partition_A, partition_B, and partition_C. DataWriter2 belongs to partition_C and partition_D.

Figure 46.4: Controlling Visibility of Data with the PARTITION QoS

Similarly, DataReader1 is configured to belong to partition_A and partition_B, and DataReader2 belongs only to partition_C. Given this topology, the data written by DataWriter1 is visible in partitions A, B, and C. The oval tagged with the number “1” represents one DDS data sample written by DataWriter1.

Similarly, the data written by DataWriter2 is visible in partitions C and D. The oval tagged with the number “2” represents one DDS data sample written by DataWriter2.

The result is that the data written by DataWriter1 will be received by both DataReader1 and DataReader2, but the data written by DataWriter2 will only be visible by DataReader2.

Publishers and Subscribers always belong to a partition. By default, Publishers and Subscribers belong to a single partition whose name is the empty string, "". If you set the PARTITION QoS to be an empty set, Connext will assign the Publisher or Subscriber to the default partition, "". Thus, for the example above, without using the PARTITION QoS on any of the entities, DataReaders 1 and 2 would have received all data samples written by DataWriters 1 and 2.

46.5.1 Rules for PARTITION Matching

The PARTITION QosPolicy associates a set of partition names with the entity (DomainParticipant, Publisher, or Subscriber). The partition names are concrete names (e.g., ExamplePartition) or regular expression strings (e.g., Example*).

With regard to the PARTITION QoS, a DataWriter will communicate with a DataReader if and only if the following conditions apply:

  1. The DataWriter and DataReader belong to DomainParticipants bound to the same DDS domain ID, domain tag, and at least one matching DomainParticipant partition (see 16.3.1 Creating a DomainParticipant, 16.3.4 Choosing a Domain ID and Creating Multiple DDS Domains, and 16.3.5.1 Choosing a Domain Tag).
  2. The DataWriter and DataReader have matching Topics. That is, each is associated with a Topic with the same name and compatible data type.
  3. The QoS offered by the DataWriter is compatible with the QoS requested by the DataReader.
  4. The application has not used the ignore_participant(), ignore_datareader(), or ignore_datawriter() APIs to prevent the association (see 27. Restricting Communication—Ignoring Entities).
  5. The Publisher to which the DataWriter belongs and the Subscriber to which the DataReader belongs must have at least one matching partition name.

Matching partition names is done by string pattern matching, and partition names are case-sensitive.

Note: Failure to match partitions (on Publisher, Subscriber, or DomainParticipant) is not considered an incompatible QoS and does not trigger any listeners or change any status conditions.

46.5.2 Pattern Matching for PARTITION Names

You may also add strings that are regular expressions (as defined in the POSIX fnmatch API (1003.2-1992 Section B.6)) to the PARTITION QosPolicy. A regular expression does not so much define a set of partitions to which the Entity (Publisher, Subscriber, or DomainParticipant) belongs as it is used during partition matching to check whether a remote Entity has a concrete partition name that matches the expression. That is, the regular expressions in the PARTITION QoS of an Entity are never matched against the regular expressions found in the PARTITION QoS of a different Entity. Regular expressions are always matched against “concrete” partition names. Thus, a concrete partition name may not contain any reserved characters that are used to define regular expressions, for example ‘*’, ‘.’, ‘+’, etc.

For more on regular expressions, see 35.5.5 SQL Extension: Regular Expression Matching.

If a PARTITION QoS only contains regular expressions, then the Entity will be assigned automatically to the default partition with the empty string name (""). Thus, a PARTITION QoS that only contains the string “*” matches another Entity's PARTITION QoS that also only contains the string “*”, not because the regular expression strings are identical, but because they both belong to the default "" partition.

Two Entities are considered to have a partition in common if the sets of partitions associated with them have either of the following (see the sketch after this list):

  • At least one concrete partition name in common
  • A regular expression in one Entity that matches a concrete partition name in the other Entity
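
As an illustration only (this is not Connext code), these rules can be approximated with the POSIX fnmatch() API that the pattern syntax is based on. The helper below is hypothetical and simply mirrors the behavior described above, including treating an empty or pattern-only set as the default "" partition.

#include <fnmatch.h>
#include <string>
#include <vector>

// Hypothetical helper: true if a partition name contains characters reserved
// for regular expressions (the exact reserved set is the one described above,
// e.g., '*', '.', '+').
static bool is_pattern(const std::string& name)
{
    return name.find_first_of("*?[].+") != std::string::npos;
}

// Hypothetical sketch of the matching rule: two partition sets match if a
// concrete name on one side equals, or is matched by a pattern on, the other
// side. Patterns are never matched against patterns, and a set that is empty
// or contains only patterns also belongs to the default "" partition.
static bool partitions_match(std::vector<std::string> a, std::vector<std::string> b)
{
    bool a_has_concrete = false, b_has_concrete = false;
    for (const std::string& n : a) if (!is_pattern(n)) a_has_concrete = true;
    for (const std::string& n : b) if (!is_pattern(n)) b_has_concrete = true;
    if (!a_has_concrete) a.push_back("");  // default "" partition
    if (!b_has_concrete) b.push_back("");

    for (const std::string& x : a) {
        for (const std::string& y : b) {
            bool xp = is_pattern(x), yp = is_pattern(y);
            if (!xp && !yp && x == y) return true;                              // concrete vs. concrete
            if (xp && !yp && fnmatch(x.c_str(), y.c_str(), 0) == 0) return true;  // pattern vs. concrete
            if (yp && !xp && fnmatch(y.c_str(), x.c_str(), 0) == 0) return true;  // concrete vs. pattern
        }
    }
    return false;
}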

The programmatic representation of the PARTITION QoS is shown in Table 46.5 DDS_PartitionQosPolicy. The QosPolicy contains the single string sequence, name. Each element in the sequence can be a concrete name or a regular expression. The Entity will be assigned to the default "" partition if the sequence is empty, or if the sequence contains only regular expressions.

Table 46.5 DDS_PartitionQosPolicy

  • Type: DDS_StringSeq
    Field Name: name
    Description: Empty by default. There can be up to 64 names, with a maximum of 256 characters summed across all names. You can have one long partition string of 256 characters, or multiple shorter strings that add up to 256 or fewer characters (for example, one string of 4 characters and one string of 252 characters).

46.5.3 Example

Since the set of partitions for a Publisher or Subscriber can be dynamically changed, the PARTITION QosPolicy is useful to control which DataWriters can communicate with which DataReaders and vice versa—even if all of the DataWriters and DataReaders are for the same Topic. This facility is useful for creating temporary separation groups among Entities that would otherwise be connected to and exchange data with each other.

Note when using Partitions and Durability: If a Publisher changes partitions after startup, it is possible for a reliable, late-joining DataReader to receive data that was written for both the original and the new partition. For example, suppose a DataWriter with TRANSIENT_LOCAL Durability initially writes DDS samples with Partition A, but later changes to Partition B. In this case, a reliable, late-joining DataReader configured for Partition B will receive whatever DDS samples have been saved for the DataWriter. These may include DDS samples which were written when the DataWriter was using Partition A.

The code in Figure 46.5: Setting Partition Names on a Publisher illustrates how to change the PARTITION QosPolicy.


Figure 46.5: Setting Partition Names on a Publisher

DDS_PublisherQos publisher_qos;
// participant and publisher_listener have been previously created
if (participant->get_default_publisher_qos(publisher_qos) !=
        DDS_RETCODE_OK) {
    // handle error
}
// Set the partition QoS
publisher_qos.partition.name.maximum(3);
publisher_qos.partition.name.length(3);
publisher_qos.partition.name[0] = DDS_String_dup("partition_A");
publisher_qos.partition.name[1] = DDS_String_dup("partition_B");
publisher_qos.partition.name[2] = DDS_String_dup("partition_C");
DDSPublisher* publisher = participant->create_publisher(
        publisher_qos, publisher_listener, DDS_STATUS_MASK_ALL);
if (publisher == NULL) {
    // handle error
}
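
Because the policy is changeable, the same Publisher can later be moved to a different set of partitions without deleting it. Below is a minimal sketch, assuming the publisher created above and an illustrative partition name:

// Later: move the existing Publisher to a different (illustrative) partition.
DDS_PublisherQos updated_qos;
if (publisher->get_qos(updated_qos) != DDS_RETCODE_OK) {
    // handle error
}
// Release the names returned by get_qos() before installing the new set.
for (int i = 0; i < updated_qos.partition.name.length(); ++i) {
    DDS_String_free(updated_qos.partition.name[i]);
    updated_qos.partition.name[i] = NULL;
}
updated_qos.partition.name.length(1);
updated_qos.partition.name[0] = DDS_String_dup("partition_D");
if (publisher->set_qos(updated_qos) != DDS_RETCODE_OK) {
    // handle error
}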

The ability to dynamically control which DataWriters are matched to which DataReaders (of the same Topic) offered by the PARTITION QoS can be used in many different ways. Using partitions, connectivity can be controlled based on location, access-control groups, purpose, or a combination of these and other application-defined criteria. We will examine some of these options via concrete examples.

Example of location-based partitions. Assume you have a set of Topics in a traffic management system such as “TrafficAlert,” “AccidentReport,” and “CongestionStatus.” You may want to control the visibility of these Topics based on the actual location to which the information applies. You can do this by placing the Publisher in a partition that represents the area to which the information applies. This can be done using a string that includes the city, state, and country, such as “USA/California/Santa Clara.” A Subscriber can then choose whether it wants to see the alerts in a single city, the accidents in a set of states, or the congestion status across the US. Some concrete examples are shown in Table 46.6 Example of Using Location-Based Partitions.

Table 46.6 Example of Using Location-Based Partitions

  • Publisher Partitions: Specify a single partition name using the pattern “<country>/<state>/<city>”.
    Subscriber Partitions: Specify multiple partition names, one per region of interest.
    Result: Limits the visibility of the data to Subscribers that express interest in the geographical region.

  • Publisher Partitions: “USA/California/Santa Clara”
    Subscriber Partitions: (irrelevant here)
    Result: Send only information for Santa Clara, California.

  • Publisher Partitions: (irrelevant here)
    Subscriber Partitions: “USA/California/Santa Clara”
    Result: Receive only information for Santa Clara, California.

  • Publisher Partitions: (irrelevant here)
    Subscriber Partitions: “USA/California/Santa Clara”, “USA/California/Sunnyvale”
    Result: Receive information for Santa Clara or Sunnyvale, California.

  • Publisher Partitions: (irrelevant here)
    Subscriber Partitions: “USA/California/*”, “USA/Nevada/*”
    Result: Receive information for California or Nevada.

  • Publisher Partitions: (irrelevant here)
    Subscriber Partitions: “USA/California/*”, “USA/Nevada/Reno”, “USA/Nevada/Las Vegas”
    Result: Receive information for California and two cities in Nevada.
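
For instance, the Subscriber in the last row of the table could be configured as in the following minimal Traditional C++ sketch (participant is assumed to exist already):

DDS_SubscriberQos subscriber_qos;
if (participant->get_default_subscriber_qos(subscriber_qos) != DDS_RETCODE_OK) {
    // handle error
}
subscriber_qos.partition.name.maximum(3);
subscriber_qos.partition.name.length(3);
subscriber_qos.partition.name[0] = DDS_String_dup("USA/California/*");     // pattern
subscriber_qos.partition.name[1] = DDS_String_dup("USA/Nevada/Reno");      // concrete
subscriber_qos.partition.name[2] = DDS_String_dup("USA/Nevada/Las Vegas"); // concrete
DDSSubscriber* subscriber = participant->create_subscriber(
        subscriber_qos, NULL, DDS_STATUS_MASK_NONE);
if (subscriber == NULL) {
    // handle error
}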

Example of access-control group partitions. Suppose you have an application where access to the information must be restricted based on the reader's membership in access-control groups. You can map this group-controlled visibility to partitions by naming all the groups (e.g., executives, payroll, financial, general-staff, consultants, external-people) and assigning the Publisher to the set of partitions that represents which groups should have access to the information. The Subscribers specify the groups to which they belong, and the partition-matching behavior ensures that the information is only distributed to Subscribers belonging to the appropriate groups. Some concrete examples are shown in Table 46.7 Example of Access-Control Group Partitions.

Table 46.7 Example of Access-Control Group Partitions

  • Publisher Partitions: Specify several partition names, one per group that is allowed access.
    Subscriber Partitions: Specify multiple partition names, one per group to which the Subscriber belongs.
    Result: Limits the visibility of the data to Subscribers that belong to the access-control groups specified by the Publisher.

  • Publisher Partitions: “payroll”, “financial”
    Subscriber Partitions: (irrelevant here)
    Result: Makes information available only to Subscribers that have access to either financial or payroll information.

  • Publisher Partitions: (irrelevant here)
    Subscriber Partitions: “executives”, “financial”
    Result: Gain access to information that is intended for executives or people with access to the finances.

A slight variation of this pattern could be used to confine the information based on security levels.

Example of purpose-based partitions. Assume an application containing subsystems that can be used for multiple purposes, such as training, simulation, and real use. On some occasions it is convenient to dynamically switch a subsystem from operating in the “simulation world” to the “training world” or to the “real world.” For supervision purposes, it may be convenient to observe multiple worlds, so that you can compare each one's results. This can be accomplished by setting a partition name in the Publisher that represents the “world” to which it belongs and a set of partition names in the Subscriber that represents the worlds that it can observe.

46.5.4 Properties

This QosPolicy can be modified at any time.

Strictly speaking, this QosPolicy does not have request-offered semantics, although it is matched between DataWriters and DataReaders, and communication is established only if there is a match between partition names.

46.5.5 Related QosPolicies

46.5.6 Applicable DDS Entities
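
This QosPolicy applies to Publishers, Subscribers, and DomainParticipants.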

46.5.7 System Resource Considerations

Partition names are propagated with the discovery traffic and can be examined by user code through built-in topics (see 28. Accessing Discovery Information through Built-In Topics). DomainParticipant partitions are propagated with participant discovery traffic, and Publisher and Subscriber partitions are propagated with endpoint discovery traffic.

The maximum number of partitions and the maximum number of characters that can be used for the sum-total length of all partition names are configured using the max_partitions and max_partition_cumulative_characters fields of the 44.4 DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension). Setting more partitions or using longer names than allowed by those limits will result in failure and an INCONSISTENT_QOS_POLICY return code.

However, should you decide to change the maximum number of partitions or maximum cumulative length of partition names, then you must make certain that all applications in the DDS domain have changed the values of max_partitions and max_partition_cumulative_characters to be the same. If two applications have different values for those settings, and one application sets the PARTITION QosPolicy to hold more partitions or longer names than set by another application, then the matching Entities between the two applications will not connect. This is similar to the restrictions for the GROUP_DATA (46.4 GROUP_DATA QosPolicy), USER_DATA (47.30 USER_DATA QosPolicy), and TOPIC_DATA (45.1 TOPIC_DATA QosPolicy) QosPolicies.
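
For example, here is a minimal sketch of raising these limits before creating the DomainParticipant; the values shown are illustrative, and every application in the DDS domain must use the same ones:

DDS_DomainParticipantQos participant_qos;
if (DDSTheParticipantFactory->get_default_participant_qos(participant_qos) !=
        DDS_RETCODE_OK) {
    // handle error
}
// Illustrative values; they must match across all applications in the domain.
participant_qos.resource_limits.max_partitions = 128;
participant_qos.resource_limits.max_partition_cumulative_characters = 512;

DDSDomainParticipant* participant = DDSTheParticipantFactory->create_participant(
        0 /* domain ID */, participant_qos, NULL, DDS_STATUS_MASK_NONE);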

46.5.8 Partition Changes

46.5.8.1 DomainParticipant Partitions Changes

When a DomainParticipant’s partitions change, the DomainParticipant sends a new participant announcement to all the matching DomainParticipants and all the initial peers. The message is sent over the participant announcement channel, which is best-effort.

46.5.8.1.1 Changing from Match to Unmatch

When a local DomainParticipant unmatches with a remote DomainParticipant, it goes through the same process it would go through if the remote participant were deleted or otherwise lost liveliness. All information for the remote DomainParticipant is purged, including the matches with the remote participant's DataWriters and DataReaders.

Before unmatching with the remote DomainParticipant, the local DomainParticipant sends a participant announcement notification. After that, the local DomainParticipant no longer sends the announcements, unless the remote participant's discovery locator(s) are part of the local participant's initial_peers list. Therefore, if the local participant announcement containing the partition change is lost and the remote DomainParticipant's discovery locator is not part of the local DomainParticipant's initial peers, the local DomainParticipant will end up losing liveliness with the remote DomainParticipant.

The unmatch operation can be detected in the local DomainParticipant by monitoring the DCPSParticipant built-in Topic (see 28. Accessing Discovery Information through Built-In Topics). The application will receive a sample with instance_state set to NOT_ALIVE_NO_WRITERS for each unmatched remote DomainParticipant.
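
A minimal sketch of this detection with the Traditional C++ API (polling the built-in DataReader rather than using a listener, and assuming participant has already been created) might look like this:

// Look up the built-in DCPSParticipant DataReader.
DDSSubscriber* builtin_subscriber = participant->get_builtin_subscriber();
DDSParticipantBuiltinTopicDataDataReader* participant_reader =
        DDSParticipantBuiltinTopicDataDataReader::narrow(
                builtin_subscriber->lookup_datareader(DDS_PARTICIPANT_TOPIC_NAME));

DDS_ParticipantBuiltinTopicDataSeq data_seq;
DDS_SampleInfoSeq info_seq;
if (participant_reader->take(
            data_seq, info_seq, DDS_LENGTH_UNLIMITED,
            DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
            DDS_ANY_INSTANCE_STATE) == DDS_RETCODE_OK) {
    for (int i = 0; i < info_seq.length(); ++i) {
        if (info_seq[i].instance_state == DDS_NOT_ALIVE_NO_WRITERS_INSTANCE_STATE) {
            // The corresponding remote DomainParticipant has been unmatched.
        }
    }
    participant_reader->return_loan(data_seq, info_seq);
}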

There will also be changes to the 31.6.7 PUBLICATION_MATCHED Status or 40.7.9 SUBSCRIPTION_MATCHED Status in the local DomainParticipant's DataReaders and DataWriters that previously matched with the remote participant's DataReaders and DataWriters.

46.5.8.1.2 Changing from Unmatch to Match

The local DomainParticipant changing partitions will match with a remote DomainParticipant when it receives a new participant announcement from the remote DomainParticipant. Unlike with Publisher and Subscriber partitions, this change may take some time. In the worst-case scenario, the change depends on how fast the DomainParticipants send participant announcements.

46.5.8.2 Publisher/Subscriber Partitions Changes

When the partitions of a Publisher or Subscriber change, the DomainParticipant will send new publication or subscription announcements (publication DATAs and subscription DATAs) to all matching DomainParticipants. These messages are sent over the DCPSPublication and DCPSSubscription reliable channels (see 28. Accessing Discovery Information through Built-In Topics and 22.3 Simple Endpoint Discovery).

For a Publisher, the DomainParticipant will send one publication DATAs announcement per (Publisher’s DataWriter, remote DomainParticipant locator) pair. For a Subscriber, the DomainParticipant will send one subscription DATAs announcement per (Subscriber’s DataReader, remote DomainParticipant locator) pair.

46.5.8.2.1 Changing from Match to Unmatch

The local Entity (DataWriter or DataReader) that is changing partitions immediately unmatches the previously matching Entities. The remote Entity will unmatch the local Entity changing partitions as soon as it receives the endpoint (publication DATAs and subscription DATAs) announcement from the local Entity. For the local and remote Entity, the unmatch operation can be detected by monitoring the 31.6.7 PUBLICATION_MATCHED Status or 40.7.9 SUBSCRIPTION_MATCHED Status.
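
For example, assuming writer is a previously created DDSDataWriter, the current match count can be polled to observe the effect of a partition change:

DDS_PublicationMatchedStatus matched_status;
if (writer->get_publication_matched_status(matched_status) == DDS_RETCODE_OK) {
    // current_count drops when a partition change unmatches a DataReader and
    // rises again once a new match is established.
    printf("Currently matched DataReaders: %d (change: %d)\n",
           matched_status.current_count,
           matched_status.current_count_change);
}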

When a DataWriter unmatches a DataReader because of a change in partitions, the DataWriter will stop sending samples immediately to the DataReader.

When a DataReader unmatches a DataWriter because of a change in partitions, the DataReader’s DomainParticipant will drop samples coming from the DataWriter, so that the DataReader never receives them. If a reliable DataWriter has not yet detected the change, it may end up filling its send window and blocking new write operations until the DataReader is deactivated due to lack of responsiveness to heartbeat (HB) messages (see 32.4.4.4 Controlling How Many Times Heartbeats are Resent (max_heartbeat_retries)).

46.5.8.2.2 Changing from Unmatch to Match

The local Entity (DataWriter or DataReader) that is changing partitions immediately matches other Entities that previously did not match. The remote Entity will match the Entity changing partitions as soon as it receives the endpoint (publication DATAs and subscription DATAs) announcement. The match operation can be detected by the local and remote Entity by monitoring the 31.6.7 PUBLICATION_MATCHED Status or 40.7.9 SUBSCRIPTION_MATCHED Status.

Because the partition change has to be propagated, there will be a delay before the DataReader starts receiving samples from matched DataWriters.