40.7 Statuses for DataReaders

There are several types of statuses available for a DataReader. You can use the get_*_status() operations (40.5 Checking DataReader Status and StatusConditions) to access and reset them, use a DataReaderListener (40.4 Setting Up DataReaderListeners) to listen for changes in their values (for those statuses that have Listeners), or use a StatusCondition and a WaitSet (15.9.8 StatusConditions) to wait for changes. Each status has an associated data structure and is described in more detail in the following sections.

40.7.1 DATA_AVAILABLE Status

This status indicates that new data is available for the DataReader. In most cases, this means that one new DDS sample has been received. However, there are situations in which more than one DDS sample for the DataReader may be received before the DATA_AVAILABLE status changes. For example, if the DataReader has the 47.9 DURABILITY QosPolicy set to be non-VOLATILE, then the DataReader may receive a batch of old DDS data samples all at once. Or, if data is being received reliably from DataWriters, Connext may present several DDS samples of data to the DataReader simultaneously if they were originally received out of order.

A change to this status also means that the DATA_ON_READERS status is changed for the DataReader’s Subscriber. This status is reset when you call read(), take(), or one of their variations.

Unlike most other statuses, this status (as well as DATA_ON_READERS for Subscribers) is a read communication status. See 39.9 Statuses for Subscribers and 15.7.1 Types of Communication Status for more information on read communication statuses.

The DataReaderListener’s on_data_available() callback is invoked when this status changes, unless the SubscriberListener (39.6 Setting Up SubscriberListeners) or DomainParticipantListener (16.3.6 Setting Up DomainParticipantListeners) has implemented an on_data_on_readers() callback. In that case, on_data_on_readers() will be invoked instead.
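The dispatch rule above can be sketched as a minimal Python model. This is a hypothetical illustration, not the Connext API; the listener classes and the dispatch_data_available() helper are invented for clarity.

```python
# Illustrative model (not the Connext API) of the DATA_AVAILABLE dispatch
# rule: if a Subscriber- or DomainParticipant-level listener implements
# on_data_on_readers(), it is invoked instead of the DataReaderListener's
# on_data_available().
def dispatch_data_available(reader_listener, subscriber_listener=None,
                            participant_listener=None):
    # Prefer the Subscriber's on_data_on_readers(), then the
    # DomainParticipant's, and only then fall back to on_data_available().
    for listener in (subscriber_listener, participant_listener):
        if listener is not None and hasattr(listener, "on_data_on_readers"):
            return listener.on_data_on_readers()
    return reader_listener.on_data_available()
```

With no higher-level on_data_on_readers() installed, the DataReaderListener's callback runs; otherwise it is preempted.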

40.7.2 DATA_READER_CACHE_STATUS

This status keeps track of the number of DDS samples and instances in the reader's cache, including the number of samples that were dropped for different reasons. For information on the instance states tracked in the reader's cache, such as "alive," "no_writers," and "disposed," see 19.1 Instance States.

This status does not have an associated Listener. You can access this status by calling the DataReader’s get_datareader_cache_status() operation, which will return the status structure described in Table 40.3 DDS_DataReaderCacheStatus.

Table 40.3 DDS_DataReaderCacheStatus

Type

Field Name

Description

DDS_LongLong

sample_count_peak

Highest number of DDS samples in the DataReader’s queue over the lifetime of the DataReader.

DDS_LongLong

sample_count

Current number of DDS samples in the DataReader’s queue.

Includes DDS samples that may not yet be available to be read or taken by the user due to DDS samples being received out of order or settings in the 46.6 PRESENTATION QosPolicy.

DDS_LongLong

writer_removed_batch_sample_dropped_sample_count

The number of batched samples received by the DataReader that were marked as removed by the DataWriter.

When the DataReader receives a batch, the batch can contain samples marked as removed by the DataWriter. Examples of removed samples in a batch are samples that were replaced due to KEEP_LAST_HISTORY_QOS on the DataWriter (see 47.12 HISTORY QosPolicy) or samples that outlived the DataWriter's 47.14 LIFESPAN QoS Policy duration. By default, any sample marked as removed from a batch is dropped, unless you set the dds.data_reader.accept_writer_removed_batch_samples property in the 47.19 PROPERTY QosPolicy (DDS Extension) to TRUE. (By default, it is set to FALSE.)

Note: Historical data with removed batch samples written before the DataReader joined the DDS domain are also included in the count.

DDS_LongLong

old_source_timestamp_dropped_sample_count

The number of samples dropped as a result of receiving a sample older than the last one, using DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS.

When the DataReader is using DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS:

  • If the DataReader receives a sample for an instance with a source timestamp that is older than the last source timestamp received for the instance, the sample is dropped and included in this count.
  • If the DataReader receives a sample for an instance with a source timestamp that is equal to the last source timestamp received for the instance and the writer has a higher virtual GUID, the sample is dropped and included in this count.
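The two rules above can be sketched as a small Python predicate. This is a hypothetical model, not the Connext API; the timestamp and virtual GUID values are stand-ins for the per-instance state the DataReader keeps.

```python
# Illustrative model (not the Connext API) of the
# old_source_timestamp_dropped_sample_count rules: compare a new sample's
# source timestamp and virtual GUID against the last accepted ones for
# the same instance.
def accept_by_source_timestamp(last_ts, last_guid, ts, guid):
    """Return True if the sample is accepted, False if it is dropped."""
    if ts < last_ts:
        return False   # older source timestamp: dropped
    if ts == last_ts and guid > last_guid:
        return False   # equal timestamp, higher virtual GUID: dropped
    return True
```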

DDS_LongLong

tolerance_source_timestamp_dropped_sample_count

The number of samples dropped as a result of receiving a sample in the future, using DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS.

When the DataReader is using DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS, the DataReader will accept a sample only if the source timestamp is no farther in the future from the reception timestamp than the source_timestamp_tolerance. Otherwise, the sample is dropped and included in this count.
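The tolerance check reduces to a single comparison, sketched below as a hypothetical model (not the Connext API):

```python
# Illustrative model of the tolerance check counted in
# tolerance_source_timestamp_dropped_sample_count: a sample's source
# timestamp may be at most source_timestamp_tolerance ahead of the
# reception timestamp.
def within_tolerance(source_ts, reception_ts, tolerance):
    """True if accepted, False if dropped."""
    return source_ts <= reception_ts + tolerance
```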

DDS_LongLong

ownership_dropped_sample_count

The number of samples dropped as a result of receiving a sample from a DataWriter with a lower strength, using Exclusive Ownership.

When using Exclusive Ownership, the DataReader receives data from multiple DataWriters. Each instance can only be owned by one DataWriter. If other DataWriters write samples belonging to this instance, the samples will be dropped.
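The ownership rule can be sketched as follows. This is a simplified hypothetical model (not the Connext API): it treats the highest-strength writer seen so far as the owner of each instance and ignores ownership transfer on liveliness loss.

```python
# Illustrative model of Exclusive Ownership: each instance is owned by the
# highest-strength DataWriter writing it; samples from lower-strength
# writers are dropped (ownership_dropped_sample_count).
def filter_exclusive_ownership(samples):
    """samples: iterable of (instance, writer, strength).
    Returns (accepted (instance, writer) pairs, dropped_count)."""
    owner = {}            # instance -> (writer, strength)
    accepted, dropped = [], 0
    for instance, writer, strength in samples:
        cur = owner.get(instance)
        if cur is None or strength > cur[1] or writer == cur[0]:
            owner[instance] = (writer, strength)
            accepted.append((instance, writer))
        else:
            dropped += 1  # lower-strength writer: sample dropped
    return accepted, dropped
```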

DDS_LongLong

content_filter_dropped_sample_count

The number of samples filtered by the DataReader due to ContentFilteredTopics.

When using a content filter on the DataReader side, if the sample received by the DataReader does not pass the filter, it will be dropped.

DDS_LongLong

time_based_filter_dropped_sample_count

The number of samples filtered by the DataReader due to the 48.4 TIME_BASED_FILTER QosPolicy.

When using the 48.4 TIME_BASED_FILTER QosPolicy on the DataReader side, if the sample received by the DataReader does not pass the minimum_separation filter, it will be dropped.
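The minimum_separation filter can be sketched per instance as below. This is a hypothetical model (not the Connext API); real time-based filtering interacts with DEADLINE and reliability in ways this sketch omits.

```python
# Illustrative model of the TIME_BASED_FILTER minimum_separation check:
# per instance, a sample arriving sooner than minimum_separation after
# the previously accepted sample is dropped
# (time_based_filter_dropped_sample_count).
def time_based_filter(arrivals, minimum_separation):
    """arrivals: iterable of (instance, reception_time) in seconds.
    Returns the accepted (instance, reception_time) pairs."""
    last = {}
    accepted = []
    for instance, t in arrivals:
        if instance not in last or t - last[instance] >= minimum_separation:
            accepted.append((instance, t))
            last[instance] = t   # separation measured from last accepted
    return accepted
```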

DDS_LongLong

expired_dropped_sample_count

The number of samples expired by the DataReader due to the 47.14 LIFESPAN QoS Policy or the autopurge sample delays in the 48.3 READER_DATA_LIFECYCLE QoS Policy:

  • DDS_LifespanQosPolicy: When a sample expires due to the DDS_LifespanQosPolicy, the data is removed from the DataReader caches. This sample will be considered dropped if its DDS_SampleStateKind is DDS_NOT_READ_SAMPLE_STATE.
  • DDS_ReaderDataLifecycleQosPolicy::autopurge_nowriter_samples_delay: When a sample expires due to the autopurge_nowriter_samples_delay, this sample will be considered dropped if its DDS_SampleStateKind is DDS_NOT_READ_SAMPLE_STATE.
  • DDS_ReaderDataLifecycleQosPolicy::autopurge_disposed_samples_delay: When a sample expires due to the autopurge_disposed_samples_delay, this sample will be considered dropped if its DDS_SampleStateKind is DDS_NOT_READ_SAMPLE_STATE.

DDS_LongLong

virtual_duplicate_dropped_sample_count

The number of virtual duplicate samples dropped by the DataReader. A sample is a virtual duplicate if it has the same identity (Virtual Writer GUID and Virtual Sequence Number) as a previously received sample.

When two DataWriters with the same logical data source publish a sample with the same sequence_number, one sample will be dropped and the other will be received by the DataReader.

This can happen when multiple writers are writing on behalf of the same original DataWriter: for example, in systems with redundant RTI Routing Service applications or when a DataReader is receiving samples both directly from the original DataWriter and from an instance of RTI Persistence Service.
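Virtual-duplicate detection amounts to de-duplicating on the sample's identity pair, sketched here as a hypothetical model (not the Connext API):

```python
# Illustrative model of virtual-duplicate detection: a sample is a virtual
# duplicate if its (virtual writer GUID, virtual sequence number) identity
# was already received (virtual_duplicate_dropped_sample_count).
def drop_virtual_duplicates(samples):
    """samples: iterable of (virtual_guid, virtual_seq_num, data).
    Returns (delivered_data, dropped_count)."""
    seen = set()
    delivered, dropped = [], 0
    for vguid, vsn, data in samples:
        if (vguid, vsn) in seen:
            dropped += 1      # same identity as an earlier sample
        else:
            seen.add((vguid, vsn))
            delivered.append(data)
    return delivered, dropped
```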

DDS_LongLong

replaced_dropped_sample_count

The number of samples replaced by the DataReader due to DDS_KEEP_LAST_HISTORY_QOS replacement in the 47.12 HISTORY QosPolicy.

When the number of samples for an instance in the queue reaches the depth value in the HISTORY QosPolicy, a new sample for the instance will replace the oldest sample for the instance in the queue. The new sample will be accepted, and the old sample will be dropped.

This counter will only be updated if the replaced sample's DDS_SampleStateKind is DDS_NOT_READ_SAMPLE_STATE.
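The replacement rule and its NOT_READ condition can be sketched for a single instance's queue. This is a hypothetical model (not the Connext API):

```python
from collections import deque

# Illustrative model of KEEP_LAST replacement: once an instance holds
# `depth` samples, a new sample replaces the oldest one.
# replaced_dropped_sample_count only counts replaced samples that were
# never read (NOT_READ_SAMPLE_STATE).
def keep_last_insert(queue, sample, depth):
    """queue: deque of (data, was_read) for one instance.
    Returns 1 if an unread sample was dropped by replacement, else 0."""
    dropped = 0
    if len(queue) == depth:
        _, was_read = queue.popleft()   # oldest sample is replaced
        if not was_read:
            dropped = 1                 # counted only if NOT_READ
    queue.append(sample)
    return dropped
```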

DDS_LongLong

total_samples_dropped_by_instance_replacement

Number of samples of the state NOT_READ_SAMPLE_STATE that were dropped when removing an instance due to instance replacement via the instance_replacement field in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension).

DDS_LongLong

alive_instance_count

Number of instances currently in the DataReader's queue that have an instance_state of ALIVE.

DDS_LongLong

alive_instance_count_peak

Highest number of ALIVE instances in the DataReader's queue over the lifetime of the DataReader.

DDS_LongLong

no_writers_instance_count

Number of instances in the DataReader's queue that have an instance_state of NOT_ALIVE_NO_WRITERS.

DDS_LongLong

no_writers_instance_count_peak

Highest number of NOT_ALIVE_NO_WRITERS instances in the DataReader's queue over the lifetime of the DataReader.

DDS_LongLong

disposed_instance_count

Number of instances in the DataReader's queue that have an instance_state of NOT_ALIVE_DISPOSED.

DDS_LongLong

disposed_instance_count_peak

Highest number of NOT_ALIVE_DISPOSED instances in the DataReader's queue over the lifetime of the DataReader.

DDS_LongLong

detached_instance_count

Number of detached instances—which contain only the minimum instance state—currently being maintained in the DataReader's queue.

If keep_minimum_state_for_instances in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) is true (by default, it is), the DataReader will keep up to max_total_instances (in the DATA_READER_RESOURCE_LIMITS QosPolicy) of detached instances in its queue. See 40.8.7 Active State and Minimum State for more information.

DDS_LongLong

detached_instance_count_peak

Highest number of detached instances in the DataReader's queue over the lifetime of the DataReader.

40.7.3 DATA_READER_PROTOCOL_STATUS

The status of a DataReader’s internal protocol related metrics (such as the number of DDS samples received, filtered, rejected) and the status of wire protocol traffic. The structure for this status appears in Table 40.4 DDS_DataReaderProtocolStatus.

This status does not have an associated Listener. You can access this status by calling the following operations on the DataReader (which return the status structure described in Table 40.4 DDS_DataReaderProtocolStatus):

get_datareader_protocol_status() returns the sum of the protocol status for all the matched publications for the DataReader.

get_matched_publication_datareader_protocol_status() returns the protocol status of a particular matched publication, identified by a publication_handle.

The get_*_status() operations also reset the related status so it is no longer considered “changed.”
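The reset behavior of the *_change fields can be sketched with a small counter. This is a hypothetical model (not the Connext API): cumulative counts are never reset, while the change is the delta since the status was last read.

```python
# Illustrative model of "change since last read" semantics: reading the
# status (as get_*_status() does) resets the change, not the total.
class ChangeCounter:
    def __init__(self):
        self.count = 0          # cumulative, never reset
        self._last_read = 0

    def increment(self, n=1):
        self.count += n

    def read_status(self):
        """Mimics a get_*_status() call: returns (count, count_change)
        and resets the change."""
        change = self.count - self._last_read
        self._last_read = self.count
        return self.count, change
```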

Note: Status/data for a matched publication is kept even if the DataWriter is not alive (that is, has lost liveliness based on the 47.15 LIVELINESS QosPolicy). The status/data is removed only if the DataWriter is gone: that is, the DataWriter is destroyed and this change is propagated through a discovery update, or the DataWriter's DomainParticipant is gone (either gracefully or because its liveliness expired and Connext is configured to purge not-alive participants). Once a matched DataWriter is gone, its status is deleted. If you try to get the status/data for a matched publication that is gone, the 'get status' or 'get data' call will return an error.

The DataReader's protocol status includes information about DATA_FRAG messages (sample fragments) if you are using DDS-level fragmentation. See 34.3 Large Data Fragmentation for more information.

Table 40.4 DDS_DataReaderProtocolStatus

Type

Field Name

Description

DDS_LongLong

received_sample_count

The number of samples received by a DataReader.

Note: When data is fragmented, this count is updated when all of the fragments required to reassemble a sample are received, not when individual fragments are received. The fragment count is tracked in the received_fragment_count.

received_sample_count_change

Change in the received_sample_count since the last time the status was read.

received_sample_bytes

The number of bytes received by a DataReader.

Note: When data is fragmented, this statistic is updated upon the receipt of each fragment, not when a sample is reassembled.

received_sample_bytes_change

Change in received_sample_bytes since the last time the status was read.

DDS_LongLong

duplicate_sample_count

The number of DDS samples received from a DataWriter, not for the first time, by this DataReader.

duplicate_sample_count_change

Change in duplicate_sample_count since the last time the status was read.

duplicate_sample_bytes

The number of bytes of DDS samples received from a DataWriter, not for the first time, by this DataReader.

duplicate_sample_bytes_change

Change in the duplicate_sample_bytes since the last time the status was read.

DDS_LongLong

DEPRECATED

filtered_sample_count

The number of DDS samples filtered by this DataReader due to ContentFilteredTopics or Time-Based Filter.

DEPRECATED

filtered_sample_count_change

Change in the filtered_sample_count since the last time the status was read.

DEPRECATED

filtered_sample_bytes

The number of bytes of DDS samples filtered by this DataReader due to ContentFilteredTopics or Time-Based Filter.

DEPRECATED

filtered_sample_bytes_change

Change in the filtered_sample_bytes since the last time the status was read.

DDS_LongLong

received_heartbeat_count

The number of Heartbeats received from a DataWriter by this DataReader.

received_heartbeat_count_change

Change in the received_heartbeat_count since the last time the status was read.

received_heartbeat_bytes

The number of bytes of Heartbeats received from a DataWriter by this DataReader.

received_heartbeat_bytes_change

Change in the received_heartbeat_bytes since the last time the status was read.

DDS_LongLong

sent_ack_count

The number of ACKs sent from this DataReader to a matching DataWriter.

sent_ack_count_change

Change in the sent_ack_count since the last time the status was read.

sent_ack_bytes

The number of bytes of ACKs sent from this DataReader to a matching DataWriter.

sent_ack_bytes_change

Change in the sent_ack_bytes since the last time the status was read.

DDS_LongLong

sent_nack_count

The number of NACKs sent from this DataReader to a matching DataWriter.

sent_nack_count_change

Change in the sent_nack_count since the last time the status was read.

sent_nack_bytes

The number of bytes of NACKs sent from this DataReader to a matching DataWriter.

sent_nack_bytes_change

Change in the sent_nack_bytes since the last time the status was read.

DDS_LongLong

received_gap_count

The number of GAPs received by this DataReader from a DataWriter.

received_gap_count_change

Change in the received_gap_count since the last time the status was read.

received_gap_bytes

The number of bytes of GAPs received by this DataReader from a DataWriter.

received_gap_bytes_change

Change in the received_gap_bytes since the last time the status was read.

DDS_LongLong

rejected_sample_count

The number of times a sample is rejected because it cannot be accepted by a reliable DataReader. Samples rejected by a reliable DataReader will be NACKed, and they will have to be resent by the DataWriter if they are still available in the DataWriter queue.

Note: This count is a subset of the total_count in the 40.7.8 SAMPLE_REJECTED Status. The total_count in the SAMPLE_REJECTED status includes both protocol-related rejections that trigger a repair or resend (the rejected_sample_count described here) and the other rejections described in the 40.7.8 SAMPLE_REJECTED Status. For example, the DDS_REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT in the SAMPLE_REJECTED status is not part of the rejected_sample_count because it does not trigger a repair or resend.

rejected_sample_count_change

Change in the rejected_sample_count since the last time the status was read.

DDS_LongLong

out_of_range_rejected_sample_count

The number of samples dropped by the DataReader due to the receive window being full and the sample being received out of order.

When using a reliable 47.21 RELIABILITY QosPolicy, if the DataReader receives samples out of order, they are stored internally until the missing samples are received. The number of out-of-order samples that the DataReader can keep is set by the receive_window_size in Table 48.2 DDS_RtpsReliableReaderProtocol_t. When the receive window is full, any out-of-order sample received is dropped and included in this count (but not in the SampleRejectedStatus).
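The receive-window behavior can be sketched as follows. This is a simplified hypothetical model (not the Connext API): the real window is a range of sequence numbers, whereas this sketch simply caps the number of buffered out-of-order samples.

```python
# Illustrative model of the reliable receive window: out-of-order samples
# are buffered until the gap is filled; when the window is full, further
# out-of-order samples are dropped (out_of_range_rejected_sample_count).
def receive(state, seq, receive_window_size):
    """state: dict with 'next' (next expected sequence number) and 'held'
    (set of buffered out-of-order sequence numbers).
    Returns 'committed', 'held', or 'dropped'."""
    if seq == state["next"]:
        state["next"] += 1
        # deliver any buffered samples that are now in order
        while state["next"] in state["held"]:
            state["held"].remove(state["next"])
            state["next"] += 1
        return "committed"
    if len(state["held"]) >= receive_window_size:
        return "dropped"        # window full: counted here
    state["held"].add(seq)
    return "held"
```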

DDS_SequenceNumber_t

first_available_sample_sequence_number

Sequence number of the first available DDS sample in a matched DataWriter's reliability queue. Applicable only when retrieving matched DataWriter statuses.

last_available_sample_sequence_number

Sequence number of the last available DDS sample in a matched DataWriter's reliability queue. Applicable only when retrieving matched DataWriter statuses.

last_committed_sample_sequence_number

Sequence number of the last committed DDS sample (i.e. available to be read or taken) in a matched DataWriter's reliability queue. Applicable only when retrieving matched DataWriter statuses.

For best-effort DataReaders, this is the sequence number of the latest DDS sample received.

For reliable DataReaders, this is the sequence number of the latest DDS sample that is available to be read or taken from the DataReader's queue.

DDS_Long

uncommitted_sample_count

Number of received DDS samples that are not yet available to be read or taken due to being received out of order. Applicable only when retrieving matched DataWriter statuses.

DDS_LongLong

received_fragment_count

The number of fragments (DATA_FRAG messages) that have been received by this DataReader. This count is incremented upon the receipt of each DATA_FRAG message. Fragments from duplicate samples do not count towards this number. Applicable only when data is fragmented.

DDS_LongLong

dropped_fragment_count

The number of DATA_FRAG messages that have been dropped by the DataReader. This count does not include malformed fragments. Applicable only when data is fragmented.

DDS_LongLong

reassembled_sample_count

The number of samples that have been reassembled by the DataReader. This statistic is incremented when all of the fragments that are required to reassemble an entire sample have been received. Applicable only when data is fragmented.

DDS_LongLong

sent_nack_fragment_count

The number of NACK FRAG RTPS messages that have been sent from the DataReader to a DataWriter. NACK FRAG RTPS messages are sent when large data is used in conjunction with reliable communication. They have the same properties as NACK messages, but instead of applying to samples, they apply to fragments. Applicable only when data is fragmented.

sent_nack_fragment_bytes

The number of NACK FRAG RTPS message bytes that have been sent from the DataReader to a DataWriter. NACK FRAG RTPS messages are sent when large data is used in conjunction with reliable communication. They have the same properties as NACK messages, but instead of applying to samples, they apply to fragments. Applicable only when data is fragmented.

40.7.4 LIVELINESS_CHANGED Status

This status indicates that the liveliness of one or more matched DataWriters has changed (i.e., one or more DataWriters has become alive or not alive). The mechanics of determining liveliness between a DataWriter and a DataReader are specified in their 47.15 LIVELINESS QosPolicy.

The structure for this status appears in Table 40.5 DDS_LivelinessChangedStatus.

Table 40.5 DDS_LivelinessChangedStatus

Type

Field Name

Description

DDS_Long

alive_count

Number of matched DataWriters that are currently alive.

not_alive_count

Number of matched DataWriters that are not currently alive.

alive_count_change

The change in the alive_count since the last time the Listener was called or the status was read.

not_alive_count_change

The change in the not_alive_count since the last time the Listener was called or the status was read.

Note that a positive not_alive_count_change means one of the following:

  • The DomainParticipant containing the matched DataWriter has lost liveliness or has been deleted.
  • The matched DataWriter has lost liveliness or has been deleted.

DDS_InstanceHandle_t

last_publication_handle

This InstanceHandle can be used to look up which remote DataWriter was the last to cause this DataReader's status to change, using the DataReader's get_matched_publication_data() method.

It's possible that the DataWriter has been purged from the discovery database. If so, get_matched_publication_data() will not be able to return information about the DataWriter. In this case, the only way to get information about the lost DataWriter is if you cached the information previously.

The DataReaderListener’s on_liveliness_changed() callback may be called for the following reasons:

  • The liveliness of any DataWriter matching this DataReader (as defined by the 47.15 LIVELINESS QosPolicy) is lost.
  • A DataWriter's liveliness is recovered after being lost.
  • A new matching DataWriter has been discovered.
  • A QoS Policy has changed such that a DataWriter that matched this DataReader before no longer matches (such as a change to the PartitionQosPolicy). In this case, Connext will no longer keep track of the DataWriter's liveliness. Furthermore:
    • If the DataWriter was alive when it and the DataReader stopped matching: alive_count will decrease (since there’s one less matching alive DataWriter) and not_alive_count will remain the same (since the DataWriter is still alive).
    • If the DataWriter was not alive when it and the DataReader stopped matching: alive_count will remain the same (since the matching DataWriter was not alive) and not_alive_count will decrease (since there’s one less not-alive matching DataWriter).
    • Note: There are several ways that a DataWriter and DataReader can become incompatible after the DataWriter has lost liveliness. For example, when the 47.15 LIVELINESS QosPolicy kind is set to MANUAL_BY_PARTICIPANT_LIVELINESS_QOS, it is possible that the DataWriter has not asserted its liveliness in a timely manner, and then a QoS change occurs on the DataWriter or DataReader that makes the entities incompatible.

  • A QoS Policy (such as the PartitionQosPolicy) has changed such that a DataWriter that was unmatched with the DataReader now matches.

You can also retrieve the value by calling the DataReader’s get_liveliness_changed_status() operation; this will also reset the status so it is no longer considered “changed.”

This status is reciprocal to the 31.6.9 RELIABLE_READER_ACTIVITY_CHANGED Status (DDS Extension) for a DataWriter.
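The count movements for the events listed above can be sketched as follows. This is a hypothetical model (not the Connext API) of how alive_count and not_alive_count change.

```python
# Illustrative model of DDS_LivelinessChangedStatus count movements:
# liveliness transitions move a writer between the two counts, while an
# unmatch only decrements the count matching the writer's current state.
class LivelinessCounts:
    def __init__(self):
        self.alive_count = 0
        self.not_alive_count = 0

    def writer_matched(self):        # new matching DataWriter (alive)
        self.alive_count += 1

    def liveliness_lost(self):
        self.alive_count -= 1
        self.not_alive_count += 1

    def liveliness_recovered(self):
        self.not_alive_count -= 1
        self.alive_count += 1

    def writer_unmatched(self, was_alive):
        # A QoS change stops the match: only the count matching the
        # writer's state at that moment decreases.
        if was_alive:
            self.alive_count -= 1
        else:
            self.not_alive_count -= 1
```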

40.7.5 REQUESTED_DEADLINE_MISSED Status

This status indicates that the DataReader did not receive a new DDS sample for a data-instance within the time period set in the DataReader’s 47.7 DEADLINE QosPolicy. For non-keyed Topics, this simply means that the DataReader did not receive data within the DEADLINE period. For keyed Topics, this means that for one of the data-instances that the DataReader was receiving, it has not received a new DDS sample within the DEADLINE period. For more information about keys and instances, see 8. DDS Samples, Instances, and Keys.

The structure for this status appears in Table 40.6 DDS_RequestedDeadlineMissedStatus.

Table 40.6 DDS_RequestedDeadlineMissedStatus

Type

Field Name

Description

DDS_Long

total_count

Cumulative number of times that the deadline was violated for any instance read by the DataReader.

total_count_change

The change in total_count since the last time the Listener was called or the status was read.

DDS_InstanceHandle_t

last_instance_handle

Handle to the last data-instance in the DataReader for which a requested deadline was missed.

The DataReaderListener’s on_requested_deadline_missed() callback is invoked when this status changes. You can also retrieve the value by calling the DataReader’s get_requested_deadline_missed_status() operation; this will also reset the status so it is no longer considered “changed.”
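The per-instance accounting behind total_count can be sketched as below. This is a simplified hypothetical model (not the Connext API): real deadline detection is timer-driven, and behavior at exact period boundaries may differ.

```python
# Illustrative model of REQUESTED_DEADLINE_MISSED accounting for one
# instance: every full deadline period that elapses without a new sample
# adds one missed deadline to total_count.
def count_missed_deadlines(arrival_times, deadline_period, now):
    """arrival_times: sorted reception times of samples for one instance.
    Returns the number of whole deadline periods with no new sample."""
    missed = 0
    for prev, cur in zip(arrival_times, arrival_times[1:]):
        missed += int((cur - prev) / deadline_period)  # gaps between samples
    missed += int((now - arrival_times[-1]) / deadline_period)  # since last
    return missed
```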

40.7.6 REQUESTED_INCOMPATIBLE_QOS Status

A change to this status indicates that the DataReader discovered a DataWriter for the same Topic, but the DataReader requested QoS settings that are incompatible with the DataWriter’s offered QoS.

The structure for this status appears in Table 40.7 DDS_RequestedIncompatibleQosStatus.

Table 40.7 DDS_RequestedIncompatibleQosStatus

Type

Field Name

Description

DDS_Long

total_count

Cumulative number of times the DataReader discovered a DataWriter for the same Topic with an offered QoS that is incompatible with that requested by the DataReader.

DDS_Long

total_count_change

The change in total_count since the last time the Listener was called or the status was read.

DDS_QosPolicyId_t

last_policy_id

The ID of the QosPolicy that was found to be incompatible the last time an incompatibility was detected. (Note: if there are multiple incompatible policies, only one of them is reported here.)

DDS_QosPolicyCountSeq

policies

A list containing, for each policy, the total number of times that the DataReader discovered a DataWriter for the same Topic with an offered QoS that is incompatible with that requested by the DataReader.

The DataReaderListener’s on_requested_incompatible_qos() callback is invoked when this status changes. You can also retrieve the value by calling the DataReader’s get_requested_incompatible_qos_status() operation; this will also reset the status so it is no longer considered “changed.”

40.7.7 SAMPLE_LOST Status

This status indicates that one or more DDS samples written by a matched DataWriter have failed to be received and will never be received.

Some samples written by a DataWriter to its matching DataReaders may never be received. This can happen because something went wrong while trying to add the sample to the DataReader’s queue, such as a decryption or deserialization error, or because the sample was removed from the DataWriter’s queue before it was received by the DataReaders. A sample can be removed from the DataWriter’s queue before it is delivered to matching DataReaders for a number of reasons. For example, DataWriters are limited in the number of published DDS data samples that they can store, so if a DataWriter continues to publish, new data may overwrite old data that has not yet been received by the DataReader. The overwritten DDS samples can never be resent to the DataReader and are thus considered lost. DataWriters may also set the 47.14 LIFESPAN QoS Policy, and samples that expire due to lifespan may also be reported as lost by a DataReader that has not received them.

The lost status applies to both reliable and best-effort DataReaders; see the 47.21 RELIABILITY QosPolicy. By reporting a sample as lost, the DataReader declares that the sample will never be received and therefore will not NACK it. The sample cannot be repaired by a DataWriter or resent to the DataReader.

Before a sample is received by a DataReader it may also be reported as rejected or dropped. (See 40.7.8 SAMPLE_REJECTED Status and 40.7.2 DATA_READER_CACHE_STATUS.)

The structure for the lost status appears in Table 40.8 DDS_SampleLostStatus.

Table 40.8 DDS_SampleLostStatus

Type

Field Name

Description

DDS_Long

total_count

Cumulative count of all the DDS samples that have been lost, across all instances of data written for the Topic.

total_count_change

The incremental number of DDS samples lost since the last time the Listener was called or the status was read.

DDS_SampleLostStatusKind

last_reason

The reason the last DDS sample was lost. See Table 40.9 DDS_SampleLostStatusKind.

The reason the DDS sample was lost appears in the last_reason field. The possible values are listed in Table 40.9 DDS_SampleLostStatusKind.

Table 40.9 DDS_SampleLostStatusKind

Reason Kind

Description

NOT_LOST

The sample was not lost.

LOST_BY_AVAILABILITY_WAITING_TIME

max_data_availability_waiting_time in the 47.1 AVAILABILITY QosPolicy (DDS Extension) expired.

LOST_BY_DECODE_FAILURE

When using BEST_EFFORT in the 47.21 RELIABILITY QosPolicy, a sample was lost because it could not be decoded.

When using RELIABLE in the RELIABILITY QosPolicy, the sample is rejected, not lost, with the reason REJECTED_BY_DECODE_FAILURE.

LOST_BY_DESERIALIZATION_FAILURE

A sample was lost because it could not be deserialized. A sample may fail to be deserialized for the following reasons:

  • The subscribing application has received a sample with a sequence or string member that is longer than the maximum allowed by the DataReader's data type.
  • The subscribing application has received a sample with an unknown enum value. See the description of the dds.sample_assignability.accept_unknown_enum_value property in the Property Reference Guide for more information.
  • The subscribing application has received a sample with an unknown union discriminator value. See the description of the dds.sample_assignability.accept_unknown_union_discriminator property in the Property Reference Guide for more information.
  • The subscribing application has received a sample with an out-of-range value for one of the members that has been configured with a minimum or maximum value using the min, max, or range type annotations.
  • Sample corruption has occurred. If this is the case, then using RTI Security Plugins or enabling CRC (see the compute_crc and check_crc fields in the 44.9 WIRE_PROTOCOL QosPolicy (DDS Extension)) can help avoid these failures.

LOST_BY_INCOMPLETE_COHERENT_SET

A sample was lost because it is part of an incomplete coherent set. An incomplete coherent set is a coherent set for which some of the samples are missing.

For example, consider a DataWriter using KEEP_LAST in the 47.12 HISTORY QosPolicy with a depth of 1. The DataWriter publishes two samples of the same instance as part of a coherent set “CS1”; the first sample of “CS1” is replaced by a new sample before it can be successfully delivered to the DataReader. In this case, the coherent set containing the two samples is considered incomplete. The new sample, by default, will not be provided to the application, and will be reported as LOST_BY_INCOMPLETE_COHERENT_SET. (You can change this default behavior by setting drop_incomplete_coherent_set to FALSE in the 46.6 PRESENTATION QosPolicy. If you do, the new sample will be provided to the application, but it will be marked as part of an incomplete coherent set in the 41.6 The SampleInfo Structure.)

LOST_BY_INSTANCES_LIMIT

max_instances in the 47.22 RESOURCE_LIMITS QosPolicy was reached.

LOST_BY_LARGE_COHERENT_SET

A sample was lost because it was part of a large coherent set. A large coherent set is a coherent set that cannot fit all at once into the DataReader queue because resource limits are exceeded.

For example, if max_samples_per_instance on the DataReader is 10 and the coherent set has 15 samples for a given instance, the coherent set is a large coherent set that will be considered incomplete.

The resource limits that can lead to large coherent sets are: max_samples, max_samples_per_instance, max_instances, and max_samples_per_remote_writer.

LOST_BY_OUT_OF_MEMORY

A sample was lost because there was not enough memory to store the sample.

LOST_BY_REMOTE_WRITER_SAMPLES_PER_VIRTUAL_QUEUE_LIMIT

A resource limit on the number of samples published by a DataWriter on behalf of a virtual DataWriter that a DataReader may store was reached. (This field is currently not used.)

LOST_BY_REMOTE_WRITERS_PER_INSTANCE_LIMIT

max_remote_writers_per_instance in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) was reached. (This limit is the number of DataWriters for a single instance from which a DataReader may read.)

LOST_BY_REMOTE_WRITERS_PER_SAMPLE_LIMIT

max_remote_writers_per_sample in the 47.22 RESOURCE_LIMITS QosPolicy was reached. (This limit is the number of DataWriters that are allowed to write the same sample.)

LOST_BY_SAMPLES_LIMIT

When using BEST_EFFORT in the 47.21 RELIABILITY QosPolicy, max_samples in the 47.22 RESOURCE_LIMITS QosPolicy was reached.

When using RELIABLE in the RELIABILITY QosPolicy, reaching max_samples triggers a rejection, not a loss, with the reason REJECTED_BY_SAMPLES_LIMIT.

LOST_BY_SAMPLES_PER_INSTANCE_LIMIT

When using BEST_EFFORT in the 47.21 RELIABILITY QosPolicy, max_samples_per_instance in the 47.22 RESOURCE_LIMITS QosPolicy was reached.

When using RELIABLE in the RELIABILITY QosPolicy, reaching max_samples_per_instance triggers a rejection, not a loss, with the reason REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT.

LOST_BY_SAMPLES_PER_REMOTE_WRITER_LIMIT

When using BEST_EFFORT in the 47.21 RELIABILITY QosPolicy, max_samples_per_remote_writer in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) was reached. (This limit is the number of samples from a given DataWriter that a DataReader may store.)

When using RELIABLE in the RELIABILITY QosPolicy, reaching max_samples_per_remote_writer triggers a rejection, not a loss, with the reason REJECTED_BY_SAMPLES_PER_REMOTE_WRITER_LIMIT.

LOST_BY_UNKNOWN_INSTANCE

A sample was lost because it did not contain enough information for the DataReader to determine which instance it was associated with.

LOST_BY_VIRTUAL_WRITERS_LIMIT

max_remote_virtual_writers in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) was reached. (This limit is the number of virtual DataWriters from which a DataReader may read.)

LOST_BY_WRITER

A DataWriter removed the DDS sample before it was received by the DataReader.

The DataReader detects that a sample is lost as follows:

  • For Best Effort 47.21 RELIABILITY QosPolicy: once a sample with a higher sequence number is received.
  • For Reliable RELIABILITY QosPolicy: once a heartbeat message is received that announces that a sample that the DataReader was waiting for is no longer available in the DataWriter’s queue (i.e., the first sequence number in the heartbeat is higher than the missing sample’s sequence number). Samples that are gapped through GAP messages are not considered lost.
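These two detection rules can be illustrated with a small toy model (plain Python bookkeeping, not Connext code; the class and method names are hypothetical). It tracks the next expected sequence number from a single DataWriter and applies the best-effort and reliable rules above:

```python
class LossDetector:
    """Toy model of per-writer sample-loss detection."""

    def __init__(self):
        self.expected = 1   # next sequence number we are waiting for
        self.lost = []      # sequence numbers reported as lost

    def on_best_effort_sample(self, sn):
        # Best effort: receiving a higher sequence number means the
        # gap before it can never be filled, so it is reported as lost.
        if sn > self.expected:
            self.lost.extend(range(self.expected, sn))
        self.expected = max(self.expected, sn + 1)

    def on_heartbeat(self, first_sn):
        # Reliable: a heartbeat whose first available sequence number is
        # past a sample we are still waiting for means that sample is no
        # longer in the DataWriter's queue and is reported as lost.
        # (Samples covered by GAP messages are handled separately and
        # are not reported as lost.)
        if first_sn > self.expected:
            self.lost.extend(range(self.expected, first_sn))
            self.expected = first_sn

d = LossDetector()
d.on_best_effort_sample(1)
d.on_best_effort_sample(3)   # sample 2 can never arrive
print(d.lost)                # [2]
```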

Samples may be lost for any of the following reasons:

  • The lifespan of a sample expired before it was received by a DataReader; see 47.14 LIFESPAN QoS Policy.
  • For Best Effort RELIABILITY QosPolicy: a sample was lost on the network or arrived out of order at the DataReader. (For example, the DataReader received sample 2 but not sample 1; the DataReader considers sample 1 LOST_BY_WRITER.)
  • For Reliable RELIABILITY QosPolicy:
    • When using KEEP_LAST 47.12 HISTORY QosPolicy, unacknowledged samples can be overwritten if the history depth limit is reached for an instance.
      Important: Depending on timing, a sample replaced due to KEEP_LAST replacement may be gapped by a GAP message, in which case it is not reported as lost by the DataReader. At other times, a heartbeat message will announce that the sample is no longer available (as described above), and the sample will be reported as lost.
    • For KEEP_ALL HISTORY QosPolicy, the DataWriter can overwrite a sample in its queue after the DataReader has been marked as 'inactive'. Once a DataReader is marked as 'inactive', samples are no longer considered unacknowledged by that DataReader until it becomes active again. This means that if resource limits are hit and space is needed for a new sample, an old sample may be replaced to make room, even if the inactive DataReader never received it. A DataReader is considered inactive either because it is not making progress (see inactivate_nonprogressing_readers) or because max_heartbeat_retries was exceeded.

See 32.4.2 Tuning Queue Sizes and Other Resource Limits for more information on changing sample loss or queue configuration.

The DataReaderListener’s on_sample_lost() callback is invoked when this status changes. You can also retrieve the value by calling the DataReader’s get_sample_lost_status() operation; this will also reset the status so it is no longer considered “changed.”
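The read-and-reset behavior can be sketched with a toy status object (plain Python, not the Connext API; a simplified model of the semantics described above): total_count accumulates forever, while total_count_change is the delta since the status was last read and is cleared by reading it.

```python
class SampleLostStatus:
    """Toy model of a plain communication status with read-and-reset."""

    def __init__(self):
        self.total_count = 0
        self.total_count_change = 0

    def on_sample_lost(self, n=1):
        self.total_count += n
        self.total_count_change += n

    def get_sample_lost_status(self):
        snapshot = (self.total_count, self.total_count_change)
        self.total_count_change = 0   # reading resets the "changed" part
        return snapshot

s = SampleLostStatus()
s.on_sample_lost()
s.on_sample_lost()
print(s.get_sample_lost_status())   # (2, 2)
print(s.get_sample_lost_status())   # (2, 0): total persists, change resets
```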

40.7.8 SAMPLE_REJECTED Status

This status indicates that one or more DDS samples received from a matched DataWriter have been rejected by the DataReader because a resource limit would have been exceeded: for example, if the receive queue is full because the number of DDS samples in the queue is equal to the max_samples parameter of the 47.22 RESOURCE_LIMITS QosPolicy. These rejected samples could be accepted later once the conditions for acceptance are met (e.g., once the number of samples in the queue becomes less than max_samples). A sample that is rejected can be resent any number of times until it is eventually reported as lost, dropped, or accepted.

Samples can be rejected only with reliable communication; see 47.21 RELIABILITY QosPolicy. In best-effort communication, samples cannot be rejected: there is no retransmission, so a sample that is not accepted can never be received again.

The structure for the rejected status appears in Table 40.10 DDS_SampleRejectedStatus. The reason the DDS sample was rejected appears in the last_reason field. The possible values are listed in Table 40.11 DDS_SampleRejectedStatusKind.

Table 40.10 DDS_SampleRejectedStatus

| Type | Field Name | Description |
| --- | --- | --- |
| DDS_Long | total_count | Cumulative count of all the DDS samples that have been rejected by the DataReader. |
| DDS_Long | total_count_change | The incremental number of DDS samples rejected since the last time the Listener was called or the status was read. |
| DDS_Long | current_count | The current number of rejected DDS samples. |
| DDS_Long | current_count_change | The change in current_count since the last time the Listener was called or the status was read. |
| DDS_SampleRejectedStatusKind | last_reason | Reason for rejecting the last DDS sample. See Table 40.11 DDS_SampleRejectedStatusKind. |
| DDS_InstanceHandle_t | last_instance_handle | Handle to the data-instance for which the last DDS sample was rejected. |

Table 40.11 DDS_SampleRejectedStatusKind

| Reason Kind | Description |
| --- | --- |
| DDS_NOT_REJECTED | DDS sample was accepted. |
| DDS_REJECTED_BY_DECODE_FAILURE | When using RELIABLE in the 47.21 RELIABILITY QosPolicy, a sample was rejected because it could not be decoded. When using BEST_EFFORT in the RELIABILITY QosPolicy, the sample is lost, not rejected, with the reason LOST_BY_DECODE_FAILURE. |
| DDS_REJECTED_BY_INSTANCES_LIMIT | This field is not currently used. |
| DDS_REJECTED_BY_SAMPLES_LIMIT | When using RELIABLE in the 47.21 RELIABILITY QosPolicy, max_samples in the 47.22 RESOURCE_LIMITS QosPolicy was reached. When using BEST_EFFORT in the RELIABILITY QosPolicy, reaching max_samples triggers a loss, not a rejection, with the reason LOST_BY_SAMPLES_LIMIT. |
| DDS_REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT | When using RELIABLE in the 47.21 RELIABILITY QosPolicy, max_samples_per_instance in the 47.22 RESOURCE_LIMITS QosPolicy was reached. When using BEST_EFFORT in the RELIABILITY QosPolicy, reaching max_samples_per_instance triggers a loss, not a rejection, with the reason LOST_BY_SAMPLES_PER_INSTANCE_LIMIT. |
| DDS_REJECTED_BY_SAMPLES_PER_REMOTE_WRITER_LIMIT | When using RELIABLE in the 47.21 RELIABILITY QosPolicy, max_samples_per_remote_writer in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension) was reached. (This limit is the number of samples that a DataReader may store from a specific DataWriter.) When using BEST_EFFORT in the RELIABILITY QosPolicy, reaching max_samples_per_remote_writer triggers a loss, not a rejection, with the reason LOST_BY_SAMPLES_PER_REMOTE_WRITER_LIMIT. |
| DDS_REJECTED_BY_REMOTE_WRITER_SAMPLES_PER_VIRTUAL_QUEUE_LIMIT | This field is currently not used. |

The DataReaderListener’s on_sample_rejected() callback is invoked when this status changes. You can also retrieve the value by calling the DataReader’s get_sample_rejected_status() operation; this will also reset the status so it is no longer considered “changed.”
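The accept-later behavior described above can be sketched as a toy model (plain Python, not Connext code; MAX_SAMPLES, queue, and try_receive are hypothetical stand-ins for the reader queue and the max_samples limit): a reliable reader rejects a sample while its queue is full, but the same sample can be accepted on a later retransmission once the application takes data out of the queue.

```python
MAX_SAMPLES = 2   # stand-in for RESOURCE_LIMITS max_samples
queue = []        # stand-in for the DataReader receive queue
rejected = []     # samples reported via SAMPLE_REJECTED

def try_receive(sample):
    """Accept a sample if there is room; otherwise reject it.

    A rejected sample is not lost: the reliable protocol lets the
    DataWriter resend it until it is accepted, lost, or dropped.
    """
    if len(queue) >= MAX_SAMPLES:
        rejected.append(sample)   # REJECTED_BY_SAMPLES_LIMIT
        return False
    queue.append(sample)
    return True

try_receive("s1")
try_receive("s2")
assert not try_receive("s3")   # queue full: rejected, not lost
queue.pop(0)                   # application take()s a sample
assert try_receive("s3")       # retransmission is now accepted
```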

40.7.9 SUBSCRIPTION_MATCHED Status

A change to this status indicates that the DataReader discovered a matching DataWriter. A ‘match’ occurs only if the DataReader and DataWriter have the same Topic, same or compatible data type, and compatible QosPolicies. (For more information on compatible data types, see the RTI Connext Core Libraries Extensible Types Guide.) In addition, if user code has directed Connext to ignore certain DataWriters, then those DataWriters will never be matched. See 27.2 Ignoring Publications and Subscriptions for more on setting up a DomainParticipant to ignore specific DataWriters.

This status is also changed (and the listener, if any, called) when a match is ended. A DataReader will become unmatched from a DataWriter when that DataWriter goes away for any of the following reasons:

  • The DomainParticipant containing the matched DataWriter has lost liveliness.
  • The DataReader or the matched DataWriter has changed QoS such that the entities are now incompatible.
  • The matched DataWriter has been deleted.

This status may reflect changes from multiple match or unmatch events, and the current_count_change can be used to determine the number of changes since the listener was called back or the status was checked.

The structure for this status appears in Table 40.12 DDS_SubscriptionMatchedStatus.

Table 40.12 DDS_SubscriptionMatchedStatus

| Type | Field Name | Description |
| --- | --- | --- |
| DDS_Long | total_count | Cumulative number of times the DataReader discovered a "match" with a DataWriter. This number increases whenever a new match is discovered. It does not decrease when an existing match goes away for any of the reasons listed above. |
| DDS_Long | total_count_change | The change in total_count since the last time the listener was called or the status was read. Note that this number will never be negative (because it is the total number of times the DataReader ever matched with a DataWriter). |
| DDS_Long | current_count | The number of DataWriters currently matched to the concerned DataReader. This number increases when a new match is discovered and decreases when an existing match goes away for any of the reasons listed above. |
| DDS_Long | current_count_change | The change in current_count since the last time the listener was called or the status was read. Note that a negative current_count_change means that one or more DataWriters have become unmatched for one or more of the reasons listed above. |
| DDS_Long | current_count_peak | Greatest number of DataWriters that matched this DataReader simultaneously. That is, there was no moment in time when more than this many DataWriters matched this DataReader. (As a result, total_count can be higher than current_count_peak.) |
| DDS_InstanceHandle_t | last_publication_handle | This InstanceHandle can be used to look up which remote DataWriter was the last to cause this DataReader's status to change, using the DataReader's get_matched_publication_data() method. If the DataWriter no longer matches this DataReader due to any of the reasons listed above except incompatible QoS, then the DataWriter has been purged from this DataReader's DomainParticipant discovery database. (See 22. Discovery Overview.) In that case, get_matched_publication_data() will not be able to return information about the DataWriter; the only way to get information about the lost DataWriter is if you cached it previously. |

The DataReaderListener’s on_subscription_matched() callback is invoked when this status changes. You can also retrieve the value by calling the DataReader’s get_subscription_matched_status() operation; this will also reset the status so it is no longer considered “changed.”
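As a rough illustration of how three of these counters relate (toy Python bookkeeping, not the Connext API; the read-and-reset *_change fields are omitted for brevity): total_count only grows, current_count tracks live matches, and current_count_peak records the historical maximum of current_count.

```python
class SubscriptionMatchedStatus:
    """Toy model of the match counters in Table 40.12."""

    def __init__(self):
        self.total_count = 0
        self.current_count = 0
        self.current_count_peak = 0

    def on_match(self):
        self.total_count += 1
        self.current_count += 1
        self.current_count_peak = max(self.current_count_peak,
                                      self.current_count)

    def on_unmatch(self):
        # A match went away (writer deleted, liveliness lost, or QoS
        # changed to be incompatible). total_count never decreases.
        self.current_count -= 1

st = SubscriptionMatchedStatus()
st.on_match()
st.on_match()
st.on_unmatch()
st.on_match()
# total_count (3) exceeds current_count_peak (2): two writers matched
# at once at most, even though three matches occurred over time.
print(st.total_count, st.current_count, st.current_count_peak)   # 3 2 2
```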