
Token Bucket Properties

FlowControllers use a token-bucket approach for open-loop network flow control. The flow control characteristics are determined by the token bucket properties. The properties are listed in Table 73; see the API Reference HTML documentation for their defaults and valid ranges.

Table 73 DDS_FlowControllerTokenBucketProperty_t

Type            Field Name                Description
DDS_Long        max_tokens                Maximum number of tokens that can accumulate in the token bucket. See max_tokens.
DDS_Long        tokens_added_per_period   The number of tokens added to the token bucket per specified period. See tokens_added_per_period.
DDS_Long        tokens_leaked_per_period  The number of tokens removed from the token bucket per specified period. See tokens_leaked_per_period.
DDS_Duration_t  period                    Period for adding tokens to and removing tokens from the bucket. See period.
DDS_Long        bytes_per_token           Maximum number of bytes allowed to send for each available token. See bytes_per_token.

Asynchronously published DDS samples are queued up and transmitted based on the token bucket flow control scheme. The token bucket contains tokens, each of which represents a number of bytes. DDS samples can be sent only when there are sufficient tokens in the bucket. As DDS samples are sent, tokens are consumed. The number of tokens consumed is proportional to the size of the data being sent. Tokens are replenished on a periodic basis.

The rate at which tokens become available and other token bucket properties determine the network traffic flow.

Note that if the same DDS sample must be sent to multiple destinations, separate tokens are required for each destination. Only when multiple DDS samples are destined to the same destination will they be coalesced and sent using the same token(s). In other words, each token can only contribute to a single network packet.
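The scheme described above can be sketched as a small simulation. This is an illustrative model with hypothetical names and values, not the RTI Connext API: samples consume tokens in proportion to their size, and a sample is sent only when enough tokens are available.

```python
# Illustrative model of the token-bucket flow control scheme (NOT the RTI
# Connext API -- all names and values here are hypothetical).

class TokenBucket:
    def __init__(self, max_tokens, tokens_added_per_period, bytes_per_token):
        self.max_tokens = max_tokens
        self.tokens_added_per_period = tokens_added_per_period
        self.bytes_per_token = bytes_per_token
        self.tokens = 0

    def replenish(self):
        """Called once per period: add tokens, capped at max_tokens."""
        self.tokens = min(self.max_tokens,
                          self.tokens + self.tokens_added_per_period)

    def try_send(self, sample_bytes):
        """Consume tokens proportional to sample size; return True if sent."""
        needed = -(-sample_bytes // self.bytes_per_token)  # ceiling division
        if needed > self.tokens:
            return False  # not enough tokens; the sample stays queued
        self.tokens -= needed
        return True

bucket = TokenBucket(max_tokens=10, tokens_added_per_period=4,
                     bytes_per_token=1024)
bucket.replenish()            # 4 tokens available
print(bucket.try_send(3000))  # needs 3 tokens -> True
print(bucket.try_send(3000))  # needs 3 tokens, only 1 left -> False
```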

max_tokens

The maximum number of tokens in the bucket will never exceed this value. Any excess tokens are discarded. This property value, combined with bytes_per_token, determines the maximum allowable data burst.

Use DDS_LENGTH_UNLIMITED to allow accumulation of an unlimited number of tokens (and therefore a potentially unlimited burst size).
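For instance, the burst bound implied by these two properties can be computed directly (hypothetical values):

```python
# The largest data burst a full bucket can emit is bounded by
# max_tokens * bytes_per_token (hypothetical values, for illustration).
max_tokens = 100
bytes_per_token = 1024

max_burst_bytes = max_tokens * bytes_per_token
print(max_burst_bytes)  # 102400 bytes may be sent back-to-back from a full bucket
```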

tokens_added_per_period

A FlowController transmits data only when tokens are available. Tokens are periodically replenished. This field determines the number of tokens added to the token bucket with each periodic replenishment.

Available tokens are distributed to associated DataWriters based on the scheduling_policy. Use DDS_LENGTH_UNLIMITED to add the maximum number of tokens allowed by max_tokens.
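Together with bytes_per_token and period, this property bounds the sustained send rate; a quick worked example with hypothetical values:

```python
# Sustained throughput implied by the replenishment settings: at most
# tokens_added_per_period * bytes_per_token bytes can be sent per period,
# no matter how full the bucket gets (hypothetical values).
tokens_added_per_period = 40
bytes_per_token = 1024
period_seconds = 0.1  # a DDS_Duration_t of 100 ms, expressed in seconds

throughput_bps = tokens_added_per_period * bytes_per_token / period_seconds
print(throughput_bps)  # 409600.0 bytes/second sustained
```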

tokens_leaked_per_period

When tokens are replenished and there are sufficient tokens to send all DDS samples in the queue, this property determines whether any or all of the leftover tokens remain in the bucket.

Use DDS_LENGTH_UNLIMITED to remove all excess tokens from the token bucket once all DDS samples have been sent. In other words, no token accumulation is allowed. When new DDS samples are written after tokens were purged, the earliest point in time at which they can be sent is at the next periodic replenishment.
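The accumulate-versus-purge behavior can be sketched as a per-period update. This is an illustrative model only, not the RTI API; the ordering of add and leak within a period is an assumption here, and the values are hypothetical.

```python
# Illustrative per-period token update (NOT the RTI API). Assumes this
# period's tokens are added first, then leaked tokens are removed.
def replenish(tokens, added, leaked, max_tokens):
    tokens = min(max_tokens, tokens + added)  # add tokens, capped at max_tokens
    return max(0, tokens - leaked)            # then remove leaked tokens

# leaked=0: leftover tokens accumulate toward max_tokens
print(replenish(tokens=3, added=4, leaked=0, max_tokens=10))      # 7

# a very large leak (akin to DDS_LENGTH_UNLIMITED) purges all excess each period
print(replenish(tokens=3, added=4, leaked=10**9, max_tokens=10))  # 0
```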

period

This field determines the period at which tokens are added to or removed from the token bucket.

The special value DDS_DURATION_INFINITE can be used to create an on-demand FlowController, for which tokens are no longer replenished periodically. Instead, tokens must be added explicitly by calling the FlowController’s trigger_flow() operation. This external trigger adds tokens_added_per_period tokens each time it is called (subject to the other property settings).

Once period is set to DDS_DURATION_INFINITE, it can no longer be reverted to a finite period.
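An on-demand FlowController can be sketched as follows. Again, this is a hypothetical model of the behavior described above, not the RTI API: tokens arrive only when the trigger operation is invoked explicitly.

```python
# Illustrative on-demand bucket (NOT the RTI API): with an infinite period,
# tokens are added only by explicit triggers. Names and values are hypothetical.
class OnDemandBucket:
    def __init__(self, tokens_added_per_period, max_tokens):
        self.tokens = 0
        self.tokens_added_per_period = tokens_added_per_period
        self.max_tokens = max_tokens

    def trigger_flow(self):
        """Each explicit trigger adds tokens_added_per_period tokens,
        still capped at max_tokens."""
        self.tokens = min(self.max_tokens,
                          self.tokens + self.tokens_added_per_period)

bucket = OnDemandBucket(tokens_added_per_period=5, max_tokens=8)
bucket.trigger_flow()
bucket.trigger_flow()  # second trigger is capped by max_tokens
print(bucket.tokens)   # 8
```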

bytes_per_token

This field determines the number of bytes that can actually be transmitted based on the number of tokens.

Tokens are always consumed in whole by each DataWriter. That is, in cases where bytes_per_token is greater than the DDS sample size, multiple DDS samples may be sent to the same destination using a single token (regardless of the scheduling_policy).

Where fragmentation is required, the fragment size is the lesser of (a) bytes_per_token and (b) the smallest of the largest message sizes across all transports installed with the DataWriter.
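This fragment-size rule can be computed directly (hypothetical values):

```python
# Worked example of the fragment-size rule (hypothetical values): the fragment
# size is the lesser of bytes_per_token and the smallest largest-message-size
# among the DataWriter's installed transports.
bytes_per_token = 8192
transport_max_message_sizes = [65536, 1400]  # e.g., a large and a small transport limit

fragment_size = min(bytes_per_token, min(transport_max_message_sizes))
print(fragment_size)  # 1400
```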

Use DDS_LENGTH_UNLIMITED to indicate that an unlimited number of bytes can be transmitted per token. In other words, a single token allows the recipient DataWriter to transmit all its queued DDS samples to a single destination. A separate token is required to send to each additional destination.

© 2015 RTI