Chapter 37 Sending Large Data
This section describes the capabilities that Connext offers for sending and receiving large data samples. In this section, "large data" refers to samples that, once serialized by the middleware, are large, typically in the range of megabytes (MB), such as video frame samples.
Note: The definition of "large data" in this chapter contrasts with the one in Chapter 23 Sample and Instance Memory Management, where "large data" refers to types whose samples have a large potential (maximum) serialized size, regardless of the actual serialized size of the samples sent over the network. For example, if your data type includes a variable-length sequence of integers with a maximum of 1000 elements, the maximum serialized size would be 1000 times the size of an integer. However, if a specific sample contains only 10 elements, its actual serialized size would be only 10 times the size of an integer. In this section, "large data" refers to samples with a large actual serialized size.
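To make the arithmetic concrete, the following minimal sketch computes both sizes for a hypothetical bounded sequence of 32-bit integers. The 4-byte element size and the 4-byte sequence-length prefix correspond to plain CDR encoding; real serialized sizes also include encapsulation headers, padding, and any other members of the type.

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    // Hypothetical type: a variable-length sequence of int32 bounded to 1000 elements.
    constexpr std::size_t element_size = sizeof(std::int32_t);  // 4 bytes per element
    constexpr std::size_t length_prefix = 4;                    // CDR sequence length field

    // Maximum serialized size, driven by the type's bound: the notion of
    // "large data" used in Chapter 23.
    constexpr std::size_t max_size = length_prefix + 1000 * element_size;   // 4004 bytes

    // Actual serialized size of a sample carrying only 10 elements: the
    // notion of "large data" used in this chapter.
    constexpr std::size_t actual_size = length_prefix + 10 * element_size;  // 44 bytes

    std::cout << "maximum serialized size: " << max_size << " bytes\n"
              << "actual serialized size:  " << actual_size << " bytes\n";
}
```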
Connext offers the following solutions to optimize the sending and receiving of large data:
- Reducing latency by using either or both of the following to reduce the number of copies produced by the middleware; see 37.1 Reducing Latency:
    - RTI FlatData™ language binding; see 37.1.4 FlatData Language Binding
    - Zero Copy transfer over shared memory; see 37.1.5 Zero Copy Transfer Over Shared Memory
- Reducing bandwidth usage by compressing samples with a set of standard compression algorithms; see 59.3 DATA_REPRESENTATION QosPolicy
- Preventing the losses that can occur when bursts of large data samples saturate the network, by using a flow controller to pace the writes, as sketched below; see 37.3 FlowControllers (DDS Extension).
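As an illustration of the last point, the sketch below switches a DataWriter to asynchronous publishing tied to a flow controller, so bursts of multi-megabyte samples are queued and drained at a controlled rate instead of being pushed onto the network all at once. This is a minimal sketch using the Connext Modern C++ API and the built-in Bytes type (so no IDL-generated code is needed); the umbrella headers and the built-in flow controller name are assumptions based on typical Connext usage and should be verified against your version's API reference and 37.3 FlowControllers (DDS Extension).

```cpp
#include <dds/dds.hpp>   // Modern C++ API umbrella header (assumed)
#include <rti/rti.hpp>   // RTI extensions, e.g. rti::core::policy::PublishMode (assumed)

int main()
{
    dds::domain::DomainParticipant participant(0);

    // Built-in opaque-bytes type, convenient for large payloads in a sketch.
    dds::topic::Topic<dds::core::BytesTopicType> topic(participant, "LargeFrames");

    dds::pub::qos::DataWriterQos writer_qos =
            dds::core::QosProvider::Default().datawriter_qos();

    // Large samples must be published asynchronously for a flow controller
    // to pace them. The built-in fixed-rate controller name used here is an
    // assumption; a custom token-bucket controller can be configured instead.
    writer_qos << rti::core::policy::PublishMode::Asynchronous(
            "DDS_FIXED_RATE_FLOW_CONTROLLER_NAME");

    dds::pub::Publisher publisher(participant);
    dds::pub::DataWriter<dds::core::BytesTopicType> writer(publisher, topic, writer_qos);

    // Subsequent writer.write() calls with multi-megabyte samples are now
    // fragmented and sent at the rate dictated by the flow controller,
    // rather than as a single burst that could saturate the network.
    return 0;
}
```

The same configuration is commonly expressed in an XML QoS profile instead (asynchronous publish mode plus a flow controller name on the DataWriter); 37.3 FlowControllers (DDS Extension) describes flow controller configuration and tuning in detail.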