How can a throughput test with a large data length reach 900 Mbps on a 1 Gbps network?


I used perftest_cpp to test throughput with a 4194304-byte data length on a 1 Gbps network, but I only get around 300 Mbps. How can I test throughput for large data lengths and reach roughly 900 Mbps?

Publisher:

./perftest_cpp -pub -dataLen 4194304

Subscriber:

./perftest_cpp -sub -dataLen 4194304

 

Interval Throughput for 4194304 Bytes:
Length (Bytes), Total Samples, Samples/s, Avg Samples/s, Mbps, Avg Mbps, Lost Samples, Lost Samples (%)
Waiting for data ...
4194304, 31, 0, 0, 0.0, 0.0, 0, 0.00
4194304, 31, 0, 0, 0.0, 0.0, 0, 0.00
4194304, 31, 0, 0, 0.0, 0.0, 0, 0.00
4194304, 34, 2, 0, 100.6, 3.5, 0, 0.00
4194304, 44, 9, 0, 335.5, 14.5, 0, 0.00
4194304, 51, 6, 1, 234.8, 21.6, 0, 0.00
4194304, 63, 11, 1, 402.5, 33.5, 0, 0.00

Reply from Howard:

So the problem that you're seeing is that when you send large data (data size > transport MTU, approx 64 KB by default for UDP), DDS fragments the sample into packets that fit on the transport and sends all of those fragments as fast as the CPU can do the work. A 4194304-byte sample, for example, becomes roughly 64 fragments of about 64 KB each, and each of those is further split into ~1500-byte Ethernet frames by the IP stack.

Unfortunately, that's usually much faster than the rest of the end-to-end system (network, switches, receive host socket, receiving application) can keep up with.  Ultimately, a buffer somewhere fills up and additional packets are dropped.   Then, assuming the connection is configured to be RELIABLE, DDS has to repair the lost fragments before the entire data sample can be reassembled and received by the application.

Thus, due to the additional overhead and latency of detecting and sending repair packets, the effective throughput is greatly reduced.

The solution is to configure Connext to send data at a rate slower than the rate at which buffers overflow and packets are lost.

This can be done by using "Flow Controllers".  See this documentation:

https://community.rti.com/static/documentation/connext-dds/7.1.0/doc/manuals/connext_dds_professional/users_manual/users_manual/FlowControllers__DDS_Extension_.htm?Highlight=flow%20controller

 

You can configure RTI Perftest to use a flow controller with the "-flowController <flow>" command-line argument.  That's documented here:

https://community.rti.com/static/documentation/perftest/current/command_line_parameters.html#section-pubsub-command-line-parameters
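For example, a minimal sketch of the commands (the "1Gbps" flow-controller name and the exact spelling of the option are assumptions based on the linked documentation, so verify them against your Perftest version):

# Publisher: pace the 4 MB samples with a built-in 1Gbps flow controller
./perftest_cpp -pub -dataLen 4194304 -flowController 1Gbps

# Subscriber: unchanged
./perftest_cpp -sub -dataLen 4194304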

 

NOTE: if you are trying to measure latency with RTI Perftest, you need to set the flow controller for both the publishing and subscribing Perftest instances, since latency is measured by sending the data in both directions.  If you are only trying to measure throughput, you can effectively disable the subscribing side from sending packets back to the publishing side by using the command-line argument "-latencyCount <count>" on the publishing side, where <count> is a very large number (e.g., 100000000).   See this:

https://community.rti.com/static/documentation/perftest/current/command_line_parameters.html#test-parameters-only-for-publishing-applications
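Putting both together for a throughput-only run, the publishing side might look like this sketch (the values are illustrative only; check the option names against your Perftest version's documentation):

./perftest_cpp -pub -dataLen 4194304 -flowController 1Gbps -latencyCount 100000000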