How can I increase performance of Database Integration Service?


Hi,

I set up the RTI DB Integration Service to store samples of DDS and I used MySQL for the RDBMS.
I wanted to know how many samples are stored in a second, so I tested.

And the result was disappointing: fewer than 100 samples per second were stored.
(Of course, I published more than 100 samples per second from a DataWriter on localhost.)

How can I increase the storing performance of the RTI DB Integration Service?
I tested both shmem (-dbTransport 1) and udpv4 (-dbTransport 2),
but I didn't change any of the detailed configuration (such as the shared-memory buffer size) because I don't know which settings affect performance.
The only thing I know about improving database performance is using a thread pool, but I can't use one with DB Integration Service.

Here is the specification of the server I tested on:

CPU: Intel Xeon E5-2609 (2.40GHz @ 4 cores)
RAM: 64GB
OS: CentOS 6.8 x64 (kernel 2.6.x, gcc 4.4.5)
DB: MySQL 5.7
DDS: RTI Connext DDS 5.3.0 / DB Integration Service 5.3.0

 

Best wishes,

fercs77:

We are able to get better performance. These are results with MySQL:
 
DDS to MySQL (dds2rtc)

Payload size (bytes) | Instances (rows) | SpinLoop (*) | Elapsed Time (ms) | Messages (Updates/Second)
                  16 |            10000 |            0 |           68817.8 |                     14386
                  64 |            10000 |            0 |           70039.5 |                     14135
                 256 |            10000 |            0 |             74048 |                     13370
                1024 |            10000 |            0 |           78037.2 |                     12686
                4096 |            10000 |            0 |           98606.9 |                     10040
               16384 |            10000 |            0 |          169338.4 |                      5846
               32768 |            10000 |            0 |          322934.6 |                      3066
 
The results were generated with older RTI Connext DDS (5.2.3) and MySQL (5.1) versions, but I do not expect a significant difference with the versions you are using.
 
Before providing some suggestions that can help your performance, I have a few questions:
1) Are you using keyed or unkeyed types?
2) If you are using keyed types, how many keys do you try to store?
3) What is the size of the data that you try to store?
4) How many samples per second are you publishing with the DDS Publisher? What is your target throughput?
5) What are the DDS QoS configurations for your Publisher and DIS (Database Integration Service)?
 
If you are using keyed topics, one setting that can help is configuring the cache_maximum_size and cache_initial_size of your DIS subscription. This can be done either by changing the cache_maximum_size and cache_initial_size columns in the RTIDDS_SUBSCRIPTION table or via XML. Set cache_maximum_size and cache_initial_size to the number of instances you expect in your test.
 
For additional information on these two parameters, take a look at section "4.5.2.1.7 cache_maximum_size, cache_initial_size" in the DIS User's Manual.
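
For example, if you manage the subscription through the meta-table, an UPDATE along these lines is a reasonable sketch (the cache columns are the ones discussed above; the topic_name filter column and the 'ExampleTopic' value are only placeholders, so adjust them to your actual subscription row):

    -- Sketch only: size the DIS instance cache for one subscription.
    UPDATE RTIDDS_SUBSCRIPTION
       SET cache_maximum_size = 10000,  -- expected number of instances
           cache_initial_size = 10000
     WHERE topic_name = 'ExampleTopic'; -- placeholder filter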
 
Another two parameters that can boost your performance are process_batch and process_period. process_batch configures how many samples DIS groups within the same transaction; process_period configures how often DIS flushes the current transaction. You may want to increase process_batch from the default of 10 to 100, and process_period from the default of 100 msec to 1 sec, and see whether this has any impact.
 
For additional information on these parameters, see section "4.5.2.1.6 process_batch, process_period, commit_type" in the DIS User's Manual.
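
A minimal sketch, using the same placeholder WHERE clause as above and assuming process_period is expressed in milliseconds (check the manual section above for the exact units):

    -- Sketch only: batch more samples per transaction and flush less often.
    UPDATE RTIDDS_SUBSCRIPTION
       SET process_batch  = 100,   -- samples grouped per transaction (default 10)
           process_period = 1000   -- flush period, assumed msec (default 100 msec)
     WHERE topic_name = 'ExampleTopic'; -- placeholder filter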
 
From there, we can move on to tuning the DDS QoS settings.
 
Best Regards,
- Fernando 
 
randomist:

1) I used unkeyed types, and I set history_depth (in the subscriptions table) to 1000000.
2) NOT keyed types. (If keyed types perform better than unkeyed types, please tell me.)
3) About 256 bytes.
4) I published 10k~15k samples per second because I wanted DIS to store around 10k samples per second.
5) I used the default QoS XML generated by rtiddsgen.

I hadn't considered the batch and period configuration. Thank you.

I want to know which QoS configuration is the best for DIS performance.

Also, in the benchmark table, how can I get the 'Messages (Updates/Second)' value from the instances and the elapsed time?

 

Best Regards,
- randomist

fercs77:

Hi,

Keyed types are not necessarily better from a performance point of view. It is about semantics: in DDS, keyed types have some fields marked as key fields (using the @key annotation). This is equivalent to the concept of a primary key in a database. One thing to consider when using keyed types is that the history depth applies to each key value independently.
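
As a rough sketch of the analogy (hypothetical type and column names, assuming the usual mapping of DDS key fields to primary-key columns):

    -- A keyed type with one key field roughly corresponds to a table whose
    -- primary key is that field; history_depth then applies per key value.
    CREATE TABLE SensorReading (
        sensor_id   INT NOT NULL,   -- the @key field of the DDS type
        temperature DOUBLE,
        PRIMARY KEY (sensor_id)
    );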

Coming back to the performance issue, can you set the cache_maximum_size and cache_initial_size to 1 with your current configuration and check if you see any improvements?

With respect to your other questions:

>> I want to know which QoS configuration is the best for DIS performance.

You should use strict reliable communication, but that is already the default configuration generated by rtiddsgen. What QoS configuration are you using for DIS? Can you attach it to this thread?

>> And, in the benchmark table, how can I get the 'Messages(Updates/Second)' value from instances and elapsed time?

You cannot get the Updates/Second value from the instance count alone. The Instances column only indicates that we published 10000 different primary keys; the test iterates over each of these keys multiple times using a round-robin approach.
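
As a rough back-of-the-envelope check from the table itself: in the first row, 14386 updates/second over roughly 68.8 seconds works out to about 990,000 total updates, i.e., each of the 10000 keys was written about 99 times during the run.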

Regards,

- Fernando