I am debugging a network issue on a system running Connext 5.3.1. My tcpdump contains several million messages. I am trying to calculate, from data available in the tcpdump, the time difference between the source timestamp and the reception timestamp. I understand that the source timestamp is the first data available in the 0x09 INFO_TS submessage of a message. As an example, I have the following snippet from tcpdump:
0x0030: 0901 0800 e9f2 b766 c4b7 3ea0 1505 5c00
This says that it is submessage ID 0x09, with the endianness flag set to little endian, and 8 octets to follow. DDS timestamps are reported as seconds and nanoseconds, which I am guessing (without support from the documentation) follows the 'struct timespec' convention. The Modern C++ API documentation says that a Time is composed of an int32_t for seconds and a uint32_t for nanoseconds.
e9f2 b766 --little endian, 32-bit, signed --> 1723331305
This is eyeball-close to the time of the message transmission.
c4b7 3ea0 --little endian, 32-bit, unsigned --> 2688464836
At more than 2.5 billion, this is more than one second's worth of nanoseconds, which invalidates my assumption about the struct timespec convention. Treating it as nanoseconds would add almost 2.7 seconds to the timestamp, an unusually large gap for computers connected through one switch whose clocks are disciplined by PTP within the same clock hierarchy.

How do you extract the source timestamp from the INFO_TS submessage?
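For reference, here is roughly how I am pulling the raw fields out of those bytes (a quick sketch; the interpretation of the second 32-bit word is exactly what I am unsure about):

    #include <cstdint>
    #include <cstdio>

    // Read a 32-bit value stored little-endian (the E flag in the flags octet is set).
    static uint32_t read_le32(const uint8_t* p) {
        return static_cast<uint32_t>(p[0]) |
               (static_cast<uint32_t>(p[1]) << 8) |
               (static_cast<uint32_t>(p[2]) << 16) |
               (static_cast<uint32_t>(p[3]) << 24);
    }

    int main() {
        // The 12 bytes of the INFO_TS submessage from the capture above:
        // 4-byte header (id, flags, octetsToNextHeader) followed by the 8-byte timestamp.
        const uint8_t sub[12] = {0x09, 0x01, 0x08, 0x00,
                                 0xe9, 0xf2, 0xb7, 0x66,
                                 0xc4, 0xb7, 0x3e, 0xa0};

        const unsigned id     = sub[0];                 // 0x09 = INFO_TS
        const unsigned flags  = sub[1];                 // bit 0 set = little endian
        const unsigned octets = sub[2] | (sub[3] << 8); // octetsToNextHeader = 8

        const uint32_t seconds = read_le32(&sub[4]);    // 1723331305
        const uint32_t mystery = read_le32(&sub[8]);    // 2688464836 -- but in what units?

        std::printf("id=0x%02x flags=0x%02x octetsToNextHeader=%u sec=%u ???=%u\n",
                    id, flags, octets,
                    static_cast<unsigned>(seconds), static_cast<unsigned>(mystery));
        return 0;
    }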
You have to follow the RTPS specification to understand how the INFO_TS submessage is encoded in a packet.
See https://www.omg.org/spec/DDSI-RTPS/2.5/PDF
If you open your tcpdump using Wireshark and select a timestamp field, you can see exactly which bytes in a packet were used to construct that value.
Thanks Howard. It didn't occur to me that it would be in the spec. I had to dig a little, but I found this in 9.3.2.1, which should really cover it:
struct Time_t {
    unsigned long seconds;  // time in seconds
    unsigned long fraction; // time in sec/2^32
};
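If I am reading that right, my example value works out to 2688464836 / 2^32 ≈ 0.62596 s, i.e. about 626 ms after the whole second, so the mystery extra 2.7 seconds disappears and the gap between PTP-disciplined clocks looks sane again.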
The spec is annoying because it refers to the nanosecond field, but inasmuch as this part of the spec is implementation detail, the unit is more like [but not exactly] 1/4 of a nanosecond (1/2^32 s ≈ 0.233 ns). When I access the nanosecond field of the timestamps in the SampleInfo [modern C++], am I getting it converted to actual nanoseconds, or am I getting this 1/4-ish nanosecond precision reported?
Thanks again.
On the wire, the time is represented and sent as sec and frac (where frac has units of 1/2^32 of a second).
But via the DDS API, you only deal with sec/nanosecs. The on-the-wire representation is converted to sec/nanosecs for any time structures that are accessed via the API.
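If you want to do the same conversion yourself when post-processing the tcpdump, it is just a scale by 10^9/2^32. Here is a minimal sketch of that math (an illustration, not Connext's actual internal code):

    #include <cstdint>

    // RTPS wire fraction (units of 1/2^32 s) -> nanoseconds as seen through the API.
    // The 64-bit intermediate avoids overflow: fraction * 10^9 fits in a uint64_t.
    uint32_t fraction_to_nanosec(uint32_t fraction) {
        return static_cast<uint32_t>(
            (static_cast<uint64_t>(fraction) * 1000000000u) >> 32);
    }

    // Nanoseconds -> RTPS wire fraction, for going the other direction.
    uint32_t nanosec_to_fraction(uint32_t nanosec) {
        return static_cast<uint32_t>(
            (static_cast<uint64_t>(nanosec) << 32) / 1000000000u);
    }

    // For the capture above: fraction_to_nanosec(2688464836u) is roughly
    // 625,957,000 ns, i.e. about 0.626 s after the whole second.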