Question

I was reading the NTPv4 RFC to better understand the mechanism used by NTP. So far the basic idea seems simple, but I am a bit confused about how the NTP clock discipline works.

In NTPv4 it says a hybrid PLL/FLL is used to discipline the clock. According to my understanding, the PLL locks onto the server's phase and adjusts the client clock at each update interval, while the FLL locks onto the clock frequency and adjusts the client clock at each update interval.

It also says the PLL works better when there is more network jitter (latency spikes), while the FLL works better when clock wander is the issue (differing clock frequencies / drift).

I can understand the use of feedback control to adjust them, and I can see how they work from the diagram included in the NTP RFC. But can anyone explain how NTP implements the hybrid FLL/PLL clock discipline just from the packets received from the server?

It would be great if someone could also explain the logic behind it.


Solution

Answering the question exhaustively:

How does NTP implement FLL/PLL hybrid clock discipline just from the packets received from the server?

would require the 90 pages of the document Network Time Protocol Version 4, Reference and Implementation Guide. I will try to summarize an answer here.

In short, the NTP client receives timestamps from one or more servers and estimates a phase correction to apply. The correction is then applied gradually in order to avoid clock jumps.
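For one client/server exchange, that phase estimate comes from the four packet timestamps. A minimal sketch of the standard on-wire calculation (plain doubles instead of NTP's 64-bit fixed-point format, and names of my own, not the reference implementation's):

```c
/* On-wire calculation for one client/server exchange.
 * t1 = client transmit, t2 = server receive,
 * t3 = server transmit, t4 = client receive.
 * Times in seconds as doubles for illustration only; the real
 * protocol uses 64-bit NTP fixed-point timestamps. */
static void on_wire(double t1, double t2, double t3, double t4,
                    double *offset, double *delay)
{
    /* Offset: average of the two one-way offset estimates. */
    *offset = ((t2 - t1) + (t3 - t4)) / 2.0;
    /* Delay: round-trip time minus the server's processing time. */
    *delay = (t4 - t1) - (t3 - t2);
}
```

For example, with t1 = 0.000, t2 = 0.060, t3 = 0.061 and t4 = 0.021 (a client 50 ms behind the server over a symmetric 10 ms path), this gives offset = +0.050 s and delay = 0.020 s. A sequence of such samples is what feeds the clock filter described below.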

Either a PLL or an FLL can be used, but the document says:

PLL usually works better when network jitter dominates, while a FLL works better when oscillator wander dominates.

Contrary to NTPv3, in NTPv4 the PLL and FLL are used simultaneously and their outputs are combined.

Feedback control system

The clock discipline is implemented as the feedback control system shown in Figure 1.

Figure 1: Clock discipline feedback loop

theta_r is the reference phase generated by the combining algorithm; it represents the best estimate of the system clock offset relative to the set of servers.

theta_c is the control phase of the system clock, modeled as a variable-frequency oscillator (VFO).

V_d is the phase difference theta_r - theta_c.

V_s is the output of the clock filter algorithm, which selects the best offset samples.

V_c is the signal produced by the loop filter, which combines the PLL and the FLL as described in the second figure.

Figure 2: Clock discipline loop filter
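The combination can be sketched in code. This is a simplified illustration loosely modeled on the local_clock() routine of the reference implementation; the constants (Allan intercept, gains) and the exact structure are assumptions for clarity, not the reference code:

```c
#define ALLAN_XPT 2048.0  /* Allan intercept (s) -- assumed value    */
#define PLL_GAIN  16.0    /* PLL loop gain constant -- assumed value */
#define FLL_GAIN  4.0     /* FLL averaging constant -- assumed value */

static double maxd(double a, double b) { return a > b ? a : b; }
static double mind(double a, double b) { return a < b ? a : b; }

/* One loop-filter update combining the two predictors.
 * offset : latest phase offset theta_r - theta_c (s)
 * last   : offset at the previous update (s)
 * mu     : time elapsed since the previous update (s)
 * poll   : current poll interval (s)
 * freq   : accumulated frequency correction (s/s), updated in place */
static void loop_filter(double offset, double last,
                        double mu, double poll, double *freq)
{
    /* FLL predictor: estimate frequency directly from the change in
     * offset over the interval. Only enabled at long poll intervals,
     * where oscillator wander dominates network jitter. */
    if (poll > ALLAN_XPT / 2)
        *freq += (offset - last) / (maxd(mu, ALLAN_XPT) * FLL_GAIN);

    /* PLL predictor: frequency is the integral of the phase offset.
     * The squared denominator makes its gain fall off quickly at long
     * poll intervals, so it dominates when updates are frequent and
     * network jitter is the limiting factor. */
    double t = mind(mu, poll);
    double d = 4 * PLL_GAIN * poll;
    *freq += offset * t / (d * d);
}
```

The phase correction itself (the offset) is applied separately and gradually by the clock-adjust process; the frequency term above is what steers the VFO between updates.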

=== Update ===

To understand the details of the phase offset and frequency offset computation, you have to dive into the reference implementation. A good place to start is the packet() function:

/*
* packet() - process packet and compute offset, delay and
* dispersion.
*/

In broadcast server mode, the computation is as follows:

offset = LFP2D(r->xmt - r->dst);
delay  = BDELAY;
disp   = LOG2D(r->precision) + LOG2D(s.precision) + PHI * 2 * BDELAY;

where r is a pointer to the received packet and s is the system structure. Then the clock_filter function is invoked:

/*
* The clock filter contents consist of eight tuples (offset,
* delay, dispersion, time). Shift each tuple to the left,
* discarding the leftmost one. As each tuple is shifted,
* increase the dispersion since the last filter update. At the
* same time, copy each tuple to a temporary list. After this,
* place the (offset, delay, disp, time) in the vacated
* rightmost tuple.
*/
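The comment above maps to a small shift register. A hedged C sketch (field and function names are mine, not the reference implementation's):

```c
#define NSTAGE 8        /* depth of the clock filter shift register */
#define PHI    15e-6    /* frequency tolerance (s/s), per the spec  */

/* One filter stage: illustrative field names. */
struct ftuple {
    double offset, delay, disp, t;  /* t = arrival time of the sample */
};

static struct ftuple stages[NSTAGE];
static double last_update;

/* Shift the register left, discarding the oldest (leftmost) tuple.
 * Each surviving tuple's dispersion grows by PHI times the time
 * elapsed since the last filter update, then the new sample fills
 * the vacated rightmost slot. */
static void clock_filter_insert(double offset, double delay,
                                double disp, double now)
{
    for (int i = 0; i < NSTAGE - 1; i++) {
        stages[i] = stages[i + 1];
        stages[i].disp += PHI * (now - last_update);
    }
    stages[NSTAGE - 1] = (struct ftuple){ offset, delay, disp, now };
    last_update = now;
}
```

The register is then copied to a temporary list and sorted by delay; the lowest-delay sample becomes the peer's offset estimate, which is what the selection and combining algorithms work with.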

The clock_filter itself invokes the clock_select function, and only after that is the clock_update function called.

What is important to remember is that these algorithms synchronize with multiple clocks, not with just one server clock. This introduces a layer of complexity, and the question "How do I synchronize with one server?" has no direct answer, because the algorithms are built to synchronize with multiple clocks.

The SNTP protocol (Simple NTP) uses only one server clock, but it has no official reference implementation.

Licensed under: CC-BY-SA with attribution