NTP daemon not always using all the defined servers

Package ntp Version: 4.2.8p4, Release: 12.2, Build Date: Thursday 17 December 2015 06:33:17 CET

More often than not, the NTP daemon reports that it is using only one of the specified servers.


/etc/ntp.conf:
driftfile /var/lib/ntp/drift/ntp.drift      
logfile /var/log/ntp                 
keys /etc/ntp.keys                   
trustedkey 1                         
requestkey 1                         
controlkey 1                         
server 0.de.pool.ntp.org  iburst
server 1.de.pool.ntp.org  iburst
server 2.de.pool.ntp.org  iburst
server 3.de.pool.ntp.org  iburst


# systemctl status ntpd.service 
ntpd.service - NTP Server Daemon
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled)
   Active: active (running) since Di 2016-01-26 15:34:21 CET; 1h 44min ago
     Docs: man:ntpd(1)
  Process: 8119 ExecStart=/usr/sbin/start-ntpd start (code=exited, status=0/SUCCESS)
 Main PID: 8132 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─8132 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -g -u ntp:ntp -i /var/lib/ntp -c /etc/ntp.conf

Jan 26 15:34:21 eck005 ntpd[8130]: ntpd 4.2.8p4@1.3265-o Thu Dec 17 05:32:52 UTC 2015 (1): Starting
Jan 26 15:34:21 eck005 start-ntpd[8119]: Starting network time protocol daemon (NTPD)
Jan 26 15:34:21 eck005 ntpd[8132]: proto: precision = 0.205 usec (-22)
Jan 26 15:34:21 eck005 ntpd[8132]: switching logging to file /var/log/ntp
#


# ntpq
ntpq> peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp3.kashra-ser .PPS.            1 u  156  256  377   91.721   -1.043   5.022
ntpq> sysinfo
associd=0 status=0618 leap_none, sync_ntp, 1 event, no_sys_peer,
system peer:        ntp3.kashra-server.com:123
system peer mode:   client
leap indicator:     00
stratum:            2
log2 precision:     -22
root delay:         91.721
root dispersion:    29.756
reference ID:       85.25.197.197
reference time:     da521835.5a1f3dd1  Tue, Jan 26 2016 17:07:17.352
system jitter:      0.000000
clock jitter:       1.649
clock wander:       0.239
broadcast delay:    0.000
symm. auth. delay:  0.000
ntpq> authinfo
time since reset:  6898
stored keys:       1
free keys:         15
key lookups:       0
keys not found:    0
uncached keys:     0
expired keys:      0
encryptions:       0
decryptions:       0
ntpq> iostats
time since reset:       6922
receive buffers:        10
free receive buffers:   9
used receive buffers:   0
low water refills:      1
dropped packets:        0
ignored packets:        0
received packets:       100
packets sent:           113
packet send failures:   0
input wakeups:          1146
useful input wakeups:   1146
ntpq> kerninfo
associd=0 status=0618 leap_none, sync_ntp, 1 event, no_sys_peer,
pll offset:            -0.27689
pll frequency:         5.62236
maximum error:         0.734961
estimated error:       0.001649
kernel status:         pll nano
pll time constant:     8
precision:             1e-06
frequency tolerance:   500
pps frequency:         0
pps stability:         0
pps jitter:            0
calibration interval   0
calibration cycles:    0
jitter exceeded:       0
stability exceeded:    0
calibration errors:    0
ntpq> q
#

Any suggestions?

Noticed that "/etc/ntp.keys" was an old file dated 26th October 2014:

  • moved it to another name;
  • did a forced reinstall of the "ntp" package to force the generation of a new "ntp.keys" file;
  • checked the "/etc/ntp.conf" contents;
  • restarted the NTP daemon.

No change.

  • Checked the "ntp.conf" generated by YaST;
  • moved the "server" commands to the beginning of the file;
  • added a "local" server (the laptop is sometimes without a network connection);
  • restarted the NTP daemon.

No change.
After restarting the NTP daemon three times, the list of NTP servers increased to the number defined in the "ntp.conf" file.

This is weird.
Any ideas?

I haven't looked closely at this, but why would a machine need to connect to more than one NTP server?

You only need to connect to one working server, and once that connection is established you should not need to connect to any others.

In fact, if you retrieved information from more than one NTP server and the data differed (highly likely, since there would be different latencies), your machine would be faced with a conflict.

A problem would really exist if you queried your list of NTP servers and didn't receive a response from any of them.

TSU

The NTP FAQ supplies the quick answer to the question “Why use more than one clock source (server)?”:

Further answers can be found in the NTP documentation:

The answer to “What happens if a clock source disappears?” is here:

My personal answer is: there are facilities using this general-purpose operating system which require accurate time-stamps on the data being sampled.
An absolute system-clock accuracy of better than 1 millisecond deviation from absolute time standards can be achieved, without having to use Stratum 1 NTP servers (despite the deviations and jitter due to the transmission time between the server(s) and the clients), by the simple solution of having each NTP client synchronize with more than one "reliable" NTP server.
And, a “Doomsday scenario” is handled by specifying the “special” localhost NTP server (127.127.1.0).
    (This needs an additional ntp.conf command, which is documented in the default openSUSE configuration file.)
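
For reference, a minimal sketch of that local-clock fallback (the stratum value is the usual openSUSE default, so treat it as an assumption and check the shipped ntp.conf):

server 127.127.1.0               # local undisciplined clock, last-resort source only
fudge  127.127.1.0 stratum 10    # advertise a high stratum so any real server is preferred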

Most probably this is a name-resolution problem. ntpd will simply ignore any defined server if it cannot get its address. Try Wireshark or similar to get a packet trace when ntpd is restarted - does it get a response to every name query?
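
Something along these lines would do (just a sketch - the interface name is an assumption, use whatever your LAN/WLAN device is called):

# tcpdump -n -i wlp3s0 udp port 53 &
# systemctl restart ntpd.service

Then check whether each of the 0-3.de.pool.ntp.org queries gets an answer.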

OK, will do.

Some additional information, same LAN / DSL Router, but with openSUSE 13.2 and ntp package Version: 4.2.6p5, Build Date: Monday 20 April 2015 15:46:13 CEST:


** Output of: "# date ; ntpq -p" **
###
Sa 30. Jan 13:10:52 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+ntp1.wtnet.de   192.53.103.104   2 u   66   64    7   65.260    9.896   1.891
+mikrom.com      161.62.157.173   3 u   64   64    7   58.239   12.145   2.174
-y.ns.gin.ntt.ne 130.149.17.21    2 u   63   64    7   73.033    2.819   2.548
*ntp.ix.ru       .PPS.            1 u   63   64    7   98.826   12.011   2.868
-fritz.box       46.165.212.205   3 u   65   64    7    0.336    0.039   2.080
###
Sa 30. Jan 13:30:46 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+ntp1.wtnet.de   192.53.103.104   2 u   55  128  377   65.984   -2.281   1.561
-mikrom.com      161.62.157.173   3 u  114  128  377   59.305    0.554   1.011
+y.ns.gin.ntt.ne 249.224.99.213   2 u   56  128  377   78.899  -13.230   3.934
*ntp.ix.ru       .PPS.            1 u  111  128  377   98.867   -0.378   1.661
-fritz.box       46.165.212.205   3 u   55  128  377    0.348  -11.950   1.016
###
Sa 30. Jan 14:42:02 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+ntp1.wtnet.de   192.53.103.108   2 u  149  256  377   66.759   -1.677   0.365
+mikrom.com      122.227.206.195  3 u   51  256  377   58.376    0.205   0.335
-y.ns.gin.ntt.ne 249.224.99.213   2 u  235  256  337   79.117   -5.045   5.772
*ntp.ix.ru       .PPS.            1 u  163  256  377   98.817    0.409   0.414
-fritz.box       46.165.212.205   3 u  139  256  377    0.388  -12.696   0.666
###

And, for readers who do not (yet) understand the meaning of the first character of the "ntpq -p" output, it is documented in the older NTP version "4.1.0" documentation {for some reason it has been removed from the newest version}:
<http://doc.ntp.org/4.1.0/ntpq.htm>

space reject
    The peer is discarded as unreachable, synchronized to this server (synch loop) or outrageous synchronization distance.
x  falsetick
    The peer is discarded by the intersection algorithm as a falseticker.
.  excess
    The peer is discarded as not among the first ten peers sorted by synchronization distance and so is probably a poor candidate for further consideration.
-  outlyer
    The peer is discarded by the clustering algorithm as an outlyer.
+  candidat
    The peer is a survivor and a candidate for the combining algorithm.
#  selected
    The peer is a survivor, but not among the first six peers sorted by synchronization distance. If the association is ephemeral, it may be demobilized to conserve resources.
*  sys.peer
    The peer has been declared the system peer and lends its variables to the system variables.
o  pps.peer
    The peer has been declared the system peer and lends its variables to the system variables. However, the actual system synchronization is derived from a pulse-per-second (PPS) signal, either indirectly via the PPS reference clock driver or directly via kernel interface.
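
(As a side note, "ntpq -pn" prints the same table with numeric addresses, which avoids the truncated reverse-DNS names in the "remote" column; the tally character in the first column stays the same.)

# ntpq -pn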

Hi Tsu,
one additional bit of information from the NTP support site:
<https://support.ntp.org/bin/view/Support/SelectingOffsiteNTPServers>

5.3.3. Upstream Time Server Quantity
Many people wonder how many upstream time servers they should list in their NTP configuration file. The mathematics are complex, and fully understood by very few people. However, we can boil them down to some simple rules-of-thumb:

  • If you list just one, there can be no question which will be considered to be “right” or “wrong”. But if that one goes down, you are toast.
  • With two, it is impossible to tell which one is better, because you don’t have any other references to compare them with.
    This is actually the worst possible configuration – you’d be better off using just one upstream time server and letting the clocks run free if that upstream were to die or become unreachable.
  • With three servers, you have the minimum number of time sources needed to allow ntpd to detect if one time source is a "falseticker". However, ntpd will then be in the position of choosing from the two remaining sources. This configuration provides no redundancy.
  • With at least four upstream servers, one (or more) can be a “falseticker”, or just unreachable, and ntpd will have a sufficient number of sources to choose from.

According to Brian Utterback, the math officially goes like this:
While the general rule is for 2n+1 to protect against “n” falsetickers, this actually isn’t true for the case where n=1. It actually takes 2 servers to produce a “candidate” time, which is really an interval. The winner is the shortest interval for which more than half (counting the two that define the interval) have an offset (+/- the dispersion) that lies on the interval and that contains the point of greatest overlap.
So, in the case of four servers, the truechimer with the largest offset defines one end of the interval, the truechimer with the smallest offset defines the other end, and the third truechimer overlaps these two, with an overlap count of at least two and possibly three. The falseticker's interval will overlap few if any of these intervals (or it wouldn't be a falseticker) and will be eliminated.
With only three servers, the interval defined by the two truechimers has no overlap with any other servers, but the interval defined by one of the truechimers and the falseticker overlaps the other truechimer, so this is the interval chosen, and thus the falseticker is still included.
5.3.4. Excessive Number of Upstream Time Servers
Note that four upstream time servers will protect you against only one falseticker. Using the 2n+1 algorithm, five upstreams will protect you against two falsetickers, seven will protect you against three falsetickers, etc…
Conventional wisdom is that using at least five upstream time servers would probably be a good idea, and you may want more. Note that ntpd won’t use more than ten upstream time servers, although it will continue to monitor as many as you configure.
You may have heard previous guidance which set a specific number and said that using any more than that would be considered “unfriendly” and “abusive”. That’s not really true. So long as your NTP client is correctly configured, once you’re up and running you should not be contacting your upstream servers any more than once every 1024 seconds (or so), and this shouldn’t be a problem for the time servers to support.
However, you do need to guarantee that your NTP client is configured correctly and is not doing inappropriate things.

Best regards
DCu

Results of tracing the DNS requests during the following NTP daemon restarts after a fresh laptop boot:


Timestamps are from the following commands:
# date ; ntpq -p
# date ; systemctl restart ntpd.service

###
Sa 30. Jan 16:38:39 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l   12   64    7    0.000    0.000   0.000
###
Sa 30. Jan 16:40:13 CET 2016
Sa 30. Jan 16:40:18 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l    4   64    1    0.000    0.000   0.000
*char-ntp-pool.c .shm0.           1 u    2   64    1   73.003  122.949   0.798
###
Sa 30. Jan 16:41:26 CET 2016
Sa 30. Jan 16:41:32 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l    5   64    1    0.000    0.000   0.000
 hesinde.lf-net. 235.106.237.243  3 u    -   64    1   68.231   89.999   0.000
*himalia.mysnip. 192.53.103.108   2 u    -   64    1   72.993   93.925   1.338
###
Sa 30. Jan 16:42:46 CET 2016
Sa 30. Jan 16:42:51 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l    4   64    1    0.000    0.000   0.000
 hesinde.lf-net. 235.106.237.243  3 u    1   64    1   58.951   50.561   0.000
*himalia.mysnip. 130.149.17.21    2 u    -   64    1  111.385   75.589  17.611
+ntp.plutex.de   131.188.3.221    2 u    -   64    1   96.801   65.529  14.701
###
Sa 30. Jan 16:43:35 CET 2016
Sa 30. Jan 16:43:38 CET 2016
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l    2   64    1    0.000    0.000   0.000
 hesinde.lf-net. 235.106.237.243  3 u    2   64    1   79.048   34.557   0.000
 himalia.mysnip. 130.149.17.21    2 u    2   64    1   84.018   38.393   0.000
*ntp.plutex.de   131.188.3.221    2 u    2   64    1   69.156   28.609   0.000
 mail.ronny-bern 131.188.3.222    2 u    1   64    1   61.909   26.169   0.000
###


  57  2016-01-30 16:40:13.063794  DNS  Standard query 0xba03  A 0.de.pool.ntp.org
  58  2016-01-30 16:40:13.063831  DNS  Standard query 0xdb34  AAAA 0.de.pool.ntp.org
  61  2016-01-30 16:40:13.096213  DNS  Standard query response 0xdb34
  62  2016-01-30 16:40:13.121171  DNS  Standard query response 0xba03  A 193.175.73.151 A 81.7.16.69 A 144.76.14.132 A 78.46.107.140
 157  2016-01-30 16:40:58.768909  DNS  Standard query 0x4ba3  A 0.de.pool.ntp.org
 158  2016-01-30 16:40:58.768957  DNS  Standard query 0xcda0  AAAA 0.de.pool.ntp.org
 159  2016-01-30 16:40:58.775825  DNS  Standard query response 0xcda0
 160  2016-01-30 16:40:58.825664  DNS  Standard query response 0x4ba3  A 78.46.76.100 A 134.106.187.58 A 78.46.37.9 A 81.169.196.230
 206  2016-01-30 16:41:26.964712  DNS  Standard query 0x96be  A 1.de.pool.ntp.org
 207  2016-01-30 16:41:26.964753  DNS  Standard query 0x23a0  AAAA 1.de.pool.ntp.org
 208  2016-01-30 16:41:26.971865  DNS  Standard query response 0x23a0
 209  2016-01-30 16:41:27.020887  DNS  Standard query response 0x96be  A 178.23.121.165 A 85.25.148.4 A 78.46.79.68 A 176.9.253.76
 342  2016-01-30 16:42:46.112505  DNS  Standard query 0x3cfe  A 2.de.pool.ntp.org
 343  2016-01-30 16:42:46.112544  DNS  Standard query 0xde8a  AAAA 2.de.pool.ntp.org
 344  2016-01-30 16:42:46.168007  DNS  Standard query response 0x3cfe  A 31.24.144.185 A 148.251.154.36 A 176.9.102.215 A 5.9.110.236
 345  2016-01-30 16:42:46.173278  DNS  Standard query response 0xde8a  AAAA 2a03:4000:6:b0c3::1337 AAAA 2001:638:504:2000::34 AAAA 2a01:4f8:120:710b::123 AAAA 2a00:1828:
 460  2016-01-30 16:43:35.381004  DNS  Standard query 0xf071  A 3.de.pool.ntp.org
 461  2016-01-30 16:43:35.381047  DNS  Standard query 0xe98f  AAAA 3.de.pool.ntp.org
 467  2016-01-30 16:43:36.418725  DNS  Standard query response 0xe98f
 468  2016-01-30 16:43:36.421030  DNS  Standard query response 0xf071  A 144.76.197.145 A 176.9.100.86 A 195.50.171.101 A 85.93.88.43
 476  2016-01-30 16:43:36.800499  DNS  Standard query 0x23cf  A 0.de.pool.ntp.org
 477  2016-01-30 16:43:36.800529  DNS  Standard query 0x3989  AAAA 0.de.pool.ntp.org
 478  2016-01-30 16:43:36.808852  DNS  Standard query response 0x3989
 479  2016-01-30 16:43:36.855755  DNS  Standard query response 0x23cf  A 78.46.189.152 A 129.70.132.36 A 129.250.35.251 A 176.9.104.147
 550  2016-01-30 16:43:51.868269  DNS  Standard query 0xf84c  A 2.de.pool.ntp.org
 551  2016-01-30 16:43:51.868309  DNS  Standard query 0xaea2  AAAA 2.de.pool.ntp.org
 552  2016-01-30 16:43:51.876541  DNS  Standard query response 0xaea2  AAAA 2a03:4000:6:b0c3::1337 AAAA 2001:638:504:2000::34 AAAA 2a01:4f8:120:710b::123 AAAA 2a00:1828:
 553  2016-01-30 16:43:51.924890  DNS  Standard query response 0xf84c  A 84.201.10.198 A 85.25.200.96 A 78.111.224.11 A 178.63.9.212
 590  2016-01-30 16:44:06.950051  DNS  Standard query 0x7a3c  A 1.de.pool.ntp.org
 591  2016-01-30 16:44:06.950083  DNS  Standard query 0x3bd2  AAAA 1.de.pool.ntp.org
 592  2016-01-30 16:44:06.963787  DNS  Standard query response 0x3bd2
 593  2016-01-30 16:44:07.008638  DNS  Standard query response 0x7a3c  A 89.163.209.233 A 192.162.168.12 A 46.4.16.145 A 78.47.138.42
 602  2016-01-30 16:44:22.009237  DNS  Standard query 0xad45  A 3.de.pool.ntp.org
 603  2016-01-30 16:44:22.009294  DNS  Standard query 0x9fdb  AAAA 3.de.pool.ntp.org
 606  2016-01-30 16:44:22.046735  DNS  Standard query response 0x9fdb
 607  2016-01-30 16:44:22.065797  DNS  Standard query response 0xad45  A 5.100.133.221 A 178.63.9.110 A 144.76.14.132 A 129.70.132.32

Wow! Could you test the same using just ntpd, without the startup script? I.e. "systemctl stop ntpd; ntpd"? To see what the daemon itself does?

It looks like either some elaborate logic inside the startup script or strange (negative) DNS caching in the resolver.
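
If nscd is involved, one quick thing to check would be whether it caches negative host lookups (a sketch; the exact settings live in /etc/nscd.conf):

# grep hosts /etc/nscd.conf        # look at the (negative-)time-to-live values for "hosts"
# nscd -i hosts                    # invalidate the hosts cache before restarting ntpd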

Stopping the NTP daemon and restarting it simply via the CLI didn't help.

# ntpd -p /var/run/ntp/ntpd.pid -g -u ntp:ntp -i /var/lib/ntp -c /etc/ntp.conf
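
(Running it in the foreground with debugging enabled would be another way to watch the internal resolver at work - a sketch, assuming the package was built with debug support:)

# systemctl stop ntpd.service
# ntpd -n -d -g -c /etc/ntp.conf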

Attempting to use the “/usr/sbin/start-ntpd” shell script to add servers also didn’t help.
There are differences between the “/usr/sbin/start-ntpd” shell script of openSUSE 13.2 and that delivered with Leap 42.1.
But, they seem to be maintenance changes to conform to the current NTP implementation.
I'll downgrade to the previous Leap 42.1 NTP package to confirm whether the previous behavior is the expected one.
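
Roughly along these lines with zypper (a sketch - the older version string has to be one the repositories still offer):

# zypper install --oldpackage ntp-4.2.8p3-7.5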

Well, I have a Leap 42.1 VM and I do not see this issue - I always see all 4 configured servers in the NTP peer list. There must be some other difference. Do you use NM or wicked? Do you use a local caching DNS server?
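
(On openSUSE, a quick way to see which network service is actually active is to check what network.service resolves to - a sketch:)

# systemctl status network.service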

Downgrading to ntp-4.2.8p3-7.5.x86_64 (Build Date: Sunday 25th October 2015 14:35:46 CET) didn't change this issue.
I'll revert to the 4.2.8p4-12.2 version (Build Date: Thursday 17th December 2015 06:33:17 CET).

Restarting the NTP daemon after upgrading back to the current package version, then shutting down the WLAN interface (at 2 Feb 19:09:17) and starting it up again, produced the following information in /var/log/ntp:


 2 Feb 19:06:02 ntpd[4610]: ntpd exiting on signal 15 (Terminated)
 2 Feb 19:06:02 ntpd[4610]: 127.127.1.0 local addr 127.0.0.1 -> <null>
 2 Feb 19:06:02 ntpd[4610]: 91.237.88.67 local addr 192.168.178.27 -> <null>
 2 Feb 19:07:00 ntpd[4869]: Listen and drop on 0 v6wildcard [::]:123
 2 Feb 19:07:00 ntpd[4869]: Listen and drop on 1 v4wildcard 0.0.0.0:123
 2 Feb 19:07:00 ntpd[4869]: Listen normally on 2 lo 127.0.0.1:123
 2 Feb 19:07:00 ntpd[4869]: Listen normally on 3 wlp3s0 192.168.178.27:123
 2 Feb 19:07:00 ntpd[4869]: Listen normally on 4 lo [::1]:123
 2 Feb 19:07:00 ntpd[4869]: Listen normally on 5 wlp3s0 [fd00::a6db:30ff:fed7:f160]:123
 2 Feb 19:07:00 ntpd[4869]: Listen normally on 6 wlp3s0 [fe80::a6db:30ff:fed7:f160%3]:123
 2 Feb 19:07:00 ntpd[4869]: Listening on routing socket on fd #23 for interface updates
 2 Feb 19:09:17 ntpd[4869]: Deleting interface #3 wlp3s0, 192.168.178.27#123, interface stats: received=15, sent=15, dropped=0, active_time=137 secs
 2 Feb 19:09:17 ntpd[4869]: 85.236.36.4 local addr 192.168.178.27 -> <null>
 2 Feb 19:09:17 ntpd[4869]: 193.141.27.6 local addr 192.168.178.27 -> <null>
 2 Feb 19:09:17 ntpd[4869]: Deleting interface #5 wlp3s0, fd00::a6db:30ff:fed7:f160#123, interface stats: received=0, sent=0, dropped=0, active_time=137 secs
 2 Feb 19:09:17 ntpd[4869]: Deleting interface #6 wlp3s0, fe80::a6db:30ff:fed7:f160%3#123, interface stats: received=0, sent=0, dropped=0, active_time=137 secs
 2 Feb 19:09:34 ntpd[4869]: Listen normally on 7 wlp3s0 192.168.178.27:123
 2 Feb 19:09:34 ntpd[4869]: Listen normally on 8 wlp3s0 [fd00::a6db:30ff:fed7:f160]:123
 2 Feb 19:09:34 ntpd[4869]: Listen normally on 9 wlp3s0 [fe80::a6db:30ff:fed7:f160%3]:123
 2 Feb 19:09:34 ntpd[4869]: new interface(s) found: waking up resolver

Slowly peers are reappearing:


# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l  431   64  100    0.000    0.000   0.000
+ts5.sct.de      192.53.103.104   2 u    3   64    7   58.804   -0.693  15.572
*ns3.customer-re 192.53.103.104   2 u    9   64    3   54.034   -0.891   3.681
#

I’m using Network Manager: NetworkManager-1.0.6-1.2.x86_64
DNS caching: nscd (so far as I know the openSUSE standard – it’s in the YaST list of recommended packages): nscd-2.19-17.4.x86_64.
Stopping the nscd service and restarting the NTP daemon didn’t improve matters.
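
(I.e., roughly:)

# systemctl stop nscd.service
# systemctl restart ntpd.service
# ntpq -p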

Possibly the difference is that I upgraded this (UEFI) laptop from openSUSE 13.2 to Leap 42.1 by means of “zypper dup”.

  • Before the upgrade I changed the 13.2 kernel from "Desktop" to "Default" and, as documented in the Leap 42.1 Release Notes, created persistent network interface names, but in the "predictable" (13.2) form as opposed to the "persistent" (Leap 42.1) form.

Yes! Possibly NTP is expecting the Leap 42.1 “persistent” network interface names!!!
Possibly NTP only behaves as it should when Leap 42.1 is installed fresh on an empty partition/system/VM.

  • Moved /etc/udev/rules.d/70-persistent-net.rules to /root/.
  • Removed the “Only use this interface” definitions in the Connection Manager entries.
  • Rebooted.
  • Reset the Connection Manager “Only use this interface” entries to the new Leap 42.1 names.
  • No change in the NTP behavior.
  • Uninstalled the ntp and nscd packages.
  • Rebooted.
  • Reinstalled the ntp and nscd packages and performed the YaST setup (generate the /etc/ntp.keys file).
  • No change in the NTP behavior – even after a reboot.

This issue has been resolved with the ntp Package Version 4.2.8p6-15.1.