Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

sthbrx - a POWER technical blog: memcmp() for POWER8 - part II

Fri, 2017-09-01 13:00

This entry is a follow-up to part I, which you should absolutely read here before continuing on.

Where we left off

We concluded that while a vectorised memcmp() is a win, there are some cases where it won't quite perform.

The overhead of enabling ALTIVEC

In the kernel we explicitly don't touch ALTIVEC unless we need to. This means that in the general case we can leave the userspace registers in place and don't have to do anything with them to service a syscall for a process.

This means that if we do want to use ALTIVEC in the kernel, there is some setup that must be done. Notably, we must enable the facility (a potentially time-consuming move to the MSR), save off the registers (if userspace was using them), and inevitably restore them later on.

If all this needs to be done for a memcmp() on the order of tens of bytes, then it really isn't worth it.

There are two reasons that a memcmp() might only compare a small number of bytes. The first, which is trivially detectable, is simply that the parameter n is small. The other is harder to detect: if the memcmp() is going to fail (return non-zero) early, then it also wasn't worth enabling ALTIVEC.

Detecting early failures

Right at the start of memcmp(), before enabling ALTIVEC, the first 64 bytes are checked using general purpose registers. Why the first 64 bytes? Well, why not? In a strange twist of fate, 64 bytes happens to be the number of bytes held in four ALTIVEC registers (128 bits per register, so 16 bytes multiplied by 4), and by utter coincidence that happens to be the stride of the ALTIVEC compare loop.
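To make the structure concrete, here is a rough C sketch of the idea. The real code is hand-written POWER8 assembly, and memcmp_bytewise() and memcmp_vectorised() are made-up helper names used only for illustration:

static int memcmp_with_precheck(const void *s1, const void *s2, size_t n)
{
	const unsigned char *p1 = s1, *p2 = s2;
	size_t pre = n < 64 ? n : 64;
	size_t i;

	/* Check the first (up to) 64 bytes with general purpose registers,
	 * 8 bytes at a time, so small or early-failing compares never pay
	 * the cost of enabling ALTIVEC. */
	for (i = 0; i + 8 <= pre; i += 8)
		if (*(const uint64_t *)(p1 + i) != *(const uint64_t *)(p2 + i))
			return memcmp_bytewise(p1 + i, p2 + i, 8);

	if (n <= 64)
		return memcmp_bytewise(p1 + i, p2 + i, n - i);

	/* Equal so far and plenty left to compare: the ALTIVEC setup cost
	 * (enable the facility, save and later restore the registers) is
	 * now worth paying. */
	return memcmp_vectorised(p1 + 64, p2 + 64, n - 64);
}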

What does this all look like

Unlike part I, the results appear slightly less consistent across the three runs of measurement, but there are some key differences from part I. The trends do appear to be the same across all three runs, just less pronounced - why this is so is unclear.

The difference between run two and run three clipped at deltas of 1000ns is interesting:


The results are similar except for a spike in the number of deltas in the unpatched kernel at around 600ns. This is not present in the first sample of data (deltas1). There are a number of reasons why this spike could have appeared here: it is possible that the kernel or hardware did something under the hood, or prefetch could have brought deltas for a memcmp() that would otherwise have yielded a greater delta into the 600ns range.

What these two graphs both demonstrate quite clearly is that the optimisations down at the sub-100ns end have resulted in more sub-100ns deltas for the patched kernel, a significant win over the original data. Zooming out and looking at a graph which includes deltas up to 5000ns shows that the sub-100ns delta optimisations haven't noticeably slowed the performance of long-duration memcmp() calls.

Conclusion

The small amount of extra development effort has yielded tangible results in reducing the low-end memcmp() times. This second round of data collection and performance analysis only confirms that for any significant amount of comparison, a vectorised loop is significantly quicker.

The results obtained here show no downside to adopting this approach for all POWER8 and later chips, as this new version of the patch solves the performance regression for small compares.

Lev Lafayette: An Overview of SSH

Thu, 2017-08-31 00:04

SSH (Secure Shell) is a secure means, mainly used on Linux and other UNIX-like systems, of accessing remote systems, designed as a replacement for a variety of insecure protocols (e.g. telnet, rlogin, etc.). This presentation will cover the core usage of the protocol, its development (SSH-1 and SSH-2), the architecture (client-server, public key authentication), installation and implementation, some handy elaborations and enhancements, and real and imagined vulnerabilities.
Plenty of examples will be provided throughout along with the opportunity to test the protocol.


Linux Users of Victoria (LUV) Announce: LUV Main September 2017 Meeting: Cygwin and Virtualbox

Tue, 2017-08-29 18:03
Start: Sep 5 2017 18:30
End: Sep 5 2017 20:30
Location: The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053
Link: http://luv.asn.au/meetings/map

Tuesday, September 5, 2017

6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

  • Duncan Roe, Cygwin
  • Steve Roylance, Virtualbox

Cygwin is a large collection of GNU and Open Source tools which provide functionality similar to a Linux distribution on Windows.  It allows easy porting of many Unix programs without the need for extensive changes to the source code.


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.



BlueHackers: Creative kid’s piano + vocal composition

Mon, 2017-08-28 11:01

An inspirational song from an Australian youngster.  He explains his background at the start.

OpenSTEM: This Week in HASS – term 3, week 8

Mon, 2017-08-28 10:05
This week our younger students are putting together a Class Museum, while older students are completing their Scientific Report. Foundation/Prep/Kindy to Year 3 Students in Foundation/Prep/Kindy (Units F.3 and F-1.3), as well as those in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3), are all putting together Class Museums of items of […]

BlueHackers: Mental Health Resources for New Dads

Thu, 2017-08-24 11:41

Right now, one in seven new fathers experiences high levels of psychological distress and as many as one in ten experience depression or anxiety. Often distressed fathers remain unidentified and unsupported due to both a reluctance to seek help for themselves and low levels of community understanding that the transition to parenthood is a difficult period for fathers, as well as mothers.

The project is hoping to both increase understanding of stress and distress in new fathers and encourage new fathers to take action to manage their mental health.

This work is being informed by research commissioned by beyondblue into men’s experiences of psychological distress in the perinatal period.

Informed by the findings of the Healthy Dads research, three projects are underway to provide men with the knowledge, tools and support to stay resilient during the transition to fatherhood.

https://www.medicalert.org.au/news-and-resources/becoming-a-healthy-dad

BlueHackers: The Attention Economy

Tue, 2017-08-22 17:35

In May 2017, James Williams, a former Google employee and doctoral candidate researching design ethics at Oxford University, won the inaugural Nine Dots Prize.

James argues that digital technologies privilege our impulses over our intentions, and are gradually diminishing our ability to engage with the issues we most care about.

Possibly a neat follow-up to our earlier post on "busy-ness".

OpenSTEM: This Week in HASS – term 3, week 7

Mon, 2017-08-21 10:04
This week students are starting the final sections of their research projects and Scientific Reports. Our younger students are also preparing to set up a Class Museum. Foundation/Prep/Kindy to Year 3 Our youngest students (Unit F.3) also complete a Scientific Report. By becoming familiar with the overall layout and skills associated with the scientific process […]

Ben Martin: CNC Z Axis with 150mm or more of travel

Mon, 2017-08-21 09:56
Many of the hobby-priced CNC machines have limited Z axis movement. This, coupled with limited clearance on the gantry, forces a limited number of options for work fixtures. For example, it is very unlikely that there will be clearance for a vice on the cutting bed of a cheap machine.

I started tinkering with a Z axis assembly which offers around 150mm of travel. The assembly also uses bearing blocks that should help withstand the forces that drilling and cutting can generate.


The assembly is designed to be as thin as possible. The spindle mount is a little wider which allows easy bolting onto the spindle mount plate which attaches to these bearings and drive nut. The width of the assembly is important because it will limit the travel in the Y axis if it can interact with the gantry in any way.

Construction is mainly done in 1/4 and 1/2 inch 6061 alloy. The black bracket at the bottom is steel. This seemed like a reasonable choice since that bracket was going to be key to holding the weight and attachment to the gantry.

The Z axis shown above needs to be combined with a gantry height extension when attaching to a hobby CNC to be really effective. Using a longer travel Z axis like this would allow higher gantries which combined allow for easier fixturing and also pave the way for a 4/5th axis to fit under the cutter.

OpenSTEM: This Week in HASS – term 3, week 6

Mon, 2017-08-14 10:04
This week all our students are hard at work examining the objects they are using for their research projects. For the younger students these are objects that will be used to generate a Class Museum. For the older students, the objects of study relate to their chosen topic in Australian History. Foundation / Prep / […]

Donna Benjamin: Tools for talking

Fri, 2017-08-11 04:02
Friday, August 11, 2017 - 02:59

I gave a talk a couple of years ago called Tools for Talking.

I'm preparing a new talk, which, in some ways, is a sequel to this one. As part of that prep, I thought it might be useful to write some short summaries of each of the tools outlined here, with links to resources on them.

  • Powerful Non Defensive Communication
  • Non Violent Communication
  • Active Listening
  • Appreciative Inquiry
  • Transactional Analysis
  • The Drama Triangle vs The Empowerment Dynamic
  • The 7 Cs

So I might try to make a start on that over the next week or so.

 

In the meantime, here are the slides:

Tools for talking from Donna Benjamin

And here's the video of the presentation at DrupalCon Barcelona

Ben Martin: Larger format CNC

Thu, 2017-08-10 23:06
Having access to a wood cutting CNC machine that can do a full sheet of plywood at once has led me to an initial project for a large sconce stand. The sconce is 210mm square at the base and the DAR ash I used was 140mm across. This led to the four edge grain glue-ups in the middle of the stand.


The design was created in Fusion 360 by just seeing what might look good. Unfortunately the sketch export as DXF presented some issues on the import side. This was part of why a smaller project like this was a good first choice, rather than a more complex whole sheet of ply.

To get around the DXF issue, the tip was to select a face of a body and create a sketch from that face, then export the created sketch as DXF, which seemed to work much better. I don't know what I had in the original sketch (the one I created the body from) that the DXF export/import didn't like. Maybe the dimensions, maybe the guide lines; hard to know without a bisect. The CNC was using the EnRoute software, so I had to work out how to bounce things from Fusion over to EnRoute and then get some help to reCAM things on that side and set up tabs et al.

One tip for others would be to use the DAR timber to form a glue-up before arriving at a facility with a larger cutting surface. Fewer pieces means fewer tabs/bridges and easier reCAM. A preformed glued panel would also have let me use more advanced designs, such as n and u slots to connect two pieces instead of edge grains to connect four.

Overall it was a fun build and the owner of the sconce will love having it slightly off the table top so it can more easily be seen.

Francois Marier: pristine-tar and git-buildpackage Work-arounds

Thu, 2017-08-10 16:25

I recently ran into problems trying to package the latest version of my planetfilter tool.

This is how I was able to temporarily work-around bugs in my tools and still produce a package that can be built reproducibly from source and that contains a verifiable upstream signature.

pristine-tar being unable to reproduce a tarball

After importing the latest upstream tarball using gbp import-orig, I tried to build the package but ran into this pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

So I decided to throw away what I had, re-import the tarball and try again. This time, I got a different pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

I filed bug 871938 for this.

As a work-around, I simply symlinked the upstream tarball I already had and then built the package using the tarball directly instead of the upstream git branch:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz ../planetfilter_0.7.4.orig.tar.gz
gbp buildpackage --git-tarball-dir=..

Given that only the upstream and master branches are signed, the .delta file on the pristine-tar branch could be fixed at any time in the future by committing a new .delta file once pristine-tar gets fixed. This therefore seems like a reasonable work-around.
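Once a fixed pristine-tar is available, regenerating that delta could be as simple as something like the following (a sketch only; the upstream/0.7.4 tag name is an assumption based on gbp's default tag format):

pristine-tar commit ../planetfilter_0.7.4.orig.tar.gz upstream/0.7.4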

git-buildpackage doesn't import the upstream tarball signature

The second problem I ran into was a missing upstream signature after building the package with git-buildpackage:

$ lintian -i planetfilter_0.7.4-1_amd64.changes
E: planetfilter changes: orig-tarball-missing-upstream-signature planetfilter_0.7.4.orig.tar.gz
N:
N:    The packaging includes an upstream signing key but the corresponding
N:    .asc signature for one or more source tarballs are not included in your
N:    .changes file.
N:
N:    Severity: important, Certainty: certain
N:
N:    Check: changes-file, Type: changes
N:

This problem (and the lintian error I suspect) is fairly new and hasn't been solved yet.

So until gbp import-orig gets proper support for upstream signatures, my work-around was to copy the upstream signature in the export-dir output directory (which I set in ~/.gbp.conf) so that it can be picked up by the final stages of gbp buildpackage:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz.asc ../build-area/planetfilter_0.7.4.orig.tar.gz.asc
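For reference, the relevant ~/.gbp.conf setting would look roughly like this (a sketch; the directory is whatever export-dir you chose):

[buildpackage]
export-dir = ../build-area/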

If there's a better way to do this, please feel free to leave a comment (authentication not required)!

Tim Serong: NBN Fixed Wireless – Four Years On

Tue, 2017-08-08 00:04

It’s getting close to the fourth anniversary of our NBN fixed wireless connection. Over that time, speaking as someone who works from home, it’s been generally quite good. 22-24 Mbps down and 4-4.5 Mbps up is very nice. That said, there have been a few problems along the way, and more recently evenings have become significantly irritating.

There were some initial teething problems, and at least three or four occasions where someone was performing “upgrades” during business hours over the course of several consecutive days. These upgrade periods wouldn’t have affected people who are away at work or school or whatever during the day, as by the time they got home, the connection would have been back up. But for me, I had to either tether my mobile phone to my laptop, or go down to a cafe or friend’s place to get connectivity.

There’s also the icing problem, which occurs a couple of times a year when snow falls below 200-300 metres for a few days. No internet, and also no mobile phone.

These are all relatively isolated incidents though. What’s been happening more recently is our connection speed in the evenings has gone to hell. I don’t tend to do streaming video, and my syncing several GB of software mirrors happens automatically in the wee hours while I’m asleep, so my subjective impression for some time has just been that “things were kinda slower during the evenings” (web browsing, pushing/pulling from already cloned git repos, etc.). I vented about this on Twitter in mid-June but didn’t take any further action at the time.

Several weeks later, on the evening of July 28, I needed to update and rebuild a Ceph package for openSUSE and SLES. The specifics aren’t terribly relevant to this post, but the process (which is reasonably automated) involves running something like `git clone git@github.com:SUSE/ceph.git && cd ceph && git submodule update --init --recursive`, which in turn downloads a few GB of data. I’ve done this several times in the past, and it usually takes an hour, or maybe a bit more. So you start it up, then go make a meal, come back and you’re done.

Not so on that Friday evening. It took six hours.

I ran a couple of speed tests:

I looked at my smokeping graphs:

That’s awfully close to 20% packet loss in the evenings. It happens every night:

And it’s been happening for a long time:

Right now, as I’m writing this, the last three hours show an average of 15.57% packet loss:

So I’ve finally opened a support ticket with iiNet. We’ll see what they say. It seems unlikely that this is a problem with my equipment, as my neighbour on the same wireless tower has also had noticeable speed problems for at least the last couple of months. I’m guessing it’s either not enough backhaul, or the local NBN wireless tower is underprovisioned (or oversubscribed). I’m leaning towards the latter, as in recent times the signal strength indicators on the NTD flick between two amber and three green lights in the evenings, whereas during the day it’s three green lights all the time.

sthbrx - a POWER technical blog: memcmp() for POWER8

Mon, 2017-08-07 13:00
Userspace

When writing C programs in userspace there is libc, which does so much of the heavy lifting. One important thing libc provides is portability in performing syscalls: you don't need to know the architectural details of performing a syscall on each architecture your program might be compiled for. Another important feature libc provides for the average userspace programmer is highly optimised routines for things that are usually performance critical. It would be extremely inefficient if each userspace programmer had to implement even the naive version of these functions, let alone optimised versions. Let us take memcmp() for example; I could trivially implement this in C like:

int memcmp(uint8_t *p1, uint8_t *p2, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (p1[i] < p2[i])
			return -1;
		if (p1[i] > p2[i])
			return 1;
	}

	return 0;
}

However, while it is incredibly portable, it is simply not going to perform well, which is why the nice people who write libc have highly optimised versions in assembly for each architecture.

Kernel

When writing code for the Linux kernel, there isn't the luxury of a fully featured libc, since libc expects (and needs) to be in userspace; therefore we need to implement the features we need ourselves. Linux doesn't need all the features, but something like memcmp() is definitely a requirement.

There have been some recent optimisations in glibc from which the kernel could benefit too! The question to be asked is, does the glibc optimised power8_memcmp() actually go faster or is it all smoke and mirrors?

Benchmarking memcmp()

With things like memcmp() it is actually quite easy to choose datasets which can make any implementation look good. For example, the new power8_memcmp() makes use of the vector unit of the POWER8 processor; in order to do so in the kernel there must be a small amount of setup code so that the rest of the kernel knows that the vector unit has been used and correctly saves and restores the userspace vector registers. This means that power8_memcmp() has a slightly larger overhead than the current one, so for small compares, or compares which differ early on, the newer 'faster' power8_memcmp() might actually not perform as well. For any kind of large compare, however, using the vector unit should outperform a CPU register load-and-compare loop. It is for this reason that I wanted to avoid micro benchmarks and use a 'real world' test as much as possible.

The biggest user of memcmp() in the kernel, at least on POWER, is Kernel Samepage Merging (KSM). KSM provides code to inspect all the pages of a running system to determine if they're identical and deduplicate them if possible. This kind of feature allows for memory overcommit when used in a KVM host environment, as guest kernels are likely to have a lot of similar, read-only pages which can be merged with no overhead afterwards. In order to determine if the pages are the same, KSM must do a lot of page-sized memcmp() calls.

Performance

Performing a lot of page-sized memcmp() calls is the one flaw with this test: the sizes of the memcmp() don't vary. Hopefully the data will be 'random' enough that we can still observe differences between the two approaches.

My approach for testing involved getting the delta of ktime_get() across calls to memcmp() in memcmp_pages() (mm/ksm.c). This actually generated massive amounts of data, so for consistency the following analysis is performed on the first 400MB of deltas collected.
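The instrumentation itself was a small data collection patch. Conceptually it looks something like the sketch below; this is an illustration only, and record_memcmp_delta() is a made-up name standing in for whatever buffers the values for later extraction through debugfs:

/* Sketch of the measurement idea in mm/ksm.c, not the actual patch. */
static int memcmp_pages(struct page *page1, struct page *page2)
{
	char *addr1, *addr2;
	ktime_t before, after;
	int ret;

	addr1 = kmap_atomic(page1);
	addr2 = kmap_atomic(page2);

	before = ktime_get();
	ret = memcmp(addr1, addr2, PAGE_SIZE);
	after = ktime_get();

	/* Hypothetical helper: queue the delta (in ns) so it can be read
	 * back later via /sys/kernel/debug/ksm/memcmp_deltas. */
	record_memcmp_delta(ktime_to_ns(ktime_sub(after, before)));

	kunmap_atomic(addr2);
	kunmap_atomic(addr1);
	return ret;
}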

The host was compiled with powernv_defconfig and run out of a ramdisk. For consistency the host was rebooted between each run so as to not have any previous tests affect the next. The host was rebooted a total of six times: the first three times my 'patched' power8_memcmp() kernel was booted, and the second three times a kernel with just my data collection patch applied, the 'vanilla' kernel. Both kernels are based off 4.13-rc3.

On each boot the following script was run and the resulting deltas file saved off before reboot. The command line argument was always 15.

#!/bin/sh
ppc64_cpu --smt=off

#Host actually boots with ksm off but be sure
echo 0 > /sys/kernel/mm/ksm/run

#Scan a lot of pages
echo 999999 > /sys/kernel/mm/ksm/pages_to_scan

echo "Starting QEMUs"
i=0
while [ "$i" -lt "$1" ] ; do
	qemu-system-ppc64 -smp 1 -m 1G -nographic -vga none \
		-machine pseries,accel=kvm,kvm-type=HV \
		-kernel guest.kernel -initrd guest.initrd \
		-monitor pty -serial pty &
	i=$(expr $i + 1);
done

echo "Letting all the VMs boot"
sleep 30

echo "Turning KSM on"
echo 1 > /sys/kernel/mm/ksm/run

echo "Letting KSM do its thing"
sleep 2m

echo 0 > /sys/kernel/mm/ksm/run

dd if=/sys/kernel/debug/ksm/memcmp_deltas of=deltas bs=4096 count=100

The guest kernel was a pseries_le_defconfig 4.13-rc3 with the same ramdisk the host used. It booted to the login prompt and was left to idle.

Analysis

A variety of histograms were then generated in an attempt to see how the behaviour of memcmp() changed between the two implementations. It should be noted here that the y axis in the following graphs is a log scale, as there were a lot of small deltas. The first observation is that the vanilla kernel had more small deltas; this is made particularly evident by the 'tally' points, which are a running total of all deltas less than the tally value.

Graph 1 depicting the vanilla kernel having a greater number of small (sub-20ns) deltas than the patched kernel. The green points rise faster (left to right) and higher than the yellow points.

Still looking at the tallies, graph 1 also shows that the tallies of deltas for the two kernels are very close by the 100ns mark, which means that the overhead of power8_memcmp() is not too great.

The problem with looking at only deltas under 200ns is that the performance results we want (that is, the difference between the algorithms) are being masked by things like cache effects. To avoid this problem it may be wise to look at longer running (larger delta) memcmp() calls.

The following graph plots all deltas below 5000ns (still relatively short calls to memcmp()), but an interesting trend emerges. Graph 2 shows that above 500ns the blue (patched kernel) points appear to have all shifted left with respect to the purple (vanilla kernel) points. This shows that for any memcmp() which will take more than 500ns to get a result it is favourable to use power8_memcmp(), and it is only detrimental to use power8_memcmp() if the time will be under 50ns (a conservative estimate).

It is worth noting that graph 1 and graph 2 are generated by combining the first run of data collected from the vanilla and patched kernels. All the deltas for both runs can be viewed separately here for vanilla and here for patched. Finally, the results from the other four runs look very much the same and provide me with a fair amount of confidence that these results make sense.

Conclusions

It is important to separate possible KSM optimisations from generic memcmp() optimisations. For example, perhaps KSM shouldn't be calling memcmp() at all if it suspects the first byte will differ. On the other hand, something that power8_memcmp() could do (which it currently doesn't) is check the length parameter and avoid the overhead of enabling the kernel vector unit if the compare is less than some small number of bytes.
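A minimal sketch of that length check is below; the names, the threshold and the C form are assumptions for illustration, since the real routine is assembly:

/* Hypothetical dispatch: skip the vector path entirely for tiny compares. */
#define VMX_THRESHOLD	64	/* assumed cut-over point */

int power8_memcmp(const void *s1, const void *s2, size_t n)
{
	if (n < VMX_THRESHOLD)
		return memcmp_scalar(s1, s2, n);  /* GPR loop, no facility switch */

	return memcmp_vmx(s1, s2, n);  /* pays the enable/save/restore cost */
}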

It does seem that, at least for the 'average case', glibc's power8_memcmp() is an improvement over what we have now.

Future work

A second round of data collection, plotting delta vs the position of the first byte to differ, should confirm these results; this would mean a more invasive patch to KSM.

OpenSTEM: This Week in HASS – term 3, week 5

Mon, 2017-08-07 10:04
This week students in all year levels are working on their research project for the term. Our youngest students are looking at items and pictures from the past, while our older students are collecting source material for their project on Australian history. Foundation/Prep/Kindy to Year 3 The focus of this term is an investigation into […]

Francois Marier: Time Synchronization with NTP and systemd

Mon, 2017-08-07 07:10

I recently ran into problems with generating TOTP 2-factor codes on my laptop. The fact that some of the codes would work and some wouldn't suggested a problem with time keeping on my laptop.

This was surprising since I've been running NTP for many years and have therefore never had to think about time synchronization. After realizing that ntpd had stopped working on my machine for some reason, I found that systemd provides an easier way to keep time synchronized.

The new systemd time synchronization daemon

On a machine running systemd, there is no need to run the full-fledged ntpd daemon anymore. The built-in systemd-timesyncd can do the basic time synchronization job just fine.

However, I noticed that the daemon wasn't actually running:

$ systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: inactive (dead)
Condition: start condition failed at Thu 2017-08-03 21:48:13 PDT; 1 day 20h ago
     Docs: man:systemd-timesyncd.service(8)

referring instead to a mysterious "failed condition". Attempting to restart the service did provide more details though:

$ systemctl restart systemd-timesyncd.service
$ systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: inactive (dead)
Condition: start condition failed at Sat 2017-08-05 18:19:12 PDT; 1s ago
           └─ ConditionFileIsExecutable=!/usr/sbin/ntpd was not met
     Docs: man:systemd-timesyncd.service(8)

The above check for the presence of /usr/sbin/ntpd points to a conflict between ntpd and systemd-timesyncd. The solution of course is to remove the former before enabling the latter:

apt purge ntp

Enabling time synchronization with NTP

Once the ntp package has been removed, it is time to enable NTP support in timesyncd.

Start by choosing the NTP server pool nearest you and put it in /etc/systemd/timesyncd.conf. For example, mine reads like this:

[Time]
NTP=ca.pool.ntp.org

before restarting the daemon:

systemctl restart systemd-timesyncd.service

That may not be enough on your machine though. To check whether or not the time has been synchronized with NTP servers, run the following:

$ timedatectl status
...
Network time on: yes
NTP synchronized: no
RTC in local TZ: no

If NTP is not enabled, then you can enable it by running this command:

timedatectl set-ntp true

Once that's done, everything should be in place and time should be kept correctly:

$ timedatectl status
...
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no