Much faster incremental apt updates

APT has been slow at applying PDiff files, the diff format used for Packages, Sources, and other files in the archive.

Improving performance for uncompressed files

The reason for this was that our I/O is unbuffered: we were reading one byte at a time in order to find line endings. This changed on December 24, when I added read buffering for reading lines, vastly improving the performance of rred.
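
To illustrate the problem, here is a minimal sketch (my own illustrative code, not APT’s actual implementation) of a line reader that performs one read() system call per byte:

    // Illustrative only: one read(2) system call per byte, which is what
    // made the unbuffered line reading so slow.
    #include <string>
    #include <unistd.h>

    bool ReadLineUnbuffered(int fd, std::string &line) {
        line.clear();
        char c;
        while (read(fd, &c, 1) == 1) {  // one syscall per byte
            if (c == '\n')
                return true;
            line.push_back(c);
        }
        return !line.empty();  // final line without a trailing newline
    }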

But it was still slow, so today I used gperftools to profile the rred method running on a 430 MB uncompressed Contents file with a 75 KB patch. I noticed that our ReadLine() method was calling some method that took a long time (google-pprof told me it was some _nss method, but that was wrong; thank you, addr2line).

Looking further into the code, I noticed that we sized the buffer according to the length of the current line, and that whenever we moved data out of the buffer, we called memmove() to shift the remaining data to the front.

So I tried a fixed buffer size of 4096 bytes (commit). Now memmove() spent much less time moving memory around inside the buffer. This helped a lot, bringing the run time on my example file down from 46 seconds to about 2 seconds.
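
A minimal sketch of this fixed-size buffer approach (hypothetical code with names of my choosing, not the actual commit):

    // Illustrative sketch: read in 4096-byte chunks, hand out one line at
    // a time, and compact the buffer with memmove() after each line.
    #include <cstring>
    #include <string>
    #include <unistd.h>

    struct BufferedReader {
        char buf[4096];
        size_t fill = 0;  // number of valid bytes in buf

        bool ReadLine(int fd, std::string &line) {
            for (;;) {
                void *nl = memchr(buf, '\n', fill);
                if (nl != nullptr) {
                    size_t len = static_cast<char *>(nl) - buf + 1;
                    line.assign(buf, len - 1);           // without the newline
                    memmove(buf, buf + len, fill - len); // compact to the front
                    fill -= len;
                    return true;
                }
                ssize_t n = read(fd, buf + fill, sizeof(buf) - fill);
                if (n <= 0)
                    return false;  // EOF/error; lines > 4096 bytes not handled here
                fill += n;
            }
        }
    };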

Later on, I rewrote the code to not use memmove() at all, opting for start and end offsets instead: reading from the buffer now simply advances the start offset (commit).

This further improved things, bringing the run time down to about 1.6 seconds. We could now also increase the buffer size again without any negative effect.
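
The memmove()-free variant could be sketched roughly like this (again illustrative; the real commit handles more corner cases):

    // Illustrative sketch: valid data lives in buf[start, end). Reading a
    // line just advances start; no bytes are copied inside the buffer.
    #include <cstring>
    #include <string>
    #include <unistd.h>

    struct BufferedReader {
        char buf[65536];  // the buffer can grow again without penalty
        size_t start = 0, end = 0;

        bool ReadLine(int fd, std::string &line) {
            for (;;) {
                void *nl = memchr(buf + start, '\n', end - start);
                if (nl != nullptr) {
                    char *p = static_cast<char *>(nl);
                    line.assign(buf + start, p - (buf + start));
                    start = p - buf + 1;  // consume by advancing start
                    return true;
                }
                if (start == end)
                    start = end = 0;  // buffer fully consumed: reuse from 0
                ssize_t n = read(fd, buf + end, sizeof(buf) - end);
                if (n <= 0)
                    return false;  // EOF/error; a partial line stuck at the
                                   // end of the buffer is not handled here
                end += n;
            }
        }
    };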

Effects on apt-get update

I measured the run time of apt-get update for updating from today’s 07:52 dinstall run to the 13:52 one. Configured sources are unstable and experimental with the amd64 and i386 architectures; the appstream and apt-file indexes were disabled for this test, so only Packages and Sources indexes are fetched.

The results are impressive:

  • For APT 1.1.6, updating with PDiffs enabled took 41 seconds.
  • For APT 1.1.7, updating with PDiffs enabled took 4 seconds.

That’s a tenfold increase in performance. By the way, running without PDiffs took 20 seconds, so there’s no reason not to use them.

Future work

Writes are still unbuffered and account for about 75% to 80% of our run time. That’s an obvious area for improvement.

[Figure: rred-profile]
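
As a rough idea of what that could look like (hypothetical sketch, not an actual patch), small writes would be collected in memory and flushed with a single write() call:

    // Hypothetical sketch of write buffering: coalesce many small writes
    // into one write(2) system call.
    #include <cstring>
    #include <unistd.h>

    struct BufferedWriter {
        int fd;
        char buf[65536];
        size_t fill = 0;

        explicit BufferedWriter(int fd_) : fd(fd_) {}

        void Flush() {
            size_t done = 0;
            while (done < fill) {
                ssize_t n = write(fd, buf + done, fill - done);
                if (n <= 0)
                    break;  // error handling omitted in this sketch
                done += n;
            }
            fill = 0;
        }

        void Write(const char *data, size_t len) {
            if (len > sizeof(buf) - fill)
                Flush();
            if (len >= sizeof(buf)) {  // very large writes bypass the buffer
                ssize_t n = write(fd, data, len);
                (void)n;               // error handling omitted
                return;
            }
            memcpy(buf + fill, data, len);
            fill += len;
        }
    };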

Performance for patching compressed files

Contents files are usually compressed with gzip, and kept compressed locally because they are about 500 MB uncompressed and only 30MB compressed. I profiled this, and it turns out there is not much we can do about it: The majority of the time is spent inside zlib, mostly combining CRC checksums:

[Figure: rred-gz-profile]

Going forward, I think a reasonable option might be to recompress Contents files using lzo: they will be a bit bigger (50 MB instead of 30 MB), but lzo is about 6 times as fast (compressing a 430 MB Contents file took 1 second instead of 6).

Key transition

I started transitioning from 1024D to 4096R. The new key is available at:

https://people.debian.org/~jak/pubkey.gpg

and the keys.gnupg.net key server. A very short transition statement is available at:

https://people.debian.org/~jak/transition-statement.txt

and included below (the http version might get extended over time if needed).

The key consists of one master key and three subkeys (signing, encryption, authentication). The subkeys are stored on an OpenPGP v2 smartcard. That’s really cool, isn’t it?

Somehow it seems that GnuPG 1.4.18 also works with 4096R keys on this smartcard (I accidentally used it instead of gpg2 and it worked fine), although only GPG 2.0.13 and newer is supposed to work.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512

Because 1024D keys are not deemed secure enough anymore, I switched to
a 4096R one.

The old key will continue to be valid for some time, but i prefer all
future correspondence to come to the new one.  I would also like this
new key to be re-integrated into the web of trust.  This message is
signed by both keys to certify the transition.

the old key was:

pub   1024D/00823EC2 2007-04-12
      Key fingerprint = D9D9 754A 4BBA 2E7D 0A0A  C024 AC2A 5FFE 0082 3EC2

And the new key is:

pub   4096R/6B031B00 2014-10-14 [expires: 2017-10-13]
      Key fingerprint = AEE1 C8AA AAF0 B768 4019  C546 021B 361B 6B03 1B00

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlQ9j+oACgkQrCpf/gCCPsKskgCgiRn7DoP5RASkaZZjpop9P8aG
zhgAnjHeE8BXvTSkr7hccNb2tZsnqlTaiQIcBAEBCgAGBQJUPY/qAAoJENc8OeVl
gLOGZiMP/1MHubKmA8aGDj8Ow5Uo4lkzp+A89vJqgbm9bjVrfjDHZQIdebYfWrjr
RQzXdbIHnILYnUfYaOHUzMxpBHya3rFu6xbfKesR+jzQf8gxFXoBY7OQVL4Ycyss
4Y++g9m4Lqm+IDyIhhDNY6mtFU9e3CkljI52p/CIqM7eUyBfyRJDRfeh6c40Pfx2
AlNyFe+9JzYG1i3YG96Z8bKiVK5GpvyKWiggo08r3oqGvWyROYY9E4nLM9OJu8EL
GuSNDCRJOhfnegWqKq+BRZUXA2wbTG0f8AxAuetdo6MKmVmHGcHxpIGFHqxO1QhV
VM7VpMj+bxcevJ50BO5kylRrptlUugTaJ6il/o5sfgy1FdXGlgWCsIwmja2Z/fQr
ycnqrtMVVYfln9IwDODItHx3hSwRoHnUxLWq8yY8gyx+//geZ0BROonXVy1YEo9a
PDplOF1HKlaFAHv+Zq8wDWT8Lt1H2EecRFN+hov3+lU74ylnogZLS+bA7tqrjig0
bZfCo7i9Z7ag4GvLWY5PvN4fbws/5Yz9L8I4CnrqCUtzJg4vyA44Kpo8iuQsIrhz
CKDnsoehxS95YjiJcbL0Y63Ed4mkSaibUKfoYObv/k61XmBCNkmNAAuRwzV7d5q2
/w3bSTB0O7FHcCxFDnn+tiLwgiTEQDYAP9nN97uibSUCbf98wl3/
=VRZJ
-----END PGP SIGNATURE-----

hardlink 0.3.0 released; xattr support

Today I not only submitted my bachelor thesis to the printing company, but also released a new version of hardlink, my file deduplication tool.

hardlink 0.3 now features xattr support, contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

I also added support for a --minimum-size option.

Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

The next major version will split the code into multiple files and clean it up a bit; it is getting a bit long for a single file.

APT 1.1~exp3 released to experimental: First step to sandboxed fetcher methods

Today, with the help of ioerror on IRC, we worked on reducing the attack surface of our fetcher methods.

There are three things that we looked at:

  1. Reducing privileges by setting a new user and group
  2. chroot()
  3. seccomp-bpf sandbox

We implemented the first of these today. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs), and the methods switch to that user shortly after starting. The only methods doing this right now are copy, ftp, gpgv, gzip, http, and https.

If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.
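
For illustration, a minimal sketch of such a privilege drop (APT’s real code handles more; the helper name and exit code are mine):

    // Illustrative sketch: drop root privileges to the _apt user. Order
    // matters: supplementary groups first, then gid, then uid, because an
    // unprivileged process can no longer change them.
    #include <cstdio>
    #include <cstdlib>
    #include <grp.h>
    #include <pwd.h>
    #include <unistd.h>

    static void DropPrivileges() {
        struct passwd *pw = getpwnam("_apt");
        if (pw == nullptr) {
            fprintf(stderr, "user _apt not found\n");
            exit(EXIT_FAILURE);
        }
        // Keep only the user's primary group (see the known issues below).
        if (setgroups(1, &pw->pw_gid) != 0 ||
            setgid(pw->pw_gid) != 0 ||
            setuid(pw->pw_uid) != 0) {
            perror("cannot drop privileges");
            exit(EXIT_FAILURE);  // the method fails to start; nothing is fetched
        }
    }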

Known issues:

  • We drop all groups except the primary gid of the user
  • copy breaks if that group has no read access to the files

We also plan to add chroot() and seccomp sandboxing later on, to further reduce the attack surface when parsing untrusted files and protocols.
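
A rough sketch of the planned chroot() step (hypothetical; not part of 1.1~exp3):

    // Hypothetical sketch: confine a method to a directory before it
    // parses any untrusted data. chroot() needs privileges, so it would
    // run just before switching to the _apt user.
    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>

    static void EnterChroot(const char *dir) {
        if (chroot(dir) != 0 || chdir("/") != 0) {
            perror("cannot chroot");
            exit(EXIT_FAILURE);
        }
        // A seccomp-bpf filter whitelisting the few system calls a method
        // needs would further restrict what a compromised method can do.
    }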

Looking for HP Touchpad, Intel tablets, and other devices

If someone in Germany still has (new) Touchpads to sell, or is willing to send them to Germany at low cost, I’d buy one or two of them at the reduced price (16 GB: 99€, 32 GB: 129€), or take them for free.

I promise that I will not sell them to others. I’m interested in WebOS, in running Debian and/or Ubuntu on those devices (for the extra fun factor), and in lending them to family members for browsing the web and so on.

I’ll also take other tablets, smartphones, and various kinds of ARM and PowerPC hardware (I guess that’s all that’s interesting for me) for free; just send me an email if you have some and want to give them to me. This applies to Intel stuff as well: I’d really like to get some kind of WeTab/ExoPC, but can’t buy one at the moment (and they’re probably too outdated hardware-wise for buying one to make sense).