<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/apt-pkg/acquire-item.cc, branch 1.2.5</title>
<subtitle>Debian's commandline package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=1.2.5</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=1.2.5'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2016-03-06T11:57:38Z</updated>
<entry>
<title>do not move not-failed pdiff-patches into CWD on failure</title>
<updated>2016-03-06T11:57:38Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-03-06T11:03:34Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=dfcf7f356b790338f0a3e9df3c5d6f159814fe53'/>
<id>urn:sha1:dfcf7f356b790338f0a3e9df3c5d6f159814fe53</id>
<content type='text'>
If a single pdiff fails, we have to fail the entire patching endeavour
and fall back to getting the complete file instead. That is easy with
serverside merged pdiffs as we get them one by one. For clientside
merging we get them all at once though, which means that a failure in
one has to stop the entire pipeline, which works as expected (as proven
by the bug reporters, who don't even notice it happening). The problem
is just that the first failing pdiff performs the cleanup, so another
pdiff which happens to be acquired successfully after we processed the
failure no longer finds the file it is supposed to use as its base, so
the patch is renamed with what should be a unique extension and moved
into the current working directory. Processing then stops as the patch
realizes that it isn't the last one to complete downloading.
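
As a hedged sketch (all names here are invented, not the commit's
code), the behaviour the fix implies looks roughly like this:

    #include &lt;cstdio&gt;
    #include &lt;string&gt;

    // Hypothetical guard: a patch finishing after the pipeline has
    // already failed is discarded instead of being renamed into the
    // current working directory.
    static void FinishPatch(std::string const &amp;patch, bool const pipelineFailed)
    {
       if (pipelineFailed)
       {
          std::remove(patch.c_str());   // the complete file is fetched anyway
          return;
       }
       // ... apply the patch against its base file as usual ...
    }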

On the plus side this means it is neither us using a bad temporary
location nor a security problem. It "just" unconditionally overwrites
files in your current working directory (if you happen to have them
named like a pdiff patch – a bit unlikely, perhaps) and so drops files
there which are never used again.

I guess this was introduced for real in
4e3c5633b1e74b4f58b95f339cfbbf4cbf21ab3e, as I made the need for the
existence of the base file rather explicit there, but the potential has
been lingering in the code for far longer.

Closes: #816837
</content>
</entry>
<entry>
<title>deal with partially downloaded changelogs</title>
<updated>2016-03-06T08:39:30Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-02-16T14:35:36Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=872bd447163066b331eb12c0fed0d71c0585f47b'/>
<id>urn:sha1:872bd447163066b331eb12c0fed0d71c0585f47b</id>
<content type='text'>
Changelogs are relatively small and we have no hashes for them, but we
had support for partial files for them before, so let's stick to it.

This also deletes the (partial) file before moving the downloaded file
into its place – rename(2) should be doing this by itself, but testing
on semaphoreci suggests that this isn't always the case (error is "Stale
file handle") and we don't need an atomic replace here, so be explicit.

Git-Dch: Ignore
</content>
</entry>
<entry>
<title>Add missing numeric includes in files using std::accumulate()</title>
<updated>2016-02-26T14:17:14Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2016-02-26T14:17:14Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=2f4e4070a5247abac86b2ca4ae24a8400da3c42e'/>
<id>urn:sha1:2f4e4070a5247abac86b2ca4ae24a8400da3c42e</id>
<content type='text'>
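For illustration (an assumption, not the commit's diff):
std::accumulate() is declared in &lt;numeric&gt;, and relying on transitive
includes to pull it in breaks between standard library implementations.

    #include &lt;numeric&gt;   // declares std::accumulate
    #include &lt;vector&gt;

    int main()
    {
       std::vector&lt;unsigned long long&gt; sizes{1, 2, 3};
       // Without the &lt;numeric&gt; include this may still compile by
       // accident via transitive includes of other headers.
       return std::accumulate(sizes.begin(), sizes.end(), 0ull) == 6 ? 0 : 1;
    }
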
Reported-By: Helmut Grohne on IRC
</content>
</entry>
<entry>
<title>always download changelogs into /tmp first</title>
<updated>2016-02-11T22:13:47Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-02-11T21:54:49Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=6fd4b4c0b693b52cb8b593b76e5b60f77e500454'/>
<id>urn:sha1:6fd4b4c0b693b52cb8b593b76e5b60f77e500454</id>
<content type='text'>
pkgAcqChangelog has the default behaviour of downloading a changelog to
a temporary directory (inside /tmp, not /tmp directly), which is cleaned
up on shutdown. This can be overridden to store the changelog more
permanently, but that carries a permission problem.

For changelogs we can 'easily' solve this by always downloading to a
temporary directory and only moving the file out of there once done.
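
A rough sketch of the pattern under POSIX assumptions (mkdtemp(3),
rename(2)); paths and names here are illustrative only:

    #include &lt;stdlib.h&gt;   // mkdtemp(3)
    #include &lt;cstdio&gt;
    #include &lt;string&gt;

    int main()
    {
       char tmpdir[] = "/tmp/apt-changelog-XXXXXX";   // filled in by mkdtemp
       if (mkdtemp(tmpdir) == nullptr)
          return 1;
       std::string const partial = std::string(tmpdir) + "/changelog";
       // ... download the changelog into 'partial' here ...
       // Only on success does the file leave the temporary directory.
       // A real move needs a copy fallback, as rename(2) fails with
       // EXDEV if the target is on another filesystem.
       return std::rename(partial.c_str(), "./changelog.example") == 0 ? 0 : 1;
    }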
</content>
</entry>
<entry>
<title>use local changelog from /usr/share/doc if possible</title>
<updated>2016-02-11T20:07:56Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-02-11T20:07:56Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=b5aba9096e371a5f8612aff05384aca54ccc5acd'/>
<id>urn:sha1:b5aba9096e371a5f8612aff05384aca54ccc5acd</id>
<content type='text'>
If pkgAcqChangelog is told to acquire the changelog for a version, it
will first check if this version is installed on disk and, if so, will
use the local changelog in /usr/share/doc (possibly, even likely, gz
compressed) instead of downloading the file from the web.

An option is provided to disable this; it is enabled by default for
the Ubuntu vendor, as they truncate the local changelogs, and for apt's
--print-uris action.
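
A sketch of such a check, with the path and helper name as assumptions
(the real logic also has to match the version and honour the option):

    #include &lt;initializer_list&gt;
    #include &lt;string&gt;
    #include &lt;sys/stat.h&gt;

    // Illustrative only: return a local changelog path if one exists.
    static std::string LocalChangelog(std::string const &amp;pkg)
    {
       std::string const base = "/usr/share/doc/" + pkg + "/changelog.Debian";
       struct stat st;
       for (char const * const ext : {".gz", ""})
          if (stat((base + ext).c_str(), &amp;st) == 0)
             return base + ext;
       return "";   // not installed: fall back to downloading
    }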
</content>
</entry>
<entry>
<title>remove uncompressed leftover partial file before pdiff bootstrap</title>
<updated>2016-01-08T16:51:23Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-08T16:51:23Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ef3c549e00b2a0487ddee0aeb70e3a29f76c2fbb'/>
<id>urn:sha1:ef3c549e00b2a0487ddee0aeb70e3a29f76c2fbb</id>
<content type='text'>
The code already deals with compressed leftovers, but forgot the
uncompressed files. The opportunity is taken to reorder this code, add
debug messages about the actions taken, and produce such a leftover
file in the associated testcase.
</content>
</entry>
<entry>
<title>use filesize of compressed pdiffs for the limit if possible</title>
<updated>2016-01-08T14:40:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-08T14:30:05Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=4e6219da0dd1e68fad7db972f7ddd76598645228'/>
<id>urn:sha1:4e6219da0dd1e68fad7db972f7ddd76598645228</id>
<content type='text'>
With the addition of the $HASH-Download field in the .diff/Index we got
the size of the compressed patches for 'free', so if that information is
available we can use it for a more fitting calculation of the size
requirements of the patches vs. the complete file.

Note that this predicts too small a size in the transition case in
which the information isn't available for all patches, but figuring this
out would be a lot of code for practically nothing, as only one update
can ever be in such a transition phase.
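
A hedged sketch of the comparison (names invented; apt's real check
also weighs configured limits):

    #include &lt;numeric&gt;
    #include &lt;vector&gt;

    // From $HASH-Download if present; 0 if unknown, which is how the
    // transition case can predict too small a size.
    struct Patch { unsigned long long DownloadSize; };

    // Prefer the patches only while their combined size stays below
    // the size of the complete file.
    static bool UsePatches(std::vector&lt;Patch&gt; const &amp;patches,
                           unsigned long long const completeSize)
    {
       unsigned long long const total = std::accumulate(
          patches.begin(), patches.end(), 0ull,
          [](unsigned long long const sum, Patch const &amp;p) { return sum + p.DownloadSize; });
       return total &lt; completeSize;
    }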
</content>
</entry>
<entry>
<title>keep compressed indexes in a low-cost format</title>
<updated>2016-01-08T14:40:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-07T19:32:09Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=0179cfa83cf0042235eda41db7f35c420781c63e'/>
<id>urn:sha1:0179cfa83cf0042235eda41db7f35c420781c63e</id>
<content type='text'>
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order, as there is a choice to be
made between download size and speed – limited by what's available in
the repository.
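
In apt.conf terms the download side could look like this (a sketch;
what is actually available depends on the repository):

    // prefer the smaller download when the repository offers it
    Acquire::CompressionTypes::Order { "xz"; "gz"; };
    // long-standing option to keep indexes compressed on disk
    Acquire::GzipIndexes "true";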

Storage, on the other hand, has all compressions currently supported by
apt available, and to reduce the runtime of tools accessing these files
the compression type should be a format with low decompression cost.

apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional
files, we also deal with files which simply can't be stored uncompressed
as they are just too big (like Contents for apt-file). Traditionally
they are downloaded in a low-cost format (gz) as repositories do not
provide other formats, but there might be even lower-cost formats, and
for download the repositories could introduce higher-cost ones.

Downloading an entire index potentially requires recompression to
another format, so an update can take longer – but big files are usually
updated via pdiffs, which have to de- and re-compress on the fly anyhow,
so there is no extra time needed, and in general it seems beneficial to
invest the time at update to save time later on file access.
</content>
</entry>
<entry>
<title>allow pdiff bootstrap from all supported compressors</title>
<updated>2016-01-08T14:40:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-05T23:05:24Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=4e3c5633b1e74b4f58b95f339cfbbf4cbf21ab3e'/>
<id>urn:sha1:4e3c5633b1e74b4f58b95f339cfbbf4cbf21ab3e</id>
<content type='text'>
There is no reason to enforce that the file we start the bootstrap with
is compressed with a compressor which is available online. Dropping that
requirement allows us to change the on-disk format and also deals with
repositories adding or removing support for a specific compressor.
</content>
</entry>
<entry>
<title>ensure compression cleanup even without lists-cleanup</title>
<updated>2016-01-08T14:40:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-05T23:08:04Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1866240123a57de8f693b0ba01d8b709f027282d'/>
<id>urn:sha1:1866240123a57de8f693b0ba01d8b709f027282d</id>
<content type='text'>
If we store files compressed in lists/ and a file switched compression
formats, we happened to retain the "old" format, but by default the
cleanup process caught this oversight and removed the file.
[The initial situation described doesn't arise, as we store no files
compressed by default, and even with apt-file configuring Contents files
we don't really have that problem, as there are just .gz files for
those.]

We solve this by simply removing any uncompressed as well as compressed
file (of a compression we support) just before we move the 'new' version
of the file in.
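
A minimal sketch of that cleanup, with the extension list being an
assumption:

    #include &lt;cstdio&gt;
    #include &lt;initializer_list&gt;
    #include &lt;string&gt;

    // Illustrative helper: drop every variant we could have written
    // earlier, so only the fresh file remains after the move.
    static void RemoveAllVariants(std::string const &amp;base)
    {
       for (char const * const ext : {"", ".gz", ".xz", ".bz2", ".lzma", ".lz4"})
          std::remove((base + ext).c_str());   // ENOENT is fine here
    }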
</content>
</entry>
</feed>
