<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/test/integration/test-compressed-indexes, branch master</title>
<subtitle>Debian's commandline package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=master</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=master'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2017-09-09T18:12:15Z</updated>
<entry>
<title>ftparchive: Do not pass through disabled hashes in Sources</title>
<updated>2017-09-09T18:12:15Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2017-09-03T12:38:58Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=8d23827be3043daf7fed1b86da1d41578889eaeb'/>
<id>urn:sha1:8d23827be3043daf7fed1b86da1d41578889eaeb</id>
<content type='text'>
When writing a Sources file, hashes that were already present
in the .dsc were always copied through (or modified), even if
disabled. Remove them instead when they are disabled; otherwise
we end up with hashes for tarballs and the like but not for .dsc
files (as the .dsc obviously does not hash itself).
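
For illustration (a sketch, not part of this change): whether a hash
is generated at all is steered by the APT::FTPArchive checksum
switches documented in apt-ftparchive(1), so a Sources file without
MD5 sums could be produced like

  apt-ftparchive -o APT::FTPArchive::MD5=false sources pool/ > Sources

and with this fix the MD5 lines inherited from the .dsc no longer
sneak back into the output.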

Also adjust the tests: test-compressed-indexes relied on Files
being present in showsrc, and test-apt-update-weak-hashes expected
the tarball to be downloaded when an archive only has MD5 while we
require SHA256; that used to work only because the tarball was
always included.

Closes: #872963
</content>
</entry>
<entry>
<title>tests: disable EIPP logging in test-compressed-indexes</title>
<updated>2016-07-05T18:45:47Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-07-05T18:36:15Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=68151307d42ed64cd6258f94a0d748abe9efb8e0'/>
<id>urn:sha1:68151307d42ed64cd6258f94a0d748abe9efb8e0</id>
<content type='text'>
The test makes heavy use of disabling compression types that are
usually available in some way or another, like xz, which is how the
EIPP logs are compressed by default. Instead of changing this test
to adjust the filename according to the compression we want to test,
we just disable EIPP logging for this test, as that is easier and
has the same practical effect.
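
As a sketch of the mechanism (assuming Dir::Log::Planner, the option
naming the EIPP log file, is the relevant knob), logging can be
disabled by pointing the option at nothing:

  # no eipp.log.xz written means no dependency on the xz compressor
  apt-get install -o Dir::Log::Planner= foo

The package name here is just a placeholder.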

Gbp-Dch: Ignore
</content>
</entry>
<entry>
<title>do not hang on piped input in PipedFileFdPrivate</title>
<updated>2016-06-10T08:48:25Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-06-09T18:41:58Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=bdc42211700ef0f6f40e4ef3f362e52d684d70fb'/>
<id>urn:sha1:bdc42211700ef0f6f40e4ef3f362e52d684d70fb</id>
<content type='text'>
This affects only compressors configured on the fly (rather than
the built-in ones, as those use a library).
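
"Configured on the fly" refers to compressors added via the
APT::Compressor configuration tree, which are run as external
binaries through pipes. A minimal sketch, with the name and binary
being arbitrary examples:

  APT::Compressor::rev {
    Name "rev";
    Extension ".reversed";
    Binary "rev";
    Cost "10";
  };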
</content>
</entry>
<entry>
<title>show final solution in --no-download --fix-missing mode</title>
<updated>2016-05-16T14:17:54Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-05-14T08:43:03Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=eb1f04dda07c2b69549ad9fd793cca0e91841b3e'/>
<id>urn:sha1:eb1f04dda07c2b69549ad9fd793cca0e91841b3e</id>
<content type='text'>
This commit moves the creation of the fetcher, and with it the
calculation of the filenames, before the code generating the various
lists detailing the solution. This means that a simulation comes ever
so slightly closer to a real run, as it will require and parse the
package indexes for filenames and queue the URIs, so that a
simulation "using" an unavailable download method actually fails now.

The real benefit of this change, though, is that the rather special
but nonetheless handy --no-download --fix-missing mode now actually
shows the solution it will apply to the system, rather than the
solution it would apply if it could download all not-yet-downloaded
packages.
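
In practice this is the invocation (both options documented in
apt-get(8), the package name just a placeholder):

  # act only on already-downloaded archives, holding back what is missing
  apt-get install --no-download --fix-missing foo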
</content>
</entry>
<entry>
<title>silently skip acquire of empty index files</title>
<updated>2016-04-14T19:56:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-04-14T15:32:17Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=b2fd852459a6b9234255644730f48f071ccad64d'/>
<id>urn:sha1:b2fd852459a6b9234255644730f48f071ccad64d</id>
<content type='text'>
There is just no point in taking the time to acquire empty files,
especially as an empty file is usually served as a tiny non-empty
compressed file anyway.
</content>
</entry>
<entry>
<title>keep compressed indexes in a low-cost format</title>
<updated>2016-01-08T14:40:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-07T19:32:09Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=0179cfa83cf0042235eda41db7f35c420781c63e'/>
<id>urn:sha1:0179cfa83cf0042235eda41db7f35c420781c63e</id>
<content type='text'>
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order, as there is a choice to
be made between download size and speed – limited by what is available
in the repository.

Storage, on the other hand, has all compressions currently supported
by apt available, and to reduce the runtime of tools accessing these
files the compression type should be a low-cost format in terms of
decompression.
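
Both choices map to documented apt.conf options; the concrete values
here are just an illustration:

  // prefer xz over gz when fetching compressed indexes
  Acquire::CompressionTypes::Order { "xz"; "gz"; };
  // keep the fetched indexes compressed on disk
  Acquire::GzipIndexes "true";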

apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional
files we also have to deal with files which simply can't be stored
uncompressed as they are just too big (like Contents for apt-file).
Traditionally they are downloaded in a low-cost format (gz) as
repositories do not provide other formats, but there might be even
lower-cost formats, and for download we could introduce higher-cost
ones in the repositories.

Downloading an entire index potentially requires recompression to
another format, so an update potentially takes longer – but big files
are usually updated via pdiffs, which have to de- and re-compress on
the fly anyhow, so there is no extra time needed, and in general it
seems beneficial to invest the time at update to save time later on
file access.
</content>
</entry>
<entry>
<title>tests: try to pick up compressors from config automatically</title>
<updated>2016-01-08T14:40:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-03T21:39:46Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=912a61312a0463b46d6560756c89146f59daaab6'/>
<id>urn:sha1:912a61312a0463b46d6560756c89146f59daaab6</id>
<content type='text'>
Less hardcoding should help when introducing new compressors.

Git-Dch: Ignore
</content>
</entry>
<entry>
<title>tests: support spaces in path and TMPDIR</title>
<updated>2015-12-19T22:04:34Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-12-15T16:20:26Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=3abb6a6a1e485b3bc899b64b0a1b7dc2db25a9c2'/>
<id>urn:sha1:3abb6a6a1e485b3bc899b64b0a1b7dc2db25a9c2</id>
<content type='text'>
This doesn't make all tests run cleanly, but it at least allows
writing tests which can run successfully in such environments.
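
The practical rule for test authors is simply rigorous quoting; a
generic sketch, not a specific helper of the framework:

  # every expansion which may contain a space has to be quoted
  TESTDIR="$(mktemp -d "${TMPDIR:-/tmp}/apt tests.XXXXXX")"
  cd "$TESTDIR" || exit 1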

Git-Dch: Ignore
</content>
</entry>
<entry>
<title>support arch:all data e.g. in separate Packages file</title>
<updated>2015-11-04T17:42:27Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-10-28T13:38:49Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1dd20368486820efb6ef4476ad739e967174bec4'/>
<id>urn:sha1:1dd20368486820efb6ef4476ad739e967174bec4</id>
<content type='text'>
Based on a discussion with Niels Thykier, who asked for Contents-all,
this makes apt try, for every architecture-dependent file, to also get
a file for the architecture 'all', which is now treated internally as
an official architecture that is always around (like native). This way
arch:all data can be shared instead of being duplicated for each
architecture, which would require the user to download the same
information again and again.

There is one problem however: In Debian there is already a binary-all/
Packages file, but the binary-any files still include arch:all packages,
so downloading this file now would be a waste of time, bandwidth and
disk space. We therefore need a way to decide whether it makes sense to
download the all file for Packages in Debian or not. The obvious answer
would be a special flag in the Release file indicating this, which would
need to default to 'no' and every reasonable repository would override
it to 'yes' in a few years' time, but the flag would be there "forever".

Looking closer at a Release file we see the field "Architectures", which
doesn't include 'all' at the moment. With the idea outlined above that
'all' is a "proper" architecture now, we interpret this field as being
authoritative in declaring which architectures are supported by this
repository. If it lists 'all', apt will try to get it; if not, it will
be skipped. This gives us another interesting feature: If I configure a
source to download armel and mips, but it declares it supports only
armel, apt will now print a notice saying as much. Previously this was a
very cryptic failure. If on the other hand the repository supports mips,
too, but for some reason doesn't ship mips packages at the moment, this
'missing' file is silently ignored (= that is the same as the repository
including an empty file).
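
For reference, the per-source architecture list mentioned here is the
documented [arch=...] option in sources.list; with an entry like

  deb [arch=armel,mips] http://deb.debian.org/debian stable main

(URL and suite chosen just as an example) apt now prints a notice if
the Release file's Architectures field declares only armel.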

The Architectures field isn't mandatory though, so if it isn't there
we assume that every architecture is supported by this repository;
arch:all is still skipped then, as it is not listed in the Release
file.
</content>
</entry>
<entry>
<title>unbreak the copy-method claiming hashsum mismatch since ~exp9</title>
<updated>2015-11-04T17:04:00Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-10-11T11:58:23Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=af9e40c9bfb353b8aea1e2621b3b5a8c1c1db4bd'/>
<id>urn:sha1:af9e40c9bfb353b8aea1e2621b3b5a8c1c1db4bd</id>
<content type='text'>
Commit 653ef26c70dc9c0e2cbfdd4e79117876bb63e87d broke the camel's back
insofar as everything works in terms of our internal use of copy:/, but
external use is completely broken. This is kind of the reverse of what
happened "in parallel" in the sid branch, where external use was mostly
fine, but internal and external use exploded on the GzipIndexes option.

We fix this now by rewriting our internal use, letting copy:/ only do
what the name suggests: copy files, and not uncompress them on the fly.
Then we teach copy and the uncompressors how to deal with /dev/null,
using it as the destination file in case we don't want to store the
uncompressed files on disk.
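
"External use" here means a plain sources.list entry using the copy
transport, e.g. (the path is just an example):

  deb copy:/srv/local-mirror stable main

which is what reported the hashsum mismatch before this fix.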

Closes: 799158
</content>
</entry>
</feed>
