| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
| |
If a package has no description, we would crash in search. While
this should not happen, there seem to be some weird cases where
it does.
A safer way might be to make the whole parser thing safe
against this, so pkgRecords::Lookup(Desc.FileList()) works
and returns a parser where all values are empty. This would
also fix all other instances of this bug, if there are any.
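A minimal sketch of such a defensive check (assuming the usual search loop variables: V, a pkgCache::VerIterator, and Records, a pkgRecords instance):

    // Sketch: skip entries without a usable description instead of
    // handing an invalid iterator to pkgRecords::Lookup().
    pkgCache::DescIterator Desc = V.TranslatedDescription();
    if (Desc.end() == true || Desc.FileList().end() == true)
       continue;                   // no description, nothing to match
    pkgRecords::Parser &P = Records.Lookup(Desc.FileList());
    // ... match the pattern against P.LongDesc() etc. ...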
Closes: #810622
|
|
|
|
|
| |
We need r126 of lz4, as this introduces the lz4frame.h
header.
|
|
|
|
|
|
|
| |
This is not really needed anymore, as those are in stable,
but as they are versioned already, let's just do it.
Gbp-Dch: ignore
|
|
|
|
|
| |
There's no point in breaking all older apt-file versions just
because one old experimental upload was broken.
|
|
|
|
|
| |
Reported-By: Mattia Rizzolo (on IRC)
Gbp-Dch: ignore
|
| |
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
|
|
|
| |
Move the completion to completions/bash/apt and install all
bash completions from completions/bash.
Gbp-Dch: ignore
|
|
|
|
|
|
| |
This ensures that a compatible version of appstream is
installed, that is, one that disables lz4 compression
for its data.
|
|
|
|
|
| |
By storing the size of the string in the cache, we can make use of
it when comparing the names in the hashtable in pkgCache::FindGrp.
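A sketch of the idea (field and function names are illustrative, not apt's actual ones):

    #include <cstddef>
    #include <cstring>

    // With the length stored next to the name in the cache, a hashtable
    // bucket walk can reject mismatches without touching the characters.
    struct GrpEntry { unsigned int NameOffset; unsigned int NameLen; };

    static bool NameMatches(GrpEntry const &G, char const *CacheStrings,
                            char const *Name, size_t Len)
    {
       if (G.NameLen != Len)       // cheap rejection on length mismatch
          return false;
       return std::memcmp(CacheStrings + G.NameOffset, Name, Len) == 0;
    }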
|
|
|
|
|
|
|
| |
Hide the std::string overload instead of providing a
const char * one; the old variant was stupid.
Gbp-Dch: ignore
|
|
|
|
|
|
| |
I overlooked this
Gbp-Dch: ignore
|
| |
|
|
|
|
|
|
|
| |
The code already deals with compressed leftovers, but forgot the
uncompressed files. We take the opportunity to reorder this code,
add debug messages about the actions taken, and produce such a
leftover file in the associated testcase.
|
|
|
|
|
|
|
|
|
|
|
|
| |
With the addition of the $HASH-Download field in the .diff/Index we got
the size of the compressed patches for 'free', so if that information is
available we can use it for a more fitting calculation of the size
requirements of the patches vs. the complete file.
Note that this predicts too small a size in the transition case in
which the information isn't available for all patches, but handling
that would be a lot of code for practically nothing, as only one
update can ever be in such a transition phase.
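In effect the choice becomes (a sketch with apt-like names; the real logic lives in the pdiff acquire code):

    // Prefer the patch series only if its combined size - compressed where
    // $HASH-Download told us, uncompressed otherwise - undercuts the full file.
    unsigned long long PatchBytes = 0;
    for (auto const &Patch : available_patches)
       PatchBytes += Patch.download_hashes.usable()
                        ? Patch.download_hashes.FileSize()
                        : Patch.patch_hashes.FileSize();
    bool const UseDiffs = PatchBytes < CompleteFileSize;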
|
|
|
|
|
|
|
|
| |
Some (older) versions of bash seem to be allergic to a method named
"aptautotest_grep_^apt" (note the caret). It is unlikely that we are
going to write autotests for such commands, so we could just skip
those, but let's instead use "normal" characters in the names and
strip the rest, as we already do with the (arguably more common) '-'.
|
|
|
|
|
|
| |
This way it works more like the compressor binaries, which we can
thereby relieve of their job in the test framework, avoiding the
need to add e.g. liblz4-tool to the test dependencies.
|
|



| |
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order, as there is a choice to
be made between download size and speed – and it is limited by what's
available in the repository.
Storage, on the other hand, has all compressions currently supported
by apt available, and to reduce the runtime of tools accessing these
files the compression type should be a low-cost format in terms of
decompression.
apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional
files, we also deal with files which simply can't be stored
uncompressed as they are just too big (like Contents for apt-file).
Traditionally they are downloaded in a low-cost format (gz) as
repositories do not provide other formats, but there might be even
lower-cost formats, and repositories could introduce higher-cost ones
for download.
Downloading an entire index therefore potentially requires
recompression to another format, so an update potentially takes
longer – but big files are usually updated via pdiffs, which have to
de- and re-compress on the fly anyhow, so there is no extra time
needed, and in general it seems beneficial to invest the time at
update to save time later on every file access.
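For the download side the preference is expressed roughly like this in apt.conf (a sketch; see apt.conf(5) for the exact syntax):

    // Prefer smaller transfers where the repository offers a choice;
    // the storage format is decided separately by apt.
    Acquire::CompressionTypes::Order { "xz"; "bz2"; "gz"; };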
|
|
|
|
|
|
| |
Less hardcoding should help while introducing new compressors.
Git-Dch: Ignore
|
|
|
|
|
|
|
| |
There is no reason to enforce that the file we start the bootstrap
with is compressed with a compressor which is available online. This
allows us to change the on-disk format, as well as dealing with
repositories adding/removing support for a specific compressor.
|
|
|
|
|
|
|
|
|
|
|
|
| |
If we store files compressed in lists/ and a file switches
compression formats, we happened to retain the "old" format as well,
but by default the cleanup process caught this oversight and removed
the file.
[The initial situation described doesn't arise, as by default we
store no files compressed, and even with apt-file configuring
Contents files we don't really have that problem, as there are just
.gz files for those.]
We solve this by removing any uncompressed as well as compressed
(that we support) variant of the file just before we move the 'new'
version of the file into place.
|
|
|
|
|
|
|
|
|
|
|
| |
Adding a new compressor meant adding a new method as well – even
if that boiled down to just linking to our generalized decompressor
under a new name. That is unneeded busywork if we can instead just
call the generalized decompressor and let it figure out which
compressor to use based on the filenames rather than on the program
name.
For compatibility we still ship 'gzip', 'bzip2' and co, but they are
just links to our "new" 'store' method.
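The dispatch then keys on the file name instead of argv[0], roughly (illustrative code, not the method's actual source):

    #include <string>
    #include <vector>

    struct Compressor { std::string Name; std::string Extension; };

    // ".gz" -> gzip, ".xz" -> xz, ...; no known extension: plain copy.
    static std::string CompressorFor(std::string const &File,
                                     std::vector<Compressor> const &List)
    {
       for (auto const &C : List)
          if (File.size() > C.Extension.size() &&
              File.compare(File.size() - C.Extension.size(),
                           C.Extension.size(), C.Extension) == 0)
             return C.Name;
       return "uncompressed";
    }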
|
|
|
|
|
|
|
|
| |
Having a hardcoded list of compression types here doesn't really
provide us with anything besides added complexity each time someone
adds a new compression type. That we don't need to be that specific
is evident from the Contents and Translation-* matchers, which are a
lot more generic and haven't generated problems anyhow.
|
|
|
|
| |
Git-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
| |
Do not create strings within the loop; that creates one string
per language and does more work than needed. Instead, reserve
enough space at the beginning and assign the prefix, and then
resize and append inside the loop.
Also call exists with the string itself instead of the c_str();
this means that the lookup now uses the size information in the
string and does not have to call strlen() on it.
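The pattern, roughly (illustrative names throughout):

    #include <string>
    #include <vector>

    bool Exists(std::string const &);   // stand-in for the actual lookup

    static void ScanTranslations(std::string const &Prefix,
                                 std::vector<std::string> const &Languages)
    {
       std::string File;
       File.reserve(Prefix.size() + 10); // prefix plus any language code
       File.assign(Prefix);
       for (auto const &Lang : Languages)
       {
          File.resize(Prefix.size());    // drop previous suffix, keep buffer
          File.append(Lang);
          if (Exists(File))              // string overload: no strlen() needed
             ;                           // ... use File ...
       }
    }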
|
|
|
|
|
| |
It makes no sense to check whether the value is empty, as it cannot
be: it will always be a hex string of exactly 32 bytes.
|
|
|
|
|
|
|
|
| |
Use the same path for both comparisons, as the operator== path
is faster than just calling compare() - it avoids any comparison
if the size differs.
Gbp-Dch: ignore
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
|
|
|
|
|
| |
Instead of storing a string -> map_stringitem_t mapping, create
our own data type that can point to either a normal string or
a string inside the cache.
This avoids the creation of any string and improves performance
slightly (about 4%).
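Shaped roughly like this (a sketch, not apt's exact class):

    #include <cstddef>
    #include <string>

    // Key type that can point either at a caller's string or at a string
    // already living in the cache, so no temporary std::string is built.
    class StringRef
    {
       const char *Data; size_t Len;
    public:
       StringRef(std::string const &S) : Data(S.data()), Len(S.size()) {}
       StringRef(const char *CacheStr, size_t CacheLen)
          : Data(CacheStr), Len(CacheLen) {}
       const char *data() const { return Data; }
       size_t size() const { return Len; }
    };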
|
|
|
|
|
|
|
| |
This improves performance, as we can now reject unequal strings
based on their length alone.
Gbp-Dch: ignore
|
|
|
|
|
|
| |
This removes some minor overhead.
Gbp-Dch: ignore
|
|
|
|
|
|
|
|
| |
Moving the string is likely faster than copying it. We could probably
avoid strings altogether in the future using some more crazy code,
but I have not looked at that yet.
Gbp-Dch: ignore
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
|
| |
Thanks: Niels Thykier for reporting on IRC
Gbp-Dch: ignore
|
|
|
|
|
|
| |
This improves performance of the cache generation on my
ARM platform (4x Cortex A15) by about 10% to 20%, from
2.35-2.50 seconds to 2.1 seconds.
|
|
|
|
|
|
|
| |
The class APT::StringView implements a drop-in replacement
for a subset of the C++17 std::string_view features. It will
be dropped at a later point and may not be used in public
interfaces.
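In spirit (a heavily condensed sketch of such a subset):

    #include <cstddef>
    #include <cstring>
    #include <string>

    class StringView
    {
       const char *data_; size_t size_;
    public:
       StringView(const char *data, size_t size) : data_(data), size_(size) {}
       StringView(std::string const &str) : data_(str.data()), size_(str.size()) {}
       const char *data() const { return data_; }
       size_t size() const { return size_; }
       bool operator==(StringView other) const
       { return size_ == other.size_ &&
                std::memcmp(data_, other.data_, size_) == 0; }
    };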
|
|
|
|
|
| |
Remove the SingleInstance flag so we can use the new randomized
queue feature to run in parallel.
|
|
|
|
|
|
| |
The maximum parallelization soft limit is the number of CPU
cores * 2 on systems defining _SC_NPROCESSORS_ONLN. The hard
limit in all cases is Acquire::QueueHost::Limit.
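A sketch of deriving that soft limit (the hard limit comes from configuration):

    #include <algorithm>
    #include <unistd.h>

    // Soft limit: twice the online core count where sysconf() can report
    // it; never exceed the configured Acquire::QueueHost::Limit.
    static int QueueLimit(int HardLimit)
    {
    #ifdef _SC_NPROCESSORS_ONLN
       long const Cores = sysconf(_SC_NPROCESSORS_ONLN);
       if (Cores > 0)
          return std::min(static_cast<int>(Cores) * 2, HardLimit);
    #endif
       return HardLimit;
    }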
|
|
|
|
|
|
|
|
| |
This is a multiple of the page size and thus results in fewer
page faults, speeding up copying.
Also, while we're at it, unify all uses of that size in a
constant APT_BUFFER_SIZE.
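The shape of it (a sketch, assuming a 64 KiB buffer and the common 4 KiB page size):

    #include <cstddef>

    // One shared constant instead of scattered magic buffer sizes; a
    // multiple of the page size keeps page faults down while copying.
    static constexpr size_t APT_BUFFER_SIZE = 64 * 1024;
    // e.g.: ssize_t const n = read(Fd, Buf, APT_BUFFER_SIZE);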
|
|
|
|
|
|
| |
This allows compressing the output by passing a compressor. The
compressor must be given as a compressor name, an extension, or an
extension without the leading dot.
|
|
|
|
|
| |
Implement native support for LZ4 compression, using the official
lz4 library.
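A minimal one-shot use of the lz4frame API (a sketch; apt's implementation streams data through FileFd instead):

    #include <lz4frame.h>
    #include <string>
    #include <vector>

    // Compress a whole buffer into a single LZ4 frame.
    static std::vector<char> Lz4Compress(std::string const &In)
    {
       size_t const Bound = LZ4F_compressFrameBound(In.size(), nullptr);
       std::vector<char> Out(Bound);
       size_t const Written = LZ4F_compressFrame(Out.data(), Out.size(),
                                                 In.data(), In.size(), nullptr);
       if (LZ4F_isError(Written))
          return {};                     // let the caller handle the error
       Out.resize(Written);
       return Out;
    }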
|
| |
|
|
|
|
|
|
|
| |
PopFromSrvRecs() already removed the entry from the active
list, so the extra SrvRecords.erase() was incorrect.
Git-Dch: ignore
|
|
|
|
| |
Git-Dch: ignore
|
|
|
|
| |
Git-Dch: ignore
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
|
|
|
|
|
|
|
| |
This drops the hash table utilization from a high 98%
to an acceptable 74% on unstable, and the average bucket length
from 4.6 to 1.8.
This improves performance by about 5%, while increasing
the size of the cache by 0.2 out of 38 MB, that is, by 0.5%.
48481 is a nice number.
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
|
| |
It does not make sense to consider empty buckets in the
average, as they do not affect the lookup performance.
|