The mirror method can distribute requests for files based on various
metadata bits, but some requests – those for the main index files – weren't
actually passing those bits on to the methods as advertised in the manpage.
This was hidden both by the mirror method usually falling back to other
sources, which will eventually hit the right one, and by the fact that if
the repository does not support by-hash, apt automatically sticks to the
mirror which was used for the Release file.
Especially in small sections of an archive it can happen that an index
returns to a previous state (e.g. if a package was first added and then
removed with no other changes happening in between). The result is multiple
patches starting from the same hash, which is no problem if we perform
client-side merging, although not ideal as we do needless work.
For server-side merging it would not matter either, but rred previously
refused to merge zero-size patches while dak ignored that failure and kept
carrying these size-zero patches until they naturally expire, so we run into
a problem: these broken patches won't do and force us to fall back to
downloading the entire index. By always starting from the last patch with
the starter hash instead of the first we can avoid this problem and behave
optimally in client-side merge cases, too.
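A minimal sketch of the selection change, assuming a patch list in Index-file order (the struct and names are illustrative, not APT's actual pdiff data structures):
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical representation of one entry from a pdiff Index file.
struct PatchEntry
{
   std::string StartHash;   // hash of the index this patch applies on top of
   std::string PatchName;
};

// Pick the patch to start from, preferring the *last* entry whose start hash
// matches the current index so stray zero-size duplicates earlier in the
// list cannot derail (server-side) merging.
static PatchEntry const *FindStartPatch(std::vector<PatchEntry> const &patches,
                                        std::string const &currentHash)
{
   auto const match = std::find_if(patches.rbegin(), patches.rend(),
      [&currentHash](PatchEntry const &p) { return p.StartHash == currentHash; });
   return match == patches.rend() ? nullptr : &*match;
}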
The rred method expects the patches to have a certain name, so we have to
rename the downloaded files before calling the method. By delaying that
rename we ensure that, if the download of one of them fails and a successful
fallback occurs, they are all properly cleaned up as no longer useful, while
in the error case the next apt run can potentially pick them up as already
downloaded.
Our test-pdiff-usage test encountered this every other run, but did not
fail: the check for unaccounted files in partial/ was wrapped in a subshell,
so the failure produced failing output but did not change the exit code.
Conflicts do require removing the package temporarily, so they really
should not be used.
We need to improve that eventually such that we can deconfigure packages
when we have to remove their dependencies due to conflicts.
This caused python-apt to unset the Python signal handler when running
update or install commands, breaking KeyboardInterrupt amongst possibly
other things.
We do not set those signal handlers in these functions, and the calling
functions restore the signal handlers to the previous ones.
LP: #1898026
As user "DaOfficialRolex" on GitHub pointed out:
This is needed to allow APT on iOS to compile correctly. Without it, the two following errors happen while compiling APT.
~/apt/apt-pkg/contrib/configuration.cc:900:44: error: constexpr variable cannot have non-literal type 'const std::array<APT::StringView, 3>'
constexpr std::array<APT::StringView, 3> magicComments { "clear"_sv, "include"_sv, "x-apt-configure-index"_sv };
^
~/apt/apt-pkg/contrib/configuration.cc:900:44: error: implicit instantiation of undefined template 'std::__1::array<APT::StringView, 3>'
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/__tuple:219:64: note: template is declared here
template <class _Tp, size_t _Size> struct _LIBCPP_TEMPLATE_VIS array;
^
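The diagnostics point at std::array only being forward-declared in that translation unit, so the likely shape of the fix is simply adding the missing include; a standalone sketch, with std::string_view standing in for APT::StringView:
#include <array>        // without this, some standard libraries (e.g. libc++'s
                        // <__tuple>) only forward-declare std::array
#include <string_view>  // stand-in for APT::StringView in this sketch

constexpr std::array<std::string_view, 3> magicComments {
   "clear", "include", "x-apt-configure-index"
};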
dpkg will be changed in 1.20.8 to not require --force-remove for
deconfiguration anymore, but we want to decouple our changes from the
dpkg ones, so let's always pass --force-remove-protected when installing
packages such that we can deconfigure protected packages.
Closes: #983014
std::regex pulls in about 50 weak symbols, which is complete and
utter madness, especially because we version all our symbols, so no
other library could ever reuse them.
Avoid using a regular expression here altogether: loop using
string::find_first_of() and insert backslashes with string::insert().
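A sketch of that regex-free approach, assuming the goal is to backslash-escape a set of special characters (character set and function name are illustrative):
#include <string>

// Insert a backslash before every character from 'special' without pulling
// in std::regex; modifies the passed string in place.
static void EscapeSpecialChars(std::string &str,
                               char const * const special = "\\^$.[]|()*+?{}")
{
   for (auto pos = str.find_first_of(special); pos != std::string::npos;
        pos = str.find_first_of(special, pos + 2))   // skip the escaped char
      str.insert(pos, 1, '\\');
}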
Various patches uplifted from unfinished fuzzer branches
See merge request apt-team/apt!158
Depending on your configured sources, 25 MB is hardly enough, so the mmap
housing the cache while it is built has to grow. Repeatedly. We can cut
down on these repeats by keeping a record of the size of the old cache,
assuming the sizes will remain roughly in the same ballpark.
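A hedged sketch of the idea with illustrative names: seed the new mmap with the size of the previous cache file instead of a fixed default.
#include <sys/stat.h>
#include <cstddef>
#include <string>

// Start the growable cache mmap at roughly the size of the previous cache,
// assuming consecutive cache builds end up in the same ballpark.
static size_t InitialCacheSize(std::string const &oldCachePath,
                               size_t const fallback = 25 * 1024 * 1024)
{
   struct stat st;
   if (stat(oldCachePath.c_str(), &st) == 0 && st.st_size > 0)
      return static_cast<size_t>(st.st_size);
   return fallback;   // no previous cache: keep the old default
}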
APT tries to detect if applying patches is more costly than just
downloading the complete index by adding up the size of the patches.
That is correct for client-side merging, but for server-side merging
we actually don't know if we will jump directly from old to current or
still have intermediate steps in between.
With this commit we assume it will be a jump from old to current though,
as that is what dak implements, and it seems reasonable that if you go to
the trouble of server-side merging the server does the entire merging in
one file instead of leaving additional work for the client to do.
Note that this just changes the heuristic to prevent apt from discarding
patches as uneconomic in the now more common single merged-patch style; it
still supports multiple merged patches as before.
To resolve this cleanly we would need another field in the index file
declaring which hash we will arrive at if a patch is applied (or at least
a field differentiating between these merged-patch styles), but that
seems like overkill for now.
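A simplified sketch of the adjusted heuristic (illustrative, not the actual acquire code): sum all patch sizes for client-side merging, but assume a single old-to-current jump for server-side merging.
#include <cstdint>
#include <vector>

// Decide whether applying pdiffs is cheaper than fetching the whole index.
// For server-side merging we assume dak-style behaviour: one merged patch
// takes us directly from the old to the current index.
static bool PatchingIsWorthIt(std::vector<uint64_t> const &patchSizes,
                              uint64_t const fullIndexSize,
                              bool const serverMerged)
{
   if (patchSizes.empty())
      return false;
   uint64_t cost = 0;
   if (serverMerged)
      cost = patchSizes.back();            // jump old -> current in one patch
   else
      for (auto const size : patchSizes)   // client applies every patch
         cost += size;
   return cost < fullIndexSize;
}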
The vararg API is a nightmare as the symbols seem different on every other
arch, but more importantly SendMessage does a few checks on the content
of the message, and it is all output via C++ iostreams and not mixed in
with FILE*, which is handy for overriding the streams.
The undefined behaviour sanitizer complains with:
runtime error: addition of unsigned offset to 0x… overflowed to 0x…
Compilers and runtime do the right thing in any case and it is a
codepath that can (and ideally should) be avoided for speed reasons
alone, but fixing it can't hurt (too much).
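The usual shape of such a fix is to compare lengths before doing the pointer arithmetic rather than forming a possibly wrapping pointer first; a generic sketch, not the exact APT code:
#include <cstddef>

// Instead of computing 'start + offset' and comparing pointers afterwards
// (which can overflow if offset is bogus), check the offset against the
// remaining length before doing any pointer arithmetic.
static char const *Advance(char const *start, char const *end, size_t offset)
{
   size_t const remaining = static_cast<size_t>(end - start);   // end >= start
   if (offset > remaining)
      return nullptr;        // would run past the buffer (or wrap around)
   return start + offset;    // safe: stays inside [start, end]
}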
Our configuration files are not security relevant, but having a parser
which avoids crashing on them even if they are seriously messed up is
not a bad idea anyway. It is also a good opportunity to brush up the
code a bit avoiding a few small string copies with our string_view.
strtoul(l) surprises us by parsing negative values, which should not exist
in the places where we parse them, so we can just refuse them outright
rather than trying to work with them after they have been promoted to huge
positive values.
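A sketch of rejecting negative input before strtoul gets a chance to wrap it around (function name and signature are illustrative, not APT's actual helpers):
#include <cctype>
#include <cstdlib>

// Parse an unsigned long, but refuse input strtoul would silently accept
// and wrap around, such as "-1".
static bool ParseUnsigned(char const *str, unsigned long &out, int const base = 10)
{
   while (isspace(static_cast<unsigned char>(*str)))
      ++str;
   if (*str == '-')
      return false;                 // negative numbers make no sense here
   char *end = nullptr;
   out = strtoul(str, &end, base);
   return end != str;               // at least one digit was consumed
}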
It isn't super likely that we will encounter such big words in the real
world, but we can return arbitrary lengths, so let's just do that, as it
also means we don't have to work with a second buffer.
This has no attack surface though, as the loop ends very soon anyhow and
the method is only used while reading CD-ROM mountpoints, which seems
like a very unlikely attack vector…
For \xff and friends, which have the highest bit set and hence are negative
values on systems where char is signed, the wrong encoding is produced as we
run into undefined behaviour by accessing negative array indexes.
We can avoid this problem simply by using an unsigned data type.
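A sketch of the fix pattern: index lookup tables through unsigned char so bytes like \xff stay positive (the hex encoder itself is illustrative):
#include <string>

// Hex-encode a byte string. Going through unsigned char keeps \x80..\xff
// from turning into negative array indexes (undefined behaviour).
static std::string HexEncode(std::string const &input)
{
   static char const hex[] = "0123456789abcdef";
   std::string out;
   out.reserve(input.size() * 2);
   for (unsigned char const c : input)   // unsigned: \xff stays 255, not -1
   {
      out.push_back(hex[c >> 4]);
      out.push_back(hex[c & 0xf]);
   }
   return out;
}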
If the Configuration code calling this was any indication, it is hard to
use – and even that monster still caused heap-buffer-overflow errors,
so instead of trying to fix it, let's just use methods which are far
easier to use. The question why this is done at all remains, but is left
for another day as an exercise for the reader.
We were printing an error and hence had a non-zero exit code either way,
but API-wise it makes sense to have this properly reported back to the
caller so it can be propagated down the chain, e.g. while parsing #include
stanzas.
We do this once (usually), so the leak isn't a tremendously big one, but it
is detected as a leak by the fuzzer and trips it up.
Explicitly opening a tar member is a bit harder than it needs to be, as
you have to remove the compressor extension so that it can be guessed
here again, potentially choosing the wrong member.
That doesn't really matter for deb packages of course, as the member count
is pretty low and strongly defined, but testing is easier this way.
It also finally fixes an incorrectly formatted error message.
The buffers we feed in and read out are usually a couple of kilobytes big,
so allowing lzma to use an unlimited amount of memory is easy & okay, but
not needed, and it confuses memory checkers as it will cause lzma to
malloc a huge chunk of memory (which it will never use).
So let's just use a "big enough" value instead.
In exchange we simplify the decoder setup: as we were already using the
auto-variant for xz, we can just avoid the if-else and let liblzma
decide what it decodes.
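A sketch of that decoder setup with liblzma's auto decoder and a bounded memory limit; the concrete limit and function name are illustrative:
#include <lzma.h>
#include <cstdint>

static bool SetupAutoDecoder()
{
   lzma_stream strm = LZMA_STREAM_INIT;              // standard liblzma init
   uint64_t const memlimit = 1024ULL * 1024 * 1024;  // 1 GiB: "big enough"
   // The auto decoder accepts both .xz and legacy .lzma streams, so no
   // if-else between lzma_stream_decoder() and lzma_alone_decoder() is needed.
   if (lzma_auto_decoder(&strm, memlimit, 0) != LZMA_OK)
      return false;
   // ... feed input through lzma_code(&strm, LZMA_RUN) as usual ...
   lzma_end(&strm);
   return true;
}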
For years I subconsciously thought this is wrong but ignored it:
$ LANG=C apt install -s
Reading package lists... Done
Building dependency tree
Reading state information... Done
Then I noticed:
$ LANG=C apt install -s -o dir::state::extended_states=/dev/null
Reading package lists... Done
Building dependency tree... Done
That can't be! Then it really should be:
$ LANG=C apt install -s
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
This oddity seems to have been there since the introduction of the auto bit
in 2005, which makes it rather hard to solve in theory, but in practice no
front end seems to call the readStateFile method directly, so we might
actually be lucky.
The alternative would be to call Done in the calling method and again
at the end of readStateFile while dropping it from the current place,
but as that is more shuffling around it could be more upsetting for
other front ends. Not sure, but now that I have seen it, I want to have
it fixed one way or another… aptitude at least seems not to explode.
References: afb1e2e3bb580077c6c917e6ea98baad8f3c39b3
Include all translations when building the cache
See merge request apt-team/apt!156
We download all translations that were ever downloaded, but we don't add
all of those to the cache, meaning that if we run update with LANG=C, it
might still download your de_DE translation, but it won't insert it into
the cache, causing your de_DE user to not get translated messages.
LP: #1907850
The read-only /dev/null was duplicated to stdout and stderr, causing writes to those descriptors to fail:
[pid 260] openat(AT_FDCWD, "/dev/null", O_RDONLY) = 7
[pid 260] dup2(7, 0) = 0
[pid 260] close(5) = 0
[pid 260] dup2(6, 1) = 1
[pid 260] dup2(7, 2) = 2
[pid 260] write(2, "Chrooting into ", 15) = -1 EBADF (Bad file descriptor)
[pid 260] chroot("/chroot/") = 0
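A sketch of the underlying fix: stdout and stderr need a writable /dev/null descriptor, a single O_RDONLY one cannot back them (illustrative, not the actual apt code):
#include <fcntl.h>
#include <unistd.h>

// Redirect the standard streams to /dev/null before chrooting. stdin only
// needs to be readable, but stdout/stderr must be writable, otherwise every
// write() fails with EBADF as seen in the strace output above.
static bool SilenceStdStreams()
{
   int const in  = open("/dev/null", O_RDONLY);
   int const out = open("/dev/null", O_WRONLY);
   if (in == -1 || out == -1)
      return false;
   bool const ok = dup2(in, STDIN_FILENO) != -1 &&
                   dup2(out, STDOUT_FILENO) != -1 &&
                   dup2(out, STDERR_FILENO) != -1;
   close(in);
   close(out);
   return ok;
}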
Assigning the result of AllocateInMap directly to Ver->d caused Ver->d
to be resolved first, and hence if Ver was remapped during the
AllocateInMap, we were trying to assign to the old value.
Closes: #980037
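A toy illustration of this class of bug using a vector-backed arena instead of APT's mmap cache: before C++17 the order of evaluation is unspecified, so the assignment target may be resolved from storage the allocation is about to move.
#include <cstddef>
#include <vector>

struct Node { size_t d; };

// Toy arena that can relocate its storage when it grows, similar in spirit
// to APT's dynamically growing mmap-backed cache.
struct Arena
{
   std::vector<Node> nodes;
   size_t AllocateInMap()              // may reallocate ("remap") nodes
   {
      nodes.push_back(Node{});
      return nodes.size() - 1;
   }
};

void Buggy(Arena &arena, size_t verIndex)
{
   // Before C++17 the left-hand side may be evaluated first, so the element
   // reference can point into storage AllocateInMap() is about to move.
   arena.nodes[verIndex].d = arena.AllocateInMap();
}

void Fixed(Arena &arena, size_t verIndex)
{
   // Allocate first, then resolve the (possibly moved) target.
   size_t const idx = arena.AllocateInMap();
   arena.nodes[verIndex].d = idx;
}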
We interpreted "cannot detect chroot" as "not a chroot", but it's
arguably the better idea to detect it as a chroot, to avoid new behavior
from phased updates in situations where it's unclear (no /proc mounted
or stuff).
In case we did not find any kernels to protect, the regular expression
will be empty, and trying to substr(1) it will fail.
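The shape of the guard, assuming the expression is built by appending '|'-separated alternatives (illustrative):
#include <string>

// Only strip the leading '|' of the alternation if we actually collected
// any kernels to protect; substr(1) on an empty string throws.
static std::string FinishKernelRegex(std::string const &alternation)
{
   return alternation.empty() ? std::string{} : alternation.substr(1);
}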
Pu/small fixes
See merge request apt-team/apt!151
Gbp-Dch: ignore
Gbp-Dch: ignore
People have been asking for a feature to error out on transient network
errors for a while; this gives them one, while keeping the door open for
other modes such as --error-on=no-success, which we need to determine when
to retry the daily update job.
Closes: #594813
(and a whole bunch of duplicates...)
If we have different binNMU versions on different architectures,
we don't want madness to ensue.
This is a change from how update-manager does things, as Ubuntu does not
have binNMUs, but I believe it's the right thing to do for a generic
solution.
This adds support for Phased-Update-Percentage by pinning
upgrades that are not to be installed down to 1.
The output of policy has been changed to add the level of
phasing, and documentation has been improved to document
how phased updates work.
The patch detects if it is running in a chroot, and if so, always
includes phased updates, restoring classic apt behavior to avoid
behavioral changes on buildd chroots.
Various options are added to control this all:
* APT::Get::{Always,Never}-Include-Phased-Updates and their legacy
update-manager equivalents to always or never include phased updates
* APT::Machine-ID can be set to a UUID string to have all machines in a
fleet phase the same
* Dir::Etc::Machine-ID is weird in that its default is sort of like
../machine-id, but not really, as ../machine-id would look up
$PWD/../machine-id and not relative to Dir::Etc; but it allows you to
override the path to machine-id (as opposed to the value)
* Dir::Bin::ischroot is the path to the ischroot(1) binary which is used
to detect whether we are running in a chroot.
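A heavily hedged illustration of the phasing concept, not APT's actual algorithm: derive a stable per-machine number from the machine-id plus source and version, and hold the upgrade back when that number falls outside the published percentage.
#include <random>
#include <string>

// Derive a stable number in [0, 100) for this machine/package/version pair
// so the same machine always lands on the same side of the rollout.
static unsigned int PhasingPosition(std::string const &machineId,
                                    std::string const &source,
                                    std::string const &version)
{
   std::string const key = machineId + "-" + source + "-" + version;
   std::seed_seq seed(key.begin(), key.end());
   std::minstd_rand rng(seed);
   return std::uniform_int_distribution<unsigned int>(0, 99)(rng);
}

// A version is kept back (e.g. pinned to 1) when the machine's position is
// at or beyond the published Phased-Update-Percentage.
static bool ExcludedFromPhase(unsigned int const phasedPercentage,
                              std::string const &machineId,
                              std::string const &source,
                              std::string const &version)
{
   return PhasingPosition(machineId, source, version) >= phasedPercentage;
}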
Make immediate configuration optional
See merge request apt-team/apt!148
The benefits of immediate configuration are that Essential packages
will be configured immediately, so if they wrongly do not work while
unconfigured, they won't cause later packages to fail.
However, we've reached the point where dependencies on the essential set
are too complex for immediate configuration to always work, causing
installations to error out at the end, despite having succeeded, because
we did not correctly return the error here and did not check for pending
errors before running dpkg.
Given that we check and configure any packages at the end that have not
been configured yet, or fail if we can't configure them; making
immediate configuration optional is the best way forward - it orders as
it does now, but then does not spuriously fail after having successfully
installed everything.
Closes: #973305, #188161, #211075, #649588
LP: #1871268
?depends patterns and friends
See merge request apt-team/apt!146
This was easy.
These match the target package, not target versions, which is slightly
unfortunate but might make sense. Maybe we should add a variant that
matches versions instead.
This fixes a problem on Ubuntu systems where the /boot partition
has been sized to manage 3 kernels, but does not really work with 4
kernels which was causing problems all over the place.
Our kernel autoremoval helper script protects the currently booted
kernel, but it only runs whenever we install or remove a kernel,
causing it to protect the kernel that was booted at that point in time,
which is not necessarily the same kernel as the one that is running
right now.
Reimplement the logic in C++ such that we can calculate it at run-time:
provide a function that produces a regular expression matching all
kernels that need protecting, and change the default root set function
in the DepCache to make use of that expression.
Note that the code groups the kernels by versions as before, and then
marks all kernel packages with the same version.
This optimized version inserts a virtual package $kernel into the cache
when building it to avoid having to iterate over all packages in the
cache to find the installed ones, significantly improving performance at
a minor cost when building the cache.
LP: #1615381
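A sketch of producing such a protective expression from a set of kernel versions (illustrative only; the package name pattern and the pre-escaped version strings are assumptions):
#include <cstddef>
#include <regex>
#include <string>
#include <vector>

// Build a regex matching every package belonging to one of the kernel
// versions we want to keep (booted, running, newest, ...). Assumes the
// caller checked that 'versions' is not empty and escaped the strings.
static std::regex KernelProtectionRegex(std::vector<std::string> const &versions)
{
   std::string pattern = "^linux-.*-(";
   for (size_t i = 0; i < versions.size(); ++i)
   {
      if (i != 0)
         pattern += "|";
      pattern += versions[i];
   }
   pattern += ")$";
   return std::regex(pattern);
}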
This avoids the cost of setting up the function every time
we mark and sweep.
This commit potentially breaks code feeding apt an encoded URI using a
method which does not get URIs sent encoded. The webserverconfig
requests in our tests are an example of this – but they only worked
before if the server was expecting double encoding, as that was what
happened to an encoded URI: so this was unlikely to work as expected in
practice.
Now with the new methods we can drop this double encoding and rely on
the URI being passed properly (and without modification) between the
layers so that passing in encoded URIs should now work correctly.
We do not deal a lot with URIs which need encoding, but when we do it is
a pain that we store them decoded in the acquire system, as it means we
have to decode and reencode URIs eventually, which potentially gives us
slightly different URIs.
We see that in our own testing framework while setting up redirects, as
the config options are effectively double-encoded and decoded to pass
them around successfully, because otherwise %2f and / in a URI are
treated the same.
This commit adds the infrastructure for methods to opt into getting URIs
sent in encoded form (and returning them to us in encoded form, too) so
that we eventually do not have to touch the URIs at all, which is how it
should be. This means though that we have to deal with methods which do
not support this yet (aka: all of them at the moment), for which we
decode and encode while communicating with them.
Unroll pkgCache::sHash 8 times and break up the dependency between
the iterations by expanding the calculation
H(n) = 33 * H(n-1) + c
8 times rather than performing it 8 times. This seems to yield about
a 0.4% performance improvement.
I tried unrolling 4 and 2 bytes as well, those only having 3 ifs at
the end rather than 1 small loop; but that was actually slower -
potentially the code got too large and the cache went bonkers.
I also tried unrolling 4 times instead of 8, thinking that smaller
code might yield better results overall then, but that was slower as
well.
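A sketch of the unrolling idea, not the exact pkgCache::sHash code: expand H(n) = 33 * H(n-1) + c over eight bytes so the eight multiply-adds no longer form one serial dependency chain.
#include <cstddef>
#include <cstdint>

// Powers of 33 (mod 2^32) so the per-byte terms can be computed independently.
static constexpr uint32_t Pow33(unsigned int n)
{
   uint32_t r = 1;
   for (unsigned int i = 0; i < n; ++i)
      r *= 33;
   return r;
}

// djb-style hash, processing 8 bytes per iteration by expanding
//   H(n+8) = 33^8*H(n) + 33^7*c0 + 33^6*c1 + ... + 33*c6 + c7
static uint32_t Hash(char const *str, size_t len)
{
   uint32_t h = 5381;                           // the usual djb start value
   size_t i = 0;
   for (; i + 8 <= len; i += 8)
      h = Pow33(8) * h
        + Pow33(7) * static_cast<unsigned char>(str[i + 0])
        + Pow33(6) * static_cast<unsigned char>(str[i + 1])
        + Pow33(5) * static_cast<unsigned char>(str[i + 2])
        + Pow33(4) * static_cast<unsigned char>(str[i + 3])
        + Pow33(3) * static_cast<unsigned char>(str[i + 4])
        + Pow33(2) * static_cast<unsigned char>(str[i + 5])
        + Pow33(1) * static_cast<unsigned char>(str[i + 6])
        +            static_cast<unsigned char>(str[i + 7]);
   for (; i < len; ++i)                         // leftover bytes, one at a time
      h = 33 * h + static_cast<unsigned char>(str[i]);
   return h;
}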
XXH3 is faster than both our CRC32c implementation and the DJB hash for
hash table hashing, so meh, let's switch to it.
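What hashing a key with XXH3 looks like via xxhash (0.8 or later); reducing the hash to a bucket index is an assumption about how it is used here:
#include <xxhash.h>   // provides XXH3_64bits()
#include <cstddef>

// Map a key to one of 'buckets' hash table slots using XXH3, which is
// considerably faster than CRC32c or a djb-style hash for this purpose.
static size_t BucketFor(char const *key, size_t len, size_t buckets)
{
   return static_cast<size_t>(XXH3_64bits(key, len)) % buckets;
}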