Commit message | Author | Age | Files | Lines

This reverts commit 64127478630b676838735b509fec5cdfa36874c8.
Count uninstallable packages in "not upgraded"
See merge request apt-team/apt!169
If an early step of the solver can figure out that a package is
uninstallable, it may reset the candidate so that later steps are
prevented from exploring this dead end. While that helps the resolver, it
can confuse the display of the found solution, which will then include an
incorrect count of packages not upgraded.
This was possible before, but it happens a fair bit more often since the
April/May resolver changes last year, so finally doing proper counting is
a good idea.
Sadly this is a bit harder than just getting the number first and then
subtracting the packages we upgraded from it: the user can influence
candidates via the command line, and a package which could have been
upgraded but is removed instead should not count as "not upgraded", as we
clearly did something with it. So we keep a list of packages instead of a
bare number, which also helps the upgrade commands, as those want to show
the list.
Closes: #981535
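As an illustration of the bookkeeping described above – a sketch only, using package names where APT keeps iterators, not the real pkgDepCache API:

    #include <cstddef>
    #include <set>
    #include <string>

    // Remember the concrete set of packages that still have an upgrade we
    // did not perform, instead of juggling bare counts. A package that had
    // a candidate but was removed on user request is taken out of the set,
    // so it is not reported as "not upgraded".
    struct UpgradeTracker {
        std::set<std::string> notUpgraded;

        void couldBeUpgraded(const std::string &pkg) { notUpgraded.insert(pkg); }
        void wasUpgraded(const std::string &pkg) { notUpgraded.erase(pkg); }
        void wasRemovedInstead(const std::string &pkg) { notUpgraded.erase(pkg); }

        std::size_t count() const { return notUpgraded.size(); }  // for the summary line
        // the set itself is what the upgrade commands print as a list
    };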
Mark only provides from protected versioned kernel packages
See merge request apt-team/apt!168
An interactive tool like aptitude needs these flags kept current far more
often than we do: in apt a user can see them in only one very well-defined
place – the autoremove display block – so we don't need to run the marking
up to four times while a normal "apt install" is processed, as that is
just busywork.
The effect on runtime is minimal, as a single run doesn't take too long
anyhow, but it cuts down tremendously on debug output at the expense of
requiring some manual handholding.
This is opt-in so that aptitude doesn't need to change, nor do we need to
change our own tools like "apt list" where the marking works correctly as
intended.
A special flag and friends are needed because we want the ActionGroup
inside pkgDepCache::Init to be covered by the inhibition already, so we
have to insert ourselves while the DepCache is still in the process of
being built.
This is also the reason why the debug output in some tests changed to all
unmarked: that is fine, as the marking could already have been obsoleted
by the actions taken and was merely inhibited by a proper action group.
Previously the autoremove algorithm would mark a package after exploring
it once, but that exploration may have ignored some providers because they
did not satisfy the (versioned) dependency. A later dependency which they
might satisfy would then encounter the package as already marked and hence
not explore the providers anymore, leaving us with internal errors (as in
the contrived new testcase).
This is resolved by introducing a new flag denoting whether we have
already explored every provider, and only skipping the exploration if that
is true. That sounds bad, but it is not such a common occurrence that it
seems noticeable in practice. It also lets us mark virtual packages as
explored now; previously they would be tried each time they were
encountered, which mostly hid this problem for the (far more common) fully
virtual package.
An out-of-tree kernel module which doesn't see many new versions can pile
up a considerable number of packages if it is depended on via another
package (e.g. v4l2loopback-utils recommends v4l2loopback-modules), which
in turn can prevent the old kernels from being removed if they happen to
have a dependency on the kernel images.
To prevent this we check if a provider is a versioned kernel package (like
an out-of-tree module) and, if so, whether that module package is part of
the protected kernel set – if not, it is probably good to go.
We only do this if at least one provider is from a protected kernel
though, so that the dependency remains satisfied (this can happen e.g. if
the module is currently not buildable against a protected kernel).
This code can interact with handwritten files, which may contain unneeded
commas simply because that makes them easier to write. As dpkg allows such
commas, we should do so as well.
Reported-By: Arnaud Ferraris <arnaud.ferraris@gmail.com>
References: https://lists.debian.org/debian-devel/2021/03/msg00101.html
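For illustration (hypothetical package names), a handwritten dependency line like the following, trailing comma included, is now accepted just as dpkg accepts it:

    Depends: foo (>= 1.0), bar,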
dpkg 1.20.8 also made --force-remove-essential optional for
deconfiguring essential packages, so let's make use of that.
Also extend the test case to make sure we actually pass
--auto-deconfigure and neither make any --remove calls nor
pass --force-remove to dpkg.
Ugh, this was super flaky under -j 16 and -j 4, each behaving
in slightly different ways. This seems to be stable now. No
real bug though, all behaviors were OK.
Hook protocol 0.2 makes the new fields we added mandatory, and
replaces `install` mode with `upgrade`, `downgrade`, `reinstall`
where appropriate.
Hook negotiation is hacky, but it's the best we can do for now.
Users are advised to upgrade to 0.2.
This enables hooks to output additional information.
Provide access to the origins of a package, such that tools
can display information about them; for example, you can write
a hook counting security upgrades.
Gbp-Dch: ignore
This is the only nullable thing we have here.
The JSON encoder only looked at the top state but did not pop it, so when
objects were nested we got stuck in whatever state had been pushed aside
last. In our example this wrongly inserts a comma after the key "b":
    {"a":[{}],
    "b":,[{}]
    }
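A minimal sketch of the fix, assuming a simplified writer interface rather than APT's actual JSON encoder: every container pushes a state and closing it pops that state again, so the comma decision always consults the innermost still-open container.

    #include <iostream>
    #include <stack>
    #include <string>

    class JsonWriter {
        std::string out;
        std::stack<bool> needComma;   // one entry per open object/array
        bool afterName = false;       // a key was just written, its value follows

        void beforeValue() {
            if (afterName) { afterName = false; return; }  // comma was handled at the key
            if (!needComma.empty()) {
                if (needComma.top()) out += ',';
                needComma.top() = true;
            }
        }
    public:
        JsonWriter &name(const std::string &k) { beforeValue(); out += '"' + k + "\":"; afterName = true; return *this; }
        JsonWriter &beginObject() { beforeValue(); out += '{'; needComma.push(false); return *this; }
        JsonWriter &endObject() { out += '}'; needComma.pop(); return *this; }
        JsonWriter &beginArray() { beforeValue(); out += '['; needComma.push(false); return *this; }
        JsonWriter &endArray() { out += ']'; needComma.pop(); return *this; }
        const std::string &str() const { return out; }
    };

    int main() {
        JsonWriter w;
        w.beginObject().name("a").beginArray().beginObject().endObject().endArray()
         .name("b").beginArray().beginObject().endObject().endArray().endObject();
        std::cout << w.str() << '\n';   // {"a":[{}],"b":[{}]}
    }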
This allows us to correctly encode strings containing quotation
marks, escape characters and control characters.
The test case is a bit nasty because it embeds private-cachefile.cc
for linkage reasons.
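A minimal sketch of such an escaping routine, using JSON's \uXXXX form for the remaining control characters; an illustration, not APT's actual implementation:

    #include <cstdio>
    #include <string>

    std::string JsonEscape(const std::string &in) {
        std::string out;
        for (unsigned char c : in) {            // unsigned avoids trouble with \xff and friends
            switch (c) {
            case '"':  out += "\\\""; break;
            case '\\': out += "\\\\"; break;
            case '\b': out += "\\b"; break;
            case '\f': out += "\\f"; break;
            case '\n': out += "\\n"; break;
            case '\r': out += "\\r"; break;
            case '\t': out += "\\t"; break;
            default:
                if (c < 0x20) {                 // remaining control characters
                    char buf[8];
                    std::snprintf(buf, sizeof(buf), "\\u%04x", static_cast<unsigned>(c));
                    out += buf;
                } else {
                    out += static_cast<char>(c);
                }
            }
        }
        return out;
    }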
Enable the Acquire::Retries option by default, set to 3.
This will help with slightly unreliable networking; future
work is needed for adding backoff and SRV/IP rotation.
LP: #1876035
Gbp-Dch: full
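For reference, the same knob can be tuned or turned back off again via the option named above; an illustrative apt.conf fragment (3 being the new shipped default):

    Acquire::Retries "0";   // disable the retries again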
Repositories without Size information for packages are not
proper and need fixing. Making this an error ensures people
see it in CI, get notified, and hence have the ability to fix it.
It can be turned off by setting Acquire::AllowUnsizedPackages
to true.
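An illustrative opt-out using the option named above in apt.conf syntax:

    Acquire::AllowUnsizedPackages "true";   // do not treat a missing Size field as an error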
The maximum request size is accidentally set to the size of any sized
file, so if an unsized file is also present and turns out to be larger
than that maximum, we'd error out when checking whether its size is
smaller than the maximum request size.
LP: #1921626
MaybeAddAuth() here tells us that it refused to use the credentials
for an http source, but that caused the test suite to fail at a later
stage because we check whether any errors or warnings were raised.
Strangely, this is only triggered with LTO enabled.
Actually check that the warning is being set and then reject it.
If a transaction is doomed we want to gracefully shut down our zoo of
worker processes. As explained in the referenced commit we do this by
stopping the main process from handing out new work and ignoring the
replies it gets from the workers, so that they eventually run out of
work.
We previously tested this by checking whether a rred worker was given any
work items at all, but depending on how lucky the stars of the machine
running the test are, the worker may already have gotten work before the
transaction was aborted – so we tried this 25 times in a row (f35601e5d2).
No machine can be this lucky, right?
Turns out the autopkgtest armhf machine is very lucky.
I feel a bit sorry for feeding grep such a long "line" to work with, but
it seems to work out. Porterbox amdahl (which is considerably less lucky;
I had to turn it down to 1 try to get it to fail sometimes) is now happily
running the test in an endless loop.
Of course, I could have broken the test now, but it's still a rather
generic grep (in some ways even more generic) and the main part of the
testcase – the update process finishes and fails – is untouched.
References: 38f8704e419ed93f433129e20df5611df6652620
Closes: #984966
The mirror method can distribute requests for files based on various
metadata bits, but for some files – the main index files – those bits
weren't actually passed on to the method as advertised in the manpage.
This is hidden both by mirror usually falling back to other sources, which
will eventually hit the right one, and by the fact that if the repository
does not support by-hash, apt automatically sticks to the mirror that was
used for the Release file.
Especially in small sections of an archive it can happen that an index
returns to a previous state (e.g. if a package was first added and then
removed with no other changes in between). The result is multiple patches
which start from the same hash; with clientside merging that is no
problem, although not ideal, as we perform needless work.
For serverside merging it would not matter either, but because rred
previously refused to merge zero-size patches while dak ignored that
failure and kept carrying these size-zero patches until they naturally
expire, we run into a problem: these broken patches won't do and force us
to fall back to downloading the entire index. By always starting from the
last patch carrying the starter hash instead of the first we can avoid
this problem and behave optimally in clientside merge cases, too.
The rred method expects the patches to have a certain name, so we have to
rename the downloaded files accordingly before calling the method. By
delaying that rename we ensure that if the download of one of them fails
and a successful fallback occurs, they are all properly cleaned up as no
longer useful, while in the error case the next apt run can potentially
pick them up as already downloaded.
Our test-pdiff-usage test was encountering this every other run, but did
not fail, as the check for unaccounted files in partial/ was wrapped in a
subshell so that the failure produced failing output but did not change
the exit code.
There isn't a lot of sense in working on empty patches as they change
nothing (quite literally), but they can be the result of merging multiple
patches, so rather than requiring our users to specifically detect and
remove them, we can be nice and just ignore them instead of erroring out.
Conflicts do require removing the package temporarily, so they really
should not be used.
We need to improve that eventually such that we can deconfigure packages
when we have to remove their dependencies due to conflicts.
dpkg will be changed in 1.20.8 to not require --force-remove for
deconfiguration anymore, but we want to decouple our changes from the
dpkg ones, so let's always pass --force-remove-protected when installing
packages such that we can deconfigure protected packages.
Closes: #983014
Gbp-Dch: ignore
We ignored the failure from strtoul() telling us that those test cases had
values out of range, hence they passed before but now fail on 32-bit
platforms because we use strtoull() and do the limit check ourselves.
Move the tarball generator for test-github-111-invalid-armember to the
createdeb helper, and fix the helper to set all the numbers like uid and
friends to 0 instead of the maximum value the fields support (all 7s).
Regression-Of: e0743a85c5f5f2f83d91c305450e8ba192194cd8
The case-insensitive grep for GPG also finds e.g. "/tmp/tmp.Kc5kKgPg0D",
which is not the intention, so we simply eliminate the varying /tmp
directory name from the output here to prevent these false positives.
Gbp-Dch: Ignore
Our configuration files are not security-relevant, but having a parser
which avoids crashing on them even if they are seriously messed up is not
a bad idea anyway. It is also a good opportunity to brush up the code a
bit, avoiding a few small string copies with our string_view.
strtoul(l) surprises us by parsing negative values, which should not occur
in the places where we use it, so we can just outright refuse them rather
than trying to work with them after they have been promoted to huge
positive values.
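A sketch of the stricter parsing with a hypothetical helper, not APT's actual StrToNum interface:

    #include <cctype>
    #include <cerrno>
    #include <cstdint>
    #include <cstdlib>

    // Parse an unsigned decimal number, refusing negative input outright
    // instead of letting strtoull() promote "-1" to a huge positive value,
    // and doing the range check ourselves for fields smaller than the type.
    bool ParseUnsigned(const char *str, uint64_t &out, uint64_t max) {
        while (std::isspace(static_cast<unsigned char>(*str)))
            ++str;
        if (*str == '-')                        // strtoull() would happily accept this
            return false;
        errno = 0;
        char *end = nullptr;
        unsigned long long const value = std::strtoull(str, &end, 10);
        if (end == str || errno == ERANGE || value > max)
            return false;
        out = value;
        return true;
    }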
This has no attack surface though, as the loop is about to end very soon
anyhow, and the method is only used while reading CD-ROM mountpoints,
which seems like a very unlikely attack vector…
For \xff and friends, which have the highest bit set and are hence
negative values on systems where char is signed, the wrong encoding is
produced as we run into undefined behaviour accessing negative array
indexes.
We can avoid this problem simply by using an unsigned data type.
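A minimal illustration with a hypothetical encoder (not APT's code):

    #include <string>

    std::string HexEncode(char c) {
        static const char hex[] = "0123456789abcdef";
        unsigned char const u = static_cast<unsigned char>(c);  // the fix: go unsigned first
        return std::string("%") + hex[u >> 4] + hex[u & 0xf];
        // With a plain signed char, hex[c >> 4] for c = '\xff' would read hex[-1]:
        // undefined behaviour and a bogus encoding.
    }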
References: 2497198e9599a6a8d4d0ad08627bcfc7ea49c644
Gbp-Dch: Ignore
Explicitly opening a tar member is a bit harder than it needs to be, as
you have to remove the compressor extension so that it can be guessed here
again, potentially choosing the wrong member.
That doesn't really matter for deb packages of course, as the member count
is pretty low and strongly defined, but testing is easier this way.
It also finally fixes an incorrectly formatted error message.
We download all translations that we have ever downloaded before, but we
don't add all of those to the cache, meaning that if update is run with
LANG=C it might still download your de_DE translation, yet it won't insert
it into the cache, causing your de_DE user to not get translated messages.
LP: #1907850
People have been asking for a feature to error out on transient network
errors for a while; this gives them one, while keeping the door open for
other modes we need, such as --error-on=no-success, which is needed to
determine when to retry the daily update job.
Closes: #594813
(and a whole bunch of duplicates...)
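A usage sketch; the option name is taken from the message above, while the `any` mode value is an assumption for illustration:

    apt-get update --error-on=any   # also fail on transient network errors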
This adds support for Phased-Update-Percentage by pinning
upgrades that are not to be installed down to 1.
The output of policy has been changed to show the level of
phasing, and the documentation has been improved to describe
how phased updates work.
The patch detects if it is running in a chroot and, if so, always
includes phased updates, restoring classic apt behavior to avoid
behavioral changes on buildd chroots.
Various options are added to control all of this (see the illustrative
configuration fragment below):
* APT::Get::{Always,Never}-Include-Phased-Updates and their legacy
update-manager equivalents to always or never include phased updates
* APT::Machine-ID can be set to a UUID string to have all machines in a
fleet phase the same
* Dir::Etc::Machine-ID is weird in that its default is sort of like
../machine-id, but not really, as ../machine-id would look up
$PWD/../machine-id and not be relative to Dir::Etc; but it allows you to
override the path to machine-id (as opposed to the value)
* Dir::Bin::ischroot is the path to the ischroot(1) binary which is used
to detect whether we are running in a chroot.
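A minimal configuration sketch using two of the options listed above; the values are placeholders, not recommendations:

    APT::Get::Always-Include-Phased-Updates "true";       // always take phased updates
    APT::Machine-ID "0123456789abcdef0123456789abcdef";   // phase a whole fleet identically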
Be compatible with Bash
See merge request apt-team/apt!142
On many distributions, /bin/sh is Bash. Bash’s `echo` builtin doesn’t
interpret escape sequences, so most tests fail. Fix this by removing
the escape sequence.
Our kernel autoremoval helper script protects the currently booted
kernel, but it only runs whenever we install or remove a kernel,
causing it to protect the kernel that was booted at that point in time,
which is not necessarily the same kernel as the one that is running
right now.
Reimplement the logic in C++ so that we can calculate it at run-time:
provide a function that produces a regular expression matching all
kernels that need protecting, and change the default root set function
in the DepCache to make use of that expression.
Note that the code groups the kernels by version as before, and then
marks all kernel packages with the same version.
This optimized version inserts a virtual package $kernel into the cache
when building it, to avoid having to iterate over all packages in the
cache to find the installed ones, significantly improving performance at
a minor cost when building the cache.
LP: #1615381
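A rough sketch of the regular-expression idea; the package-name prefixes and the escaping are illustrative assumptions, not APT's actual implementation:

    #include <string>
    #include <vector>

    // Build one anchored regular expression matching every package that
    // belongs to a kernel version which must be kept (e.g. the booted and
    // the currently running one).
    std::string KernelProtectedRegex(const std::vector<std::string> &versions) {
        std::string alternatives;
        for (const auto &version : versions) {
            if (!alternatives.empty())
                alternatives += '|';
            for (const char c : version) {
                if (c == '.' || c == '+')      // escape regex metacharacters in versions
                    alternatives += '\\';
                alternatives += c;
            }
        }
        // The prefix list is illustrative only; real kernel packages come in more flavours.
        return "^(linux-image|linux-headers|linux-modules)-(" + alternatives + ")$";
    }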
This commit potentially breaks code feeding apt an encoded URI for a
method which does not get URIs sent encoded. The webserverconfig
requests in our tests are an example of this – but they only worked
before if the server was expecting a double encoding, as that is what
happened to an encoded URI: so this was unlikely to work as expected in
practice.
Now, with the new methods, we can drop this double encoding and rely on
the URI being passed properly (and without modification) between the
layers, so that passing in encoded URIs should now work correctly.
Every method opts in to getting the encoded URI passed along while
keeping compat in case we are operated by an older acquire system.
Effectively this is just a change for the http-based methods as the
others just decode the URI as they work with files directly.
We do not deal a lot with URIs which need encoding, but when we do, it is
a pain that we store them decoded in the acquire system, as it means we
have to decode and reencode URIs eventually, potentially ending up with
slightly different URIs.
We see that in our own testing framework while setting up redirects: the
config options are effectively double-encoded and decoded to pass them
around successfully, as otherwise %2f and / in a URI are treated the
same.
This commit adds the infrastructure for methods to opt into getting URIs
sent in encoded form (and returning them to us in encoded form, too) so
that we eventually do not have to touch the URIs at all, which is how it
should be. This means, though, that we have to deal with methods which do
not support this yet (aka: all of them at the moment), for which we decode
and encode while communicating with them.
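A tiny illustration of why decoding too early loses information – a naive percent-decoder maps two different URIs to the same string (hypothetical example, not APT's code):

    #include <iostream>
    #include <string>

    std::string DeHexify(const std::string &in) {
        std::string out;
        for (std::string::size_type i = 0; i < in.size(); ++i) {
            if (in[i] == '%' && i + 2 < in.size()) {
                out += static_cast<char>(std::stoi(in.substr(i + 1, 2), nullptr, 16));
                i += 2;
            } else
                out += in[i];
        }
        return out;
    }

    int main() {
        std::cout << DeHexify("http://example.org/a%2fb") << '\n';  // http://example.org/a/b
        std::cout << DeHexify("http://example.org/a/b") << '\n';    // http://example.org/a/b
        // Once decoded, a%2fb (one path segment) and a/b (two segments) can no
        // longer be told apart, which is why methods now pass URIs around encoded.
    }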
Our http method encodes the URI again, which results in the double
encoding we have to unwrap in the webserver (we did that already, but we
now skip the filename handling which does the first decode).