| Commit message | Author | Age | Files | Lines |
This reverts commit 64127478630b676838735b509fec5cdfa36874c8.
Fix a typo in json-hooks-protocol.md
See merge request apt-team/apt!173
Count uninstallable packages in "not upgraded"
See merge request apt-team/apt!169
If a first step of the solver can figure out that a package is
uninstallable, it might reset the candidate so that later steps are
prevented from exploring this dead end. While that helps the resolver, it
can confuse the display of the found solution, as this will include an
incorrect count of packages not upgraded in this solution.
This was possible before, but happens a fair bit more often since the
April/May resolver changes last year, so finally doing proper counting is
a good idea.
Sadly this is a bit harder than getting the total first and then
subtracting the packages we upgraded from it, as the user can influence
candidates via the command line: a package which could be upgraded,
but is removed instead, shouldn't count as not upgraded, as we clearly did
something with it. So we keep a list of packages instead of a number,
which also helps in the upgrade commands, as those want to show the list.
Closes: #981535
Mark only provides from protected versioned kernel packages
See merge request apt-team/apt!168
They are kinda costly, so it makes more sense to keep them around in
private storage rather than generate them each time in the
MarkPackage method. We do keep them lazy though, as we have that
implemented already.
An interactive tool like aptitude needs these flags current far more
often than we do: a user sees them in apt only in one very well
defined place – the autoremove display block – so we don't need to run
the marker up to four times while a normal "apt install" is processed, as
that is just busywork.
The effect on runtime is minimal, as a single run doesn't take too long
anyhow, but it cuts down tremendously on debug output at the expense of
requiring some manual handholding.
This is opt-in, so that aptitude doesn't need to change, nor do we need to
change our own tools like "apt list" where it is working correctly as
intended.
A special flag and accompanying plumbing are needed, as we want the
ActionGroup inside pkgDepCache::Init to be inhibited already, so we need
to insert ourselves while the DepCache is still in the process of being
built.
This is also the reason why the debug output in some tests changed to
all unmarked, but that is fine, as the marking could have already been
obsoleted by the actions taken, just inhibited by a proper action group.
The autoremove algorithm previously marked a package after exploring
it once, but it could have ignored some providers because they did not
satisfy the (versioned) dependency. A later dependency which they might
satisfy would encounter the package as already marked and hence not
explore the providers anymore, leaving us with internal errors (as in
the contrived new testcase).
This is resolved by introducing a new flag denoting whether we have
already explored every provider, and only skipping exploration if that is
true. That sounds bad, but it is really not such a common occurrence that
it seems noticeable in practice. It also lets us mark virtual packages as
explored now, which would previously be tried each time they were
encountered, mostly hiding this problem for the (far more common) fully
virtual package.
An out-of-tree kernel module which doesn't see many new versions can
pile up a considerable number of packages if it is depended on via
another package (e.g. v4l2loopback-utils recommends v4l2loopback-modules),
which in turn can prevent the old kernels from being removed if they
happen to have a dependency on the images.
To prevent this we check whether a provider is a versioned kernel package
(like an out-of-tree module) and if so whether that module package is
part of the protected kernel set – if not, it is probably good to go.
We only do this if at least one provider is from a protected kernel
though, so that the dependency remains satisfied (this can happen e.g. if
the module is currently not buildable against a protected kernel).
Allow superfluous commas in build-dependency lines
See merge request apt-team/apt!167
This code can interact with handwritten files, which may contain
superfluous commas for ease of editing. As dpkg allows them, we should
do so as well.
Reported-By: Arnaud Ferraris <arnaud.ferraris@gmail.com>
References: https://lists.debian.org/debian-devel/2021/03/msg00101.html
The comment and code are a bit too roundabout about what they actually
try to do, so let's set that straight: this is really about one very
specific case and doesn't deserve a general resetting.
Gbp-Dch: Ignore
Gbp-Dch: ignore
dpkg 1.20.8 also made --force-remove-essential optional for
deconfiguring essential packages, so let's do this.
Also extend the test case to make sure we actually pass
auto-deconfigure and do not make any --remove calls or
pass --force-remove to dpkg.
Ugh, this was super flaky under -j 16 and -j 4, each behaving
in slightly different ways. It seems to be stable now. No
real bug though; all behaviors were OK.
The code missed a break, so it looped infinitely: the while loop
condition only checked for '\n' and '\r', but not for end of file.
JSON Hooks 0.2
See merge request apt-team/apt!166
Hook protocol 0.2 makes the new fields we added mandatory, and
replaces the `install` mode with `upgrade`, `downgrade`, and `reinstall`
where appropriate.
Hook negotiation is hacky, but it's the best we can do for now.
Users are advised to upgrade to 0.2.
This enables hooks to output additional information.
Provide access to the origins of a package, such that tools
can display information about them; for example, you can write
a hook counting security upgrades.
Bug fixes for JSON hooks
See merge request apt-team/apt!165
Gbp-Dch: ignore
This ensures messages are displayed in the correct order.
This is the only nullable thing we have here.
The JSON encoder only looked at the top state, but did not
pop it, so if we nested objects, we got stuck in whatever state
was pushed last; in our example, we wrongly got a comma inserted
_after_ key "b":
{"a":[{}],
"b":,[{}]
}
This allows us to correctly encode strings containing quotation
marks, escape characters and control characters.
The test case is a bit nasty because it embeds private-cachefile.cc
for linkage reasons.
dpkg 1.20.8 no longer requires this.
We use a Breaks for the binary package instead of adding
a versioned Depends, as Breaks will cause the apt solver to upgrade dpkg,
while Depends would make apt try to remove apt as the first choice.
Automatically retry failed downloads 3 times
See merge request apt-team/apt!164
Enable the Acquire::Retries option by default, set to 3.
This will help with slightly unreliable networking; future
work is needed to add backoff and SRV/IP rotation.
LP: #1876035
Gbp-Dch: full
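The new default corresponds to the following apt.conf setting, which users can still override (the file name is illustrative):

```
// /etc/apt/apt.conf.d/99retries - retry each failed fetch three times
Acquire::Retries "3";
```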
It defaults to false, like the other options there do.
Fix downloads of unsized files that are largest in pipeline
See merge request apt-team/apt!161
Repositories without Size information for packages are not
proper and need fixing. This ensures people see an error in
CI, and get notifications and hence the ability to fix it.
It can be turned off by setting Acquire::AllowUnsizedPackages
to true.
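The opt-out mentioned above is a plain apt.conf setting:

```
// accept repositories whose Packages entries lack Size fields
Acquire::AllowUnsizedPackages "true";
```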
The maximum request size was accidentally derived from files of any
size, so if an unsized file was present and turned out to be larger
than the maximum size we had set, we'd error out when checking whether
its size is smaller than the maximum request size.
LP: #1921626
Add vendor information for Procursus.
https://github.com/ProcursusTeam/Procursus is a project to get an updated *nix environment on Darwin platforms such as iOS and x86_64/arm64 macOS.
See merge request apt-team/apt!163
Replace macro and manual management with lambda and RAII
See merge request apt-team/apt!160
Having three different vectors littered over the method to manage
various parts of the lifetime of the argument vector we are creating is
a bit dangerous, as it means a simple code change could result in a
desync of the three; moving their functionality into a wrapper class
should prevent us from making such mistakes.
One less thing to remember to do in all branches.
It is easy to make mistakes while dealing with such macros regardless of
how much you guard them, so just using a lambda removes a lot of
concerns here, basically for free.
MaybeAddAuth() here tells us that it refused to use the credentials
for an http source, but that caused the test suite to fail at a later
stage because we check whether there were any errors/warnings. Strangely,
this is only triggered with LTO enabled.
Actually check that the warning is being set, and then reject it.
If a transaction is doomed we want to gracefully shut down our zoo of
worker processes. As explained in the referenced commit, we do this by
stopping the main process from handing out new work and ignoring the
replies it gets from the workers, so that they eventually run out of
work.
We tested this previously by checking if a rred worker was given work
items at all, but depending on how lucky the stars of the machine
working on this are, the worker could already have been given work before
the transaction was aborted – so we tried this 25 times in a row
(f35601e5d2). No machine can be this lucky, right?
Turns out the autopkgtest armhf machine is very lucky.
I feel a bit sorry for feeding grep such a long "line" to work with, but
it seems to work out. Porterbox amdahl (which is considerably less lucky;
I had to turn it down to 1 try to get it to fail sometimes) is now happily
running the test in an endless loop.
Of course, I could have broken the test now, but it's still a rather
generic grep (in some ways even more generic) and the main part of the
testcase – the update process finishes and fails – is untouched.
References: 38f8704e419ed93f433129e20df5611df6652620
Closes: #984966