| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
The pkgProblemResolver incorrectly skips protected packages while
considering packages for removal. That was always wrong, but it is now a
lot more visible as (potentially) far more packages carry the protected
flag in their state.
Note that the testcase shows that more changes are needed to make this
proper.
|
|
|
|
|
|
|
|
|
| |
We are leaking a d-pointer that currently weighs in at the size of a
boolean, and MethodConfig is instantiated only in small numbers, so nobody
will actually notice a difference, but proper cleanup is important.
Reported-By: clang LeakSanitizer
References: 04ab37fecaf286f724bef2e0969d2b67ab5ac1b1
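The pattern in question, sketched with illustrative names (MethodConfig's
real private struct differs): the d-pointer owned by the object has to be
deleted in the destructor, otherwise every instance leaks it.
  class MethodConfig
  {
     struct Private;          // opaque implementation details (illustrative)
     Private * const d;       // the d-pointer that was being leaked
  public:
     MethodConfig();
     ~MethodConfig();
  };
  struct MethodConfig::Private { bool Debug = false; };
  MethodConfig::MethodConfig() : d(new Private()) {}
  MethodConfig::~MethodConfig() { delete d; }   // the missing cleanup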
|
|
|
|
|
|
|
|
|
| |
The non-virtual base destructor otherwise causes its derived classes to
leak tiny bits of memory. The header is private and not to be used outside
of APT, so we can perform this tiny ABI break as there is no ABI to break.
Reported-By: valgrind and clang -fsanitize=leak
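A minimal sketch of why this matters (generic names, not APT's actual
classes): deleting a derived object through a base pointer only runs the
derived destructor if the base destructor is virtual; otherwise everything
the derived class owns leaks.
  #include <string>
  struct ParserBase                      // illustrative base class
  {
     virtual ~ParserBase() = default;    // without 'virtual' the leak below happens
  };
  struct DebParser : public ParserBase   // illustrative derived class
  {
     std::string *Buffer = new std::string(1024, ' ');
     ~DebParser() override { delete Buffer; }
  };
  int main()
  {
     ParserBase * const p = new DebParser();
     delete p;   // only calls ~DebParser() because ~ParserBase() is virtual
     return 0;
  }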
|
|
|
|
|
|
|
|
|
| |
As the builtins were used in the feature test also in the default branch,
clang fails to compile the test, helpfully complaining that you need to
compile with sse4.2 to use them, while gcc optimizes them out as unused
code and only produces a warning… Removing the code from the default
branch fixes this problem, but we adapt the code some more to avoid
compilers optimizing it out in the future, just in case.
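A sketch of what such a feature test can look like, assuming it probes the
SSE4.2 CRC32 intrinsics (illustrative code, not the actual test source):
the intrinsic lives only in a function compiled for the sse4.2 target, and
the result is used so the compiler cannot drop the test as dead code.
  #include <nmmintrin.h>
  __attribute__((target("sse4.2")))
  static unsigned int crc32_hw(unsigned int crc, unsigned char byte)
  {
     return _mm_crc32_u8(crc, byte);        // only emitted for the sse4.2 target
  }
  static unsigned int crc32_sw(unsigned int crc, unsigned char byte)
  {
     return (crc ^ byte) * 0x9e3779b1u;     // placeholder fallback, not a real CRC
  }
  int main(void)
  {
     unsigned int const c = __builtin_cpu_supports("sse4.2")
        ? crc32_hw(0, 'x') : crc32_sw(0, 'x');
     return c == 0;                         // use the result: no dead-code elimination
  }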
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
../apt-pkg/init.cc:137:39: warning: adding 'int' to a string does not append to the string [-Wstring-plus-int]
Cnf.CndSet("Dir::State", STATE_DIR + 1);
../apt-pkg/init.cc:137:39: note: use array indexing to silence this warning
We have a few instances of that and it should be reasonably clear that we
are not actually trying to append here, but ignoring or silencing this
warning with an override is far more costly than just using what clang
suggests.
Reported-By: clang
Gbp-Dch: Ignore
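The change presumably boils down to following clang's suggestion, roughly
(STATE_DIR being a string literal whose leading character is skipped):
  Cnf.CndSet("Dir::State", STATE_DIR + 1);     // before: pointer arithmetic on the literal
  Cnf.CndSet("Dir::State", &STATE_DIR[1]);     // after: array indexing, same pointer, no warning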
|
|
|
|
|
|
|
|
|
|
|
| |
../apt-pkg/edsp.cc:861:23: error: object backing the pointer will be destroyed at the end of the full-expression [-Wdangling-gsl]
const char *arch = _config->Find("APT::Architecture").c_str();
Compilers are probably optimizing it the way the patch now does by hand.
Small string optimisation likely helps as well. Otherwise that should have
failed left and right, as EDSP is used by experimental and similar builders
to talk to aspcud.
Reported-By: clang
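The hand-written version of what the patch does, sketched (variable names
illustrative): keep the std::string alive for as long as the char pointer
is used.
  // before: the temporary returned by Find() dies at the end of the expression
  const char *arch = _config->Find("APT::Architecture").c_str();
  // after: bind the string to a named object first
  std::string const architecture = _config->Find("APT::Architecture");
  const char *arch = architecture.c_str();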
|
| |
|
|
|
|
| |
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
| |
pkgDepCache initializes its state once and then has a battery of update
calls you have to invoke in the right order to keep the various counters
current – all in the name of speed. In debug and/or simulation mode we can
sacrifice this speed for a bit of extra checking to verify that we haven't
made some critical mistake like #961266.
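A rough sketch of the kind of cross-check this enables (hypothetical helper
and member names; the real pkgDepCache internals differ): recompute a
counter from scratch and compare it with the incrementally maintained one.
  #include <apt-pkg/depcache.h>
  #include <iostream>
  // Hypothetical debug-only verification, not the actual implementation.
  void VerifyBrokenCount(pkgDepCache &Cache)
  {
     unsigned long Broken = 0;
     for (pkgCache::PkgIterator P = Cache.PkgBegin(); P.end() == false; ++P)
        if (Cache[P].InstBroken())
           ++Broken;
     if (Broken != Cache.BrokenCount())
        std::cerr << "Internal inconsistency: broken count out of sync" << std::endl;
  }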
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It looks like a hack and therefore I wanted this to be a very isolated
commit so we can find it & revert it easily if need be, but for now it
seems to work.
The idea is that Status tells us how the candidate relates to the
currently installed version, which is used to figure out if a package is
"kept back" by the algorithm or not, but by discarding the candidate
version we lose this information.
Ideally we would keep better tabs on what we do to a package and why, but
for now this seems okayish. It will cause the wrong version to be
displayed though, as if the package is installed the installed version
becomes the candidate and hence (installed => installed) is displayed.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If we have a negative dependency to deal with we prefer to install an
upgrade rather than remove the current version. That is why we split the
method rather explicitly in two in 57df273, but there is a case we didn't
react to: If we have seen the candidate before as a "satisfier" of this
negative dependency, there is no point in trying to upgrade to it later
on. We keep that info by discarding the candidate if we can, but even if
we can't we can at least keep that info around locally.
This "fixes" (or would hide) the problem described in 04a020d as well, as
you don't have to discard installations you never make.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For a (partially) installed package like the one MarkInstall currently
operates on and wants to discard the candidate from, we have to first
remove the package from the internal state keeping to have proper broken
counts and such, and only then reset the candidate version, which is a
trivial operation in comparison.
Take a look at the testcase: Now, what is the problem? Correct, git:i386.
Didn't see that coming, right? It is M-A:foreign, so apt tries to switch
the architecture of git here (which is pointless, it knows that this won't
work, but let's fix that in another commit), eventually realizes that it
can't install it and wants to discard the candidate of git:i386: first
removing the broken indication like it should, then removing the install
flag and then reapplying the broken indication. Except it doesn't, as it
wants to do that over the candidate version which the package no longer
has, so seemingly nothing is broken.
It is a bit of a hairball to figure out which commit exactly is wrong here
as they all influence each other a bit, but >= 2.1 is an acceptable
ballpark. Bisect says 57df273, but that is mostly a lie.
Closes: #961266
|
|
|
|
| |
References: dcdfb4723a9969b443d1c823d735e192c731df69
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Turns out that pkgDepCache and pkgProblemResolver maintain two (semi)
independent sets of protected flags – except that a package marked
protected in the pkgProblemResolver is automatically also marked as
protected in the pkgDepCache. This way the pkgProblemResolver has as
protected only the direct user requests, while pkgDepCache will
(hopefully) propagate the flag to unavoidable dependencies of these
requests nowadays. The pkgProblemResolver was only checking its own
protected flag though and based on that called our Mark* methods, usually
without checking the return value, leading to it believing it could e.g.
remove packages it actually can't remove as pkgDepCache will not allow it
since they are marked as protected there. Teaching it to check for the
flag in the pkgDepCache instead keeps it from believing in the wrong
things and eventually giving up.
The scoring keeps the behaviour of adding the large score boost only for
the direct user requests though, as there is no telling which other side
effects this might have if too many packages get too many points from the
get-go.
Second part of fixing #960705, now with pkgProblemResolver output which
looks more like the whole class of problem is resolved rather than the
teeny tiny edgecase it was before.
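Roughly what the corrected check amounts to (simplified sketch, not the
actual resolver code; I is a package iterator, Cache the pkgDepCache):
consult the pkgDepCache flag and the return value of the Mark* call
instead of only the resolver's own flag array.
  if (Cache[I].Protect() == true)
     continue;                            // pkgDepCache forbids touching this package
  if (Cache.MarkDelete(I, false) == false)
     continue;                            // the removal was refused, try something else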
|
|
|
|
|
|
|
|
| |
The previous commit deals with the negative side; now we add the positive
side of things as well, which makes this a recursive endeavour. As we can
push the protected flag forward only if a single solution for a dependency
exists, it is easy for dependency trees to not get it, so if resolving
becomes difficult it won't help at all.
|
|
|
|
|
|
|
|
|
|
|
| |
If we propagate the protected flag e.g. due to a user request we should
also act upon (at the moment) satisfied negative dependencies so that the
resolver knows that installing this package later is not an option.
That the problem resolver tries bad solutions is a bug in itself which
existed before and after and should be worked on.
Closes: #960705
|
|
|
|
|
|
|
|
| |
For positive dependencies this doesn't gain much, as the dependency
should already be satisfied by such a provider if its protectedness would
help, but it doesn't hurt to check them first. For negative dependencies
it means that we check those first which are the most likely to fail to be
removed – which is a good idea.
|
|
|
|
|
|
| |
The important change is adding IsIgnoreable() as it will deal with
self-conflicts and such, but while we are at it let's sprinkle in some
refactoring.
|
|
|
|
|
|
|
|
| |
Reducing the scope of these helpers might allow us to move them
elsewhere and share them, or it is a rather pointless exercise; we will
see where it leads us later on.
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We exit early from installing dependencies of a package only if it is
not a user request, to avoid polluting the state with installs which might
not be needed (or might even be detrimental) for alternative choices.
We do continue with installing dependencies though if it is a user
request, as it improves error reporting for apt and can even help aptitude
not hang itself so much, as we trim the problem space down for its
resolver by dealing with all the easy things.
Similar things can be said about the testcase I had short-circuited
previously… keep going, test, do what you should do to report errors!
|
|
|
|
|
|
|
|
| |
The variable this is read into is named Junk and it exists for usecases
like apt-ftparchive which just look at the item's metadata, so instead of
performing this chunked read for data nobody will process we just tell our
FileFd to skip ahead (internally it might still loop over the data
depending on which compressor is involved).
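In spirit the change looks roughly like this (illustrative, not the actual
diff):
  // before: chunked read into a buffer nobody ever looks at
  char Junk[32*1024];
  for (unsigned long long Left = Size; Left != 0;)
  {
     unsigned long long const Chunk = Left < sizeof(Junk) ? Left : sizeof(Junk);
     if (Fd.Read(Junk, Chunk) == false)
        return false;
     Left -= Chunk;
  }
  // after: let the FileFd skip ahead (it may still loop internally for compressors)
  if (Fd.Skip(Size) == false)
     return false;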
|
|
|
|
|
|
| |
With FileFd::Write we already have a helper for this situation which we
can just make use of here, instead of hoping for the best or rolling our
own solution.
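The helper referred to is presumably the static FileFd::Write overload
taking a raw file descriptor; a usage sketch (buffer name illustrative):
  // before: a bare write(2) where short writes and EINTR are left to luck
  write(STDOUT_FILENO, Buffer.data(), Buffer.size());
  // after: the helper keeps writing until everything is out or reports an error
  if (FileFd::Write(STDOUT_FILENO, Buffer.data(), Buffer.size()) == false)
     return _error->Error("Writing to stdout failed");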
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Our testcases had their own implementation of GetTempFile with the
feature of a temporary file with a chosen suffix. Merging this into
GetTempFile lets us drop this duplicate and hence test our code more
rather than testing our helpers for the test implementation.
And then hashsums_test had another implementation… and extracttar wasn't
even trying to use a real tempfile… one GetTempFile to rule them all!
That also ensures that these tempfiles are created in a temporary
directory rather than the current directory, which is a nice touch, and
tries a little harder to clean up those tempfiles.
|
|
|
|
|
| |
Not all filesystems implement this feature in all versions of Linux,
so this open call can fail & we have to fall back to our old method.
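The feature in question is presumably O_TMPFILE; a minimal sketch of the
fallback pattern (illustrative helper, not the actual GetTempFile code):
  #ifndef _GNU_SOURCE
  #define _GNU_SOURCE 1                    // for O_TMPFILE on glibc
  #endif
  #include <fcntl.h>
  #include <stdlib.h>
  #include <unistd.h>
  int MakeTempFd(const char *dir)          // illustrative helper name
  {
     int fd = open(dir, O_RDWR | O_TMPFILE, 0600);
     if (fd >= 0)
        return fd;                         // unnamed temporary file, nothing to unlink
     // e.g. EOPNOTSUPP on filesystems without O_TMPFILE: fall back to the old way
     char name[] = "/tmp/apt-tmp.XXXXXX";  // illustrative template
     fd = mkstemp(name);
     if (fd >= 0)
        unlink(name);                      // keep only the descriptor alive
     return fd;
  }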
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(CVE-2020-3810)
When normalizing ar member names by removing trailing whitespace
and slashes, an out-of-bounds read can be caused if the ar member
name consists only of such characters, because the code did not
stop at position 0, but would wrap around and continue reading from the
stack, without any limit.
Add a check to abort if we reached the first character in the
name, effectively rejecting the use of names consisting just
of slashes and spaces.
Furthermore, certain error cases in arfile.cc and extracttar.cc have
included member names in the output that were not checked at all and
might hence not be nul terminated, leading to further out-of-bounds reads.
Fixes Debian/apt#111
LP: #1878177
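The shape of the fix, sketched (simplified; field names illustrative
rather than the exact arfile.cc code): trim backwards, but never past the
first character of the name.
  // Head.Name is a fixed-size, not necessarily nul-terminated ar member name field.
  size_t Len = sizeof(Head.Name);
  for (; Len > 0; --Len)
  {
     char const C = Head.Name[Len - 1];
     if (C != ' ' && C != '/')
        break;                             // last real character of the name found
  }
  if (Len == 0)
     return _error->Error("Invalid member name consisting only of slashes and spaces");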
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
apt marks packages coming from the commandline (among others) as
protected to ensure the various resolver parts do not fiddle with the
state of these packages. aptitude (and potentially others) do not, so the
state is modified (to a Keep, which for uninstalled packages means they
are not going to be installed) due to being uninstallable before the call
fails – basically reverting at least some state changes the call made
before it realized it has to fail, which is usually a good idea, except
if users expect you not to do it.
They do set the FromUser option though, which besides controlling the
autobit has also gained the notion of "the user is always right" over
time and can be used here as well to prevent the state revert.
References: 0de399391372450d0162b5a09bfca554b2d27c3d
Reported-By: Jessica Clarke <jrtc27@debian.org> on IRC
|
|
|
|
|
|
|
|
|
|
|
|
| |
Reinstate * wildcards as they are safe to use, but do not allow any
other special characters such as ? or [].
Notably, ? would overlap with patterns, and [] might overlap with
future pattern extensions (alternative bracketing style); it is also
hard to explain.
Closes: #953531
LP: #1872200
|
|
|
|
|
|
|
|
| |
Strange things happen if, while resolving the dependencies of a package,
said dependencies want to remove the package. The allow-scores test e.g.
removed the preferred alternative in favor of the last one now that they
were exclusive. In our or-group handling for Recommends we would "just"
not satisfy the Recommends, and for Depends we engage the ProblemResolver…
|
|
|
|
|
|
|
|
| |
In normal upgrade scenarios this is no problem as the or-group member
will already be marked for upgrade, but on a not fully upgraded system
(or while you operate on a different target release) we would go with our
usual "first come, first served" approach, which might lead us to install
another provider which comes earlier – bad if the providers conflict.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If a package is protected and has a dependency satisfied only by a single
package (or conflicts with a package), this package must be part of the
solution, and so we can help later actions avoid exploring dead ends by
propagating the protected flag to these "pseudo-protected" packages.
An (obscure) bug this can help prevent (to some extent) is shown in
test-apt-never-markauto-sections by not causing irreversible autobit
transfers.
As a sideeffect it also seems to help our crude ShowBroken to display
slightly more helpful messages involving the packages which are actually
in conflict.
|
|
|
|
|
|
|
|
| |
MarkDelete is not recursive like MarkInstall is, and we can not conflict
with ourselves anyhow, so we can move the unavoidable deletes before
changing the state of the package in question, avoiding the need for the
state update in case of conflicts we can not deal with (e.g. the package
conflicts with an explicit user request).
|
|
|
|
|
| |
Should be easier to move the code bits around then and it helps in
documenting a bit what the blocks do and how they interact (or not).
|
|
|
|
|
|
| |
We do pretty much the same in IsInstallOk, but here we have already set
the state, so we have to unroll the state as well to sort-of replicate
the state we were in before this MarkInstall failed.
|
|
|
|
|
|
|
| |
This fixes no bugs per se, but the idea is to delay more costly changes
and check easier things first. It e.g. inhibits the moving of the
autobit until we are sure that this MarkInstall call isn't going to
fail (e.g. because a dependency isn't satisfiable).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
MarkInstall only looks at the first alternative in an or-group which has
a fighting chance of being satisfiable (= the package satisfies the
dependency; whether it is installable itself is not considered).
This is "hidden" for Depends by the problem resolver, which will try
another member of the or-group later, but Recommends are not a problem
for it, so for them the alternatives are never explored further.
Exploring the or-group in MarkInstall seems like the better choice for
both types as that frees the problem resolver to deal with the hard
things like package conflicts.
|
|
|
|
|
|
|
|
| |
We reset the candidate for installed packages back to the version which
is installed if one of its (critical) dependencies is not satisfiable,
but we can do the same for non-installed packages by discarding the
candidate, which besides slightly helping the resolver also improves
error messages generated by apt as a sideeffect.
|
|
|
|
|
| |
Reported-By: gcc -Wuseless-cast
Gbp-Dch: Ignore
|
|
|
|
|
| |
Reported-By: clangd
Gbp-Dch: Ignore
|
|\
| |
| |
| |
| | |
Add color highlighting to E:/W:/N: prefixes
See merge request apt-team/apt!112
|
| |
| |
| |
| |
| |
| | |
This matches the definitions used by dpkg.
Closes: #953527
|
| |
| |
| |
| |
| | |
Extract the code, and reformat it with clang-format so we can
modify it.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
While merging apt-pkg and apt-inst libraries the codepath of handling
deb files in apt-pkg was adapted to use the 'old' code from apt-inst
instead of fork&exec of dpkg-deb -I. The information we get this way
forms the main part of the package stanza, but we add a few
semi-optional fields to the stanza to make it look and work more
like a stanza we got from a repository.
Just be careful with the area where these two parts touch: if,
hypothetically, we were to strip all newlines around the parts but forget
to add a newline between them later, the two lines around the merge would
stick a bit too close together, forming one line, which could result in
fun parsing errors if that merged line was previously e.g. a well-formed
Depends line and now has extra fluff attached.
This codepath has a history with too many newlines (#802553) though,
so how likely is it really that it will some day lack one you may ask.
References: 6089a4b17c61ef30b2efc00e270b0907f51f352a
|
|/
|
|
|
|
|
|
| |
Packages from third-party sources do not always follow the established
patterns of more properly maintained archives. In this case it was a
driver package for a scanner&printer device which had only a minimum of
info attached, but minimal non-installed packages also do not include
Sections, so we really shouldn't assume their availability.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Showing a percentage for a timeout is pretty non-standard. Rework the
progress class so it can show an absolute progress (currently hardcoded
to use seconds as a unit). If there is a timeout (aka if it's not the
maximum long long unsigned -1llu), then show the timeout, otherwise
just count up seconds, e.g.
Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 33842 (apt)... 1/120s
or
Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 33842 (apt)... 1s
Also improve the error message to use "Waiting for cache lock: %s" instead of "... (%s)", as having
multiple sentences inside parentheses is super weird, as is having two closing parens.
We pass the information via _config, as that's reasonably easy and avoids
ABI hackage. It also provides an interesting debugging tool for other
kinds of progress.
|
|
|
|
|
|
|
|
| |
This improves the locking message, getting rid of useless details. If
we have a process holding the lock, we got that because the lock is
being held by it, so there's no point telling people that the reason
for not getting the lock is the EAGAIN error and displaying its
strerror().
|
| |
|
| |
|
|
|
|
|
|
| |
This is a rework of !6 with additional stuff for the frontend
lock, so we can lock the frontend lock and then keep looping
over the dpkg lock.
|
| |
|
|
|
|
|
|
|
|
|
| |
Remove the operator= from Container_iterator, as it was basically
just the default anyway, and add copy constructors to *Interface
that match their operator=.
Tried adding a copy constructor to Container_iterator as well, but that
only made things worse.
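A generic illustration of the idea (not APT's actual container iterator
classes): an operator= that only does what the default would do can be
dropped, while an interface class whose operator= copies just selected
members gets a matching copy constructor so the two stay consistent.
  struct Iterator
  {
     int *S = nullptr;
     // operator= removed: the implicitly-defaulted one did the same thing anyway
  };
  struct Interface
  {
     int *Owner = nullptr;
     int Cursor = 0;
     Interface() = default;
     Interface(Interface const &other) : Owner(other.Owner) {}          // matches operator=
     Interface &operator=(Interface const &other) { Owner = other.Owner; return *this; }
  };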
|