| Commit message | Author | Age | Files | Lines |
If we know both SHA256 hashes, and they're different, the packages
are too. This approach stores the SHA256 only at runtime, avoiding
the overhead of storing it on-disk, because when we update
repositories we update all of them anyhow.
Note that pkgCacheGenerator is hidden, so we can just modify its
ABI, hooray.
Closes: #931175
LP: #2029268
dist-upgrade: Revert phased updates using keeps only
See merge request apt-team/apt!299
This fixes an issue where phased updates gain new dependencies
and cause those dependencies to be installed despite the phased
updates themselves not being installed.
In the course of the investigation, it turned out that we also
need to evaluate the candidate version at this early stage rather
than the install version (which is only valid *after* MarkInstall).
This does not fully resolve the problem: if an update pulls in
a phased update, its dependencies are still being installed.
Resolving this while ensuring that phased updates cannot uninstall
packages requires a minimization of changes: try to keep back each
new install or removal and then check whether any dependency is
broken by it. This is more complex and will happen later.
In the bug, mutter was kept back due to phasing and the new gnome-shell
depended on it, and was therefore kept back as well; however,
gnome-shell-common was not broken, and apt decided to continue upgrading
it by removing gnome-shell and the ubuntu desktop meta packages.
This is potentially a regression of LP#1990586, where we added keep-back
calls to the start of the dist-upgrade to ensure that we do not
mark anything for upgrade in the first place that depends on phasing
updates; however, the resolver was still generally allowed to
do those removals.
To fix this, we need to resolve the upgrade normally and then use
ResolveByKeepInternal to keep back any changes broken by held back
packages.
However, doing so breaks test-bug-591882-conkeror because ResolveByKeep
keeps back packages for broken Recommends as well, which is not
something we generally want to do in a dist-upgrade after we have
already decided to upgrade a package.
To circumvent that issue, extend the pkgProblemResolver to allow
a package to be policy broken, and mark all packages that were
already going to be policy broken as allowed to be so, such that
we don't try to undo their installs.
LP: #2025462
We want to gently steer users towards having Signed-By for each
source, such that we can retire a shared keyring across sources,
which improves resilience against configuration issues and
incompetent or malicious actors.
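For illustration, a deb822 sources entry with a per-source Signed-By; the URI and keyring path are just examples:

```
Types: deb
URIs: https://deb.debian.org/debian
Suites: bookworm
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```

With this, the key is only trusted for this one source instead of for every configured repository.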
This will attempt to fall back to a per-server setting if we could
not determine a value from the release file.
Add --snapshot and --update support
See merge request apt-team/apt!291
Provide snapshot support for official Debian and Ubuntu archives.
There are two ways to enable snapshots for sources:
1. Add Snapshot: yes to your sources file ([snapshot=yes] in the
one-line format). This allows you to specify a snapshot to use when
updating or installing via the --snapshot,-S option.
2. Add Snapshot: ID to your sources file to request a specific
snapshot for this source.
Snapshots are discovered using the Label and Origin fields in the
Release file of the main source, hence you need to have updated the
source at least once before you can use snapshots.
The Release file may also declare a snapshot server to use; similar
to Changelogs, it can contain a Snapshots field with the values:
1. `Snapshots: https://example.com/@SNAPSHOTID@`, where `@SNAPSHOTID@`
is a placeholder that is replaced with the requested snapshot id
2. `Snapshots: no` to disable snapshot support for this source.
Requesting snapshots for such a source will result in a failure
to load the source.
The implementation adds a SHADOWED option to deb source entries
and marks the main entry as SHADOWED when a snapshot has been
requested, which causes it to be updated, but not included
in the generated cache.
The concern here was that we need to keep generating the shadowed
entries because the cleanup in `apt update` deletes any files not
queued for download, so we have to keep downloading the main source.
This design is not entirely optimal, but avoids the pitfalls of
having to reimplement list cleanup.
Gaps:
- Ubuntu Pro repositories and PPAs are not yet supported.
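As a sketch of the syntax described above (the URI and the snapshot id are made-up examples):

```
# Opt in, then select a snapshot at run time, e.g.
# `apt --snapshot <id> update`:
Types: deb
URIs: https://deb.debian.org/debian
Suites: bookworm
Components: main
Snapshot: yes

# Or pin this source permanently to one snapshot id:
# Snapshot: 20230101T000000Z
```

The id format follows the usual timestamp convention of snapshot services; which ids are valid depends on the archive's snapshot server.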
This runs update before opening the cache and sources.list for
installing/upgrading.
Fix permissions && change section matching in config files to be more gitignore style rightmost match
See merge request apt-team/apt!286
This test did not work with umask 0002
A source marked with trusted=yes can still fail verification of the
Release file, mostly for Date-related issues, like being too new or too
old, which have other options to force them in.
The update code was not using the Release file (which was an InRelease
file that failed verification – overridden by trusted=yes) as
intended, but it marked it for storage, so that this "bad" Release file
would end up being moved into lists/. That is bad, as the indexes it
refers to aren't updated, while the next update run assumes that the
indexes are in the state the Release file claims them to be in.
Fixed simply by making the storage conditional on the intended usage,
which also resolves a second issue: the verification can also detect
that a Release file we got is older than what we already have, to avoid
downgrade attacks. The more likely explanation is a slightly outdated
mirror in a rotation/CDN though, so this gets the silent treatment to
avoid scaring users: it is handled as if we had got the same Release
file we already have stored locally, removing the freshly received
older file in the process alongside setting some variables. Those
variables were already modified in the trusted=yes case though,
resulting in the stored Release file being removed instead. Not
modifying the variables too early resolves this problem as well.
Both issues seem to exist since at least 2015, as traces are visible in
448c38bdcd already, which shuffled lots of code around including the
bad parts. But as we are in trusted=yes land, security is of no concern
here; this "just" leads to failed pinning, hashsum mismatches and other
strange problems in follow-up calls, depending on how out of sync the
Release file (if it is still present) is with the rest of the trusted
data.
Reported-By: Dima Kogan <dkogan@debian.org> on IRC
Tested-By: Dima Kogan <dkogan@debian.org>
Gbp-Dch: Ignore
We only check the start of these lines to avoid hard-coding the exact
command, and we pick 150 as the maximum line length as the longest
package name on my system is apparently 75 characters long. We could
choose longer or shorter without much issue, as over-length just means
we mishandle the rest of the line as a new line, and it should be really
unlikely that a) lines are that long in this file and b) such long
lines contain one of our trigger sequences – but even then, all we do
is start a download of an online file. Could be worse.
This auto-detection can be avoided by setting
Acquire::Changelogs::AlwaysOnline (or Origin-specific sub options)
to "true" if you always want the changelog from an online source.
The reverse – setting it to "false" in the hope it would not get the
changelog from an online source – was not and is still not possible.
Closes: #1024457
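The option named above is set in apt.conf syntax, e.g.:

```
// Always fetch changelogs from the online source instead of
// guessing from local files.
Acquire::Changelogs::AlwaysOnline "true";
```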
In an ideal world everyone would read release notes, but if the last
sources.list change is any indication, a lot of people won't. This is
even more of a problem insofar as apt isn't producing errors for
invalid repositories, but instead carries on as normal even though it
will not be able to install upgrades for the moved packages.
This commit implements two scenarios and prints a notice in those cases
pointing to the release notes:
a) The user has 'non-free' but not 'non-free-firmware'
b) The user has a firmware package which isn't available from anywhere
Both only happen if we are talking about a repository which identifies
itself as Debian and is for a release codenamed bookworm (or
sid). Note that as (usually) apt/oldstable is used to upgrade to the
new stable release, these suggestions only show up for users after they
have upgraded to bookworm, on apt command line usage after that.
Hard-coding each and every component is not only boring but, given that
everyone is free to add or use more, we end up in situations in which
apt behaves differently for the same binary package just because
metadata said it is in different components (e.g. non-free vs.
non-free-firmware). It is also probably not what the casual user would
expect.
So we instead treat a value without a component as if it applies for all
of them. The previous behaviour can be restored by prefixing the value
with "<undefined>/" as in the component is not defined.
In an ideal world we would probably use "*/foo" for the new default
instead of changing the behaviour for "foo", but it seems rather
unlikely that the old behaviour is actually desired. All existing values
were duplicated for all (previously) known components in Debian and
Ubuntu.
This is the correct behavior, but it was overlooked when aptitude
patterns were ported. I remember wondering about this, but I checked
the aptitude code, saw a check that CurrentVer != 0 or something,
and then apparently did not notice the other implementation for
version matching.
Actually delete temporary apt-key.*.asc helper files
See merge request apt-team/apt!266
During development there was an `if (0)` in there for debugging
purposes that unfortunately stayed in and caused files to accumulate.
LP: #1995247
Allow apt to run if no dpkg/status file exists
See merge request apt-team/apt!257
Not having a dpkg/status file used to be a hard error, which from a
bootstrap perspective is suspect as in the beginning there is no
status file, so you would need to touch it into existence.
We make a distinction between factual non-existence and inaccessibility
to catch mistakes in which the file is not readable for some reason;
the testcase test-bug-254770-segfault-if-cache-not-buildable is an
example of this.
Note that apt has already figured out at this point that this is a
Debian-like system which should have a dpkg/status file. This change
does not affect the auto-detection and is not supposed to.
We needed a fake dpkg in our status file for dpkg --assert-multi-arch to
work in the past, but recent dpkg versions do not require this anymore,
so we can remove this somewhat surprising hackery in favour of better
hidden hackery we only use if we work with an older dpkg (e.g. on
current Debian stable).
phased update improvements
See merge request apt-team/apt!262
By marking them at the end, we might make other decisions that
depend on the new phased updates, confusing the solver. Run the
marking at the start too.
The EDSP test file from Jeremy was modified to include Machine-ID
and Phased-Update-Percentage fields and then filtered to mostly
exclude packages irrelevant to the test case by running
grep-dctrl \( -FRequest "EDSP 0.5" -o -FInstalled yes \
-oFPhased-Update-Percentage 10 \) \
-a --not -FArchitecture i386
LP: #1990586
When iterating over I's dependencies (which are called Pkg), we
accidentally checked if I was Protected() instead of Pkg when deciding
whether Pkg can be kept back.
LP: #1990684
Respect users pkg order on `apt install` for resolving
See merge request apt-team/apt!256
The command line is evaluated in two steps: First all packages given
are marked for install and as a second step the resolver is started on
all of them in turn to get their dependencies installed.
This is done so a user can provide a non-default choice on the command
line and have it respected regardless of where on the command line it
appears.
On the other hand, the order in which dependencies are resolved can
matter, so instead of using a "random" order, we now do this in the
order given on the command line, so if you e.g. have a meta package
pulling in non-default choices and mention it first the choices are
respected predictably instead of depending on first appearance of the
package name while creating the binary cache.
I might have "broken" this more than a decade ago while introducing the
reworked command line parsing for Multi-Arch, which also brought in the
split into the two steps mentioned above – the far more impactful
'respect user choice' change. This one should hardly matter in
practice, but as the tests show, order can have surprising side effects.
This ensures that it compiles when the clang compiler passes
-D_FORTIFY_SOURCE=2
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Schedule all other binaries in the source package for upgrade if
the candidate version belongs to the same source version as the
package we are upgrading.
This will significantly reduce the risk of partial upgrades and
should make life a lot easier.
Currently the solver handles the case where a Breaks b (<< 1): if we
install that a, it upgrades b. However, where b Depends a (= 1),
b was removed instead.
This addresses the problem by iterating over the installed reverse
dependencies of upgrades and upgrading them as well, so that both
cases work roughly the same.
LP: #1974196
Pass some package names to upgrade to check that this works
Gbp-Dch: ignore
If a package is already pinned to a negative value, we should not
override this with a positive 1: that made packages installable
that were pinned to -1, which is not intended.
To fix this, implement phasing as a ceiling of 1 for the pin instead
of a fixed value of 1. An alternative would have been to pin to
NEVER_PIN, but that would mean entirely NEW packages would not be
installable while phasing, which is not the intention either.
LP: #1978125
This is a lot closer to the original implementation in update-manager,
but still has a couple of differences that might cause bugs:
- When checking whether a version is a security update, we only
check the versions in between and not any later version. This happens
mostly because we do not know the suite, so we just check if there
is any version between the installed version and our target that
is a security update.
- We only keep back already installed packages, as we run before the
resolver. update-manager first runs the resolver, and then marks
for keep all packages that were upgraded or newly installed and
are phasing (afaict).
This approach has a significant caveat: if you have version 1
installed from a release pocket, version 2 is in security, and version
3 is phasing in updates, it installs version 3 rather than version 2
from security (which the policy-based implementation would pick).
It also means that apt install does not respect phasing and would
always install version 3 in such a scenario.
LP: #1979244
Some of our headers use APT_COMPILING_APT trickery to avoid exposing
too broadly details we don't want external clients to know about and
make use of. The flip side is that this can lead to different
compilation units seeing different definitions if they aren't all
using the same config.
We use 'stty sane' to combat stepped output and similar artifacts
caused by (especially) failed tests, but it does so many things that it
occasionally fails to reset some bits in the parallel interaction we
have with it, which fails the tests without a real problem in apt…
Ideally we would be better at stitching the output together, but for
the time being let's ignore these failures instead to stabilize the
tests.
Building the library just so we can build the helpers against it is not
only wasteful; as we are supposed to test the system anyway, we can use
that as an additional simple smoke test before the real testing starts.
Consistently dealing with fields via pkgTagSection::Key
See merge request apt-team/apt!233
We abstract hashes a fair bit to be able to add new ones eventually,
which led us to build the field names on the fly. We can do better
though by keeping a central place for these names too, which even
helps in reducing code, as we don't need the MD5 → Files dance anymore.
The dependency relation fields' old names were deprecated in 1995,
when the new ones were introduced. That seems barely long enough
of a transition period by now.
dpkg-dev stopped recognizing it in 2007 (1.14.7) while building packages.
The rename itself happened in 1995 (0.93.72).
The previous regime for the file was to sort it on insert, but that
changes the values in the generated enum – fine as long as we only use
it in libapt itself, but it breaks other users.
The header was always intended to be private to apt itself, so we just
document this now and lay the groundwork for the file to only be
appended to in the future, so that it remains sufficiently ABI stable
for us to use it outside the library in our apt tools.
We also remove some fields apt is unlikely to need, or only uses in
certain cases outside of any (speed-)critical path, to have enough room
to add more fields soon, as currently we are limited to 128 fields max
and it would be sad to use up that allowance entirely already.
The hack is 7 years old by now, so in an attempt to make it slightly
cleaner, let's move this to proper variables that can be assigned via
an extra environment file sourced by the framework, rather than relying
on my user name and location in public.
Gbp-Dch: Ignore
It happens to the best of us, so it might happen for us, too, one day.
Better to catch it directly instead.
Gbp-Dch: Ignore
The kernel autoremoval algorithm was written to accommodate
Ubuntu's boot partition sizing, which in turn was sized to
accommodate 3 kernels - 2 installed ones + a new one being
unpacked.
It seems that when the algorithm was designed, it was overlooked
that it actually kept 3 kernels.
LP: #1968154
apt/test/interactive-helper/aptwebserver.cc: In function ‘std::string HTMLEncode(std::string)’:
error: variable ‘constexpr const std::array<std::array<const char*, 2>, 6> htmlencode’ has initializer but incomplete type
Reported-By: Helmut Grohne on IRC
If a repository is signed with multiple keys, apt 2.4.0 would
ignore the fallback result if some keys were still missing,
causing signature verification to fail.
Rework the logic such that when checking whether the fallback was
"successful", missing keys are ignored - it only matters whether we
managed to verify one key now, whether good or bad.
Likewise, simplify the logic for when to do the fallback:
if there was a bad signature in trusted.gpg.d, do NOT fall back at all
- this is a minor security issue, as a key in trusted.gpg.d could
fail silently with a bad signature, and then a key in trusted.gpg
might allow the signature to succeed (as the trusted.gpg.d key is then
missing).
Only fall back if we are missing a good signature and there are
keys we have not yet checked.