| Commit message | Author | Age | Files | Lines |
Extend the Signed-By field to handle embedded public key blocks. This
allows shipping self-contained .sources files, making it substantially
easier to provide third-party repositories.
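A minimal sketch of such a self-contained deb822 .sources file (the
repository URL, suite, component and the truncated key material are
placeholders; per deb822 conventions the armored block is indented by one
space and blank lines inside it are written as a single dot):

    # /etc/apt/sources.list.d/example.sources (hypothetical third-party repo)
    Types: deb
    URIs: https://repo.example.org/apt
    Suites: stable
    Components: main
    Signed-By:
     -----BEGIN PGP PUBLIC KEY BLOCK-----
     .
     mQINB...
     -----END PGP PUBLIC KEY BLOCK-----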
Add AllowRange option to disable HTTP Range usage
See merge request apt-team/apt!188
RFC 7233 section 3.2 specifies the If-Range comparison to be an exact
match, not a less-or-equal one, which would make no sense in this context
anyhow. Our server exists only so we can write our tests against it, so
this isn't much of a practical issue. I did confirm with a crashing server
that no test (silently) depends on this or exhibits a different behaviour
that is not explicitly checked for.
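For illustration, exact-match semantics mean the server may only honour the
Range header if the If-Range validator is byte-for-byte the current one; on
any mismatch it must ignore the range and answer 200 with the full body.
Roughly, with a made-up URL and ETag:

    # resume at byte 10240 only if the entity is unchanged,
    # otherwise receive the complete file again with 200 OK
    curl -o Packages.xz -r 10240- -H 'If-Range: "1abc-5f3d9"' \
        http://deb.example.org/dists/stable/main/binary-amd64/Packages.xz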
Debian buster (oldstable) ships Varnish 6.1 while bullseye (stable) ships
6.5, so the latter is 'fixed'. Upstream still declares 6.0 as supported.
We might encounter "bad" versions in the wild for a while yet, so if we can
detect and work around the issue at runtime automatically we can save some
users from running into "persistent" partial files.
References: https://varnish-cache.org/docs/6.4/whats-new/changes-6.4.html#changes-in-behavior
apt makes heavy use of HTTP/1.1 features including Range and If-Range.
Sadly it is not obvious whether the involved server(s) (and proxies)
actually support them all. The Acquire::http::AllowRange option defaults to
true as before, but now a user can disable Range usage if it is known that
the involved server does not deal with such requests correctly.
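For example, if a broken proxy or server is involved, Range/If-Range usage
can be switched off either for a single invocation or persistently (the
snippet file name is arbitrary):

    # one-off, on the command line
    apt-get -o Acquire::http::AllowRange=false update

    # or persistently, e.g. in /etc/apt/apt.conf.d/99no-ranges
    Acquire::http::AllowRange "false";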
Fix file:/// vs file:/ hang & https-proxy for http
See merge request apt-team/apt!187
The settings used for unwrapping TLS connections depend on the access
method and hostname we connect to rather than on what we eventually unwrap.
The bug report mentions CaInfo, but all other https settings should also
apply (whether generic or hostname-specific) to an https proxy, even if the
connection we proxy through it is http-only.
Closes: #990555
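A rough configuration sketch of the scenario with made-up host names: an
http-only repository reached through an https:// proxy, where the TLS
options (CaInfo here) are keyed on the proxy host we actually connect to:

    # /etc/apt/apt.conf.d/99proxy (hypothetical)
    Acquire::http::Proxy "https://proxy.example.org:3128";
    # generic TLS setting, also applied when unwrapping the proxy connection ...
    Acquire::https::CaInfo "/etc/ssl/certs/internal-ca.pem";
    # ... as is the hostname-specific variant for the proxy's own name
    Acquire::https::proxy.example.org::CaInfo "/etc/ssl/certs/internal-ca.pem";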
We allow file URIs (and those of other file-based methods) to be given
either as file:///path or as file:/path, but in various places of the
acquire system we perform string comparisons on URIs which do not handle
this and expect the canonical representation produced by our URI code.
That used to be hidden by us quoting and dequoting the URIs in the system,
but as we no longer do this we have to be a bit more careful with the
input.
Ideally we would do fewer of these comparisons, but for now let's be
content with inserting a canonicalisation early on to prevent hangs in
the acquire system.
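The two accepted spellings that should now canonicalise to the same source
(path made up); mixing them across sources and command lines is what used
to trigger the hang:

    # sources.list: both lines refer to the same local repository
    deb file:/srv/local-repo stable main
    deb file:///srv/local-repo stable main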
add pattern to select packages by priority (closes: #989558)
See merge request apt-team/apt!185
Streamline access to barbarian architecture functionality
See merge request apt-team/apt!184
APT is not the place this information should be stored, but it is a good
place to experiment and see what will be (not) needed in the future for a
proper implementation higher up the stack.
This is why "BarbarianArchitectures" is chosen instead of a more neutral
and/or sensible "VeryForeign", and why it isn't readily exported in the API
to other clients for this PoC, as a standard still to be drawn up will
likely require potentially incompatible changes. Having a then outdated and
slightly different implementation block a "good" name would be bad.
The functionality itself has mostly existed (ignoring bugs) since the
introduction of MultiArch, as we always risked encountering packages of
architectures not known to dpkg (forced onto the system, potentially before
MultiArch) and other edge cases we had to deal with somehow.
All this commit really does is allow, via a single config option, what
could previously only be achieved by editing sources.list and some config
options: -o APT::BarbarianArchitectures=foo,bar
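Used for instance as follows, with the placeholder architecture names from
above:

    # simulate an upgrade with 'foo' and 'bar' treated as barbarian architectures
    apt-get -s -o APT::BarbarianArchitectures=foo,bar dist-upgrade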
What does an M-A:allowed package from a non-native/non-foreign architecture
provide? If we look at M-A:foreign, such a package satisfies dependencies
within its own architecture, but not in other architectures, so the same
should apply to :any dependencies on M-A:allowed packages. But we have a
problem: while unqualified package names are architecture-specific, the
virtual package name qualified with :any is not (see 3addaba1ff).
We could of course make it architecture-specific now, but that would
introduce many virtual packages for this relatively minor use case and
would reintroduce a need for special display handling.
So, we pull a trick here: barbarian M-A:allowed packages no longer provide
the architecture-independent :any package, but only a specific one, and
every :any dependency from a barbarian package is rewritten to an or-group
of the specific and the independent :any package.
References: 3addaba1ff
As we don't know which architectures we will deal with, and to avoid
creating many "unneeded" packages (and provides), the cache generation uses
a scheme of on-demand creation (see ecc138f858).
This assumed a particular handling of :any which was changed later
(3addaba1ff), making this code path not only no longer needed for
M-A:allowed, but actually wrong, as it would go on and create provides for
the explicit Provides of a package as if the package were M-A:foreign.
The result was that a package A:amd64 providing B and tagged as M-A:allowed
would satisfy a "C:armel depends on B". Note that this bug does NOT affect
"C:armel depends on A", which is (correctly) not satisfied, as before.
References: ecc138f858, 3addaba1ff
Back when M-A was added to build-dependencies (#558104), only the
qualifiers :native and :any were considered at first, which behave the same
for the native case, so stripping them was a good idea.
Nowadays we could encounter arch-qualified dependencies too, though – or
slightly more likely conflicts perhaps – at least in theory, as in practice
native build-dep operations in Debian and elsewhere wouldn't have other
architectures available anyhow.
Still, we have full support for all this in the crossbuilding case, which
makes active use of it (or at least is far more likely to do so), so it
seems better to converge on one edge case rather than keeping two in active
use and thereby producing potentially different results for not specifying
-a versus specifying -a $native.
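For contrast, a sketch of the crossbuilding invocation that keeps making
active use of architecture qualifiers (the host architecture and source
directory are just examples):

    # resolve Build-Depends for a cross build to armhf instead of the native arch
    apt-get build-dep --host-architecture armhf ./my-source-package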
APT's ad hoc testing framework for integration tests is not intended to be
a general-purpose framework, but with some refactoring it is relatively
easy to abuse it for other projects anyhow, even if that is neither
recommended nor officially supported.
Gbp-Dch: Ignore
Gbp-Dch: Ignore
fullyExplored is needed to keep track of having explored all providers of a
package name, while Marked tracks whether we have explored a given real
package (along with its chosen version), so we should stop MarkPackage from
exploring a (real) package if it is marked and let fullyExplored only guard
the looping over the individual dependencies.
The testcase is deceptively simple, but in practice only an ecosystem like
Rust, which makes heavy use of cyclic dependency relations intermixed with
versioned provides, actually triggers this, as shown by the buggy code
being in use for four months in Debian and Ubuntu development releases.
(It is easier to trigger if most packages are marked as manually
installed.)
Note that the testcase is successful already due to the earlier changes, as
we exit the recursion eventually and all packages are marked as they need
to be already, but this fix does work standalone as well.
Closes: #992993
If the system tells us that a core dump was created we should try to
display the contained info, as that system might not be easily available
when we see the error (as in CI or autopkgtest runs).
Gbp-Dch: Ignore
This delay of 4+2+1=7 seconds is unnecessary.
This is subject to clock skew, unfortunately, as we cannot read
monotonic time in shell.
We check for >=5s out of the 7s it should take to reduce the
risk of skew a bit.
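A sketch of the kind of check meant here, with wall-clock seconds as the
stand-in for monotonic time and a hypothetical test step that is expected
to back off for the 4+2+1 seconds mentioned above:

    before="$(date +%s)"
    run_retrying_download          # hypothetical helper exercising the retries
    after="$(date +%s)"
    test "$((after - before))" -ge 5 || echo 'retry backoff finished suspiciously fast'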
This is very basic support on the testing side: we just test the debug
output, but not how long it actually took. It would be nice to really check
the time.
Restore dpkg::chroot-directory functionality
See merge request apt-team/apt!178
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If we call dpkg inside a chroot we have to ensure that the temporary
directory we construct to call dpkg --recursive is inside the chroot, and
that we strip the path to the chroot from the directory name we pass to
dpkg.
Note that the added test succeeds before and (hopefully) after, as we can't
really chroot here or fiddle with the needed settings because we are
already setting up apt to work with a quasi-chroot. The test perhaps helps
in ensuring we don't break it too much in the future though.
(Having been broken for five years (and one day), this seems to have an
immense user base at the moment, but it might gain one in the future via
mmdebstrap.)
References: f495992428a396e0f98886c9a761a804aa161c68
Reported-By: Johannes Schauer Marin Rodrigues on IRC
Tested-By: Johannes Schauer Marin Rodrigues
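The option at the centre of this, with a made-up chroot path and package
name:

    # run dpkg inside the chroot; the temporary directories built for
    # dpkg --recursive now have to live below this path as well
    apt-get -o Dpkg::Chroot-Directory=/srv/chroots/unstable install foo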
Allow packages from volatile sources to be reinstalled
See merge request apt-team/apt!177
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Just because two packages have the same version number doesn't mean they
are the same package. APT can detect rebuilds and other "inconsistencies",
but we had no explicit test for it so far. It turned out to be the wrong
track in this branch, but as I wrote it already, let's add it at least.
Gbp-Dch: Ignore
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Temporary hacks should be temporary, especially if they hide bugs. After
fixing one in the previous commit this is just busy work to add download
information to the places which check that output.
Gbp-Dch: Ignore
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Volatile sources are parsed after the status file, so if we have a version
already installed its size information is not stored, and a reinstall of
said version is then refused with the claim that the repository is broken.
References: 1412cf51403286e9c040f9f86fd4d8306e62aff2
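The failing scenario in command form: a local .deb handed directly to apt
(hence a volatile source) whose version is already installed (file name
made up):

    apt-get install --reinstall ./foo_1.0-1_amd64.deb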
The error says the repository is broken but doesn't mention which one it
is. The item description at least gives us all the information, but it is
not as nicely formatted. As this message is not even marked for translation
this is a rather temporary affair, and we can survive without the eye candy
for a while.
We just used the returned pointer, which might be nullptr; properly call
BuildSourceList() and check the result first.
Closes: #990518
This code has existed forever, but no other client supports this and
specifications like debian-policy aren't asking for it either. What it does
do is break where all the others keep working: if the filename does in fact
include URI-encoded bits (hopefully no quotes), which is rather unlikely,
but nonetheless possible.
If a source is not copying files to the destination the download code
forces the copy – which in practice means local repositories accessed via
file:/ – but in that process it takes the filename the local repo used
rather than the filename it e.g. advertised via --print-uris.
A local repository could hence overwrite a file in the current directory if
you use 'apt download', which is a rather weak ability, but still.
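For reference, the two operations whose idea of the filename could disagree
here (package name made up):

    apt-get download --print-uris foo   # the URI and filename apt advertises
    apt-get download foo                # must not let a file:/ repo pick the output name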
Keeping URIs encoded in the acquire system depends on having them encoded
in the first place. While many other places got the encoding, 2 out of 3
ArchiveURI implementations were missed, and those are in practice
responsible for nearly all of the URI building; it is just that index
filenames do not contain characters to escape and the Filename fields in
Packages files usually don't either. Usually. Except if you happen to have
e.g. a package featuring an epoch with the colon encoded in the filename.
On the upside, in most repositories the epoch isn't part of the filename.
Reported-By: Johannes 'josch' Schauer on IRC
References: e6c55283d235aa9404395d30f2db891f36995c49
If a package is not installed yet, we do need to apply phasing, as we
otherwise get into weird situations when installing packages:
In the Launchpad bug below, ubuntu-release-upgrader-core was installed, and
hence the phasing for the upgrade to it was applied. However,
ubuntu-release-upgrader-gtk was about to be installed - and hence the
phasing did not apply, causing a version mismatch, because
ubuntu-release-upgrader-gtk from -updates was used, but -core from the
release pocket. Sigh.
An alternative approach to dealing with this issue could be to apply
phasing to all packages within the same source package, which would work in
most cases. However, there might be unforeseen side effects, and it is of
course possible to have = depends between source packages, such as -signed
packages on the unsigned ones for bootloaders.
This problem does not occur in the update-manager implementation of phased
updates, as update-manager only deals with upgrading packages; it does not
install new packages and thus does not see the issue. APT, however, has to
apply phasing more broadly, as you can and often do install additional
packages during upgrade, or upgrade packages during install commands, since
both accept package list arguments and share the same code in the backend.
LP: #1925745
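The phasing behaviour can also be steered explicitly; a sketch, assuming
the Always/Never-Include-Phased-Updates options apt provides for this
purpose:

    # always install phased updates, regardless of the machine's phasing seed
    apt-get -o APT::Get::Always-Include-Phased-Updates=true upgrade

    # or never include updates that are not fully phased yet
    apt-get -o APT::Get::Never-Include-Phased-Updates=true upgrade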
This makes them retriable and brings them more into line with TCP, where a
failed handshake is also a transient error.
LP: #1928100
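Being transient means such failures now fall under the normal retry
handling, e.g.:

    # retry transient fetch errors (now including TLS handshake failures) up to 3 times
    apt-get -o Acquire::Retries=3 update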
This reverts commit 64127478630b676838735b509fec5cdfa36874c8.
Count uninstallable packages in "not upgraded"
See merge request apt-team/apt!169
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If a first step of the solver can figure out that a package is
uninstallable, it might reset the candidate so that later steps are
prevented from exploring this dead end. While that helps the resolver, it
can confuse the display of the found solution, as it will include an
incorrect count of packages not upgraded in this solution.
That was possible before, but happens a fair bit more with the April/May
resolver changes last year, so finally doing proper counting is a good
idea.
Sadly this is a bit harder than just getting the number first and then
subtracting the packages we upgraded from it, as the user can influence
candidates via the command line, and a package which could be upgraded but
is removed instead shouldn't count as not upgraded since we clearly did
something with it. So we keep a list of packages instead of a number, which
also helps in the upgrade commands as those want to show the list.
Closes: #981535
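Concretely, this is about the final count in apt's summary line; the
numbers below are just an illustration:

    2 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.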
Mark only provides from protected versioned kernel packages
See merge request apt-team/apt!168
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
An interactive tool like aptitude needs these flags to be current far more
often than we do: a user can see them in apt only in one very well-defined
place – the autoremove display block – so we don't need to run the marking
up to four times while a normal "apt install" is processed, as that is just
busywork.
The effect on runtime is minimal, as a single run doesn't take too long
anyhow, but it cuts down tremendously on debug output at the expense of
requiring some manual handholding.
This is opt-in so that aptitude doesn't need to change, nor do we need to
change our own tools like "apt list" where it is working correctly as
intended.
A special flag and co are needed as we want even the ActionGroup inside
pkgDepCache::Init to be inhibited already, so we need to insert ourselves
while the DepCache is still in the process of being built.
This is also the reason why the debug output in some tests changed to all
unmarked, but that is fine as the marking could have been already obsoleted
by the actions taken, just inhibited by a proper action group.
Previously the autoremove algorithm would mark a package after exploring it
once, but it could have been that it ignored some providers because they
did not satisfy the (versioned) dependency. A later dependency which they
might satisfy would encounter the package as already marked and hence not
explore the providers anymore, leaving us with internal errors (as in the
contrived new testcase).
This is resolved by introducing a new flag denoting whether we have already
explored every provider and only skipping the exploration if that is true,
which sounds bad but is really not such a common occurrence that it seems
noticeable in practice. It also helps us mark virtual packages as explored
now, which would previously be tried each time they were encountered,
mostly hiding this problem for the (far more common) fully virtual package.
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
An out-of-tree kernel module which doesn't see many new versions can pile
up a considerable number of packages if it is depended on via another
package (e.g. v4l2loopback-utils recommends v4l2loopback-modules), which in
turn can prevent the old kernels from being removed if they happen to have
a dependency on the images.
To prevent this we check if a provider is a versioned kernel package (like
an out-of-tree module) and, if so, check whether that module package is
part of the protected kernel set – if not, it is probably good to go.
We only do this if at least one provider is from a protected kernel though,
so that the dependency remains satisfied (this can happen e.g. if the
module is currently not buildable against a protected kernel).
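The protected kernel set is derived from the APT::VersionedKernelPackages
list (see the stock /etc/apt/apt.conf.d/01autoremove); an out-of-tree
module could hypothetically be registered as an additional name stem, e.g.:

    // illustration only: the stem is made up, check 01autoremove for the existing entries
    APT::VersionedKernelPackages:: "v4l2loopback-modules";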
|
|/
|
|
|
|
|
|
| |
This code can interact with handwritten files which may contain unneeded
commas for ease of writing. As dpkg allows them, we should do so as well.
Reported-By: Arnaud Ferraris <arnaud.ferraris@gmail.com>
References: https://lists.debian.org/debian-devel/2021/03/msg00101.html
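For example, a handwritten stanza like the following, with its trailing
comma, should be accepted (names made up):

    Package: my-metapackage
    Depends: foo,
             bar,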
dpkg 1.20.8 also made --force-remove-essential optional for deconfiguring
essential packages, so let's do this.
Also extend the test case to make sure we actually pass --auto-deconfigure
and do not make any --remove calls, or pass --force-remove to dpkg.
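Roughly, the test asserts that unpacking over an essential package results
in a call like the first line and never anything like the second (package
name and path shortened and made up):

    dpkg --auto-deconfigure --unpack .../essential-foo_2.0_amd64.deb    # expected
    dpkg --remove --force-remove-essential essential-foo                # must not happen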
Ugh, this was super flaky under -j 16 and -j 4, each behaving
in slightly different ways. This seems to be stable now. No
real bug though, all behaviors were OK.
Hook protocol 0.2 makes the new fields we added mandatory, and replaces the
`install` mode with `upgrade`, `downgrade`, and `reinstall` where
appropriate.
Hook negotiation is hacky, but it's the best we can do for now. Users are
advised to upgrade to 0.2.
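Hooks are attached via configuration and speak JSON-RPC with apt over the
file descriptor named in APT_HOOK_SOCKET; a minimal registration sketch
(the path is made up, and the hook point name and protocol details should
be checked against apt's doc/json-hooks-protocol.md):

    # /etc/apt/apt.conf.d/20my-hook (hypothetical)
    AptCli::Hooks::Install:: "/usr/local/bin/my-apt-hook";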
This enables hooks to output additional information.
Provide access to the origins of a package, such that tools
can display information about them; for example, you can write
a hook counting security upgrades.