Assigning the result of AllocateInMap directly to Ver->d caused Ver->d
to be resolved first, and hence if Ver was remapped during the
AllocateInMap call, we were assigning to the old location.
Closes: #980037
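
A minimal, self-contained sketch of the hazard, with a std::vector standing
in for apt's dynamic MMap (names are illustrative, not apt's actual code):

    #include <cstddef>
    #include <vector>

    std::vector<std::size_t> pool; // stand-in for the remappable cache

    std::size_t &Slot(std::size_t i) { return pool[i]; }

    std::size_t AllocateInMap()
    {
        pool.push_back(0); // may reallocate ("remap"), invalidating references
        return pool.size() - 1;
    }

    void Bad(std::size_t i)
    {
        // Before C++17 the two operands are unsequenced: Slot(i) may be
        // evaluated first, yielding a reference into the old buffer.
        Slot(i) = AllocateInMap();
    }

    void Good(std::size_t i)
    {
        auto const v = AllocateInMap(); // allocation (and any remap) completes first
        Slot(i) = v;                    // the reference is taken afterwards
    }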
Misc fixes
See merge request apt-team/apt!152
We interpreted "cannot detect chroot" as "not a chroot", but it's
arguably the better idea to treat it as a chroot, to avoid new behavior
from phased updates in situations where it's unclear (e.g. no /proc
mounted).
In case we did not find any kernels to protect, the regular expression
will be empty, and calling substr(1) on it will fail.
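
A sketch of the guard, assuming the expression is assembled with a leading
"|" separator (hypothetical helper, not apt's exact code):

    #include <string>
    #include <vector>

    std::string BuildProtectRegex(std::vector<std::string> const &Kernels)
    {
        std::string Regex;
        for (auto const &K : Kernels)
            Regex += "|" + K;
        if (Regex.empty())      // nothing to protect: substr(1) on ""
            return Regex;       // would throw std::out_of_range
        return Regex.substr(1); // drop the leading "|"
    }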
Closes: #979725
No idea why we don't have a manual page syntax check (what
prepare-release post-build does) in CI. We should fix that eventually.
Gbp-Dch: ignore
Pu/small fixes
See merge request apt-team/apt!151
Gbp-Dch: ignore
|
| |
| |
| |
| | |
Gbp-Dch: ignore
Implement update --error-on=any
See merge request apt-team/apt!150
People have been asking for a feature to error out on transient network
errors for a while; this provides one, while keeping the door open for
other modes we may need, such as --error-on=no-success, which we need to
determine when to retry the daily update job.
Closes: #594813
(and a whole bunch of duplicates...)
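
For example, `apt update --error-on=any` should exit with a failure status
if any fetch failed, including transient network errors that a plain
`apt update` merely warns about.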
Add support for Phased-Update-Percentage
See merge request apt-team/apt!129
If we have different binNMU versions on different architectures,
we don't want madness to ensue.
This is a change from how update-manager does things, as Ubuntu does not
have binNMUs, but I believe it's the right thing to do for a generic
solution.
This adds support for Phased-Update-Percentage by pinning
upgrades that are not to be installed down to 1.
The output of policy has been changed to add the level of
phasing, and documentation has been improved to document
how phased updates work.
The patch detects if it is running in a chroot, and if so, always
includes phased updates, restoring classic apt behavior to avoid
behavioral changes on buildd chroots.
Various options are added to control all of this (see the sketch after
this list):
* APT::Get::{Always,Never}-Include-Phased-Updates and their legacy
  update-manager equivalents to always or never include phased updates
* APT::Machine-ID can be set to a UUID string to have all machines in a
  fleet phase the same
* Dir::Etc::Machine-ID is odd in that its default is sort of like
  ../machine-id, but not really, as ../machine-id would look up
  $PWD/../machine-id rather than a path relative to Dir::Etc; but it
  allows you to override the path to machine-id (as opposed to the value)
* Dir::Bin::ischroot is the path to the ischroot(1) binary, which is used
  to detect whether we are running in a chroot.
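
As a hedged illustration (the values are made up), the knobs above could be
set in apt.conf like this:

    // Opt this machine out of phasing entirely:
    APT::Get::Always-Include-Phased-Updates "true";
    // Or phase a whole fleet together by sharing one UUID:
    APT::Machine-ID "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee";
    // Override where machine-id is read from and how chroots are detected:
    Dir::Etc::Machine-ID "/etc/machine-id";
    Dir::Bin::ischroot "/usr/bin/ischroot";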
Only autoremove kernels in apt(8); respect --no-auto-remove
See merge request apt-team/apt!149
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Automatically removing kernels in apt-get could be unexpected, so limit
it to apt for now. To handle --no-auto-remove correctly, rewrite the
hack that makes apt ignore APT::Get::AutomaticRemove options from config
files such that it unsets the option.
This then means we can do FindB("APT::Get::AutomaticRemove", true) as the
default for APT::Get::AutomaticRemove::Kernels and get the behavior we
want: If you set --no-auto-remove, it is respected as that FindB returns
false; if you don't set it, it will be true.
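
The lookup described above boils down to this pattern (a sketch against
apt's Configuration API):

    // The inner FindB yields false if --no-auto-remove was given, and the
    // default true otherwise (config-file values having been unset).
    bool const AutoRemoveKernels =
        _config->FindB("APT::Get::AutomaticRemove::Kernels",
                       _config->FindB("APT::Get::AutomaticRemove", true));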
Make immediate configuration optional
See merge request apt-team/apt!148
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The benefits of immediate configuration are that Essential packages
will be configured immediately, so if they wrongly do not work while
unconfigured, they won't cause later packages to fail.
However, we've reached the point where dependencies on the essential set
are too complex for immediate configuration to always work, causing
installations to error out at the end despite having succeeded, because
we did not correctly return the error here and did not check for pending
errors before running dpkg.
Given that, at the end, we check and configure any packages that have
not been configured yet, or fail if we can't configure them, making
immediate configuration optional is the best way forward: it orders as
it does now, but no longer spuriously fails after having successfully
installed everything.
Closes: #973305, #188161, #211075, #649588
LP: #1871268
Bump codenames to bullseye/hirsute and adjust -security codename
See merge request apt-team/apt!147
Closes: #969932
?depends patterns and friends
See merge request apt-team/apt!146
This was easy.
These match the target package, not the target versions, which is
slightly unfortunate but might make sense. Maybe we should add a variant
that matches versions instead.
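
As an illustrative invocation, `apt list '?depends(?name(^apt$))'` selects
packages whose dependencies target a package named apt, regardless of which
versions would satisfy them.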
Be compatible with Bash
See merge request apt-team/apt!142
|
| | |
| | |
| | |
| | |
| | |
| | | |
On many distributions, /bin/sh is Bash. Bash’s `echo` builtin doesn’t
interpret escape sequences, so most tests fail. Fix this by removing
the escape sequence.
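
For instance, `echo "a\tb"` prints a real tab under dash but a literal `\t`
under Bash; spelling the character out directly (or using printf) behaves
the same under both.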
Determine autoremovable kernels at run-time
See merge request apt-team/apt!138
This fixes a problem on Ubuntu systems where the /boot partition
has been sized to manage 3 kernels, but does not really work with 4
kernels, which was causing problems all over the place.
Kernels clutter /boot, and /boot is small, so we need to take extra care
to remove kernels when possible.
Our kernel autoremoval helper script protects the currently booted
kernel, but it only runs whenever we install or remove a kernel,
causing it to protect the kernel that was booted at that point in time,
which is not necessarily the same kernel as the one that is running
right now.
Reimplement the logic in C++ such that we can calculate it at run-time:
provide a function to produce a regular expression that matches all
kernels that need protecting, and change the default root-set function
in the DepCache to make use of that expression.
Note that the code groups the kernels by version as before, and then
marks all kernel packages with the same version.
This optimized version inserts a virtual package $kernel into the cache
when building it, to avoid having to iterate over all packages in the
cache to find the installed ones, significantly improving performance at
a minor cost when building the cache.
LP: #1615381
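
A sketch of the grouping idea (hypothetical names; the real code also
consults the booted kernel and works on the cache directly):

    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // Map each kernel version to its installed packages, keep the versions
    // worth protecting, and emit one alternation regex over the packages
    // (assumes the names are already regex-safe).
    std::string ProtectRegex(
        std::map<std::string, std::vector<std::string>> const &PkgsByVersion,
        std::set<std::string> const &KeepVersions)
    {
        std::string Regex;
        for (auto const &Group : PkgsByVersion)
            if (KeepVersions.find(Group.first) != KeepVersions.end())
                for (auto const &Pkg : Group.second)
                    Regex += "|^" + Pkg + "$";
        return Regex.empty() ? Regex : Regex.substr(1);
    }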
This avoids the cost of setting up the function every time
we mark and sweep.
aptmethod: fix HTTP->HTTPS request sequences
See merge request apt-team/apt!140
The "last connection" cache is currently being stored and looked up on
the combination of (LastHost, LastPort). However, these are not what the
arguments to getaddrinfo() were on the first try: the call is to
getaddrinfo(Host, ServiceNameOrPort, ...), i.e. with the port *or if 0,
the service name* (e.g. http).
Effectively this means that the connection cache lookup for:
https://example.org/... i.e. Host = example.org, Port = 0, Service = http
would end up matching the "last" connection of (if existed):
https://example.org/... i.e. Host = example.org, Port = 0, Service = https
...and thus performing a TLS request over an (unrelated) port 80
connection. Therefore, an HTTP request, followed up by an (unrelated)
HTTPS request to the same server, would always fail.
Address this by using as the cache key the ServiceNameOrPort, rather
than Port.
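
The gist of the fix, as a sketch (member names are illustrative):

    #include <string>

    // Cache the exact string handed to getaddrinfo(), not the raw port,
    // so "http" and "https" connections to the same host never alias.
    std::string LastHost;
    std::string LastService; // service name ("http"/"https") or port number

    bool CanReuseConnection(std::string const &Host,
                            std::string const &ServiceNameOrPort)
    {
        return Host == LastHost && ServiceNameOrPort == LastService;
    }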
Convert the fixed-size (300) char array "ServStr" to a std::string, and
simplify the code by removing snprintfs in the process.
While at it, rename it to the more aptly named "ServiceNameOrPort" and
update the comment to reflect what this variable is meant to be.
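
In effect (a sketch, not the literal diff):

    #include <string>

    // Before: char ServStr[300]; snprintf(ServStr, sizeof(ServStr), "%u", Port);
    // After: the string getaddrinfo() receives, the port if set, else the
    // scheme-derived service name such as "http".
    std::string MakeServiceNameOrPort(std::string const &Scheme, unsigned int Port)
    {
        return Port == 0 ? Scheme : std::to_string(Port);
    }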
ServerState->Comp() is used by the HTTP method's main loop to check
whether a connection can be reused, or whether a new one is needed.
Unfortunately, the current implementation only compares the Host and
Port between the ServerState's internal URI and a new URI. However,
these are URIs, and therefore Port is 0 when a URI port is not
specified, i.e. in the most common configurations.
As a result, a ServerState for http://example.org/... will be reused for
URIs of the form https://example.org/..., as both Host (example.org) and
Port (0) match. In turn this means that GET requests will happen over
port 80, in cleartext, even for those https URLs(!).
URI acquires for an http URI and subsequently for an https one, in the
same aptmethod session, do not typically happen with apt as the
frontend, as apt opens a new pipe with the "https" aptmethod binary
(nowadays a symlink to http), which is why this hasn't been a problem in
practice and has eluded detection so far. It does happen in the wild
with other frontends (e.g. reprepro), and it is legitimately odd and
surprising behavior on apt's end.
Therefore add a comparison of the URI's "Access" (= the scheme) in
addition to Host and Port, to ensure that we're not reusing the same
state for multiple different schemes.
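
A sketch of the adjusted comparison, using the URI fields named above (the
real method lives on the concrete ServerState subclasses):

    // Reuse a server state only if scheme, host, and port all match.
    bool ServerState::Comp(URI Other) const
    {
        return ServerName.Access == Other.Access &&
               ServerName.Host == Other.Host &&
               ServerName.Port == Other.Port;
    }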
See merge request apt-team/apt!144
Closes: #977938
Use encoded URIs in the acquire system
See merge request apt-team/apt!139
This commit potentially breaks code feeding apt an encoded URI using a
method which does not get URIs sent encoded. The webserverconfig
requests in our tests are an example of this, but they only worked
before if the server was expecting a double encoding, as that was what
was happening to an encoded URI: so this was unlikely to work as
expected in practice.
Now with the new methods we can drop this double encoding and rely on
the URI being passed properly (and without modification) between the
layers, so that passing in encoded URIs should now work correctly.
Every method opts in to getting the encoded URI passed along while
keeping compatibility in case we are operated by an older acquire
system. Effectively this is just a change for the http-based methods, as
the others just decode the URI since they work with files directly.
We do not deal a lot with URIs which need encoding, but when we do, it
is a pain that we store them decoded in the acquire system, as it means
we have to decode and re-encode URIs eventually, which potentially gives
us slightly different URIs.
We see that in our own testing framework while setting up redirects: the
config options are effectively double-encoded and decoded to pass them
around successfully, as otherwise %2f and / in a URI are treated the
same.
This commit adds the infrastructure for methods to opt into getting URIs
sent in encoded form (and returning them to us in encoded form, too) so
that we eventually do not have to touch the URIs at all, which is how it
should be. This means though that we have to deal with methods that do
not support this yet (i.e. all of them at the moment), for which we
decode and encode while communicating with them.
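
A hedged sketch of the compatibility shim (PercentDecode stands in for
apt's decoding helper; the opt-in flag is illustrative):

    #include <cstddef>
    #include <string>

    // Minimal percent-decoder, standing in for apt's real helper.
    std::string PercentDecode(std::string const &S)
    {
        std::string Out;
        for (std::size_t i = 0; i < S.size(); ++i)
            if (S[i] == '%' && i + 2 < S.size())
            {
                Out += static_cast<char>(std::stoi(S.substr(i + 1, 2), nullptr, 16));
                i += 2;
            }
            else
                Out += S[i];
        return Out;
    }

    // Hand a URI to a method: untouched if it opted in, decoded otherwise.
    std::string UriForMethod(std::string const &EncodedUri, bool WantsEncoded)
    {
        return WantsEncoded ? EncodedUri : PercentDecode(EncodedUri);
    }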
Our http method encodes the URI again, which results in the double
encoding we have to unwrap in the webserver (we did already, but we now
skip the filename handling, which did the first decode).
Unroll pkgCache::sHash 8 times and break up the dependency between
the iterations by expanding the calculation
  H(n) = 33 * H(n-1) + c
8 times rather than performing it 8 times. This seems to yield about
a 0.4% performance improvement.
I tried unrolling 4 and 2 bytes as well, those only having 3 ifs at
the end rather than 1 small loop; but that was actually slower;
potentially the code got too large and the cache went bonkers.
I also tried unrolling 4 times instead of 8, thinking that smaller
code might then yield better results overall, but that was slower as
well.
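
The shape of the unrolled loop, as a sketch (apt's seed and case-folding
details differ):

    #include <cstddef>
    #include <cstdint>

    static std::uint32_t sHash(char const *Str, std::size_t Len)
    {
        std::uint32_t H = 5381; // illustrative seed
        // The compiler folds each 33^k product into a constant, so the
        // eight updates become independent multiply-adds instead of a
        // serial chain of H = 33 * H + c.
        for (; Len >= 8; Str += 8, Len -= 8)
            H = 33u*33u*33u*33u*33u*33u*33u*33u * H
              + 33u*33u*33u*33u*33u*33u*33u * static_cast<unsigned char>(Str[0])
              + 33u*33u*33u*33u*33u*33u * static_cast<unsigned char>(Str[1])
              + 33u*33u*33u*33u*33u * static_cast<unsigned char>(Str[2])
              + 33u*33u*33u*33u * static_cast<unsigned char>(Str[3])
              + 33u*33u*33u * static_cast<unsigned char>(Str[4])
              + 33u*33u * static_cast<unsigned char>(Str[5])
              + 33u * static_cast<unsigned char>(Str[6])
              + static_cast<unsigned char>(Str[7]);
        for (; Len != 0; ++Str, --Len)
            H = 33u * H + static_cast<unsigned char>(*Str); // leftover bytes
        return H;
    }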