| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Keeping URIs encoded in the acquire system depends on having them
encoded in the first place. While many other places got the encoding,
2 out of 3 ArchiveURI implementations were missed, and those are in
practice responsible for nearly all of the URI building. It just happens
that index filenames do not contain characters needing escaping and the
Filename fields in Packages files usually don't either. Usually. Except
if you happen to have e.g. a package featuring an epoch with the colon
encoded in the filename. On the upside, in most repositories the epoch
isn't part of the filename.
Reported-By: Johannes 'josch' Schauer on IRC
References: e6c55283d235aa9404395d30f2db891f36995c49
|
|
|
|
|
|
|
|
|
| |
Enable the Acquire::Retries option by default, set to 3.
This will help with slightly unreliable networking; future
work is needed for adding backoff and SRV/IP rotation.
LP: #1876035
Gbp-Dch: full
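A quick sketch of overriding the new default, either one-off or
persistently (the conffile name is arbitrary):
    # one-off on the command line
    apt-get update -o Acquire::Retries=0
    # or persistently in a configuration snippet
    echo 'Acquire::Retries "5";' > /etc/apt/apt.conf.d/80-retries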
|
|
|
|
|
|
|
|
|
| |
Repositories without Size information for packages are not
proper and need fixing. This ensures people see an error in
CI, and get notifications and hence the ability to fix it.
It can be turned off by setting Acquire::AllowUnsizedPackages
to true.
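A hedged sketch of opting out for a repository you cannot fix right
away, using the option named above (package name is a placeholder):
    apt-get install -o Acquire::AllowUnsizedPackages=true somepackage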
|
|
|
|
|
|
|
|
|
|
|
| |
The mirror method can distribute requests for files based on various
metadata bits, but some – the main index files – weren't actually
passing those on to the methods as advertised in the manpage.
This is hidden both by mirror usually falling back to other sources
which will eventually hit the right one and by the fact that, if the
repository does not support by-hash, apt automatically sticks to the
mirror which was used for the Release file.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Especially in small sections of an archive it can happen that an index
returns to a previous state (e.g. if a package was first added and then
removed with no other changes happening in between). The result is that
we have multiple patches starting from the same hash, which is no
problem if we perform clientside merging, although not ideal as we
perform needless work.
For serverside merging it would not matter, but rred previously refused
to merge zero-size patches while dak ignored that failure and kept
carrying these size-zero patches until they naturally expire, so we run
into a problem: these broken patches won't do and force us to fall back
to downloading the entire index. By always starting from the last patch
instead of the first one with the starter hash we can avoid this problem
and behave optimally in clientside merge cases, too.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The rred method expects the patches to have a certain name, so we have
to rename the downloaded files before calling the method. By delaying
that rename we ensure that if the download of one of them fails and a
successful fallback occurs, they are all properly cleaned up as no
longer useful, while in the error case the next apt run can potentially
pick them up as already downloaded.
Our test-pdiff-usage test was encountering this every other run, but did
not fail as the check for unaccounted files in partial/ was wrapped
in a subshell, so the failure produced failing output but did not
change the exit code.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
APT tries to detect if applying patches is more costly than just
downloading the complete index by combining the sizes of the patches.
That is correct for client-side merging, but for server-side merging
we actually don't know if we will jump directly from old to current or
still have intermediate steps in between.
With this commit we assume it will be a direct jump from old to current,
as that is what dak implements, and it seems reasonable that if you go to
the trouble of server-side merging the server does the entire merging in
one file instead of leaving additional work for the client to do.
Note that this just changes the heuristic to prevent apt from discarding
patches as uneconomic in the now more common single-merged-patch style;
it still supports multiple merged patches as before.
To resolve this cleanly we would need another field in the index file
declaring which hash we will arrive at if a patch is applied (or a field
differentiating between these merged-patch styles at least), but that
seems like overkill for now.
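Not part of this change, but handy when comparing both behaviours: pdiff
usage can be turned off entirely via the long-standing Acquire::PDiffs
option, forcing a full index download:
    apt-get update -o Acquire::PDiffs=false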
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit potentially breaks code feeding apt an encoded URI using a
method which does not get URIs sent encoded. The webserverconfig
requests in our tests are an example of this – but they only worked
before if the server was expecting double encoding, as that was what
was happening to an encoded URI: so this was unlikely to work as
expected in practice.
Now with the new methods we can drop this double encoding and rely on
the URI being passed properly (and without modification) between the
layers so that passing in encoded URIs should now work correctly.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We do not deal a lot with URIs which need encoding, but when we do it is
a pain that we store them decoded in the acquire system, as it means we
have to decode and re-encode URIs eventually, which potentially gives
us slightly different URIs.
We see that in our own testing framework while setting up redirects, as
the config options are effectively double-encoded and decoded to pass
them around successfully, as otherwise %2f and / in a URI are treated
the same.
This commit adds the infrastructure for methods to opt into getting URIs
sent in encoded form (and returning them to us in encoded form, too) so
that we eventually do not have to touch the URIs, which is how it should
be. This means though that we have to deal with methods which do not
support this yet (aka: all of them at the moment), for which we decode
and encode while communicating with them.
|
|
|
|
| |
Closes: #931566
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
This experiment did not turn out well, as some servers do not
accept credentials when none are expected and fail the request, so you
cannot mirror such a repository.
This reverts commit c2b9b0489538fed4770515bd8853a960b13a2618.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
While passing the combined Release and Release.gpg to the gpgv method
for verification, the filename of Release is placed where the rest of
the code usually assumes Release.gpg to be. The "usual" cases like
passing verification and failing verification with an error take care
of this, but the code path dealing with a failed verification whose
failure is ignored (e.g. due to trusted=yes) did not, which results in
the wrong file being removed later on (in case the index happens to be
unmodified since the last update call), leading us into the abyss of
strange failures (fixed in the previous commit) where nothing should
have changed.
This is not a security issue in this form as the repository needs to fail
verification & the user must force apt to ignore the failure and carry on
anyhow. It does show however how complicated the code and its various
interconnected paths can become.
Reported-By: Val "pinkieval" Lorentz on IRC
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If we have no old Release file but old indices, we can't compare
hashsums with the new Release file and hence must request the indices
again and react to IMS hits if they didn't change.
We used to symlink the old index file to the partial directory, but that
usually meant that we linked an uncompressed file to a compressed
filename, which not all uncompressors can deal with transparently,
resulting in strange failures.
We could do without the symlink, but that would require changes in the
codepaths dealing with failure as they would rename the file to FAILED.
|
|
|
|
|
|
|
|
| |
Unused variable, std::algorithms instead of raw for-loops.
There should be no observable difference in behaviour.
Reported-By: cppcheck
Gbp-Dch: Ignore
|
|
|
|
| |
Reported-By: cppcheck
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This new field allows a repository to declare that access to
packages requires authorization. The current implementation will
set the pin to -32768 if no authorization has been provided in
the auth.conf(.d) files.
This implementation is suboptimal in two aspects:
(1) A repository should behave more like NotSource repositories.
(2) We only have the host name for the repository; we cannot use
paths yet.
We can fix both after an ABI break.
The code also adds a check to acquire-item.cc to not use the
specified repository as a download source, mimicking NotSource.
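A minimal sketch of providing credentials via the auth.conf(.d)
mechanism mentioned above, so the pin is not applied (host name and
credentials are placeholders):
    cat > /etc/apt/auth.conf.d/example.conf <<EOF
    machine example.org
    login someuser
    password somepassword
    EOF
    chmod 600 /etc/apt/auth.conf.d/example.conf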
|
|
|
|
|
|
|
| |
Telling the acquire system which keys caused the gpgv method to
succeed allows us, for now, just a casual check that the gpgv method
really executed, catching bugs like CVE-2018-0501, but we will make use
of the information for better features in the following commits.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
APT in 1.6 saw me rewriting the mirror:// transport method, which works
comparably to the decommissioned httpredir.d.o, "just" that apt requests
a mirror list and performs all the redirections internally, with all the
bells and whistles like parallel download and automatic fallback (more
details in the apt-transport-mirror manpage included in the 1.6 release).
The automatic fallback is the problem here: The intent is that if a file
fails to be downloaded (e.g. because the mirror is offline, broken,
out-of-sync, …), instead of erroring out the next mirror in the list is
contacted for a retry of the download.
Internally the acquire process of an InRelease file (works with the
Release/Release.gpg pair, too) happens in steps: 1) download the file and
2) verify the file, both handled as URL requests passed around. Due to an
oversight the fallbacks for the first step were still active for the
second step, so that a successful download from another mirror stands
in for the failed verification… *facepalm*
Note that the attacker cannot tell from the request arriving for the
InRelease file whether the user is using the mirror method or not. If the
entire traffic is observed, Eve might be able to see the request for
a mirror list, but that might or might not reveal whether the following
requests for InRelease files will be based on that list or on another
sources.list entry not using mirror (users also have the option to keep
the mirror list locally, via e.g. mirror+file://, instead of on a remote
host). If the user isn't using mirror:// for this InRelease file, apt
will fail very visibly as intended.
(The mirror list needs to include at least two mirrors and to work
reliably the attacker needs to be able to MITM all mirrors in the list.
For remotely accessed mirror lists this is no limitation as the attacker
is in full control of the file in that case)
Fixed by clearing the alternatives after a step completes (and moving a pimpl
class further to the top to make that valid compilable code). mirror://
is at the moment the only method using this code infrastructure (for all
others this set is already empty) and the only method-independent user
so far is the download of deb files, but those are downloaded and
verified in a single step; so there shouldn't be much opportunity for
regression here even though a central code area is changed.
Upgrade instructions: Given that all apt-based frontends are affected,
that even additional restrictions like signed-by are bypassed, that the
attack in progress is hardly visible in the progress reporting of an
update operation (the InRelease file is marked "Ign", but no fallback to
"Release/Release.gpg" is happening), and that it leaves no trace (except
files downloaded from the attacker's repository of course), the best
course of action might be to change the sources.list to not use the
mirror family of transports ({tor+,…}mirror{,+{http{,s},file,…}}) until
a fixed version of the src:apt packages is installed.
Regression-Of: 355e1aceac1dd05c4c7daf3420b09bd860fd169d,
57fa854e4cdb060e87ca265abd5a83364f9fa681
LP: #1787752
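A rough sketch of that workaround (list path, suite and mirror host are
placeholders, not taken from the advisory):
    # an affected entry would look roughly like:
    #   deb mirror+file:///etc/apt/mirrorlist.txt stable main
    # temporarily point at a single mirror directly instead:
    echo 'deb http://deb.debian.org/debian stable main' > /etc/apt/sources.list.d/direct.list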
|
|
|
|
|
|
| |
The random_device fails if not enough entropy is available. We do
not need high-quality entropy here, though, so let's switch to a
seed based on the current time in nanoseconds, XORed with the PID.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit d7c92411dc1f4c6be098d1425f9c1c075e0c2154 introduced a warning for
non-existent files from components not mentioned in Components, to hint
users at a misspelling or the disappearance of a component.
The debian-installer subcomponent isn't actively advertised in the
Release file though, so if apt ends up acquiring a file which
doesn't exist for this component (like Translation files) apt would
produce a warning:
W: Skipping acquire of configured file
'main/debian-installer/i18n/Translation-en' as repository
'http://deb.debian.org/debian buster InRelease' doesn't have the
component 'main/debian-installer' (component misspelt in sources.list?)
We prevent this in the future by checking if any file exists from this
component, which results in the warning still being produced for the
intended cases while silencing it in the d-i case.
This could potentially cause the warning not to be produced in cases
where it should be, if some marginal file remains, but as this message is
just a hint and the setup a bit pathological, let's ignore it for now.
There is also the possibility of having no file present at all, as they
would all be 0-length files in this "hidden" component, but that would be
easy to work around from the repository side and isn't really actively
used in the wild at the moment.
Closes: #879591
|
|
|
|
|
|
|
|
| |
Downloading from the same mirror we got a Release file from makes sense
for non-unique URIs as their content changes between mirror states, but
if we ask for an index via by-hash we can be sure that we either get the
file we wanted or a 404, for which we can perform a fallback, which
allows us to pull indexes from different mirrors in parallel.
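For testing this behaviour it can help to force by-hash acquisition; a
hedged example, assuming the documented Acquire::By-Hash option:
    apt-get update -o Acquire::By-Hash=force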
|
|
|
|
|
|
|
| |
Individual items shouldn't concern themselves with these alternative
locations, we can deal with this more efficiently within the
infrastructure created for other alternative URIs now avoiding the need
to implement this in each item.
|
|
|
|
|
|
|
|
|
|
|
| |
If we got a file but it produced a hash error, a mismatched size or
similar we shouldn't fall back to alternative URIs as they will likely
result in the same error. If we can we should instead use another mirror.
We used to be a lot stricter by stopping all tries for this file if we
got a non-404 (or a hash-based) failure, but that is too hard as we
really want to try other mirrors (if we have them) in the hope that they
have the expected and correct files.
|
|
|
|
| |
Prompted-by: Jakub Wilk <jwilk@debian.org>
|
|
|
|
|
| |
Reported-By: codespell & spellintian
Gbp-Dch: Ignore
|
|\
| |
| |
| |
| | |
Check that Date of Release file is not in the future
See merge request apt-team/apt!3
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
By restricting the Date field to be in the past, an attacker cannot
just create a repository from the future that would be accepted as
a valid update for a repository.
This check can be disabled by setting Acquire::Check-Date to false. This
will also disable Check-Valid-Until and any future date-related checking,
if any – the option means: "my computer's date cannot be trusted."
Modify the tests to allow repositories to be up to 10 hours in the
future, so we can keep using hours there to simulate time changes.
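A short sketch of opting out on a machine whose clock is known to be
wrong, using the option named above:
    apt-get -o Acquire::Check-Date=false update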
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
| |
The interesting takeaway here is perhaps that 'chmod +w' is affected by
the umask – obvious in hindsight of course. The usual setup helps with
hiding that applying that recursively on all directories (and files)
isn't correct. Ensuring files will not be stored with the wrong
permissions even in strange umask contexts is trivial in comparison.
Fixing the test also highlighted that it wasn't bulletproof as apt will
automatically fix the permissions of the directories it works with, so
for this test we actually need to introduce a shortcut in the code.
Reported-By: Ubuntu autopkgtest CI
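A small shell demonstration of that takeaway (file name arbitrary):
    umask 222                  # umask forbids the write bits
    touch demo && chmod -w demo
    chmod +w demo              # '+w' without a who-prefix is filtered through the umask: no effect
    chmod u+w demo             # an explicit who-prefix is not affected by the umask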
|
|
|
|
|
|
|
|
|
|
| |
apt 1.6~alpha6 introduced aux requests to revamp the implementation of
a-t-mirror. This already included the potential of running as non-root,
but the detection wasn't complete, resulting in errors, or could produce
spurious warnings along the way if the directory didn't exist yet.
References: ef9677831f62a1554a888ebc7b162517d7881116
Closes: 887624
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Allow specifying an alternative path to the InRelease file, so
you can have multiple versions of a repository, for example.
Enabling this option disables fallback to Release and Release.gpg,
so setting it to InRelease can be used to ensure that only that
will be tried.
We add two test cases: One for checking that it works, and another
for checking that the fallback does not happen.
Closes: #886745
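A hedged sketch of such an entry – the 'inrelease-path' option name is
an assumption on my side, not spelled out in the message above, and the
host, suite and path are placeholders:
    echo 'deb [inrelease-path=snapshots/20180101/InRelease] http://example.org/debian stable main' \
      > /etc/apt/sources.list.d/snapshot.list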
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A method may need a file to operate, e.g. mirror needs to get a list
of mirrors before it can redirect the actual requests to them. That
could easily be solved by moving the logic into libapt directly, but by
allowing a method to request other methods to do something we can keep
this logic contained in the method and also allow e.g. methods which
perform binary patching or similar things.
Previously they would need to implement their own acquire system inside
the existing one, which in all likelihood will not support the same
features and methods nor operate with similar security compared to what
we have already running 'above' the requesting method. That said, to
avoid methods producing conflicts with "proper" files we are downloading,
a new directory is introduced to keep the auxiliary files in.
[The message magic number 351 is a tribute to the German Grundgesetz,
article 35 paragraph 1, which defines that all authorities of the
state(s) help each other on request.]
|
|
|
|
|
|
|
|
|
|
|
| |
Earlier gcc versions used to complain that you should add them, although
there isn't a lot of point to it if you think about it, but now gcc (>= 8)
complains about the attribute being present:
warning: ‘pure’ attribute on function returning ‘void’ [-Wattributes]
Reported-By: gcc -Wattributes
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
| |
For deb files we always supported falling back from one server to the
other if one failed to download the deb, but that was hardwired in the
handling of this specific item. Moving this alongside the retry
infrastructure we can implement it for all items and allow methods to
use this as well by providing additional URIs in a redirect.
|
|
|
|
|
|
|
|
| |
We have quite a bit of metadata available for the files we acquire, but
the methods weren't told about it and got just the URI. That is indeed
fine for most of them, but to avoid methods trying to parse the metadata
out of the provided URIs (and failing horribly in edge cases) we can just
as well be nice and tell them the details directly.
|
|
|
|
|
|
|
| |
Moving the Retry-implementation from individual items to the worker
implementation not only gives every file retry capability instead of
just a selected few but also avoids needing to implement it in each item
(incorrectly).
|
|
|
|
|
|
|
|
|
|
| |
gcc has problems understanding this construct and additionally thinks it
would produce multiple lines and stuff, so to keep using it isn't really
worth it for the few instances we have: We can just write the long form
there which works better.
Reported-By: gcc
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If an InRelease file fails to download with a non-404 error
we assumed there is some general problem with the repository, like
a web portal, or that you are blocked from access (wrong auth, Tor, …).
Turns out some servers like S3 return 403 if a file doesn't exist.
Allowing this in general seems like a step backwards, as 403 is a
reasonable response if auth failed, so failing here seems better
than letting those users run into problems.
What we can do is show our insecure warnings through and allow the
failures for insecure repos: If the repo is signed it is easy to add
an InRelease file, and if not you are set up for trouble anyhow.
References: cbbf185c3c55effe47f218a07e7b1f324973a8a6
|
|
|
|
| |
Regression-Of: cc1f94c95373670fdfdb8e2d6cf9125181f7df0c
|
|
|
|
|
| |
As a follow up to the last commit, let's replace APT_CONST
with APT_PURE everywhere to clean stuff up.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The comment says this is intended, but looking at the history reveals
that the comment comes from a different era. Nowadays we don't really
need it anymore (and even back then it was disputable) as we haven't
used that file for our update in the end and nothing really needs this
file after the update.
This is triggered by 188f297a2af4c15cb1d502360d1e478644b5b810, which
moves various error conditions forward, including this code expecting the
file to exist – but it doesn't need to, as the download could have failed.
We could fix that by simply checking if the file exists and only staging
it if it does, but instead we don't stage it and even rename it
out of the way with our conventional FAILED name (if it exists).
That restores support for partial mirrors (= in this case mirrors which
don't ship pdiff files). Note that apt heals itself even if only such a
mirror is used, as the update is successful even if that error is shown.
Closes: 869425
|
|
|
|
|
|
|
|
| |
RenameOnError does the rename already, so the check for existence will
always fail, making these two lines of code completely harmless but also
completely pointless; we are better off removing them.
Gbp-Dch: Ignore
|
|\
| |
| |
| |
| |
| | |
Minor grammar fix
[jak@d.o: Fixed up po/]
|
| |
| |
| | |
Modified the wording of an error message when a repository no longer has a release file.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This makes it easier to see which headers include what.
The changes were done by running
git grep -l '#\s*include' \
| grep -E '.(cc|h)$' \
| xargs sed -i -E 's/(^\s*)#(\s*)include/\1#\2 include/'
to modify all include lines by adding a space, and then running
./git-clang-format.sh.
|
| |
| |
| |
| |
| | |
Including cacheiterators.h before pkgcache.h fails because
pkgcache.h depends on cacheiterators.h.
|
| |
| |
| |
| |
| |
| |
| | |
Progress only shows if we have an idea of how many files we will
acquire, but if a transaction fails before we have got an idea, we ended
up never showing progress even though we know that a failed transaction
will not download additional files.
|
| |
| |
| |
| |
| |
| |
| |
| | |
Having messages printed on the error stack and confirming them via
commandline flags is an okayish first step, but some frontends will
probably want a more interactive feel here, with a proper
question the user can just answer yes/no to, as for some frontends a
commandline flag makes no sense…
|