Change the logic to use "Valid" instead of "Good" to determine
whether we need to fall back and whether the fallback was successful.
That means that if you have an expired key in trusted.gpg.d and a
non-expired one in trusted.gpg, verification will now fail directly
with the expired key in trusted.gpg.d instead of trying to fall back.
Likewise, if the key in trusted.gpg is expired, this will now
also be reported correctly again, instead of producing an error
message that the key could not be found.
|
If a repository is signed with multiple keys, apt 2.4.0 would
ignore the fallback result if some keys were still missing,
causing signature verification to fail.
Rework the logic such that when checking whether the fallback was
"successful", missing keys are ignored - it only matters whether we
managed to verify one more key now, whether good or bad.
Likewise, simplify the logic for when to do the fallback:
If there was a bad signature in trusted.gpg.d, do NOT fall back at all
- this is a minor security issue, as a key in trusted.gpg.d could
fail silently with a bad signature, and then a key in trusted.gpg
might allow the verification to succeed (as the trusted.gpg.d key is
then merely missing).
Only fall back if we are missing a good signature and there are
keys we have not yet checked.
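A hedged sketch of the two reworked decisions, using illustrative names (apt's real code tracks good, bad, and missing signers in separate lists rather than a result vector):
```cpp
#include <vector>

// Hypothetical per-signature verification outcome.
enum class SigResult { Good, Expired, Revoked, Bad, MissingKey };

// Fall back to trusted.gpg only when it could still help: no good
// signature yet, no bad signature that the fallback would hide, and at
// least one signature whose key we have not found so far.
static bool ShouldFallback(std::vector<SigResult> const &Results)
{
   bool Good = false, Bad = false, Missing = false;
   for (auto const R : Results)
   {
      Good = Good || R == SigResult::Good;
      Bad = Bad || R == SigResult::Bad;
      Missing = Missing || R == SigResult::MissingKey;
   }
   return !Good && !Bad && Missing;
}

// The fallback "succeeded" if it produced a definite result for any
// signature, good or bad; keys still missing afterwards are ignored.
static bool FallbackSucceeded(std::vector<SigResult> const &Results)
{
   for (auto const R : Results)
      if (R != SigResult::MissingKey)
         return true;
   return false;
}
```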
|
With apt-key going away, people need to manage key files, rather
than keys, so they need to know if any keys are in the legacy keyring.
|
Darwin systems define TRUE and FALSE as preprocessor macros for use with
bool. This conflicts with the enum values, causing the compilation to
fail.
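A minimal illustration of the clash; the enum and its fix are hypothetical stand-ins, and the macro definitions are roughly what Darwin's system headers provide:
```cpp
// Darwin system headers effectively contain:
//   #define TRUE  1
//   #define FALSE 0
// The preprocessor rewrites enumerators named TRUE/FALSE into "1" and
// "0" before the compiler parses the declaration, so a declaration like
//   enum class HaveContent { UNKNOWN, TRUE, FALSE };
// fails to compile there. Prefixed names side-step the macros:
enum class HaveContent { TRI_UNKNOWN, TRI_TRUE, TRI_FALSE };
```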
|
Extend the Signed-By field to handle embedded public key blocks;
this allows shipping self-contained .sources files, making it
substantially easier to provide third-party repositories.
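A sketch of such a self-contained file (host and key data are placeholders): in deb822 files the continuation lines of a multi-line value start with a space, and blank lines inside the armored block are written as a lone dot.
```
# /etc/apt/sources.list.d/example.sources (illustrative)
Types: deb
URIs: https://deb.example.org/apt
Suites: stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mQINBExampleOnlyNotARealKeyBlock
 -----END PGP PUBLIC KEY BLOCK-----
```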
|
basehttp: Turn HaveContent into a TriState
See merge request apt-team/apt!179
|
Set haveContent to HaveContent::FALSE when Content-Length is 0,
and change the remaining code to set it to TRUE only if it has not
been set so far.
Closes: #990281
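A hedged sketch of the resulting precedence when parsing response headers (the TriState names follow the illustration above; apt's actual identifiers may differ):
```cpp
#include <string>

enum class HaveContent { TRI_UNKNOWN, TRI_TRUE, TRI_FALSE };

// Content-Length: 0 is authoritative - there will be no body - while a
// content-indicating header such as Content-Type must not override an
// earlier definite answer.
static void NoteHeader(HaveContent &haveContent, std::string const &Tag,
                       unsigned long long const Size)
{
   if (Tag == "Content-Length" && Size == 0)
      haveContent = HaveContent::TRI_FALSE;
   else if ((Tag == "Content-Length" || Tag == "Content-Type") &&
            haveContent == HaveContent::TRI_UNKNOWN)
      haveContent = HaveContent::TRI_TRUE;
}
```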
|
We need to be able to set HaveContent to false if the Content-Length
is 0, and not have that overridden just because a later header is
Content-Type.
|
Add AllowRange option to disable HTTP Range usage
See merge request apt-team/apt!188
|
Debian buster (oldstable) ships 6.1 while bullseye (stable) ships 6.5,
so the latter is 'fixed'. Upstream still declares 6.0 as supported.
We might encounter "bad" versions in the wild for a while yet, so
if we can detect and work around the issue at runtime automatically,
we can save some users from running into "persistent" partial files.
References: https://varnish-cache.org/docs/6.4/whats-new/changes-6.4.html#changes-in-behavior
|
apt makes heavy use of HTTP/1.1 features, including Range and If-Range.
Sadly it is not obvious whether the involved server(s) (and proxies)
actually support them all. The Acquire::http::AllowRange option
defaults to true as before, but a user can now disable Range usage
if it is known that the involved server does not deal with such
requests correctly.
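For example, the feature can be switched off with a small apt.conf drop-in (the file path is illustrative; the option name is the one introduced here):
```
// /etc/apt/apt.conf.d/99no-range
// Fall back to full-file downloads when a server or proxy is known to
// mishandle HTTP Range requests.
Acquire::http::AllowRange "false";
```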
|
The settings used for unwrapping TLS connections depend on the access
and hostname we connect to more than on what we eventually unwrap. The
bug report mentions CaInfo, but all other https settings should also
apply (whether generic or hostname-specific) to an https proxy,
even if the connection we proxy through it is http-only.
Closes: #990555
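A hedged configuration example of the scenario described (host and path are placeholders): plain-http sources fetched through an https proxy, whose TLS session now honours the generic https settings such as a custom CA.
```
// An https proxy used even for http-only sources:
Acquire::http::Proxy "https://proxy.example.org:3128/";
// https settings now also apply to the TLS connection to that proxy:
Acquire::https::CaInfo "/etc/apt/proxy-ca.pem";
```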
|
This makes them retriable, and brings them more into line with
TCP, where the handshake is also treated as a transient error.
LP: #1928100
|
The maximum request size is accidentally set based on any sized file,
so if an unsized file is present and it turns out to be larger
than the maximum size we set, we'd error out when checking whether
its size is smaller than the maximum request size.
LP: #1921626
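A minimal sketch of the corrected check, under the assumption that an unsized file simply must not be measured against a cap derived from other files (names are illustrative):
```cpp
// Only compare sizes we actually know; DeclaredSize == 0 means the file
// is unsized and must not fail against a cap set by other, sized files.
static bool ExceedsMaximum(unsigned long long const DeclaredSize,
                           unsigned long long const MaximumSize)
{
   if (DeclaredSize == 0 || MaximumSize == 0)
      return false;
   return DeclaredSize > MaximumSize;
}
```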
|
There isn't a lot of sense in working on empty patches as they change
nothing (quite literally), but they can be the result of merging
multiple patches. So, to not require our users to specifically detect
and remove them, we can be nice and just ignore them instead of
erroring out.
|
We use the code in error messages, so at least for that edge case we
should ensure that it has sensible content. Note that the acquire
system aborts on non-sensible message content in SendMessage, so you
can't really exploit this.
|
The vararg API is a nightmare, as the symbols seem different on every
other arch; but more importantly, SendMessage does a few checks on the
content of the message, and it is all output via C++ iostreams rather
than mixed with FILE*, which is handy for overriding the streams.
|
The rest of our code uses return-code errors, and it isn't that nice to
crash the rred method on bad patches anyway, so we move to properly
reporting errors in our usual way, which incidentally also helps with
writing a fuzzer for it.
This is not really security relevant though, as the patches passed hash
verification while they were downloaded, so an attacker would have to
take over a trusted repository first, which gives plenty of better
attack options.
|
The "last connection" cache is currently being stored and looked up on
the combination of (LastHost, LastPort). However, these are not what the
arguments to getaddrinfo() were on the first try: the call is to
getaddrinfo(Host, ServiceNameOrPort, ...), i.e. with the port *or if 0,
the service name* (e.g. http).
Effectively this means that the connection cache lookup for:
https://example.org/... i.e. Host = example.org, Port = 0, Service = http
would end up matching the "last" connection of (if existed):
https://example.org/... i.e. Host = example.org, Port = 0, Service = https
...and thus performing a TLS request over an (unrelated) port 80
connection. Therefore, an HTTP request, followed up by an (unrelated)
HTTPS request to the same server, would always fail.
Address this by using as the cache key the ServiceNameOrPort, rather
than Port.
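A sketch of the corrected cache key (illustrative names): the lookup must use exactly what was passed to getaddrinfo() - the port, or the service name when the URI carries no explicit port.
```cpp
#include <string>

static bool CanReuseLastLookup(std::string const &Host, unsigned int const Port,
                               std::string const &Service, // e.g. "http"
                               std::string const &LastHost,
                               std::string const &LastService)
{
   std::string const ServiceNameOrPort =
       Port != 0 ? std::to_string(Port) : Service;
   return Host == LastHost && ServiceNameOrPort == LastService;
}
```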
|
Convert the fixed-size (300) char array "ServStr" to a std::string, and
simplify the code by removing snprintfs in the process.
While at it, rename it to the more aptly named "ServiceNameOrPort" and
update the comment to reflect what this variable is meant to be.
|
ServerState->Comp() is used by the HTTP method's main loop to check
whether a connection can be reused, or whether a new one is needed.
Unfortunately, the current implementation only compares Host and
Port between the ServerState's internal URI and a new URI. However,
these are URIs, and therefore Port is 0 when a URI port is not
specified, i.e. in the most common configurations.
As a result, a ServerState for http://example.org/... will be reused for
URIs of the form https://example.org/..., as both Host (example.org) and
Port (0) match. In turn this means that GET requests will happen over
port 80, in cleartext, even for those https URLs(!).
URI Acquires for an http URI and subsequently for an https one, in the
same aptmethod session, do not typically happen with apt as the
frontend, as apt opens a new pipe with the "https" aptmethod binary
(nowadays a symlink to http), which is why this hasn't been a problem in
practice and has eluded detection so far. It does happen in the wild
with other frontends (e.g. reprepro), and it is legitimately an odd and
surprising behavior on apt's end.
Therefore add a comparison of the URI's "Access" (= the scheme), in
addition to Host and Port, to ensure that we're not reusing the same
state for multiple different schemes.
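A minimal sketch of the fix; the URI fields are simplified stand-ins for apt's real types:
```cpp
#include <string>

struct URI { std::string Access, Host; unsigned int Port; };

struct ServerState
{
   URI ServerName;
   bool Comp(URI const &Other) const
   {
      // Compare the scheme too: with the default Port of 0, Host and
      // Port alone cannot tell http://example.org from https://example.org.
      return Other.Access == ServerName.Access &&
             Other.Host == ServerName.Host &&
             Other.Port == ServerName.Port;
   }
};
```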
|
Every method opts in to getting the encoded URI passed along, while
keeping compatibility in case we are operated by an older acquire
system. Effectively this is just a change for the http-based methods,
as the others just decode the URI since they work with files directly.
|
The acquire system mode has done this for a long time already, and as
it is easy to implement and handy for manual testing as well, we can
support it in the other modes, too.
|
Merging patches is a bit of non-trivial code we have for client-side
work, but as we also support server-side merging we can export this
functionality so that server software can reuse it.
Note that this just cleans up and makes rred behave a bit more like all
our other binaries by supporting setting configuration at runtime and
supporting --help and --version. If you can make do without this, the
now advertised functionality is already provided in earlier versions.
|
The old code was fairly confusing and contradictory. Notably, the
second `if` also only applied to the Data state, whereas we had already
terminated the Data state earlier. This was bad.
The else fallback applied in three cases:
(1) We reached our limit
(2) We are Persistent
(3) We are headers
Now, it always failed as a transient error if it had
nothing left in the buffer. BUT: nothing left in the buffer
is the correct thing to happen if we were fetching content.
Checking all combinations of the flags, we can compare the results
of Die() between 2.1.7 - the last "known-acceptable-ish" version -
and this version:
2.1.7 this
Data !Persist !Space !Limit OK (A) OK
Data !Persist !Space Limit OK (A) OK
Data !Persist Space !Limit OK (C) OK
Data !Persist Space Limit OK OK
Data Persist !Space !Limit ERR ERR *
Data Persist !Space Limit OK (B) OK
Data Persist Space !Limit ERR ERR
Data Persist Space Limit OK OK
=> Data connections are OK if they have not reached their limit,
or are persistent (in which case they'll probably be chunked)
Header !Persist !Space !Limit ERR ERR
Header !Persist !Space Limit ERR ERR
Header !Persist Space !Limit OK OK
Header !Persist Space Limit OK OK
Header Persist !Space !Limit ERR ERR
Header Persist !Space Limit ERR ERR
Header Persist Space !Limit OK OK
Header Persist Space Limit OK OK
=> Common scheme here is that header connections are fine if they have
read something into the input buffer (Space). The rest does not matter.
(A) Non-persistent connections with !space always enter the else clause, hence success
(B) no Space means we enter the if/else, we go with else because IsLimit(), and we succeed because we don't have space
(C) Having space we do enter the while (WriteSpace()) loop, but we never reach IsLimit(),
hence we fall through. Given that our connection is not persistent, we fall through to the
else case, and there we win because we have data left to write.
|
We do not want to end up in a code path while reading content
from the server where we have local data left to write, which
can happen if a previous read included both headers and content.
Restructure Flush() to accept a new argument allowing incomplete
flushes (which do not reach our limit), so that it can flush as
far as possible, and modify Go() to use that before and after
reading from the server.
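A hedged sketch of the reworked helper; the signature and the buffer interface are assumptions built from the names these messages mention (WriteSpace(), IsLimit()):
```cpp
// Illustrative buffer interface, not apt's exact CircleBuf API.
struct OutBuffer
{
   bool WriteSpace() const; // data still pending for the local file?
   bool Write(int Fd);      // write some of it; false on write error
   bool IsLimit() const;    // everything up to the expected limit written?
};

// AllowIncomplete lets Go() drain as much as currently possible before
// and after talking to the server, instead of insisting that the whole
// expected amount has already been written.
static bool Flush(OutBuffer &In, int const FileFd, bool const AllowIncomplete)
{
   while (In.WriteSpace())
      if (In.Write(FileFd) == false)
         return false;
   return AllowIncomplete || In.IsLimit();
}
```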
|
This causes some more issues, really.
|
We have successfully finished reading data if our buffer is empty,
so we don't need to do any further checks.
|
While we fixed the infinite retrying earlier, we still have
problems if we retry in the middle of a transfer: we might
end up resuming downloads that are already done and read
more than we should (removing the IsOpen() check so that
it always retries makes test-ubuntu-bug-1098738-apt-get-source-md5sum
fail with wrong file sizes).
I think the retrying was added to fix up pipelining mess-ups,
but we have better solutions now, so let's get rid of it
until we have implemented this properly.
|
When we failed after a retry, we only communicated the failure as
transient, but this seems wrong, especially given that the code
now always triggers a retry when Die() is called, as Die() closes
the server fd.
Instead, remove the error handling in that code path, and reuse
the existing fatal-ish error code handling path.
|
If there was a transient error and the server fd was closed, the
code would retry infinitely - it never reached FailCounter >= 2
because it fell through to the end of the loop, which sets
FailCounter = 0.
Add a continue, just like the DNS rotation code has, so that the
retry actually fails after 2 attempts.
Also rework the error logic to forward the actual error message.
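A self-contained sketch of the control-flow fix; MoreRequests(), AttemptRequest() and Fail() are hypothetical stand-ins for the real queue handling:
```cpp
bool MoreRequests();    // hypothetical: another request is queued
bool AttemptRequest();  // hypothetical: false on a transient failure
bool Fail();            // hypothetical: forwards the actual error

static bool RunQueue()
{
   int FailCounter = 0;
   while (MoreRequests())
   {
      if (AttemptRequest() == false)
      {
         if (++FailCounter >= 2)
            return Fail(); // give up, forwarding the real error message
         continue;         // retry; crucially, skip the reset below
      }
      FailCounter = 0;     // a successful request starts counting afresh
   }
   return true;
}
```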
|
Pu/http fixes 2
See merge request apt-team/apt!125
|
We only add the file to the select() call if we have data to
write to it prior to the select() call. This is problematic:
assuming we enter Go() with no data to write to the file,
but we read some from the server as well as an EOF, we end
up not writing it to the file because we did not add the file
to the select.
We can't always add the file to the select(), because it is
basically always ready and we don't want to wake up if we
don't have anything to read or write.
So for a solution, let's just always write data to the file
if there's data to write to it. If some is left over, or if
some was already present when we started Go(), it will still
be added to the select() call and unblock it.
Closes: #959518
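A hedged outline of the resulting structure (the buffer interface is illustrative, along the lines of the Flush() sketch above):
```cpp
// After select() returns and server data has been read, always drain
// whatever is pending for the local file; only the *next* select() set
// needs the file fd, and only if data is still left over afterwards.
struct OutBuffer
{
   bool WriteSpace() const; // data pending for the local file?
   bool Write(int Fd);      // write some of it; false on write error
};

static bool DrainToFile(OutBuffer &In, int const FileFd)
{
   while (In.WriteSpace())
      if (In.Write(FileFd) == false)
         return false; // local write error
   return true;
}
```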
|
Remove master/slave terminology
See merge request apt-team/apt!124
|
Sorry!
|
Instead of reading the data early, disable the timeout for the
select() call and read the data later. Also, change Read() to
issue only a single call to drain the buffer in such instances.
We could optimize this to call read() multiple times if there
is also pending stuff on the socket, but that is slightly more
complex and should not provide any benefits.
|
The error handling in Die() that's supposed to add useful error
messages is not super useful here.
|
This avoids a case where we read data, then write to the server
and only then realize the connection was closed. It is somewhat
slower, though.
|
By changing the buffer implementation to return true if it
read or wrote something, even on EOF, we should not have a
need to flush the buffer in Die() anymore - we should only
be calling Die() if the buffer is empty now.
|
This should avoid the need to Flush the buffer in Die(), because
if we read anything, we are returning true, and not entering Die()
at that point.
Also Write() does not have a concept of EOF, so get rid of code
handling that there. Was that copied from Read()?
|
Die() needs its own copy of Flush() because it needs to return
success or failure based on some states, but those are not precisely
the same as Flush()'s - Flush() will always return false at the end,
for example, whereas we want to fall through to our error handling.
|
If we reached Die() there was an issue with the server connection,
so we should always explicitly close it.
|
If this option is disabled (which it is by default in Debian), we don't
have to make the call or the checks around it. Not that it really
matters much - if it did, we would be better off checking only once.
|
Include that apt is being run from a service in the user
agent, so traffic can be analysed for interactive vs
non-interactive use, and prioritised accordingly.
It looks like this now:
User-Agent: Debian APT-HTTP/1.3 (2.0.1) non-interactive
A previous version included the full service names, but this
raised some privacy concerns.
LP: #1825000
|
Remove all code scheduled to be removed after 5.90, and fix files to
include the headers they previously only got because hashes.h included
more headers than it needed to.
|