| Commit message | Author | Age | Files | Lines |
|
This makes them retriable and brings them more into line with
TCP, where a failed handshake is also a transient error.
LP: #1928100
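A minimal sketch of the classification change, assuming a ResultState-style enum as used by apt's methods (the exact names may differ):

    // Sketch only: report a failed TLS handshake like a failed TCP
    // connect, i.e. as a transient, retriable error, not a hard failure.
    enum class ResultState { SUCCESSFUL, TRANSIENT_ERROR, FATAL_ERROR };

    static ResultState HandshakeFailed()
    {
       return ResultState::TRANSIENT_ERROR;   // was effectively fatal before
    }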
|
The maximum request size was accidentally set to the size of any
sized file, so if an unsized file was present and turned out to be
larger than the maximum we had set, we would error out when checking
whether its size was below the maximum request size.
LP: #1921626
|
There isn't a lot of sense in working on empty patches as they change
nothing (quite literally), but they can be the result of merging
multiple patches, so rather than requiring our users to specifically
detect and remove them, we can be nice and just ignore them instead of
erroring out.
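As an illustration, a minimal sketch (hypothetical names, not rred's actual code) of treating empty patches as no-ops:

    #include <string>
    #include <vector>

    struct Patch { std::string Name; std::string Contents; };

    // Apply a merged list of patches, silently skipping empty ones:
    // they change nothing, so erroring out would only burden callers.
    static bool ApplyAll(std::vector<Patch> const &Patches)
    {
       for (auto const &P : Patches)
       {
          if (P.Contents.empty())
             continue;   // empty patch: nothing to do
          // ... apply P here ...
       }
       return true;
    }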
|
We use the code in error messages, so at least for that edge case we
should ensure that it has sensible content. Note that the acquire
system aborts on non-sensible message content in SendMessage, so you
can't really exploit this.
|
The varargs API is a nightmare, as the symbols seem to differ on every
other arch; more importantly, SendMessage performs a few checks on the
content of the message, and it is all output via C++ iostreams rather
than mixed with FILE*, which is handy for overriding the streams.
|
The rest of our code uses return-code errors, and it isn't that nice to
crash the rred method on bad patches anyway, so we move to properly
reporting errors in our usual way, which incidentally also helps with
writing a fuzzer for it.
This is not really security relevant though, as the patches passed hash
verification while they were downloaded, so an attacker would first have
to take over a trusted repository, which gives plenty of better attack
options.
|
The "last connection" cache is currently stored and looked up on
the combination of (LastHost, LastPort). However, these are not what
the arguments to getaddrinfo() were on the first try: the call is
getaddrinfo(Host, ServiceNameOrPort, ...), i.e. with the port *or, if
it is 0, the service name* (e.g. http).
Effectively this means that the connection cache lookup for:
  https://example.org/... i.e. Host = example.org, Port = 0, Service = https
would end up matching the "last" connection of (if one existed):
  http://example.org/... i.e. Host = example.org, Port = 0, Service = http
...and thus performing a TLS handshake over an (unrelated) port 80
connection. Therefore, an HTTP request, followed by an (unrelated)
HTTPS request to the same server, would always fail.
Address this by using as the cache key the ServiceNameOrPort, rather
than Port.
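A condensed sketch of the fixed lookup, with hypothetical names standing in for the method's internals:

    #include <string>

    // Cache key of the most recent connection: host plus whatever was
    // passed to getaddrinfo(), i.e. the port or, if 0, the service name.
    static std::string LastHost;
    static std::string LastService;

    static bool CanReuseLastConnection(std::string const &Host,
                                       std::string const &ServiceNameOrPort)
    {
       // Comparing a numeric port here was the bug: it is 0 for both
       // http and https URIs without an explicit port.
       return Host == LastHost && ServiceNameOrPort == LastService;
    }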
|
Convert the fixed-size (300) char array "ServStr" to a std::string, and
simplify the code by removing the snprintfs in the process.
While at it, rename it to the more aptly named "ServiceNameOrPort" and
update the comment to reflect what this variable is meant to hold.
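For illustration, a sketch (assumed shape, not the exact code) of what this variable holds:

    #include <string>

    // The second argument for getaddrinfo(): the explicit port if one
    // was given, otherwise the scheme's service name such as "http".
    static std::string ServiceNameOrPort(unsigned int const Port,
                                         std::string const &SchemeName)
    {
       return Port != 0 ? std::to_string(Port) : SchemeName;
    }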
|
ServerState->Comp() is used by the HTTP method's main loop to check
whether a connection can be reused, or whether a new one is needed.
Unfortunately, the current implementation only compares the Host and
Port of the ServerState's internal URI with those of a new URI. However,
these are URIs, and therefore Port is 0 when a URI port is not
specified, i.e. in the most common configurations.
As a result, a ServerState for http://example.org/... will be reused for
URIs of the form https://example.org/..., as both Host (example.org) and
Port (0) match. In turn this means that GET requests will happen over
port 80, in cleartext, even for those https URLs(!).
Acquires for an http URI and subsequently for an https one, in the
same aptmethod session, do not typically happen with apt as the
frontend, as apt opens a new pipe with the "https" aptmethod binary
(nowadays a symlink to http), which is why this hasn't been a problem in
practice and has eluded detection so far. It does happen in the wild
with other frontends (e.g. reprepro), and it is a legitimately odd and
surprising behavior on apt's end.
Therefore add a comparison of the URI's "Access" (= the scheme), in
addition to Host and Port, to ensure that we're not reusing the same
state for multiple different schemes.
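A sketch of the comparison after the fix; the field names follow apt's URI class, but the function body is illustrative:

    #include <string>

    struct UriParts { std::string Access, Host; unsigned int Port; };

    // A cached ServerState may only be reused if scheme, host and port
    // all match: Host+Port alone cannot distinguish http:// from
    // https:// when no explicit port is given (Port is 0 for both).
    static bool Comp(UriParts const &Cached, UriParts const &Next)
    {
       return Cached.Access == Next.Access &&
              Cached.Host == Next.Host &&
              Cached.Port == Next.Port;
    }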
|
Every method opts in to getting the encoded URI passed along while
keeping compatibility in case we are operated by an older acquire system.
Effectively this is just a change for the http-based methods, as the
others just decode the URI since they work with files directly.
|
The acquire system mode has done this for a long time already, and as it
is easy to implement and handy for manual testing as well, we can
support it in the other modes, too.
|
Merging patches is a bit of non-trivial code we have for client-side
work, but as we also support server-side merging we can export this
functionality so that server software can reuse it.
Note that this just cleans up and makes rred behave a bit more like all
our other binaries by supporting setting configuration at runtime and
supporting --help and --version. If you can make do without these, the
now advertised functionality is provided already in earlier versions.
|
The old code was fairly confusing and contradictory. Notably, the
second `if` also only applied to the Data state, whereas we had already
terminated the Data state earlier. This was bad.
The else fallback applied in three cases:
(1) We reached our limit
(2) We are Persistent
(3) We are headers
Now, it always failed as a transient error if it had
nothing left in the buffer. BUT: Nothing left in the buffer
is the correct thing to happen if we were fetching content.
Checking all combinations of the flags, we can compare the results
of Die() between 2.1.7 - the last "known-acceptable-ish" version -
and this version:

                                    2.1.7   this
   Data   !Persist !Space !Limit    OK (A)  OK
   Data   !Persist !Space  Limit    OK (A)  OK
   Data   !Persist  Space !Limit    OK (C)  OK
   Data   !Persist  Space  Limit    OK      OK
   Data    Persist !Space !Limit    ERR     ERR *
   Data    Persist !Space  Limit    OK (B)  OK
   Data    Persist  Space !Limit    ERR     ERR
   Data    Persist  Space  Limit    OK      OK
=> Data connections are OK if they have reached their limit,
or are not persistent (for non-persistent connections, EOF is the
legitimate end of the data)
   Header !Persist !Space !Limit    ERR     ERR
   Header !Persist !Space  Limit    ERR     ERR
   Header !Persist  Space !Limit    OK      OK
   Header !Persist  Space  Limit    OK      OK
   Header  Persist !Space !Limit    ERR     ERR
   Header  Persist !Space  Limit    ERR     ERR
   Header  Persist  Space !Limit    OK      OK
   Header  Persist  Space  Limit    OK      OK
=> The common scheme here is that header connections are fine if they have
read something into the input buffer (Space); the rest does not matter.
(A) Non-persistent connections with !space always enter the else clause, hence success
(B) no Space means we enter the if/else, we go with else because IsLimit(), and we succeed because we don't have space
(C) Having space we do enter the while (WriteSpace()) loop, but we never reach IsLimit(),
hence we fall through. Given that our connection is not persistent, we fall through to the
else case, and there we win because we have data left to write.
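The two rules the table distills can be captured in a few lines; this is a sketch with made-up parameter names, not the actual Die() code:

    // Condensed form of the truth table above.
    static bool DieIsOk(bool const ReadingData, bool const Persistent,
                        bool const SpaceInBuffer, bool const HitLimit)
    {
       if (ReadingData)
          // Data: fine once the expected length was reached, or on a
          // non-persistent connection where EOF terminates the body.
          return HitLimit || !Persistent;
       // Headers: fine as long as something was read into the buffer.
       return SpaceInBuffer;
    }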
|
We do not want to end up in a code path, while reading content
from the server, where we still have local data left to write, which
can happen if a previous read included both headers and content.
Restructure Flush() to accept a new argument allowing incomplete
flushes (which do not reach our limit), so that it can flush as
far as possible, and modify Go() to use that before and after
reading from the server.
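A rough sketch of such a Flush(), with an assumed buffer type standing in for the method's real circular buffer:

    #include <string>
    #include <unistd.h>

    struct OutBuffer              // minimal stand-in for the ring buffer
    {
       std::string Data;          // bytes queued for the local file
       size_t Expected = 0;       // total bytes we expect to write
       size_t Written = 0;

       bool WriteSpace() const { return !Data.empty(); }
       bool IsLimit() const { return Written == Expected; }
       bool Write(int const Fd)
       {
          ssize_t const Done = write(Fd, Data.data(), Data.size());
          if (Done <= 0)
             return false;
          Data.erase(0, static_cast<size_t>(Done));
          Written += static_cast<size_t>(Done);
          return true;
       }
    };

    static bool Flush(OutBuffer &In, int const FileFd, bool const AllowIncomplete)
    {
       while (In.WriteSpace() == true)
       {
          if (In.Write(FileFd) == false)
             return false;        // a real write error
          if (In.IsLimit() == true)
             return true;         // complete flush: limit reached
       }
       return AllowIncomplete;    // drained, but limit not reached
    }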
|
This causes some more issues, really.
|
We have successfully finished reading data if our buffer is empty,
so we don't need to do any further checks.
|
While we fixed the infinite retrying earlier, we still have
problems if we retry in the middle of a transfer: we might
end up resuming downloads that are already done and read
more than we should (removing the IsOpen() check so that
it always retries makes test-ubuntu-bug-1098738-apt-get-source-md5sum
fail with wrong file sizes).
I think the retrying was added to fix up pipelining mess-ups,
but we have better solutions now, so let's get rid of it
until we have implemented this properly.
|
When we failed after a retry, we only communicated the failure as
transient, but this seems wrong, especially given that the code
now always triggers a retry when Die() is called, as Die() closes
the server fd.
Instead, remove the error handling in that code path, and reuse
the existing fatal-ish error code handling path.
|
If there was a transient error and the server fd was closed, the
code would retry infinitely - it never reached FailCounter >= 2
because it fell through to the end of the loop, which sets
FailCounter = 0.
Add a continue just like the DNS rotation code has, so that the
retry actually fails after 2 attempts.
Also rework the error logic to forward the actual error message.
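In outline, the fixed loop looks like this; the helper and flag names are hypothetical:

    // Hypothetical stand-in: performs one request, returns true on a
    // transient error (e.g. the server fd was closed mid-request).
    static bool AttemptRequest() { return false; }

    static void RunLoop()
    {
       int FailCounter = 0;
       while (true)
       {
          if (AttemptRequest() == true)  // transient error, fd closed
          {
             if (++FailCounter >= 2)
                break;                   // report the failure for real
             continue;                   // retry WITHOUT resetting the counter
          }
          FailCounter = 0;               // success: reset for the next item
          // ... deliver the result, fetch the next item ...
          break;                         // sketch: stop after one item
       }
    }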
|
|\
Pu/http fixes 2
See merge request apt-team/apt!125
| |
We only add the file to the select() call if we have data to
write to it prior to the select() call. This is problematic:
assuming we enter Go() with no data to write to the file,
but we read some from the server as well as an EOF, we end
up not writing it to the file because we did not add the file
to the select.
We can't always add the file to the select(), because it's
basically always ready and we don't want to wake up if we
don't have anything to read or write.
So as a solution, let's just always write data to the file
if there's data to write to it. If some is left over, or if
some was already present when we started Go(), it will still
be added to the select() call and unblock it.
Closes: #959518
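A skeleton of that ordering (illustrative names; error handling elided):

    #include <cstddef>
    #include <sys/select.h>

    // Hypothetical: bytes still queued for the local file.
    static size_t PendingForFile = 0;
    static void WriteSomeToFile() { /* write, shrinking PendingForFile */ }

    static void PrepareSelect(int const FileFd, fd_set *const WriteFds)
    {
       if (PendingForFile > 0)   // always drain to the file first ...
          WriteSomeToFile();
       FD_ZERO(WriteFds);
       if (PendingForFile > 0)   // ... a remainder keeps the fd in the set
          FD_SET(FileFd, WriteFds);
    }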
|
|\ \
| |/
|/|
Remove master/slave terminology
See merge request apt-team/apt!124
| |
Sorry!
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Instead of reading the data early, disable the timeout for the
select() call and read the data later. Also, change Read() to
do only a single read call to drain the buffer in such instances.
We could optimize this to call read() multiple times if there
is also pending stuff on the socket, but that is slightly more
complex and should not provide any benefit.
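Sketched with the real GnuTLS pending-data check; the surrounding wait logic is assumed:

    #include <gnutls/gnutls.h>
    #include <sys/select.h>

    static void WaitSketch(gnutls_session_t Session, int const Fd, fd_set *Read)
    {
       struct timeval Timeout = { 120, 0 };
       // If the TLS layer already buffered plaintext, select() must not
       // block: the socket may never become readable again for that data.
       if (gnutls_record_check_pending(Session) > 0)
          Timeout = { 0, 0 };
       select(Fd + 1, Read, nullptr, nullptr, &Timeout);
       // afterwards: read, draining the TLS buffer with one recv call
    }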
|
| |
| |
| |
| |
| | |
The error handling in Die() that's supposed to add useful error
messages is not super useful here.
|
| |
| |
| |
| |
| |
| | |
This avoids a case where we read data, then write to the server
and only then realize the connection was closed. It is somewhat
slower, though.
|
| |
| |
| |
| |
| |
| |
| | |
By changing the buffer implementation to return true if it
read or wrote something, even on EOF, we should no longer need
to flush the buffer in Die() - we should only be calling Die()
if the buffer is empty now.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This should avoid the need to Flush the buffer in Die(), because
if we read anything, we return true and do not enter Die()
at that point.
Also, Write() does not have a concept of EOF, so get rid of the
code handling that there. Was that copied from Read()?
|
| |
| |
| |
| |
| |
| |
| | |
Die() needs its own copy of Flush() because it has to return
success or failure based on certain states, but those are not precisely
the same as in Flush() - Flush() will always return false at the end,
for example, whereas we want to fall through to our error handling.
|/
|
If we reached Die() there was an issue with the server connection,
so we should always explicitly close it.
|
If this option is disabled (which it is by default in Debian), we don't
have to make the call or the checks around it. Not that it really
matters much; if it did, we would be better off checking only once.
|
Include in the user agent that apt is being run from a service,
so traffic can be analysed for interactive vs non-interactive
use, and prioritised accordingly.
It looks like this now:
User-Agent: Debian APT-HTTP/1.3 (2.0.1) non-interactive
A previous version included the full service names, but this
raised some privacy concerns.
LP: #1825000
|
Remove all code scheduled to be removed after 5.90, and fix
files to include the headers they previously only got because
hashes.h included more headers.
|
Saw those in a libseccomp commit and figured we should add them
too.
|
Went through the list of syscalls starting at 403 (and looked
one back), up to the next unknown syscall, and added all those that
were present in their non-time64 variant.
Closes: #951012
|
Commit 93f33052de84e9aeaf19c92291d043dad2665bbd restricted auth.conf
entries to only apply to https by default, but this was silent - there
was no information about why http sources with auth.conf entries
suddenly started failing. Add such information, and extend the test
case to cover it.
|
Unused variable, std::algorithms instead of raw for-loops.
There should be no observable difference in behaviour.
Reported-By: cppcheck
Gbp-Dch: Ignore
|
Also in old changelogs, but nothing really user-visible
like error messages or the like, so barely noteworthy.
Reported-By: codespell
Gbp-Dch: Ignore
|
Reported-By: cppcheck
|
We are converting to std::string anyway by passing to
istringstream, and this removes the need for .c_str()
in callers.
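As a small illustration of the after state (hypothetical function):

    #include <sstream>
    #include <string>

    // Taking std::string directly: callers no longer need .c_str(),
    // since istringstream wants a std::string anyway.
    static int FirstInt(std::string const &Line)
    {
       std::istringstream Stream(Line);
       int Value = 0;
       Stream >> Value;
       return Value;
    }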
|
Currently CONNECT requests use the name of the proxy as the Host value,
instead of the origin server's name.
According to RFC 2616, "The Host field value MUST represent the naming
authority of the origin server or gateway given by the original URL."
The current implementation causes problems with some proxy vendors.
This commit fixes that.
[jak: Adding a test case]
See merge request apt-team/apt!66
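A sketch of building the corrected preamble (the helper name is made up):

    #include <string>

    // Both the request target and the Host header must name the origin
    // server; naming the proxy in Host is what confused some proxies.
    static std::string BuildConnect(std::string const &OriginHost,
                                    unsigned int const OriginPort)
    {
       std::string const HostPort = OriginHost + ":" + std::to_string(OriginPort);
       return "CONNECT " + HostPort + " HTTP/1.1\r\n"
              "Host: " + HostPort + "\r\n\r\n";
    }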
|
|\
apt Debian release 1.8.2
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When accessing a repository protected by TLS mutual authentication,
apt may receive a "re-handshake" request from the server, which must
be handled for the download to proceed.
This situation arises when the server requests a client certificate
based on the resource path provided in the GET request, after the
initial handshake in UnwrapTLS() has already occurred and a secure
connection has been established.
This issue has been observed with an Artifactory-backed Debian
repository. To address it, split the TLS handshake code out into its
own method in TlsFd, and call it when the GNUTLS_E_REHANDSHAKE error
is received.
Signed-off-by: Michael Zhivich <mzhivich@akamai.com>
(merged from Debian/apt#93)
LP: #1829861
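In outline, with real GnuTLS calls but simplified, blocking-socket error handling:

    #include <gnutls/gnutls.h>

    static int DoHandshake(gnutls_session_t Session)
    {
       int Err;
       do
          Err = gnutls_handshake(Session);
       while (Err < 0 && gnutls_error_is_fatal(Err) == 0);
       return Err;
    }

    static ssize_t RecvWithRehandshake(gnutls_session_t Session,
                                       void *Buf, size_t Size)
    {
       for (;;)
       {
          ssize_t const Res = gnutls_record_recv(Session, Buf, Size);
          // The server asked for a new handshake mid-connection: run
          // it, then retry the read on the re-secured session.
          if (Res == GNUTLS_E_REHANDSHAKE && DoHandshake(Session) >= 0)
             continue;
          return Res;
       }
    }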
|
| |
| |
| |
| |
| | |
This needs a fair amount of changes elsewhere in the code,
hence this is separate from the previous commits.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This prevents implicit conversions that we do not want, such
as a FileFd* being converted to a debListParser.
Two cases are not yet handled because they require changes
in the code using them:
1. The classes in hashes.h
2. The URI class - this one is used quite a lot
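The mechanism, illustrated with a stand-in class (the real change marks apt's own constructors explicit):

    class FileFd;        // forward declaration, as in apt

    class ParserSketch   // stand-in for classes like debListParser
    {
    public:
       explicit ParserSketch(FileFd *const File);   // was implicit before
    };

    // void Consume(ParserSketch P);
    // Consume(SomeFileFdPointer);   // now a compile error instead of a
    //                               // silent, unwanted conversion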
|