path: root/methods
* Turn TLS handshake issues into transient errors (Julian Andres Klode, 2021-05-12; 1 file, -1/+1)
  This makes them retriable and brings them more into line with TCP, where the
  handshake is also a transient error. LP: #1928100
* Fix downloads of unsized files that are largest in pipeline (Julian Andres Klode, 2021-04-13; 1 file, -0/+4)
  The maximum request size is accidentally set based on any sized file, so if
  an unsized file is present and turns out to be larger than the maximum size
  we set, we would error out when checking whether its size is smaller than
  the maximum request size. LP: #1921626
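  The message above implies a guard along these lines; a minimal sketch with
  illustrative names (not APT's actual members), where a size of 0 stands for
  "unknown":

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Conceptual sketch only: an entry of unknown size must not be rejected
    // against a maximum derived from the other entries - if such an entry is
    // queued, the effective maximum has to be treated as unlimited.
    uint64_t MaximumRequestSize(uint64_t const *Sizes, size_t Count)
    {
       uint64_t Maximum = 0;
       for (size_t i = 0; i < Count; ++i)
       {
          if (Sizes[i] == 0)
             return 0; // unsized entry present: no usable maximum
          Maximum = std::max(Maximum, Sizes[i]);
       }
       return Maximum;
    }

    bool ExceedsMaximum(uint64_t Size, uint64_t Maximum)
    {
       // A maximum of 0 means "no limit", so nothing can exceed it.
       return Maximum != 0 && Size > Maximum;
    }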
* Allow merging with empty pdiff patches (David Kalnischkies, 2021-03-06; 1 file, -1/+5)
  There isn't a lot of sense in working on empty patches as they change nothing
  (quite literally), but they can be the result of merging multiple patches,
  and so, to not require our users to specifically detect and remove them, we
  can be nice and just ignore them instead of erroring out.
* Ensure HTTP status code text has sensible content (David Kalnischkies, 2021-02-04; 2 files, -1/+4)
  We use the code in error messages, so at least for that edge case we should
  ensure that it has sensible content. Note that the acquire system aborts on
  non-sensible message content in SendMessage, so you can't really exploit
  this.
* Replace PrintStatus with SendMessage usage (David Kalnischkies, 2021-02-04; 3 files, -15/+27)
  The vararg API is a nightmare as the symbols seem to differ on every other
  arch, but more importantly SendMessage does a few checks on the content of
  the message, and it is all output via C++ iostreams rather than mixed in
  with FILE*, which is handy for overriding the streams.
* Use error reporting instead of assert in rred patching (David Kalnischkies, 2021-02-04; 1 file, -68/+88)
  The rest of our code uses return-code errors and it isn't that nice to crash
  the rred method on bad patches anyway, so we move to properly reporting
  errors in our usual way, which incidentally also helps writing a fuzzer for
  it. This is not really security relevant though, as the patches passed hash
  verification while they were downloaded, so an attacker has to take over a
  trusted repository first, which gives plenty of better attack options.
* connect: use ServiceNameOrPort, not Port, as the cache key (Faidon Liambotis, 2020-12-23; 1 file, -4/+7)
  The "last connection" cache is currently being stored and looked up on the
  combination of (LastHost, LastPort). However, these are not what the
  arguments to getaddrinfo() were on the first try: the call is to
  getaddrinfo(Host, ServiceNameOrPort, ...), i.e. with the port *or, if 0, the
  service name* (e.g. http).

  Effectively this means that the connection cache lookup for:
    https://example.org/...
  i.e. Host = example.org, Port = 0, Service = https

  ...would end up matching the "last" connection of (if it existed):
    http://example.org/...
  i.e. Host = example.org, Port = 0, Service = http

  ...and thus performing a TLS request over an (unrelated) port 80 connection.
  Therefore, an HTTP request, followed up by an (unrelated) HTTPS request to
  the same server, would always fail.

  Address this by using the ServiceNameOrPort, rather than the Port, as the
  cache key.
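  As a sketch of the keying problem (the cache structure and names here are
  made up for illustration; only the getaddrinfo() call shape is taken from
  the message above):

    #include <map>
    #include <string>
    #include <utility>

    struct CachedAddrinfo { /* resolved addresses, last-good index, ... */ };

    // Illustrative cache: keyed on what getaddrinfo() actually received.
    std::map<std::pair<std::string, std::string>, CachedAddrinfo> Cache;

    CachedAddrinfo &Lookup(std::string const &Host, unsigned int Port,
                           std::string const &Scheme)
    {
       // Before the fix (conceptually): key = (Host, Port), so
       // http://example.org (Port 0) and https://example.org (Port 0)
       // collided on (example.org, 0).
       //
       // After the fix: key = (Host, ServiceNameOrPort), the exact second
       // argument given to getaddrinfo() - the port if set, otherwise the
       // scheme acting as the service name.
       std::string const ServiceNameOrPort =
          (Port != 0) ? std::to_string(Port) : Scheme;
       return Cache[{Host, ServiceNameOrPort}];
    }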
* connect: convert a C-style string to std::string (Faidon Liambotis, 2020-12-23; 1 file, -11/+8)
  Convert the fixed-size (300) char array "ServStr" to a std::string, and
  simplify the code by removing snprintfs in the process. While at it, rename
  to the more aptly named "ServiceNameOrPort" and update the comment to
  reflect what this variable is meant to be.
* basehttp: also consider Access when a Server's URI (Faidon Liambotis, 2020-12-23; 1 file, -1/+1)
  ServerState->Comp() is used by the HTTP method's main loop to check whether
  a connection can be reused, or whether a new one is needed. Unfortunately,
  the current implementation only compares the Host and Port between the
  ServerState's internal URI and a new URI. However, these are URIs, and
  therefore Port is 0 when a URI port is not specified, i.e. in the most
  common configurations.

  As a result, a ServerState for http://example.org/... will be reused for
  URIs of the form https://example.org/..., as both Host (example.org) and
  Port (0) match. In turn this means that GET requests will happen over port
  80, in cleartext, even for those https URLs(!).

  URI Acquires for an http URI and subsequently for an https one, in the same
  aptmethod session, do not typically happen with apt as the frontend, as apt
  opens a new pipe with the "https" aptmethod binary (nowadays a symlink to
  http), which is why this hasn't been a problem in practice and has eluded
  detection so far. It does happen in the wild with other frontends (e.g.
  reprepro), plus it is legitimately an odd and surprising behavior on apt's
  end.

  Therefore add a comparison for the URI's "Access" (= the scheme) in addition
  to Host and Port, to ensure that we're not reusing the same state for
  multiple different schemes.
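  A sketch of the comparison being described; the URI type here is a
  simplified stand-in, not APT's real ServerState or URI classes:

    #include <string>

    struct SimpleURI
    {
       std::string Access; // scheme: "http", "https", ...
       std::string Host;
       unsigned int Port = 0; // 0 if the URI does not name a port
    };

    // Can an existing server state be reused for Other? Comparing only Host
    // and Port lets http://example.org and https://example.org (both with
    // Port == 0) share one connection; adding Access closes that hole.
    bool Comp(SimpleURI const &Current, SimpleURI const &Other)
    {
       return Current.Access == Other.Access &&
              Current.Host == Other.Host &&
              Current.Port == Other.Port;
    }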
* Implement encoded URI handling in all methods (David Kalnischkies, 2020-12-18; 12 files, -30/+57)
  Every method opts in to getting the encoded URI passed along while keeping
  compat in case we are operated by an older acquire system. Effectively this
  is just a change for the http-based methods, as the others just decode the
  URI as they work with files directly.
* Support compressed output from rred similar to apt-helper cat-file [branch: feature/rred] (David Kalnischkies, 2020-11-07; 1 file, -2/+34)
* Support reading compressed patches in rred direct call modes (David Kalnischkies, 2020-11-07; 1 file, -1/+1)
  The acquire system mode has done this for a long time already, and as it is
  easy to implement and handy for manual testing as well, we can support it in
  the other modes, too.
* Prepare rred binary for external usage (David Kalnischkies, 2020-11-07; 2 files, -45/+88)
  Merging patches is a bit of non-trivial code we have for client-side work,
  but as we also support server-side merging we can export this functionality
  so that server software can reuse it. Note that this just cleans up and
  makes rred behave a bit more like all our other binaries by supporting
  setting configuration at runtime and supporting --help and --version. If you
  can make do without this, the now advertised functionality is provided
  already in earlier versions.
* Rewrite HttpServerState::Die() (Julian Andres Klode, 2020-08-11; 1 file, -28/+18)
  The old code was fairly confusing, and contradictory. Notably, the second
  `if` also only applied to the Data state, whereas we already terminated the
  Data state earlier. This was bad.

  The else fallback applied in three cases: (1) We reached our limit, (2) We
  are Persistent, (3) We are headers. Now, it always failed as a transient
  error if it had nothing left in the buffer. BUT: Nothing left in the buffer
  is the correct thing to happen if we were fetching content.

  Checking all combinations for the flags, we can compare the results of Die()
  between 2.1.7 - the last "known-acceptable-ish" version - and this version:

                                        2.1.7    this
    Data   !Persist !Space !Limit       OK (A)   OK
    Data   !Persist !Space  Limit       OK (A)   OK
    Data   !Persist  Space !Limit       OK (C)   OK
    Data   !Persist  Space  Limit       OK       OK
    Data    Persist !Space !Limit       ERR      ERR *
    Data    Persist !Space  Limit       OK (B)   OK
    Data    Persist  Space !Limit       ERR      ERR
    Data    Persist  Space  Limit       OK       OK

  => Data connections are OK if they have not reached their limit, or are
     persistent (in which case they'll probably be chunked)

    Header !Persist !Space !Limit       ERR      ERR
    Header !Persist !Space  Limit       ERR      ERR
    Header !Persist  Space !Limit       OK       OK
    Header !Persist  Space  Limit       OK       OK
    Header  Persist !Space !Limit       ERR      ERR
    Header  Persist !Space  Limit       ERR      ERR
    Header  Persist  Space !Limit       OK       OK
    Header  Persist  Space  Limit       OK       OK

  => Common scheme here is that header connections are fine if they have read
     something into the input buffer (Space). The rest does not matter.

  (A) Non-persistent connections with !Space always enter the else clause,
      hence success
  (B) No Space means we enter the if/else; we go with else because IsLimit(),
      and we succeed because we don't have space
  (C) Having Space we do enter the while (WriteSpace()) loop, but we never
      reach IsLimit(), hence we fall through. Given that our connection is
      not persistent, we fall through to the else case, and there we win
      because we have data left to write.
* http: Fully flush local file both before/after server read (Julian Andres Klode, 2020-08-11; 3 files, -19/+26)
  We do not want to end up in a code path while reading content from the
  server where we have local data left to write, which can happen if a
  previous read included both headers and content. Restructure Flush() to
  accept a new argument allowing incomplete flushes (which do not match our
  limit), so that it can flush as far as possible, and modify Go() to use that
  before and after reading from the server.
* http: Do not use non-blocking local I/O (Julian Andres Klode, 2020-08-11; 1 file, -10/+0)
  This causes some more issues, really.
* http: Restore successful exits from Die() (Julian Andres Klode, 2020-08-11; 1 file, -4/+6)
  We have successfully finished reading data if our buffer is empty, so we
  don't need to do any further checks.
* Do not retry on failure to fetch (Julian Andres Klode, 2020-08-10; 1 file, -20/+13)
  While we fixed the infinite retrying earlier, we still have problems if we
  retry in the middle of a transfer: we might end up resuming downloads that
  are already done and read more than we should (removing the IsOpen() check
  so that it always retries makes
  test-ubuntu-bug-1098738-apt-get-source-md5sum fail with wrong file sizes).
  I think the retrying was added to fix up pipelining messups, but we have
  better solutions now, so let's get rid of it until we have implemented this
  properly.
* basehttp: Correctly handle non-transient failure from RunData() (Julian Andres Klode, 2020-08-05; 1 file, -12/+3)
  When we failed after a retry, we only communicated failure as transient, but
  this seems wrong, especially given that the code now always triggers a retry
  when Die() is called, as Die() closes the server fd. Instead, remove the
  error handling in that code path, and reuse the existing fatal-ish error
  code handling path.
* http: Fix infinite loop on read errors (Julian Andres Klode, 2020-08-05; 1 file, -3/+10)
  If there was a transient error and the server fd was closed, the code would
  infinitely retry - it never reached FailCounter >= 2 because it falls
  through to the end of the loop, which sets FailCounter = 0. Add a continue
  just like the DNS rotation code has, so that the retry actually fails after
  2 attempts. Also rework the error logic to forward the actual error message.
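  The loop shape being described, as an illustrative sketch (names and
  structure are simplified, not the real http method main loop):

    #include <iostream>

    // Without the `continue`, the failure path would fall through to the
    // bottom of the loop where FailCounter is reset, so the "give up after
    // two attempts" cap could never trigger.
    int FetchWithRetries()
    {
       int FailCounter = 0;
       while (true)
       {
          bool const TransientError = true; // pretend every read fails
          if (TransientError)
          {
             ++FailCounter;
             if (FailCounter >= 2)
             {
                std::cerr << "E: giving up after " << FailCounter << " attempts\n";
                return 1;
             }
             continue; // retry; do NOT fall through to the reset below
          }

          // ... successful transfer handling ...
          FailCounter = 0; // reset only after a successful round trip
          return 0;
       }
    }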
* Merge branch 'pu/http-fixes-2' into 'master' (Julian Andres Klode, 2020-08-04; 1 file, -1/+2)
  Pu/http fixes 2
  See merge request apt-team/apt!125
* http: Always write to the file if there's something to write (Julian Andres Klode, 2020-08-04; 1 file, -1/+2)
  We only add the file to the select() call if we have data to write to it
  prior to the select() call. This is problematic: assuming we enter Go() with
  no data to write to the file, but we read some from the server as well as an
  EOF, we end up not writing it to the file because we did not add the file to
  the select.

  We can't always add the file to the select(), because it's basically always
  ready and we don't want to wake up if we don't have anything to read or
  write. So as a solution, let's just always write data to the file if there's
  data to write to it. If some is left over, or if some was already present
  when we started Go(), it will still be added to the select() call and
  unblock it.

  Closes: #959518
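  An illustrative event-loop iteration showing the idea (minimal stand-ins,
  not APT's actual Go() or buffer classes):

    #include <algorithm>
    #include <string>
    #include <sys/select.h>
    #include <unistd.h>

    // Minimal stand-in for the server-to-file transfer buffer.
    struct Buffer
    {
       std::string Data;
       bool Empty() const { return Data.empty(); }
       void ReadFrom(int fd)
       {
          char tmp[4096];
          ssize_t const n = read(fd, tmp, sizeof(tmp));
          if (n > 0)
             Data.append(tmp, static_cast<size_t>(n));
       }
       void WriteTo(int fd)
       {
          ssize_t const n = write(fd, Data.data(), Data.size());
          if (n > 0)
             Data.erase(0, static_cast<size_t>(n));
       }
    };

    // The file is added to the write set only if data is already pending,
    // but after select() we flush whenever there is something to write -
    // even if that data (possibly together with EOF) only arrived in this
    // very iteration.
    bool PumpOnce(int ServerFd, int FileFd, Buffer &In)
    {
       fd_set rfds, wfds;
       FD_ZERO(&rfds);
       FD_ZERO(&wfds);
       FD_SET(ServerFd, &rfds);
       if (In.Empty() == false)
          FD_SET(FileFd, &wfds);

       if (select(std::max(ServerFd, FileFd) + 1, &rfds, &wfds, nullptr, nullptr) < 0)
          return false;

       if (FD_ISSET(ServerFd, &rfds))
          In.ReadFrom(ServerFd);

       if (In.Empty() == false) // not gated on FD_ISSET(FileFd, &wfds)
          In.WriteTo(FileFd);
       return true;
    }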
* Merge branch 'pu/less-slaves' into 'master' (Julian Andres Klode, 2020-08-04; 2 files, -7/+7)
  Remove master/slave terminology
  See merge request apt-team/apt!124
* gpgv: Rename master to primary (Julian Andres Klode, 2020-08-04; 1 file, -4/+4)
* CMake: Rename add_slaves() to add_links() (Julian Andres Klode, 2020-07-14; 1 file, -3/+3)
  Sorry!
* http: Redesign reading of pending data (Julian Andres Klode, 2020-07-24; 1 file, -10/+8)
  Instead of reading the data early, disable the timeout for the select() call
  and read the data later. Also, change Read() to call read() only once to
  drain the buffer in such instances. We could optimize this to call read()
  multiple times if there is also pending stuff on the socket, but that is
  slightly more complex and should not provide any benefits.
* http: On select timeout, error out directly, do not call Die() (Julian Andres Klode, 2020-07-24; 1 file, -1/+1)
  The error handling in Die() that's supposed to add useful error messages is
  not super useful here.
* http: Finish copying data from server to file before sending stuff to server (Julian Andres Klode, 2020-07-24; 1 file, -7/+7)
  This avoids a case where we read data, then write to the server and only
  then realize the connection was closed. It is somewhat slower, though.
* http: Die(): Do not flush the buffer, error out instead (Julian Andres Klode, 2020-07-24; 1 file, -18/+3)
  By changing the buffer implementation to return true if it read or wrote
  something, even on EOF, we should not have a need to flush the buffer in
  Die() anymore - we should only be calling Die() if the buffer is empty now.
* http: Only return false for EOF if we actually did not read anything (Julian Andres Klode, 2020-07-24; 1 file, -4/+4)
  This should avoid the need to Flush the buffer in Die(), because if we read
  anything we are returning true, and not entering Die() at that point. Also,
  Write() does not have a concept of EOF, so get rid of the code handling that
  there. Was that copied from Read()?
* http: Die(): Merge flushing code from Flush() (Julian Andres Klode, 2020-07-24; 1 file, -0/+5)
  Die() needs its own copy of Flush() because it needs to return success or
  failure based on some states, but those are not precisely the same as
  Flush()'s: Flush() will always return false at the end, for example, whereas
  we want to fall through to our error handling.
* http: Always Close() the connection in Die() (Julian Andres Klode, 2020-07-24; 1 file, -2/+2)
  If we reached Die() there was an issue with the server connection, so we
  should always explicitly close it.
* Reorder config check before checking systemd for non-interactive http (David Kalnischkies, 2020-07-02; 1 file, -9/+13)
  If this option is disabled (which it is by default in Debian), we don't have
  to make the call and the checks around it. Not that it really matters that
  much; if it did, we would be better off checking only once.
* Replace some magic 64*1024 with APT_BUFFER_SIZE (Julian Andres Klode, 2020-06-23; 1 file, -1/+1)
* ubuntu: http: Add non-interactive to user agent if run by systemd (Julian Andres Klode, 2020-04-09; 2 files, -2/+21)
  Include that apt is being run from a service in the user agent, so traffic
  can be analysed for interactive vs non-interactive use, and prioritised
  accordingly. It looks like this now:

    User-Agent: Debian APT-HTTP/1.3 (2.0.1) non-interactive

  A previous version included the full service names, but this raised some
  privacy concerns. LP: #1825000
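  A hedged sketch of how such a User-Agent string could be assembled; checking
  the INVOCATION_ID environment variable (which systemd sets for service
  units) is an assumption for illustration here, APT's actual detection may
  differ:

    #include <cstdlib>
    #include <string>

    // The resulting header value matches the example above; only the
    // detection mechanism is assumed.
    std::string BuildUserAgent(std::string const &Version)
    {
       std::string UA = "Debian APT-HTTP/1.3 (" + Version + ")";
       if (std::getenv("INVOCATION_ID") != nullptr)
          UA += " non-interactive";
       return UA;
    }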
* cdrom: Remove old udev dlopen stuff (Julian Andres Klode, 2020-02-26; 1 file, -1/+0)
* Remove code tagged APT_PKG_590, add some missing includes (Julian Andres Klode, 2020-02-18; 1 file, -0/+1)
  Remove all code scheduled to be removed after 5.90, and fix files to include
  the headers they previously got via hashes.h including more headers.
* seccomp: Allow recvmmsg_time64() and futex_time64() (Julian Andres Klode, 2020-02-10; 1 file, -0/+2)
  Saw those in a libseccomp commit; figured we should add them too.
* seccomp: Allow time64 variants (>402,<415) of allowed syscalls (Julian Andres Klode, 2020-02-09; 1 file, -0/+6)
  Went through the list of syscalls starting at 403 (and looked one back) up
  to the next unknown syscalls, and added all those that were present in their
  non-time64 variant. Closes: #951012
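  With libseccomp, allowing such time64 variants looks roughly like this; the
  rule set below is an illustrative sketch, limited to the two syscalls named
  in the earlier commit, and needs a libseccomp new enough to know these names:

    #include <seccomp.h>

    // Add ALLOW rules for the time64 variants alongside their classic
    // counterparts. Error handling is minimal on purpose.
    int AllowTime64Variants(scmp_filter_ctx ctx)
    {
       int const syscalls[] = {
          SCMP_SYS(recvmmsg_time64),
          SCMP_SYS(futex_time64),
       };
       for (int sc : syscalls)
          if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, sc, 0) != 0)
             return -1;
       return 0;
    }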
* netrc: Add warning when ignoring entries for unencrypted protocols (Julian Andres Klode, 2020-01-15; 1 file, -0/+12)
  Commit 93f33052de84e9aeaf19c92291d043dad2665bbd restricted auth.conf entries
  to only apply to https by default, but this was silent - there was no
  information why http sources with auth.conf entries suddenly started
  failing. Add such information, and extend the test case to cover it.
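  A conceptual sketch of the check and warning; the message wording and the
  override flag are made up for illustration, not APT's actual output:

    #include <iostream>
    #include <string>

    // Warn when an auth.conf/netrc entry would apply to a non-TLS source and
    // is therefore ignored by default.
    void CheckAuthEntry(std::string const &Scheme, std::string const &Host,
                        bool AllowUnencrypted /* hypothetical override */)
    {
       if (Scheme != "https" && AllowUnencrypted == false)
          std::cerr << "W: Ignoring login data for " << Host
                    << " because the connection is unencrypted (" << Scheme
                    << ")\n";
    }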
* Rename _count() macro to APT_ARRAY_SIZE() (Julian Andres Klode, 2020-01-07; 1 file, -1/+2)
* Fix some style warnings from cppcheck (David Kalnischkies, 2019-11-26; 1 file, -11/+9)
  Unused variable, std::algorithms instead of raw for-loops. There should be
  no observable difference in behaviour.
  Reported-By: cppcheck
  Gbp-Dch: Ignore
* Fix typos reported by codespell in code comments (David Kalnischkies, 2019-07-10; 2 files, -5/+6)
  Also in old changelogs, but nothing really user visible like error messages
  or alike, so barely noteworthy.
  Reported-By: codespell
  Gbp-Dch: Ignore
* Apply various suggestions by cppcheck (David Kalnischkies, 2019-07-08; 10 files, -22/+16)
  Reported-By: cppcheck
* RFC1123StrToTime: Accept const std::string& as first argument (Julian Andres Klode, 2019-06-17; 1 file, -1/+1)
  We are converting to std::string anyway by passing to istringstream, and
  this removes the need for .c_str() in callers.
* http: Fix Host header in proxied https connections (Simon Körner, 2019-06-11; 1 file, -3/+3)
  Currently CONNECT requests use the name of the proxy as the Host value,
  instead of the origin server's name. According to RFC 2616, "The Host field
  value MUST represent the naming authority of the origin server or gateway
  given by the original URL." The current implementation causes problems with
  some proxy vendors. This commit fixes this.
  [jak: Adding a test case]
  See merge request apt-team/apt!66
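  A sketch of the resulting request; the exact header formatting in APT may
  differ, but per the RFC both the request target and the Host header name the
  origin server rather than the proxy:

    #include <string>

    // Build the CONNECT request sent to the proxy for a TLS tunnel.
    std::string BuildConnectRequest(std::string const &OriginHost,
                                    unsigned int OriginPort)
    {
       std::string const Authority =
          OriginHost + ":" + std::to_string(OriginPort == 0 ? 443 : OriginPort);
       std::string Req = "CONNECT " + Authority + " HTTP/1.1\r\n";
       Req += "Host: " + Authority + "\r\n"; // origin server, not the proxy
       Req += "\r\n";
       return Req;
    }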
* Merge tag '1.8.2' (Julian Andres Klode, 2019-06-11; 1 file, -28/+51)
  apt Debian release 1.8.2
* methods: https: handle requests for TLS re-handshake (Michael Zhivich, 2019-05-21; 1 file, -28/+51)
  When accessing a repository protected by TLS mutual auth, apt may receive a
  "re-handshake" request from the server, which must be handled in order for
  the download to proceed. This situation arises when the server requests a
  client certificate based on the resource path provided in the GET request,
  after the initial handshake in UnwrapTLS() has already occurred and a secure
  connection has been established. This issue has been observed with an
  Artifactory-backed Debian repository.

  To address the issue, split the TLS handshake code out into its own method
  in TlsFd, and call it when the GNUTLS_E_REHANDSHAKE error is received.

  Signed-off-by: Michael Zhivich <mzhivich@akamai.com>
  (merged from Debian/apt#93)
  LP: #1829861
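  A sketch of the described handling using the public GnuTLS API; the loop
  structure and error handling are illustrative, not the actual TlsFd code:

    #include <gnutls/gnutls.h>
    #include <sys/types.h>

    // Read from the TLS session; if the server requests a re-handshake,
    // run the handshake again and then retry the read.
    ssize_t ReadWithRehandshake(gnutls_session_t session, void *buf, size_t size)
    {
       while (true)
       {
          ssize_t const n = gnutls_record_recv(session, buf, size);
          if (n >= 0)
             return n;
          if (n == GNUTLS_E_REHANDSHAKE)
          {
             int hs;
             do
                hs = gnutls_handshake(session);
             while (hs < 0 && gnutls_error_is_fatal(hs) == 0);
             if (hs < 0)
                return hs; // re-handshake failed for good
             continue;     // handshake done, retry the read
          }
          if (n == GNUTLS_E_INTERRUPTED || n == GNUTLS_E_AGAIN)
             continue;
          return n; // some other (possibly fatal) error
       }
    }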
* apt-pkg: URI: Add 'explicit' to single argument constructor (Julian Andres Klode, 2019-04-30; 9 files, -17/+17)
  This needs a fair amount of changes elsewhere in the code, hence this is
  separate from the previous commits.
* Add 'explicit' to most single argument constructors (Julian Andres Klode, 2019-04-30; 1 file, -1/+1)
  This prevents implicit conversions that we do not want, such as having a
  FileFd* being converted to a debListParser. Two cases are not yet handled
  because they require changes in code using them:
    1. The classes in hashes.h
    2. The URI class - this one is used quite a lot
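  A small illustration of what `explicit` prevents; the classes below are
  simplified stand-ins for the FileFd/debListParser case mentioned above, not
  the real APT types:

    // Without `explicit`, a FileFd* silently converts into a Parser wherever
    // a Parser is expected; with it, the intent has to be spelled out.
    struct FileFd { /* ... */ };

    struct Parser
    {
       explicit Parser(FileFd *File) : File(File) {}
       FileFd *File;
    };

    void Process(Parser const &) {}

    void Caller(FileFd *Fd)
    {
       // Process(Fd);        // accidental implicit conversion: now a compile error
       Process(Parser(Fd));   // the conversion must be written explicitly
    }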