path: root/apt-pkg/acquire-worker.cc
Commit message (Author, Date; files changed, lines -deleted/+added)
* apt-pkg: URI: Add 'explicit' to single argument constructor (Julian Andres Klode, 2019-04-30; 1 file, -1/+1)
  This needs a fair amount of changes elsewhere in the code, hence this is separate from the previous commits.
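  As a generic illustration of why this ripples through the code base (the URI class below is a stand-in, not apt's actual definition): marking a single-argument constructor 'explicit' disables implicit conversions, so every call site that relied on them has to be touched.

      #include <string>

      class URI                                // stand-in class, illustration only
      {
         std::string Data;
       public:
         explicit URI(std::string const &U) : Data(U) {}
      };

      static void Fetch(URI const &) {}

      int main()
      {
         Fetch(URI("http://deb.example.org/"));            // fine: conversion is spelled out
         // Fetch(std::string("http://deb.example.org/")); // no longer compiles with 'explicit'
      }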
* acq: worker: Move CurrentSize, TotalSize, ResumePoint to CurrentItem (Julian Andres Klode, 2019-04-30; 1 file, -9/+7)
  These status fields belong to the current item, so move them there. This prepares us for eventually having multiple current items.
* Disable deprecated methods (ftp, rsh, ssh) by default (Julian Andres Klode, 2019-01-31; 1 file, -0/+2)
  These methods are not supposed to be used anymore; they are not actively maintained and may hence contain odd bugs.
  Fixes !49
* Drop alternative URIs we got a hash-based fail from (David Kalnischkies, 2018-05-11; 1 file, -36/+48)
  If we got a file but it produced a hash error, mismatched size or similar, we shouldn't fall back to alternative URIs as they will likely produce the same error. If we can, we should instead use another mirror. We used to be a lot stricter by stopping all tries for this file if we got a non-404 (or a hash-based) failure, but that is too harsh as we really want to try other mirrors (if we have them) in the hope that they have the expected and correct files.
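  A self-contained sketch of this policy, using illustrative types rather than apt's real ones: content-based failures skip the remaining alternative URIs for this file, other failures still fall through to them, and other mirrors stay in play either way.

      #include <string>
      #include <vector>

      // Illustrative failure categories, not apt's actual failure reasons.
      enum class FailKind { HashSumMismatch, MaximumSizeExceeded, NotFound, Timeout };

      static bool ContentBasedFailure(FailKind K)
      {
         return K == FailKind::HashSumMismatch || K == FailKind::MaximumSizeExceeded;
      }

      static std::vector<std::string> NextCandidates(std::vector<std::string> AltURIs,
                                                     std::vector<std::string> OtherMirrors,
                                                     FailKind K)
      {
         if (ContentBasedFailure(K))
            return OtherMirrors;     // alternatives likely serve the same bad content
         AltURIs.insert(AltURIs.end(), OtherMirrors.begin(), OtherMirrors.end());
         return AltURIs;             // otherwise try the alternatives first, then other mirrors
      }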
* Remove obsolete RCS keywords (Guillem Jover, 2018-05-07; 1 file, -1/+0)
  Prompted-by: Jakub Wilk <jwilk@debian.org>
* Fix various typos reported by spellcheckers (David Kalnischkies, 2018-05-05; 1 file, -1/+1)
  Reported-By: codespell & spellintian
  Gbp-Dch: Ignore
* require methods to request AuxRequest capability at startup (David Kalnischkies, 2018-01-03; 1 file, -45/+66)
  Allowing a method to request work from other methods is a powerful capability which could be misused or exploited, so to limit the surface slightly we let methods opt in to this capability at startup.
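  A rough sketch of the opt-in idea; the "AuxRequests" field name and the parsing helper are assumptions for illustration, not a verified rendering of the method protocol.

      #include <string>

      struct MethodCapabilities
      {
         bool AuxRequests = false;   // stays false unless the method announced it at startup
      };

      // Hypothetical parser for the body of the startup capabilities message.
      static void ParseCapabilities(std::string const &Message, MethodCapabilities &Caps)
      {
         Caps.AuxRequests = Message.find("AuxRequests: true") != std::string::npos;
      }

      // A later request for an auxiliary file is only honoured if the method opted in.
      static bool MayRequestAuxFiles(MethodCapabilities const &Caps)
      {
         return Caps.AuxRequests;
      }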
* reimplement and simplify mirror:// method (David Kalnischkies, 2018-01-03; 1 file, -19/+6)
  Embedding an entire acquire stack and HTTP logic in the mirror method made it rather heavyweight and fragile. This reimplementation goes the other way by doing only the bare minimum in the method itself and instead redirecting the actual download of files to their proper methods. It drops the (in the real world) unused query-string feature as it isn't really implementable in the new architecture.
* allow a method to request auxiliary files (David Kalnischkies, 2018-01-03; 1 file, -1/+59)
  A method may need a file to operate, like the mirror method needs to get a list of mirrors before it can redirect the actual requests to them. That could easily be solved by moving the logic into libapt directly, but by allowing a method to request other methods to do something we can keep this logic contained in the method and also allow e.g. methods which perform binary patching or similar things. Previously they would need to implement their own acquire system inside the existing one, which in all likelihood would not support the same features and methods nor operate with security similar to what we already have running 'above' the requesting method. That said, to avoid methods producing conflicts with "proper" files we are downloading, a new directory is introduced to keep the auxiliary files in.
  [The message magic number 351 is a tribute to the German Grundgesetz article 35 paragraph 1, which defines that all authorities of the state(s) help each other on request.]
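  For illustration, a message in the spirit of this request, written out as a C++ string constant; only the 351 code comes from the commit above, the field names and values are hypothetical.

      // Hypothetical field names and values -- only "351" is taken from the commit.
      static char const *const ExampleAuxRequest =
          "351 Aux Request\n"
          "URI: mirror+http://deb.example.org/debian/dists/unstable/InRelease\n"
          "Aux-URI: http://deb.example.org/debian/mirrors.txt\n"
          "MaximumSize: 1048576\n"
          "\n";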
* implement fallback to alternative URIs for all items (David Kalnischkies, 2017-12-13; 1 file, -7/+46)
  For deb files we always supported falling back from one server to the other if one failed to download the deb, but that was hardwired in the handling of this specific item. By moving this alongside the retry infrastructure we can implement it for all items and allow methods to use this as well by providing additional URIs in a redirect.
* implement Acquire::Retries support for all items (David Kalnischkies, 2017-12-13; 1 file, -9/+25)
  Moving the Retry implementation from individual items to the worker implementation not only gives every file retry capability instead of just a selected few but also avoids needing to implement it in each item (incorrectly).
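  A self-contained sketch of what this means in practice: one retry loop in the worker instead of per-item code. The hard-coded count and TryFetch() stand in for the real configuration lookup (the long-standing apt.conf option Acquire::Retries) and the real download.

      #include <iostream>
      #include <string>

      static bool TryFetch(std::string const &Uri)
      {
         static int Attempt = 0;
         std::cout << "fetching " << Uri << " (attempt " << ++Attempt << ")\n";
         return Attempt >= 3;        // pretend the third attempt succeeds
      }

      static bool FetchWithRetries(std::string const &Uri, int Retries)
      {
         do
         {
            if (TryFetch(Uri))
               return true;
         } while (Retries-- > 0);
         return false;
      }

      int main()
      {
         // e.g. Acquire::Retries "3"; allows three further attempts after the first failure
         return FetchWithRetries("http://deb.example.org/pool/main/a/apt/apt.deb", 3) ? 0 : 1;
      }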
* Don't segfault if receiving a method warning on empty queue (Julian Andres Klode, 2017-10-25; 1 file, -1/+1)
  We would like to issue a warning about seccomp support in Configuration(), but since the queue is empty, there is no current item to show the URL for and we get a segfault. Show the protocol instead.
* Run Proxy-Auto-Detect script from main process (Julian Andres Klode, 2017-10-22; 1 file, -0/+12)
  This avoids running the Proxy-Auto-Detect script inside the untrusted (well, less trusted for now) sandbox. This will allow us to restrict the http method from fork()ing or exec()ing via seccomp.
* Reformat and sort all includes with clang-format (Julian Andres Klode, 2017-07-12; 1 file, -9/+9)
  This makes it easier to see which header includes what. The changes were done by running

      git grep -l '#\s*include' \
          | grep -E '.(cc|h)$' \
          | xargs sed -i -E 's/(^\s*)#(\s*)include/\1#\2 include/'

  to modify all include lines by adding a space, and then running ./git-clang-format.sh.
* do not generate Maximum-Size if we already have that field (David Kalnischkies, 2016-12-31; 1 file, -3/+5)
  Any respective parser will do the right thing and grab the last value, but it's better style to generate that field only once.
  Gbp-Dch: Ignore
* Merge branch 'portability/freebsd' (Julian Andres Klode, 2016-08-27; 1 file, -2/+2)
  * Make root group configurable via ROOT_GROUP (Julian Andres Klode, 2016-08-26; 1 file, -2/+2)
    This is needed on BSD where root's default group is wheel, not root.
* do fail on weakhash/loop earlier in acquire (David Kalnischkies, 2016-08-24; 1 file, -41/+1)
  The bug report shows a segfault caused by the code not doing the correct magical dance to remove an item from inside a queue in all cases. We could try hard to fix this, but it is actually better and also easier to perform these checks (which cause instant failure) earlier, so that the items haven't entered queue(s) yet, which in turn makes cleanup trivial. The result is that we actually end up failing "too early": if we weren't careful, download errors would be logged before that process was even started. Not a problem for the acquire system, but likely to confuse users and programs alike if they see the download process producing errors before apt was technically allowed to do an acquire (it didn't, so no violation, but it looks like it to the untrained eye).
  Closes: 835195
* check internal redirections for loops, too (David Kalnischkies, 2016-08-17; 1 file, -0/+19)
  Now that we have the redirection loop checker centrally in our items, we can also use it to prevent internal redirections from looping due to bugs, as in a few instances we get into the business of rewriting the URI we will query ourselves because we predict we would see such a redirect anyway. Our code has no bugs of course, hence no practical difference. ;)
  Gbp-Dch: Ignore
* log with the failed item description, not with next try (David Kalnischkies, 2016-08-16; 1 file, -3/+4)
  The failure handling frequently changes URI & Description of the failed item to try a slightly different combination which might work, but the logging of the failure happens only afterwards, as the same failure handling decides if this is a critical error or not, so we need to keep a backup of the old values here instead of logging the potentially new content. A purely cosmetic issue, but it can still be confusing for humans.
* allow methods to be disabled and redirected via config (David Kalnischkies, 2016-08-10; 1 file, -7/+24)
  To prevent accidents like adding http-sources while using tor+http it can make sense to allow disabling methods. It might even make sense to allow "redirections" and adding "symlinked" methods via configuration. This could e.g. allow using different options for certain sources by adding and configuring a "virtual" new method which picks up the config based on the name it was called with like e.g. http does if called as tor+http.
* detect redirection loops in acquire instead of workers (David Kalnischkies, 2016-08-10; 1 file, -0/+10)
  Having the detection handled in specific (http) workers means that a redirection loop over different hostnames isn't detected. It's also not a good idea to have this implemented in each method independently, even if it would work.
* suggest transport-packages based on established name scheme (David Kalnischkies, 2016-08-10; 1 file, -2/+4)
  apt-transports not shipped in apt directly are usually named apt-transport-%, with % being what is in the name of the transport. tor additionally introduced aliases via %+something, which isn't a bad idea, so we strip the +something part from the method name before suggesting the installation of an apt-transport-% package. This saves us maintaining a list of existing transports, which would create a two-class system of known and unknown transports that would be quite arbitrary and is unfriendly to backports.
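  A minimal sketch of the name scheme described above (not the actual implementation): strip the "+something" alias suffix and derive the package name from the remainder.

      #include <iostream>
      #include <string>

      static std::string SuggestedTransportPackage(std::string Method)
      {
         std::string::size_type const Plus = Method.find('+');
         if (Plus != std::string::npos)
            Method.erase(Plus);                 // "tor+http" -> "tor"
         return "apt-transport-" + Method;      // -> "apt-transport-tor"
      }

      int main()
      {
         std::cout << SuggestedTransportPackage("tor+http") << '\n';   // apt-transport-tor
         std::cout << SuggestedTransportPackage("https") << '\n';      // apt-transport-https
      }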
* add insecure (and weak) allow-options for sources.list (David Kalnischkies, 2016-06-22; 1 file, -11/+5)
  Weak had no dedicated option before, and Insecure and Downgrade were both global options, which given the effect they all have on security is rather bad. Setting them for individual repositories only isn't great, but it is at least slightly better and also more consistent with other settings for repositories.
* better error message for insufficient hashsums (David Kalnischkies, 2016-06-22; 1 file, -3/+33)
  Downloading and saying "Hash Sum mismatch" isn't very friendly from a user POV, so with this change we try to detect such cases early on and report them, preferably before the download has even started.
  Closes: 827758
* allow redirection for items without a space in the desc again (David Kalnischkies, 2016-05-03; 1 file, -8/+11)
  Broken in a4b8112b19763cbd2c12b81d55bc7d43a591d610. If an item has a description which includes no space and is redirected to another mirror, the code which wants to rewrite the description expects a space in there, but can't find it, and the unguarded substr call on the string fails with an exception thrown… Guard it properly and everything is fine.
* show more details for "Writing more data" errors, tooDavid Kalnischkies2016-04-251-5/+15
| | | | | | They are the small brothers of the hashsum mismatch, so they deserve a similar treatment even through we have for architectual reasons not a much to display as for hashsum mismatches for now.
* show more details for "Hash Sum mismatch" errorsDavid Kalnischkies2016-04-251-0/+3
| | | | | | | | | | | | | | | | Users tend to report these errors with just this error message… not very actionable and hard to figure out if this is a temporary or 'permanent' mirror-sync issue or even the occasional apt bug. Showing the involved hashsums and modification times should help in triaging these kind of bugs – and eventually we will have less of them via by-hash. The subheaders aren't marked for translation for now as they are technical glibberish and probably easier to deal with if not translated. After all, our iconic "Hash Sum mismatch" is translated at least. These additions were proposed in #817240 by Peter Palfrader.
* stop handling items in doomed transactions (David Kalnischkies, 2016-04-07; 1 file, -52/+63)
  With the previous commit we track the state of transactions, so we can now use our knowledge to avoid processing data for a transaction which was already closed (via an abort in this case). This is needed as multiple independent processes are interacting in the process, so there isn't a simple immediate full-engine stop, and it would also be bad to teach each and every item how to check if its manager has failed subordinate and what to do in that case.
  In the pdiff case, which deals (potentially) with many items during its lifetime, e.g. a hashsum mismatch in another file can abort the transaction the file we try to patch via pdiff belongs to. This causes some of the items (which are already done) to be aborted with it, but items still in the process of acquisition continue processing and will later try to use all the items together, failing in strange ways as cleanup already happened.
  The chosen solution is to dry up the communication channels instead by ignoring new requests for data acquisition, canceling requests which are not assigned to a queue and not calling Done/Failed on items anymore. This means that e.g. already started or pending (e.g. pipelined) downloads aren't stopped and continue as normal for now, but they remain in partial/ and aren't processed further, so the next update command will pick them up and put them to good use while the current process fails updating (for this transaction group) in an orderly fashion.
  Closes: 817240
  Thanks: Barr Detwix & Vincent Lefevre for log files
* Use descriptive URIs in 104 Warning messages (Julian Andres Klode, 2016-03-16; 1 file, -1/+1)
  This makes the new GPG related warnings much nicer to read; for example, the second one here replaces the first one:

      W: gpgv:/var/lib/apt/lists/example.com_dists_stable_InRelease: Weak ...
      W: http://example.com/dists/stable/InRelease: Weak ...
* apt-pkg/acquire-worker.cc: Introduce 104 Warning message (Julian Andres Klode, 2016-03-15; 1 file, -0/+4)
  This can be used by workers to send warnings to the main program. The messages will be passed to _error->Warning() by APT with the URI prepended. We are not going to make that really public now, as the interface might change a bit.
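  As an illustration of the flow (the "Message" field name is an assumption here, mirroring the other textual status messages, and the warning text is made up):

      // What a method might emit on its status channel; field name and text are illustrative.
      static char const *const ExampleWarning =
          "104 Warning\n"
          "Message: Signature uses a weak digest algorithm (SHA1)\n"
          "\n";
      // The worker would then forward this roughly as
      //    _error->Warning("%s: %s", CurrentItemURI, ParsedMessage);
      // i.e. with the URI of the item currently being fetched prepended.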
* act on various suggestions from cppcheck (David Kalnischkies, 2016-01-26; 1 file, -17/+4)
  Reported-By: cppcheck
  Git-Dch: Ignore
* do not use _apt for file/copy sources if it isn't world-accessible (David Kalnischkies, 2015-11-19; 1 file, -1/+1)
  In 0940230d we started dropping privileges for file (and a bit later for copy, too) with the intent of making this uniform for all methods. The commit message says that the source will likely fail based on the compressors already – and there isn't much secret in the repository content. After all, after apt has run the update everyone can access the content via apt anyway… There are sources though which worked before, mostly single-deb ones (and those with the uncompressed files available). Especially the first may be surprising for users, so instead of failing we make it so that apt detects that it can't access a source as _apt and, if so, doesn't drop privileges (for all sources!) – but we limit this to file/copy, so the uncompress which might be needed will still fail – but that failed before this regression, too. We display a notice about this, mostly so that if it still fails (e.g. compressed) the user has some idea what is wrong.
  Closes: 805069
* wrap every unlink call to check for != /dev/null (David Kalnischkies, 2015-11-04; 1 file, -3/+3)
  Unlinking /dev/null is bad; we shouldn't do that. Also, we should print at least a warning if we tried to unlink a file but didn't manage to pull it off (ignoring the case where the file is /dev/null or doesn't exist in the first place). This got triggered by a problem relatively unlikely to be hit in pkgAcquire::Worker::PrepareFiles, which would, while handling temporarily uncompressed files (which are set to be kept compressed), figure out that two files are the same and prepare for sharing by deleting them. Bad move. That also shows why not printing a warning is a bad idea, as this hid the error in non-root test runs.
  Git-Dch: Ignore
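  A sketch of the wrapper idea described here (names are illustrative, not apt's real helper): refuse to touch /dev/null and warn if removing an existing file fails.

      #include <cerrno>
      #include <cstdio>
      #include <cstring>
      #include <string>
      #include <unistd.h>

      static bool RemoveFileSafely(char const *Caller, std::string const &Path)
      {
         if (Path.empty() || Path == "/dev/null")
            return true;                               // nothing we should ever remove
         if (unlink(Path.c_str()) != 0 && errno != ENOENT)
         {
            std::fprintf(stderr, "W: %s: could not remove %s: %s\n",
                         Caller, Path.c_str(), std::strerror(errno));
            return false;
         }
         return true;
      }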
* add ConnectionTimedOut to transient failreasons list (David Kalnischkies, 2015-11-04; 1 file, -6/+10)
  All other reasons from methods/connect.cc were already included.
  Git-Dch: Ignore
* use std-algorithms instead of manual loops to avoid overflow warning (David Kalnischkies, 2015-09-14; 1 file, -2/+2)
  Reported-By: gcc
  Understandable: no
  Git-Dch: Ignore
* use unusable-for-security hashes for integrity checks (David Kalnischkies, 2015-09-01; 1 file, -0/+6)
  We want to declare some hashes as not enough for security, so that a user will need --allow-unauthenticated or similar to get data secured only by those hashes, but we can still use these hashes for integrity checks if we got them.
* correct 'apt update' download summary line (David Kalnischkies, 2015-08-27; 1 file, -5/+6)
  Fetched() was reported for mostly nothing, while we should be calling it for files worked with from non-local sources (e.g. http, but not file or xz). Previously this was called from an acquire item, but got moved to the acquire worker instead to avoid having it (re)implemented in all items, but the checks were faulty.
* Fix all the wrong removals of includes that iwyu got wrong (Michael Vogt, 2015-08-17; 1 file, -0/+1)
  Git-Dch: ignore
* Cleanup includes after running iwyu (Michael Vogt, 2015-08-17; 1 file, -4/+0)
* Replace all "press enter" occurrences with "press [Enter]"Luca Bruno2015-08-121-1/+1
| | | | | Thanks: Andre Felipe Machado for initial patch Closes: 414848
* enhance "hit paywall" error message to mention the probable causeDavid Kalnischkies2015-08-101-4/+5
| | | | | | | | | Reporting errors from Done() is bad for progress reporting and such, so factoring this out is a good idea and we start with moving the supposed- to-be clearsigned file isn't clearsigned out first – improving the error message in the process as we use the same message for a similar case (NODATA) as this is what I have to look at with the venue wifi at DebCamp and the old errormessage doesn't really say anything.
* handle site-changing redirects as mirror changes (David Kalnischkies, 2015-08-10; 1 file, -16/+32)
  Redirectors like httpredir.debian.org orchestrate the download from multiple (hopefully close) mirrors while having only a single central sources.list entry, by using redirects. This has the effect that the progress report always shows the source it started with, not the mirror it ends up fetching from, which is especially problematic for error reporting: a report of a "Hashsum mismatch" for the redirector URI is next to useless, as nobody knows which URI it was really fetched from (regardless of it coming from a user or via the report script) from this output alone. You would need to enable debug output and hope for the same situation to arise again…
  We hence reuse the UsedMirror field of the mirror:// method and detect redirects which change the site, declaring this new site as the UsedMirror (and adapting the description). The disadvantage is that there is no obvious mapping anymore (it is relatively easy to guess through with some experience) from progress lines to sources.list lines, so error messages need to take care to use the Target description (rather than the current Item description) if they want to refer to the sources.list entry.
* fix memory leaks reported by -fsanitize (David Kalnischkies, 2015-08-10; 1 file, -1/+1)
  Various small leaks here and there. Nothing particularly big, but still good to fix. Found by the sanitizers while running our testcases.
  Reported-By: gcc -fsanitize
  Git-Dch: Ignore
* make all d-pointer * const pointers (David Kalnischkies, 2015-08-10; 1 file, -9/+4)
  Doing this disables the implicit copy assignment operator (among others), which would cause havoc if used on the classes, as it would just copy the pointer, not the data the d-pointer points to. For most of the classes we don't need a copy assignment operator anyway, and in many classes it was broken before as many contain a pointer of some sort. Only for our Cacheset Container interfaces do we define an explicit copy assignment operator, which could later be implemented to copy the data from one d-pointer to the other if we need it.
  Git-Dch: Ignore
* apply various style suggestions by cppcheck (David Kalnischkies, 2015-08-10; 1 file, -1/+1)
  Some of them modify the ABI, but given that we prepare a big one already, these few hardly count for much.
  Git-Dch: Ignore
* call URIStart in cdrom and file method (David Kalnischkies, 2015-06-15; 1 file, -1/+0)
  All other methods call it, so they should follow along even if the work they do afterwards is hardly breathtaking and usually results in a URIDone pretty soon, but the acquire system tells the individual item about this via a virtual method call, so even though none of our existing items contains any critical code in these, maybe one day they might. Consistency at least once…
  This also has a good side effect: file: and cdrom: requests now appear in the 'apt-get update' output. Finally - it never made sense to hide them for me. Okay, I guess it made sense before the new hit behavior, but now that you can actually see the difference in an update it makes sense to see if a file: repository changed or not as well.
* deal better with acquiring the same URI multiple times (David Kalnischkies, 2015-06-15; 1 file, -111/+169)
  This is an unlikely event for indexes and co, but it can happen quite easily e.g. for changelogs, where you want to get the changelogs for multiple binary package(version)s which happen to all be built from a single source. The interesting part is that the Acquire system actually detected this already and set the item requesting the URI again to StatDone - except that this is hardly sufficient: an Item must be Complete=true as well to be considered truly done, and that is only the tip of the ::Done handling iceberg. So instead of this StatDone hack we allow QItems to be owned by multiple items and notify all owners about everything now, so that from the point of view of each item it got the file downloaded just for itself.
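  A simplified sketch of the ownership change (stand-in types, not pkgAcquire's real classes): a queued download can have several owning items, and completion is fanned out to all of them instead of only the first requester.

      #include <string>
      #include <vector>

      struct Item
      {
         bool Complete = false;
         void Done(std::string const &/*Path*/) { Complete = true; }
      };

      struct QItem
      {
         std::string URI;
         std::vector<Item *> Owners;        // previously effectively a single owner

         void NotifyDone(std::string const &Path)
         {
            for (Item *const Owner : Owners)
               Owner->Done(Path);           // every owner runs its full ::Done handling
         }
      };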
* rework hashsum verification in the acquire system (David Kalnischkies, 2015-06-09; 1 file, -78/+111)
  Having every item carry its own code to verify the file(s) it handles is an error-prone process and easy to break, especially if items move through various stages (download, uncompress, patching, …). With a giant rework we centralize (most of) the verification to have a better enforcement rate and (hopefully) less chance for bugs, but it breaks the ABI big time in exchange – and as we break it anyway, it is broken even harder. It shouldn't affect most frontends, as they don't deal with the acquire system at all or implement their own items, but some do and will need to be patched (might be an opportunity to use apt on-board material).
  The theory is simple: items implement methods to decide if hashes need to be checked (in this stage) and to return the expected hashes for this item (in this stage). The verification itself is done in worker message passing, which has the benefit that a hashsum error is now a proper error for the acquire system rather than a Done() which is later revised to a Failed().
* detect Releasefile IMS hits even if the server doesn't (David Kalnischkies, 2015-05-13; 1 file, -4/+4)
  Not all servers we are talking to support If-Modified-Since, and some are not even sending Last-Modified for us, so in an effort to detect such hits we run a hashsum check on the 'old' compared to the 'new' file; we got the hashes for the 'new' one already for "free" from the methods anyway and hence just need to calculate the old ones. This allows us to detect hits even with unsupported servers, which in turn means we benefit from all the new hit behavior also here.
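  A self-contained sketch of the idea: hash the previously stored file and the freshly downloaded one and treat identical content as a hit, even if the server never honoured If-Modified-Since. HashOf() is a placeholder, not a real checksum; apt itself uses its Hashes/HashStringList machinery for this.

      #include <fstream>
      #include <functional>
      #include <sstream>
      #include <string>

      static std::string HashOf(std::string const &Path)   // placeholder, not cryptographic
      {
         std::ifstream File(Path, std::ios::binary);
         std::ostringstream Content;
         Content << File.rdbuf();
         return std::to_string(std::hash<std::string>{}(Content.str()));
      }

      static bool IsIMSHit(std::string const &OldFile, std::string const &NewFile)
      {
         return HashOf(OldFile) == HashOf(NewFile);
      }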