Commit message  [Author, Date, Files, Lines]

* implement 'apt-get files' to access index targets  [David Kalnischkies, 2015-06-11, 8 files, -10/+223]
Downloading additional files is only half the job. We still need a way to let external tools know where the files they requested for download ended up, given that we don't want them to choose their own location. 'apt-get files' is our answer to this: by default it shows information about each IndexTarget in deb822 format, with the ability to filter the records based on individual lines and an option to change the output format. The command also serves as an example of how to get to this information via libapt.

* use an enum instead of strings as IndexTarget::Option interface  [David Kalnischkies, 2015-06-11, 4 files, -12/+38]
Strings are easy to typo, and we can keep the extensibility we require here with a simple enum we can append to without endangering the ABI.

Git-Dch: Ignore

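A minimal sketch of the idea (the names below are modelled on this description, not copied from the libapt headers): an enum key makes typos a compile error, and new keys can be appended without breaking existing callers.

    #include <map>
    #include <string>

    class IndexTargetSketch {                  // hypothetical stand-in for IndexTarget
    public:
       // Appending new enumerators keeps existing callers and the ABI intact.
       enum OptionKeys { SITE, RELEASE, COMPONENT, ARCHITECTURE /*, new keys here */ };

       std::string Option(OptionKeys const Key) const {
          auto const Found = Options.find(Key);
          return Found != Options.end() ? Found->second : "";
       }

    private:
       std::map<OptionKeys, std::string> Options;   // filled by the sources.list parser
    };
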
* use IndexTarget to get to IndexFile  [David Kalnischkies, 2015-06-11, 7 files, -509/+197]
Removes a bunch of duplicated code in the deb-specific parts. Especially the Description part is now handled centrally by IndexTarget instead of being duplicated in the derivations of IndexFile.

Git-Dch: Ignore

* show URI.Path in all acquire item descriptions  [David Kalnischkies, 2015-06-11, 17 files, -50/+98]
It is a rather strange sight that index items use SiteOnly, which strips the Path, while e.g. deb files are downloaded with NoUserPassword, which does not. Important to note here is that for the file transport the Path is pretty important, as there is no Host which would be displayed by Site, which always resulted in "interesting" unspecific errors for "file:".

Adding a 'middle' ground between the two which does show the Path but potentially modifies it (it strips a trailing / if present) solves this "file:" issue, syncs the output and in the end helps to identify which file is meant exactly in progress output and the like, as a single site can have multiple repositories in different paths.

* rename Calculate- to GetIndexTargets and use it as official API  [David Kalnischkies, 2015-06-10, 3 files, -27/+21]
We need a general way to get from a sources.list entry to IndexTargets, and with this change we can walk from the pkgSourceList over the list of metaIndexes it includes to the IndexTargets each metaIndex can have.

Git-Dch: Ignore

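A hedged sketch of that walk as a frontend might do it; the method names follow the commit message, but header locations and exact signatures are assumptions rather than a verified libapt example:

    // Sketch only: headers and signatures are assumptions, not verified API.
    #include <apt-pkg/init.h>
    #include <apt-pkg/configuration.h>
    #include <apt-pkg/pkgsystem.h>
    #include <apt-pkg/sourcelist.h>
    #include <apt-pkg/metaindex.h>
    #include <iostream>

    int main() {
       pkgInitConfig(*_config);
       pkgInitSystem(*_config, _system);

       pkgSourceList List;
       List.ReadMainList();                          // parse sources.list(.d)

       for (auto const *MetaIdx : List)              // one metaIndex per entry
          for (auto const &Target : MetaIdx->GetIndexTargets())
             std::cout << Target.MetaKey << " -> " << Target.URI << "\n";
       return 0;
    }
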
* stop using IndexTarget pointers which are never freed  [David Kalnischkies, 2015-06-10, 4 files, -118/+113]
Creating and passing around a bunch of pointers to IndexTargets (and a vector of pointers to IndexTargets) is probably done to avoid the 'costly' copy of the container, but we are really not in a time-critical operation here and move semantics will help us even further in the future. On the other hand we never do a proper cleanup of these pointers, which is very dirty, even if the structures aren't that big…

The changes, while touching many items, only affect our own hidden class, so we can do that without fearing breaking interfaces or anything.

Git-Dch: Ignore

* store all targets data in IndexTarget struct  [David Kalnischkies, 2015-06-10, 6 files, -108/+97]
We still need an API for the targets, so slowly prepare the IndexTargets to let them take this job.

Git-Dch: Ignore

* abstract the code to iterate over all targets a bit  [David Kalnischkies, 2015-06-10, 3 files, -95/+142]
We have two places in the code which need to iterate over targets and do certain things with them. The first one actually creates these targets for download and the second prepares certain targets for reading.

Git-Dch: Ignore

* replace ULONG_MAX with c++ style std::numeric_limits  [David Kalnischkies, 2015-06-09, 1 file, -2/+2]
For some reason Travis seems to be unhappy about it, claiming it is not defined. Well, let's not think too deeply about it…

Git-Dch: Ignore

* configurable acquire targets to download additional files  [David Kalnischkies, 2015-06-09, 8 files, -192/+376]
First pass at making the acquire system capable of downloading files based on configuration rather than hardcoded entries. It is now possible to instruct 'deb' and 'deb-src' sources.list lines to download more than just Packages/Translation-* and Sources files. Details on how to do that can be found in the included documentation file.

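As a rough, hypothetical sketch of the shape such configuration could take in apt.conf syntax (the key names and substitution variables here are illustrative assumptions, not quoted from the shipped documentation):

    // Hypothetical example: declare an extra download target for 'deb' sources.
    Acquire::IndexTargets::deb::Contents {
       MetaKey "$(COMPONENT)/Contents-$(ARCHITECTURE)";
       ShortDescription "Contents-$(ARCHITECTURE)";
       Description "$(RELEASE)/$(COMPONENT) $(ARCHITECTURE) Contents";
    };
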
* remove debianism file-content verification  [David Kalnischkies, 2015-06-09, 2 files, -40/+2]
The code requires every index file we download to have a Package field, but that doesn't hold true for all indexes we might want to download in the future. Some might not even be deb822-formatted files…

The check was needed as apt used to accept unverifiable files like Translation-*, but nowadays it requires hashes for these as well. Even for unsigned repositories we interpret the Release file as binding now, which means this check isn't triggerable except for repositories which do not have a Release file at all – something which is highly discouraged!

Git-Dch: Ignore

* do not request files if we expect an IMS hit  [David Kalnischkies, 2015-06-09, 8 files, -45/+127]
If we have a file on disk and its hashes are the same in the new Release file as in the old one we have on disk, we know that if we ask the server for the file we will at best get an IMS hit – at worst the server doesn't support this and sends us the (unchanged) file, and we have to run all our checks on it again for nothing. So we can save ourselves (and the servers) some unneeded requests if we figure this out on our own.

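A minimal sketch of that decision, using hypothetical helper types rather than apt's actual classes:

    #include <string>

    // Hypothetical record of what a Release file lists for one index target.
    struct ReleaseEntry { std::string SHA256; unsigned long long Size; };

    // If the old and the new Release file agree on the target's hashes and we
    // already have the file on disk, a request can at best yield an IMS hit,
    // so we skip it entirely.
    static bool RequestIsNeeded(ReleaseEntry const &Old, ReleaseEntry const &New,
                                bool const HaveFileOnDisk) {
       if (HaveFileOnDisk == false)
          return true;                    // nothing local, we have to download
       if (Old.SHA256 == New.SHA256 && Old.Size == New.Size)
          return false;                   // unchanged: reuse the local file
       return true;                       // changed (or unknown): download it
    }
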
* cleanup pdiff support detection decision  [David Kalnischkies, 2015-06-09, 2 files, -45/+45]
It's a bit unclean to create an item just to let the item decide that it can't do anything and let it fail, so instead we let the item creator decide in all cases if patching should be attempted. Also pulls a small trick to get the hashes for the current file without calculating them, by looking at the 'old' Release file if we have it.

Git-Dch: Ignore

* support hashes for compressed pdiff files  [David Kalnischkies, 2015-06-09, 7 files, -28/+125]
At the moment we only have hashes for the uncompressed pdiff files, but via the new '$HASH-Download' field in the .diff/Index hashes can be provided for the .gz compressed pdiff file, which apt will now pick up and use to verify the download. Now we "just" need buy-in from the creators of repositories…

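For illustration, a .diff/Index could then carry a download section for the compressed patches next to the existing ones; the hashes, sizes and patch names below are made up:

    SHA256-Current: <hash-of-current-uncompressed-Packages> 9370484
    SHA256-History:
     <hash-of-old-uncompressed-Packages> 9367480 2015-06-01-0800.32
    SHA256-Patches:
     <hash-of-uncompressed-patch> 3420 2015-06-01-0800.32
    SHA256-Download:
     <hash-of-gz-compressed-patch> 1192 2015-06-01-0800.32.gz
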
* fix download-file using testcases to run as root  [David Kalnischkies, 2015-06-09, 2 files, -16/+17]
Git-Dch: Ignore

* add more parsing error checking for rred  [David Kalnischkies, 2015-06-09, 3 files, -22/+245]
The rred parser is very accepting regarding 'invalid' files. Given that we can't trust the input it might be a bit too relaxed. In any case, checking for more errors can't hurt given that we support only a very specific subset of ed commands.

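For context, the pdiff patches rred applies are restricted ed scripts: essentially only append, change and delete commands, applied with line numbers in descending order. A made-up fragment of such a patch could look like this:

    42,45c
    Package: foo
    Version: 2.0-1
    .
    17d
    3a
    Package: newpkg
    .
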
* check patch hashes in rred worker instead of in the handler  [David Kalnischkies, 2015-06-09, 6 files, -59/+121]
rred is responsible for unpacking and reading the patch files in one go, but we currently only have hashes for the uncompressed patch files, so the handler reads the entire patch file before dispatching it to the worker, which would read it again – both with an implicit uncompress. Worse, while the workers operate in parallel, the handler is the central orchestration unit, so having it busy with work means the workers do (potentially) nothing.

This means rred is working with 'untrusted' data, which is bad. Yet, having the unpack in the handler meant that the untrusted uncompress was done as root, which isn't better either. Now we have it at least contained in a binary which we can harden a bit better. In the long run we want hashes for the compressed patch files though, to be safe.

* rework hashsum verification in the acquire system  [David Kalnischkies, 2015-06-09, 22 files, -1871/+1824]
Having every item carry its own code to verify the file(s) it handles is an error-prone process and easy to break, especially if items move through various stages (download, uncompress, patching, …). With a giant rework we centralize (most of) the verification to have a better enforcement rate and (hopefully) less chance for bugs, but it breaks the ABI big time in exchange – and as we break it anyway, it is broken even harder.

It shouldn't affect most frontends as they don't deal with the acquire system at all or implement their own items, but some do and will need to be patched (might be an opportunity to use apt's on-board material).

The theory is simple: items implement methods to decide if hashes need to be checked (in this stage) and to return the expected hashes for this item (in this stage). The verification itself is done in worker message passing, which has the benefit that a hashsum error is now a proper error for the acquire system rather than a Done() which is later revised to a Failed().

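The shape of that item interface, sketched with hypothetical stand-in types that follow the description above rather than the actual libapt headers:

    #include <string>
    #include <vector>

    // Hypothetical stand-ins for apt's hash container and acquire items.
    struct HashList {
       std::vector<std::string> Hashes;
       bool usable() const { return Hashes.empty() == false; }
       bool operator==(HashList const &O) const { return Hashes == O.Hashes; }
    };

    class AcquireItemSketch {
    public:
       // Does this item need verification in the current stage?
       virtual bool HashesRequired() const { return GetExpectedHashes().usable(); }
       // Which hashes do we expect for this item in the current stage?
       virtual HashList GetExpectedHashes() const = 0;
       virtual ~AcquireItemSketch() {}
    };

    // The worker reports the hashes it calculated while downloading; a mismatch
    // becomes a proper failure instead of a Done() later revised to a Failed().
    bool VerifyDownload(AcquireItemSketch const &Item, HashList const &Calculated) {
       if (Item.HashesRequired() == false)
          return true;
       return Item.GetExpectedHashes() == Calculated;
    }
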
* don't try other compressions on hashsum mismatch  [David Kalnischkies, 2015-06-07, 4 files, -16/+52]
If we e.g. fail on hash verification for Packages.xz, it's highly unlikely that it will be any better with Packages.gz, so we just waste download bandwidth and time. It also causes us to always fall back to the uncompressed Packages file, for which the error is finally reported, which in turn confuses users as the file usually doesn't exist on the mirrors, so a bug in apt is suspected for even trying it…

* Merge branch 'debian/sid' into debian/experimental  [Michael Vogt, 2015-05-22, 11 files, -40/+158]
Conflicts:
    apt-pkg/pkgcache.h
    debian/changelog
    methods/https.cc
    methods/server.cc
    test/integration/test-apt-download-progress

| * Update methods/https.cc now that ServerState::Size is renamed  [Michael Vogt, 2015-05-22, 1 file, -1/+1]
Git-Dch: ignore

| * Merge remote-tracking branch 'upstream/debian/jessie' into debian/sid  [Michael Vogt, 2015-05-22, 61 files, -21141/+21431]
Conflicts:
    apt-pkg/deb/dpkgpm.cc

| | * releasing package apt version 1.0.9.9  [Michael Vogt, 2015-04-28, 1 file, -0/+10]

| | * remove "first package seen is native package" assumption  [David Kalnischkies, 2015-04-22, 3 files, -14/+74]
The fix for #777760 causes packages of foreign (and the native) architectures to be created correctly, but invalidates (like the previously existing, but policy-forbidden, architecture-less packages we had to support for some upgrade scenarios) the assumption that the first (and only) package in the cache for a single-architecture system must be the package for the native architecture (as, where should the other architectures come from, right? Wrong.). Depending on the order of parsing sources, more or fewer packages can be affected by this.

The effects are strange (for apt it mostly affects simulation/debug output, but also apt-mark on these specific packages), which complicates debugging, but they are relatively harmless if understood, as most actions do not need direct named access to packages.

The problem is fixed by removing the single-arch special casing in the code paths which had it (Cache.FindPkg), so they use the same code as multi-arch systems, which use it as a wrapper for Grp.FindPkg. Note that single-arch system code was using Grp.FindPkg before as well if a Grp structure was handily available, so we don't introduce new untested code here: we remove the more brittle, less tested special cases instead (this was planned to be done for Stretch anyhow).

Note further that the method with the assumption itself isn't fixed. As it is a private method I opted for declaring it deprecated instead and removing all its call positions. As it is private, no one can call this method legally (thanks to how C++ works by default it is still an exported symbol though), and fixing it basically means reimplementing code we already have in Grp.FindPkg. Removing rather than fixing seems hence like a good solution.

Closes: 782777
Thanks: Axel Beckert for testing

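In pseudo-libapt terms the fix boils down to always routing the lookup through the group, the path multi-arch systems already used (hypothetical, stubbed types sketched from the description above):

    #include <string>

    struct Package { std::string Name, Arch; };

    struct Group {
       // Find the member of this group built for the given architecture.
       Package *FindPkg(std::string const &/*Arch*/) { return nullptr; }  // stub
    };

    struct CacheSketch {                       // hypothetical stand-in for pkgCache
       Group *FindGrp(std::string const &/*Name*/) { return nullptr; }    // stub

       // Before: single-arch systems assumed "the first package seen is the
       // native one" and returned it directly. After: every lookup goes via
       // the group, i.e. the same code path multi-arch systems always used.
       Package *FindPkg(std::string const &Name, std::string const &Arch) {
          Group *const Grp = FindGrp(Name);
          return Grp == nullptr ? nullptr : Grp->FindPkg(Arch);
       }
    };
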
| * parse arch-qualified Provides correctly  [Helmut Grohne, 2015-05-22, 1 file, -0/+6]
The underlying problem is that libapt-pkg does not correctly parse these provides. Internally, it creates a version named "baz:i386" with architecture amd64. Of course, such a package name is invalid and thus this version is completely inaccessible. Thus, this bug should not cause apt to accept a broken situation as valid. Nevertheless, it prevents using architecture-qualified depends.

Closes: 777071

| * Add regression test for LP: #1445239  [Michael Vogt, 2015-05-22, 2 files, -0/+31]
Add a regression test that reproduces the hang of apt when a partial file is present.

Git-Dch: ignore

| * Rename "Size" in ServerState to TotalFileSize  [Michael Vogt, 2015-05-22, 3 files, -16/+22]
The variable "Size" was misleading and caused bug #1445239. To avoid similar issues in the future, rename it to make the meaning more obvious.

git-dch: ignore

| * Fix endless loop in apt-get update that can cause disk fillup  [Michael Vogt, 2015-05-22, 4 files, -10/+21]
The apt http code parses Content-Length and Content-Range. For both, the variable "Size" is used, and the semantics of this Size is the total file size. However, Content-Length is not the entire file size for partial file requests. For servers that send the Content-Range header first and then the Content-Length header this can lead to clobbering of Size so that it is less than the real file size. This may lead to a subsequent passing of a negative number into the CircleBuf, which leads to an endless loop that writes data.

Thanks to Anton Blanchard for the analysis and initial patch.

LP: #1445239

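A condensed sketch of the hazard (not apt's actual parser): keeping the total size and the response-body size in one field lets a later Content-Length clobber the total derived from Content-Range, which is what the rename to TotalFileSize guards against.

    #include <cstdio>
    #include <string>

    struct ServerStateSketch {
       unsigned long long TotalFileSize = 0;   // size of the whole file on the server
       unsigned long long DownloadSize  = 0;   // size of this response body only
    };

    // Parse one response header line into the sketch state.
    void ParseHeader(ServerStateSketch &S, std::string const &Line) {
       unsigned long long Start, End, Total, Length;
       if (std::sscanf(Line.c_str(), "Content-Range: bytes %llu-%llu/%llu",
                       &Start, &End, &Total) == 3)
          S.TotalFileSize = Total;             // total size of the file
       else if (std::sscanf(Line.c_str(), "Content-Length: %llu", &Length) == 1)
          S.DownloadSize = Length;             // only the bytes still to come for a range request
       // Keeping the two apart means a partial response can no longer make the
       // "file size" smaller than the bytes we already have on disk.
    }
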
| * Merge remote-tracking branch 'upstream/debian/sid' into debian/sid  [Michael Vogt, 2015-05-22, 97 files, -53522/+54712]

| | * Move sysconf(_SC_OPEN_MAX); out of the for() loop to avoid unneeded syscalls  [Michael Vogt, 2015-04-28, 1 file, -1/+2]

| | * Revert "HttpsMethod::Fetch(): Zero the FetchResult object when leaving due to 404"  [Michael Vogt, 2015-04-13, 1 file, -2/+0]
This reverts commit 1296bc7c466181a7978c313c40a041b34ce3eaeb.

| | * HttpsMethod::Fetch(): Zero the FetchResult object when leaving due to 404  [Robert Edmonds, 2015-04-07, 1 file, -0/+2]

| | * Fix crash in pkgDPkgPM::WriteApportReport() (LP: #1436626)  [Michael Vogt, 2015-04-07, 1 file, -2/+13]

| | * test/integration/test-apt-download-progress: fix test failure on fast hardware  [Michael Vogt, 2015-03-20, 1 file, -2/+2]

| * Merge remote-tracking branch 'upstream/debian/sid' into debian/sid  [Michael Vogt, 2014-10-27, 175 files, -74650/+82998]

| * Merge remote-tracking branch 'upstream/debian/sid' into debian/sid  [Michael Vogt, 2014-06-18, 70 files, -4111/+4371]

| * fix test-apt-ftparchive-cachedb-lp1274466 and apt-internal-solver tests  [Michael Vogt, 2014-06-18, 3 files, -3/+5]

| * fix autopkgtest tests  [Michael Vogt, 2014-06-18, 4 files, -2/+5]

* treat older Release files than we already have as an IMSHit  [David Kalnischkies, 2015-05-18, 12 files, -217/+383]
Valid-Until protects us from long-lived downgrade attacks, but not all repositories have it, and an attacker could still use older but still valid files to downgrade us. While this makes it sound like a security improvement, it is a bit theoretical at best, as an attacker with the capabilities to pull this off could just as well always keep us days behind (but within the valid period) and always knows which state we have, as we tell him with the If-Modified-Since header.

This is also why this is 'silently' ignored and treated as an IMSHit rather than screamed at the user, as this can at best be an annoyance for attackers. An error here would 'regularly' be hit by users with out-of-sync mirrors within a single run (e.g. behind a load balancer) or across two consecutive runs, so it would just teach people to ignore it.

That said, most of the code churn is caused by enforcing this additional requirement. Crisscross from InRelease to Release.gpg is e.g. very unlikely in practice, but if we ignored it an attacker could sidestep the check this way.

* detect Releasefile IMS hits even if the server doesn't  [David Kalnischkies, 2015-05-13, 9 files, -15/+99]
Not all servers we are talking to support If-Modified-Since, and some do not even send Last-Modified for us, so in an effort to detect such hits we run a hashsum check on the 'old' compared to the 'new' file. We got the hashes for the 'new' file already for "free" from the methods anyway and hence just need to calculate the old ones. This allows us to detect hits even with servers lacking support, which in turn means we benefit from all the new hit behaviour here as well.

* implement VerifyFile as all-hashes check  [David Kalnischkies, 2015-05-12, 2 files, -8/+15]
It isn't used much compared to what the method name suggests, but in the remaining uses it can't hurt to check more than strictly necessary by calculating and verifying with all hashes we can compare with, rather than "just" the best known hash.

* detect 416 complete file in partial by expected hash  [David Kalnischkies, 2015-05-12, 7 files, -17/+62]
If we have the expected hashes, we can use them to check whether the file in partial/ for which we got a 416 is already the complete expected file. We detected this with same-size before, but not every server sends a good Content-Range header with a 416 response.

* rewrite all TFRewrite instances to use the new pkgTagSection::Write  [David Kalnischkies, 2015-05-11, 16 files, -342/+559]
While it is mostly busywork to rewrite all instances, it actually fixes bugs, as the data storage used by the new method is std::string rather than a char*, the latter mostly created by c_str() from a std::string which the caller has to ensure stays in scope – something apt-ftparchive actually didn't ensure and instead relied on copy-on-write behaviour, which C++11 forbids and hence the new default GCC ABI no longer provides.

* implement a more c++-style TFRewrite alternative  [David Kalnischkies, 2015-05-11, 3 files, -14/+187]
TFRewrite is okay, but it has obscure limitations (256 tags), even more obscure bugs (the order for renames is defined by the old name) and the interface is very C-style, encouraging bad usage like apt-ftparchive passing in massive amounts of c_str() pointers taken from std::string. The old style is marked as deprecated accordingly. The next commit will fix all places in the apt code so they no longer use the old style.

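Roughly how using the new style might look, sketched from this description; treat the exact class, method and header names as assumptions rather than verified libapt API:

    // Sketch only: header locations and signatures are assumptions.
    #include <apt-pkg/tagfile.h>
    #include <apt-pkg/fileutl.h>
    #include <vector>

    bool RewriteStanza(pkgTagSection &Tags, FileFd &Output) {
       std::vector<pkgTagSection::Tag> Changes;
       Changes.push_back(pkgTagSection::Tag::Rewrite("Maintainer", "APT Team <apt@example.org>"));
       Changes.push_back(pkgTagSection::Tag::Remove("Obsolete-Field"));
       // std::string storage: no c_str() lifetimes for the caller to babysit
       // and no hard 256-tag limit as in the old TFRewrite().
       return Tags.Write(Output, NULL, Changes);
    }
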
* stop depending on copy-on-write for std::string  [David Kalnischkies, 2015-05-11, 2 files, -21/+21]
In 66c3875df391b1120b43831efcbe88a78569fbfe we worked around/fixed a problem where the code made the assumption that the compiler uses a copy-on-write implementation for std::string. It turns out that for C++11 compatibility GCC >= 5 stops doing this by default.

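A tiny, generic illustration of the kind of assumption that breaks (not the actual apt code): with copy-on-write strings the saved pointer below happens to stay valid, while with C++11-conforming deep copies it dangles.

    #include <cstdio>
    #include <string>

    // With COW strings 'Out' shares Original's buffer, so 'Ptr' happens to stay
    // valid after Original is destroyed. With C++11 deep copies it dangles.
    const char *KeepPointer(std::string &Out) {
       std::string Original = "Section: admin";
       const char *Ptr = Original.c_str();
       Out = Original;      // shared buffer under COW, deep copy under C++11
       return Ptr;          // dangling in the deep-copy case
    }

    int main() {
       std::string Keep;
       const char *P = KeepPointer(Keep);
       std::printf("%s\n", P);   // only ever "worked" by accident with COW strings
       return 0;
    }
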
* sync TFRewrite*Order arrays with dpkg and dak  [David Kalnischkies, 2015-05-11, 8 files, -75/+211]
dpkg and dak know various field names and order them in their output, while we have yet another order and have to play catch-up with them, as we are sitting between the chairs here and neither order is ideal for us either. A little testcase is from now on supposed to help ensure that we do not deviate too far from the fields dpkg knows and orders.

* fix 'Source' to 'Package' rename in apt-ftparchive  [David Kalnischkies, 2015-05-11, 1 file, -1/+2]
This rename-with-value is ordered by the 'old' name 'Source', but should be ordered by the new name… By splitting the operation into a delete and a new field we can easily fix this problem locally for now.

* drop incorrect parameter implicitly converted to bool  [David Kalnischkies, 2015-05-11, 2 files, -2/+2]
The helper expects to be told whether it should generate messages, not where these messages should be printed – it isn't printing such messages itself, but puts them in _error. In other methods apt-get uses a helper specialisation which does also print to a stream though, so this is likely a copy&paste error.

Git-Dch: Ignore

* fix macro definition for very old GCC < 3  [David Kalnischkies, 2015-05-11, 1 file, -1/+1]
Git-Dch: Ignore

* show non-matching m-a:same versions in debug message  [David Kalnischkies, 2015-05-11, 1 file, -6/+11]
Slightly rewriting the code to ensure we only use two sources for the versions as it could otherwise be confusing to look at.