path: root/test/integration
Commit message (Author, Date; Files changed, Lines -/+)
* test/json: Make the test hook more reliable (Julian Andres Klode, 2021-04-23; 1 file, -4/+11)
    Ugh, this was super flaky under -j 16 and -j 4, each behaving in
    slightly different ways. This seems to be stable now. No real bug
    though, all behaviors were OK.
* 2.3-only: Warn that the 0.1 protocol is deprecated (Julian Andres Klode, 2021-04-23; 1 file, -7/+11)
* json: Hook protocol 0.2 (added upgrade, downgrade, reinstall modes) (Julian Andres Klode, 2021-04-23; 1 file, -15/+73)
    Hook protocol 0.2 makes the new fields we added mandatory, and
    replaces the `install` mode with `upgrade`, `downgrade`, or
    `reinstall` where appropriate. Hook negotiation is hacky, but it's
    the best we can do for now. Users are advised to upgrade to 0.2.
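    A minimal sketch of what this negotiation looks like on the wire
    (the hello method name follows apt's JSON hook protocol
    documentation; the helper names and the substring check are
    illustrative assumptions, not apt's actual code):

```cpp
// apt first sends a "hello" call listing the protocol versions it can
// speak; the hook replies with the one it wants to use.
#include <iostream>
#include <string>

// The hello request apt would send over the hook socket (assumed shape).
std::string HelloRequest()
{
   return R"({"jsonrpc":"2.0","method":"org.debian.apt.hooks.hello",)"
          R"("id":0,"params":{"versions":["0.1","0.2"]}})";
}

// "Hacky" negotiation as the commit calls it: a plain substring scan of
// the hook's reply instead of real JSON parsing; 0.1 is the fallback.
std::string NegotiatedVersion(const std::string &Reply)
{
   return Reply.find("\"0.2\"") != std::string::npos ? "0.2" : "0.1";
}

int main()
{
   std::cout << HelloRequest() << '\n';
   std::cout << NegotiatedVersion(
       R"({"jsonrpc":"2.0","id":0,"result":{"version":"0.2"}})") << '\n';
}
```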
* json: Add `package-list` and `statistics` install hooks (Julian Andres Klode, 2021-04-23; 1 file, -0/+24)
    This enables hooks to output additional information.
* upgrade: Add JSON hook support (AptCli::Hooks::Upgrade) (Julian Andres Klode, 2021-04-23; 1 file, -4/+41)
* json: Add origins fields to version (Julian Andres Klode, 2021-04-23; 1 file, -6/+10)
    Provide access to the origins of a package, such that tools can
    display information about them; for example, you can write a hook
    counting security upgrades.
* test: Set -e in our test hook (Julian Andres Klode, 2021-04-23; 1 file, -0/+1)
    Gbp-Dch: ignore
* Automatically retry failed downloads 3 times (Julian Andres Klode, 2021-04-15; 1 file, -0/+9)
    Enable the Acquire::Retries option by default, set to 3. This will
    help with slightly unreliable networking; future work is needed for
    adding backoff and SRV/IP rotation.
    LP: #1876035
    Gbp-Dch: full
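    A minimal sketch of the behaviour the new default enables, with a
    hypothetical FetchOnce callback standing in for a single download
    attempt (no backoff yet, matching the commit's note that backoff is
    future work):

```cpp
#include <functional>

// Try a download up to 1 + Retries times; the default of 3 mirrors the
// new Acquire::Retries value. FetchOnce is a hypothetical stand-in.
bool FetchWithRetries(const std::function<bool()> &FetchOnce, int Retries = 3)
{
   for (int Attempt = 0; Attempt <= Retries; ++Attempt)
      if (FetchOnce())
         return true; // success, stop retrying
   return false;       // still failing after all retries
}
```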
* Error on packages without a Size field (option Acquire::AllowUnsizedPackages) (Julian Andres Klode, 2021-04-13; 2 files, -0/+9)
    Repositories without Size information for packages are not proper
    and need fixing. This ensures people see an error in CI, and get
    notifications and hence the ability to fix it. It can be turned off
    by setting Acquire::AllowUnsizedPackages to true.
* Fix downloads of unsized files that are largest in pipeline (Julian Andres Klode, 2021-04-13; 1 file, -0/+38)
    The maximum request size was accidentally set from the sized files,
    so if an unsized file was present and turned out to be larger than
    the maximum size we set, we'd error out when checking whether its
    size was smaller than the maximum request size.
    LP: #1921626
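    The shape of the trap, sketched under the assumption that unsized
    files are recorded with an expected size of 0; exempting them from
    the maximum-size check is one way out, not necessarily the exact
    fix apt applied:

```cpp
#include <cstdint>

// Reject a response only if we actually had a size expectation: an
// unsized file (ExpectedSize == 0) must not be compared against a
// maximum that was derived from the sized files in the pipeline.
bool ExceedsMaximum(uint64_t ExpectedSize, uint64_t ReceivedSoFar,
                    uint64_t MaximumSize)
{
   if (ExpectedSize == 0)
      return false; // unsized: nothing sensible to enforce
   return ReceivedSoFar > MaximumSize;
}
```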
* Harden test for no new acquires after transaction abort (HEAD, master) (David Kalnischkies, 2021-03-11; 1 file, -9/+3)
    If a transaction is doomed we want to gracefully shutdown our zoo of
    worker processes. As explained in the referenced commit we do this
    by stopping the main process from handing out new work and ignoring
    the replies it gets from the workers, so that they eventually run
    out of work. We tested this previously by checking if a rred worker
    was given work items at all, but depending on how lucky the stars of
    the machine working on this are, the worker would have already
    gotten work before the transaction was aborted – so we tried this 25
    times in a row (f35601e5d2). No machine can be this lucky, right?
    Turns out the autopkgtest armhf machine is very lucky. I feel a bit
    sorry for feeding grep such a long "line" to work with, but it seems
    to work out. Porterbox amdahl (who is considerably less lucky; had
    to turn down to 1 try to get it to fail sometimes) is now happily
    running the test in an endless loop. Of course, I could have broken
    the test now, but it's still a rather generic grep (in some ways
    more generic even) and the main part of the testcase – the update
    process finishes and fails – is untouched.
    References: 38f8704e419ed93f433129e20df5611df6652620
    Closes: #984966
* Ensure all index files send custom tags to the methods (David Kalnischkies, 2021-03-07; 1 file, -0/+10)
    The mirror method can distribute requests for files based on various
    metadata bits, but some – the main index files – weren't actually
    passing those on to the methods as advertised in the manpage. This
    is hidden both by mirror usually falling back to other sources which
    will eventually hit the right one, and by the fact that if the
    repository does not support by-hash, apt will automatically stick to
    the mirror which was used for the Release file.
* Start pdiff patching from the last possible starting point (David Kalnischkies, 2021-03-07; 1 file, -0/+3)
    Especially in small sections of an archive it can happen that an
    index returns to a previous state (e.g. if a package was first added
    and then removed with no other changes happening in between). The
    result is that we have multiple patches which start from the same
    hash, which is no problem for clientside merging, although not ideal
    as we perform needless work. For serverside merging it would not
    matter, but due to rred previously refusing to merge zero-size
    patches, and dak ignoring that failure and carrying these size-zero
    patches until they naturally expire, we run into a problem: these
    broken patches won't do, and force us to fall back to downloading
    the entire index. By always starting from the last patch instead of
    the first with the starter hash we can avoid this problem and behave
    optimally in clientside merge cases, too.
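    A sketch of the selection rule (names are illustrative): among all
    patches whose start hash matches the current index, pick the last
    one, which skips any loop the index went through and the broken
    zero-size patches stranded inside it:

```cpp
#include <string>
#include <vector>

struct Patch { std::string StartHash; std::string Name; };

// Return the index of the last patch starting from CurrentHash, or -1
// if none does (which forces a full index download).
int FindStartingPatch(const std::vector<Patch> &Patches,
                      const std::string &CurrentHash)
{
   for (int I = static_cast<int>(Patches.size()) - 1; I >= 0; --I)
      if (Patches[I].StartHash == CurrentHash)
         return I;
   return -1;
}
```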
* Rename pdiff merge patches only after they are all downloaded (David Kalnischkies, 2021-03-07; 2 files, -8/+9)
    The rred method expects the patches to have a certain name, so we
    have to rename each file before calling the method. By delaying the
    rename we ensure that if the download of one of them fails and a
    successful fallback occurs, they are all properly cleaned up as no
    longer useful, while in the error case the next apt run can
    potentially pick them up as already downloaded. Our test-pdiff-usage
    test was encountering this every other run, but did not fail, as the
    check for unaccounted files in partial/ was wrapped in a subshell,
    so the failure produced failing output but did not change the exit
    code.
* Allow merging with empty pdiff patches (David Kalnischkies, 2021-03-06; 1 file, -2/+3)
    There isn't a lot of sense in working on empty patches as they
    change nothing (quite literally), but they can be the result of
    merging multiple patches and so, to not require our users to
    specifically detect and remove them, we can be nice and just ignore
    them instead of erroring out.
* regression fix: do require force-loopbreak for Conflicts (Julian Andres Klode, 2021-03-01; 1 file, -15/+21)
    Conflicts do require removing the package temporarily, so they
    really should not be used. We need to improve that eventually such
    that we can deconfigure packages when we have to remove their
    dependencies due to conflicts.
* Do not require force-loopbreak on Protected packages (Julian Andres Klode, 2021-02-23; 1 file, -1/+56)
    dpkg will be changed in 1.20.8 to not require --force-remove for
    deconfiguration anymore, but we want to decouple our changes from
    the dpkg ones, so let's always pass --force-remove-protected when
    installing packages such that we can deconfigure protected packages.
    Closes: #983014
* Fix test suite regression from StrToNum fixes (Julian Andres Klode, 2021-02-09; 1 file, -56/+2)
    We ignored the failure from strtoul() when those test cases had
    values out of range, hence they passed before, but they now fail on
    32-bit platforms because we use strtoull() and do the limit check
    ourselves. Move the tarball generator for
    test-github-111-invalid-armember to the createdeb helper, and fix
    the helper to set all the numeric fields like uid to 0 instead of
    the maximum value the fields support (all 7s).
    Regression-Of: e0743a85c5f5f2f83d91c305450e8ba192194cd8
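    A sketch of the stricter parse the commit alludes to: strtoull()
    always yields a full 64-bit range, so the caller-supplied limit must
    be checked explicitly (function name and exact checks are
    assumptions, not apt's StrToNum):

```cpp
#include <cerrno>
#include <cstdlib>

// Parse Str as a decimal number and refuse values above Max, instead
// of silently relying on strtoul()'s platform-dependent range.
bool StrToNumSketch(const char *Str, unsigned long long &Res,
                    unsigned long long Max)
{
   char *End = nullptr;
   errno = 0;
   unsigned long long const Value = std::strtoull(Str, &End, 10);
   if (End == Str || errno == ERANGE || Value > Max)
      return false; // not a number, or out of range for the caller
   Res = Value;
   return true;
}
```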
* Prevent temporary directory from triggering failure grepping (David Kalnischkies, 2021-02-04; 1 file, -0/+1)
    The case-insensitive grep for GPG also finds e.g.
    "/tmp/tmp.Kc5kKgPg0D", which is not the intention, so we simply
    eliminate the variation of the /tmp directory here from the output
    to prevent these false positives.
    Gbp-Dch: Ignore
* Guess compressor only if no AR member with exact name exists (David Kalnischkies, 2021-02-02; 1 file, -1/+1)
    Explicitly opening a tar member is a bit harder than it needs to be,
    as you have to remove the compressor extension so that it can be
    guessed here again, potentially choosing the wrong member. Doesn't
    really matter for deb packages of course, as the member count is
    pretty low and strongly defined, but testing is easier this way. It
    also finally fixes an incorrectly formatted error message.
* Include all translations when building the cache (Julian Andres Klode, 2021-01-27; 1 file, -0/+13)
    We do download all translations we ever downloaded, but we don't add
    all of those to the cache, meaning that if we run update with
    LANG=C, it might still download your de_DE translation, but it won't
    insert it into the cache, causing your de_DE user to not get
    translated messages.
    LP: #1907850
* Adjust apt-mark test for dpkg 1.20.7 (Julian Andres Klode, 2021-01-13; 1 file, -2/+13)
* Implement update --error-on=any (Julian Andres Klode, 2021-01-08; 1 file, -0/+6)
    People have been asking for a feature to error out on transient
    network errors for a while; this gives them one while keeping the
    door open for other modes we need, such as --error-on=no-success,
    which we need to determine when to retry the daily update job.
    Closes: #594813 (and a whole bunch of duplicates...)
* Add support for Phased-Update-Percentage (Julian Andres Klode, 2021-01-08; 2 files, -0/+275)
    This adds support for Phased-Update-Percentage by pinning upgrades
    that are not to be installed down to 1. The output of policy has
    been changed to add the level of phasing, and documentation has been
    improved to document how phased updates work. The patch detects if
    it is running in a chroot, and if so, always includes phased
    updates, restoring classic apt behavior to avoid behavioral changes
    on buildd chroots. Various options are added to control this all:
    * APT::Get::{Always,Never}-Include-Phased-Updates and their legacy
      update-manager equivalents to always or never include phased
      updates
    * APT::Machine-ID can be set to a UUID string to have all machines
      in a fleet phase the same
    * Dir::Etc::Machine-ID is weird in that its default is sort of like
      ../machine-id, but not really, as ../machine-id would look up
      $PWD/../machine-id and not relative to Dir::Etc; but it allows you
      to override the path to machine-id (as opposed to the value)
    * Dir::Bin::ischroot is the path to the ischroot(1) binary which is
      used to detect whether we are running in a chroot.
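    The core idea behind the phasing decision, sketched with std::hash
    standing in for whatever hash apt really uses: machine-id, package
    and version map to a stable value in [0, 100) that is compared
    against the published percentage, so a given machine always gets the
    same answer for a given update:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>

// Deterministic coin flip: include the update iff this machine's
// stable hash bucket falls below the Phased-Update-Percentage.
bool IncludePhasedUpdate(const std::string &MachineID,
                         const std::string &Package,
                         const std::string &Version, uint8_t Percentage)
{
   std::size_t const H =
       std::hash<std::string>{}(MachineID + "-" + Package + "-" + Version);
   return (H % 100) < Percentage;
}
```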
* Merge branch 'bash-compat' into 'master' (Julian Andres Klode, 2021-01-05; 1 file, -1/+1)
    Be compatible with Bash
    See merge request apt-team/apt!142
| * Be compatible with Bash (Demi M. Obenour, 2020-12-28; 1 file, -1/+1)
    On many distributions, /bin/sh is Bash. Bash’s `echo` builtin
    doesn’t interpret escape sequences, so most tests fail. Fix this by
    removing the escape sequence.
* | Determine autoremovable kernels at run-time (Julian Andres Klode, 2021-01-04; 1 file, -8/+7)
    Our kernel autoremoval helper script protects the currently booted
    kernel, but it only runs whenever we install or remove a kernel,
    causing it to protect the kernel that was booted at that point in
    time, which is not necessarily the same kernel as the one that is
    running right now. Reimplement the logic in C++ such that we can
    calculate it at run-time: provide a function to produce a regular
    expression that matches all kernels that need protecting, and change
    the default root set function in the DepCache to make use of that
    expression. Note that the code groups the kernels by versions as
    before, and then marks all kernel packages with the same version.
    This optimized version inserts a virtual package $kernel into the
    cache when building it, to avoid having to iterate over all packages
    in the cache to find the installed ones, significantly improving
    performance at a minor cost when building the cache.
    LP: #1615381
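    The escaping idea behind the run-time calculation, as a sketch: the
    booted version string is quoted so its dots match literally, then
    wrapped into a pattern for the kernel packages to protect (the
    package-name shape shown here is an assumption, not apt's exact
    expression):

```cpp
#include <regex>
#include <string>

// Build a regular expression protecting packages of the booted kernel
// version, e.g. "5.10.0-8" must not also match "5x10y0z8".
std::string KernelProtectRegex(const std::string &BootedVersion)
{
   static const std::regex MetaChars(R"([.^$|()\[\]{}*+?\\])");
   std::string const Escaped =
       std::regex_replace(BootedVersion, MetaChars, R"(\$&)");
   return "^linux-(image|headers)-" + Escaped + "$";
}
```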
* Implement encoded URI handling in all methods (David Kalnischkies, 2020-12-18; 1 file, -7/+20)
    Every method opts in to getting the encoded URI passed along while
    keeping compat in case we are operated by an older acquire system.
    Effectively this is just a change for the http-based methods, as the
    others just decode the URI since they work with files directly.
* Keep URIs encoded in the acquire system (David Kalnischkies, 2020-12-18; 5 files, -30/+30)
    We do not deal a lot with URIs which need encoding, but when we do
    it is a pain that we store them decoded in the acquire system, as it
    means we have to decode and reencode URIs eventually, which is
    potentially giving us slightly different URIs. We see that in our
    own testing framework while setting up redirects, as the config
    options are effectively double-encoded and decoded to pass them
    around successfully, as otherwise %2f and / in an URI are treated
    the same. This commit adds the infrastructure for methods to opt
    into getting URIs sent in encoded form (and returning them to us in
    encoded form, too) so that we eventually do not have to touch the
    URIs, which is how it should be. This means though that we have to
    deal with methods who do not support this yet (aka: all at the
    moment), for which we decode and encode while communicating with
    them.
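    Why the decoded representation is lossy, in one self-contained
    example: after a single decode, %2f and a literal / are
    indistinguishable, so no re-encoding can recover the original URI:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Minimal percent-decoder for demonstration purposes only.
std::string DecodePercent(const std::string &U)
{
   std::string Out;
   for (std::size_t I = 0; I < U.size(); ++I)
      if (U[I] == '%' && I + 2 < U.size())
      {
         Out += static_cast<char>(std::stoi(U.substr(I + 1, 2), nullptr, 16));
         I += 2;
      }
      else
         Out += U[I];
   return Out;
}

int main()
{
   // Two distinct URIs collapse into the same decoded form:
   assert(DecodePercent("a%2fb") == DecodePercent("a/b"));
}
```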
* Proper URI encoding for config requests to our test webserver (David Kalnischkies, 2020-12-18; 2 files, -10/+10)
    Our http method encodes the URI again, which results in the double
    encoding we have to unwrap in the webserver (we did already, but we
    skip the filename handling now, which does the first decode).
* test: fixup for hash table size increase (changed output order) (Julian Andres Klode, 2020-12-15; 5 files, -13/+12)
* CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB (Julian Andres Klode, 2020-12-09; 1 file, -0/+3)
    The integer overflow was detected by DonKult who added a check like
    this:
        (std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))
    Which deals with the code as is, but also still is a fairly big
    limit, and could become fragile if we change the code. Let's limit
    our file sizes to 128 GiB, which should be sufficient for everyone.
    Original comment by DonKult: The code assumes that it can add
    sizeof(Block)-1 to the size of the item later on, but if we are
    close to a 64bit overflow this is not possible. Fixing this seems
    too complex compared to just ensuring there is enough room left,
    given that we will have a lot more problems the moment we will be
    acting on files that large, as if the item is that large, the
    (valid) tar including it probably doesn't fit in 64bit either.
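    The two layers of the guard, as a sketch (Block is tar's 512-byte
    unit; the 128 GiB cap is the policy added by this commit, the other
    check is DonKult's arithmetic safety net):

```cpp
#include <cstdint>
#include <limits>

constexpr uint64_t BlockSize = 512;                           // tar block
constexpr uint64_t MaxItemSize = 128ULL * 1024 * 1024 * 1024; // 128 GiB

bool TarItemSizeOk(uint64_t Size)
{
   // Keep the later "Size + sizeof(Block) - 1" padding arithmetic safe:
   if (Size > std::numeric_limits<uint64_t>::max() - 2 * BlockSize)
      return false;
   return Size <= MaxItemSize; // the new hard cap
}
```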
* CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB (Julian Andres Klode, 2020-12-09; 1 file, -0/+3)
    Like the code in arfile.cc, MemControlExtract also has buffer
    overflows in code allocating memory for parsing control files.
    Specify an upper limit of 64 MiB for control files to both protect
    against the Size overflowing (we allocate Size + 2 bytes), and
    protect a bit against control files consisting only of zeroes.
* tarfile: OOM hardening: Limit size of long names/links to 1 MiB (Julian Andres Klode, 2020-12-09; 1 file, -0/+6)
    Tarballs have long names and long link targets structured by a
    special tar header with a GNU extension followed by the actual
    content (padded to 512 bytes). Essentially, think of a name as a
    special kind of file. The limit of a file size in a header is 12
    bytes, aka 10**12 or 1 TB. While this works OK-ish for file content
    that we stream to extractors, we need to copy file names into
    memory, and this opens us up to an OOM DoS attack. Limit the file
    name size to 1 MiB, as libarchive does, to make things safer.
* CVE-2020-27350: arfile: Integer overflow in parsing (Julian Andres Klode, 2020-12-09; 1 file, -0/+13)
    GHSL-2020-169: This first hunk adds a check that we have more files
    left to read in the file than the size of the member, ensuring that
    (a) the number is not negative, which caused the crash here, and (b)
    we similarly avoid other issues with trying to read too much data.
    GHSL-2020-168: Long file names are encoded by a special marker in
    the filename, and then the real filename is part of what is normally
    the data. We did not check that the length of the file name is
    within the length of the member, which means that we got an overflow
    later when subtracting the length from the member size to get the
    remaining member size. The file createdeb-lp1899193.cc was provided
    by GitHub Security Lab and reformatted using apt coding style for
    inclusion in the test case; both of these issues have an automated
    test case in test/integration/test-ubuntu-bug-1899193-security-issues.
    LP: #1899193
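    The two checks condensed into a sketch (names are illustrative;
    both comparisons happen in unsigned space, so an overflowed
    "negative" size simply fails the bounds test):

```cpp
#include <cstdint>

// GHSL-2020-169: a member must fit into what is left of the archive.
// GHSL-2020-168: an encoded long file name must fit inside its member,
// so that "member size - name length" cannot underflow later.
bool ArMemberOk(uint64_t MemberSize, uint64_t RemainingFileSize,
                uint64_t NameLength)
{
   return MemberSize <= RemainingFileSize && NameLength <= MemberSize;
}
```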
* test-method-rred: Use apthelper instead of apt-helper (Julian Andres Klode, 2020-12-02; 1 file, -1/+1)
    Fixes lookup in as-installed testing.
    Gbp-Dch: ignore
* Merge branch 'feature/rred' into 'master' (Julian Andres Klode, 2020-11-25; 2 files, -1/+56)
    Enhance rred for possible external usage
    See merge request apt-team/apt!136
| * Support compressed output from rred similar to apt-helper cat-file (feature/rred) (David Kalnischkies, 2020-11-07; 1 file, -2/+13)
| * Support reading compressed patches in rred direct call modes (David Kalnischkies, 2020-11-07; 1 file, -0/+3)
    The acquire system mode does this for a long time already, and as it
    is easy to implement and handy for manual testing as well, we can
    support it in the other modes, too.
| * Prepare rred binary for external usage (David Kalnischkies, 2020-11-07; 2 files, -1/+42)
    Merging patches is a bit of non-trivial code we have for client-side
    work, but as we also support server-side merging we can export this
    functionality so that server software can reuse it. Note that this
    just cleans up and makes rred behave a bit more like all our other
    binaries by supporting setting configuration at runtime and
    supporting --help and --version. If you can make do without this,
    the now advertised functionality is provided already in earlier
    versions.
* | Do not immediately configure m-a: same packages in lockstep (Julian Andres Klode, 2020-11-06; 1 file, -2/+2)
    In LP#835625, it was reported that apt did not unpack multi-arch
    packages in the correct order, and dpkg did not like that. The fix
    also made apt configure packages together, which is not strictly
    necessary. This turned out to cause issues now, because dependencies
    on libc6:i386 caused its immediate configuration to not work. Work
    around the issue by not configuring multi-arch: same packages in
    lockstep if they have the immediate flag set. This will be the
    pseudo-essential set, and given how essential works, we mostly need
    the native arch to work correctly anyway.
    LP: #1871268
    Regression-Of: 30426f4822516bdd26528aa2e6d8d69c1291c8d3
* pkgnames: Do not exclude virtual packages with --all-names (Julian Andres Klode, 2020-10-26; 1 file, -1/+2)
    We accidentally excluded virtual packages by excluding every group
    that had a package, but where the package had no versions. Rewrite
    the code so the lookup consistently uses VersionList() instead of
    FirstVersion and FindPkg("any") - those are all the same, and this
    is easier to read.
* pkgnames: Correctly set the default for AllNames to false (Julian Andres Klode, 2020-10-26; 1 file, -0/+23)
    We passed "false" instead of false, and that apparently got cast to
    bool, because it's a non-null pointer.
    LP: #1876495
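    The bug class in two lines: a string literal is a non-null pointer
    and therefore converts to true when a bool parameter is expected:

```cpp
#include <iostream>

void Lookup(bool AllNames) { std::cout << std::boolalpha << AllNames << '\n'; }

int main()
{
   Lookup(false);   // prints false: the intended default
   Lookup("false"); // prints true: pointer-to-bool conversion bites
}
```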
* Default Acquire::AllowReleaseInfoChange::Suite to "true" (Julian Andres Klode, 2020-08-10; 1 file, -0/+14)
    Closes: #931566
* Replace whitelist/blacklist with allowlist/denylist (Julian Andres Klode, 2020-08-04; 2 files, -13/+13)
* Detect pkg-config-dpkghook failure in tests to avoid fallback (David Kalnischkies, 2020-07-07; 1 file, -4/+8)
    dpkg (>= 1.20.3) has better support for its own DPKG_ROOT, resulting
    in the architectures of the root being reported rather than those of
    the host system. Sadly the hook script from pkg-config is not
    prepared for this, resulting in our `dpkg --add-architecture` calls
    failing in the hook after dpkg has successfully added the
    architecture internally. The failure triggered fallback handling in
    the tests to work with an older version of dpkg with a different
    multi-arch implementation. So instead of doing the fallback, we
    ignore the failure if it seems like pkg-config-dpkghook is involved,
    only producing a bunch of warnings to hint at this problem, but
    otherwise making the tests work again, as it is a post-invoke
    script.
    References: #824774
* Fix test due to display change in ls (coreutils 8.32) (David Kalnischkies, 2020-07-07; 1 file, -1/+1)
    The test runs ls on the opened fds and greps the result for 'root
    root', which is how ls (<= 8.30) used to report user and group for
    these. Now that Debian contains 8.32, it reports user and group of
    the process owning them (supposedly). Grepping for both unbreaks the
    test.
        lr-x------ 1 root root 64 Jul 7 19:07 0 -> 'pipe:[10458045]'
        lrwx------ 1 root root 64 Jul 7 19:07 1 -> /dev/pts/12
        lrwx------ 1 root root 64 Jul 7 19:07 2 -> /dev/pts/12
        lr-x------ 1 root root 64 Jul 7 19:07 3 -> /proc/1266484/fd
    vs (assuming user:group is david:david)
        lr-x------ 1 david david 64 Jul 7 19:07 0 -> 'pipe:[10458045]'
        lrwx------ 1 david david 64 Jul 7 19:07 1 -> /dev/pts/12
        lrwx------ 1 david david 64 Jul 7 19:07 2 -> /dev/pts/12
        lr-x------ 1 david david 64 Jul 7 19:07 3 -> /proc/1266484/fd
* Add dependency points in the resolver also to providers (David Kalnischkies, 2020-07-02; 1 file, -0/+107)
    We were traditionally adding points for some dependency types to the
    real package, but we should also do it for providers of that package
    to help the resolver, especially if the real package is for some
    reason not tagged for removal yet/anymore. While at it, we ensure
    that the points are only attributed once for each package, as
    especially with versioned provides a package can nowadays provide
    another package many times and would hence acquire a lot of points.
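    A sketch of the scoring rule with deduplication (container choice
    and names are illustrative): points go to the target and to each of
    its providers, but at most once per package even if versioned
    provides list the same provider several times:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

void AddDependencyPoints(std::map<std::string, int> &Scores,
                         const std::string &Target,
                         const std::vector<std::string> &Providers,
                         int Points)
{
   std::set<std::string> Seen{Target};
   Scores[Target] += Points;
   for (const auto &P : Providers)
      if (Seen.insert(P).second) // attribute points only once per package
         Scores[P] += Points;
}
```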
* Filter out impossible solutions for protected propagation (David Kalnischkies, 2020-07-02; 3 files, -2/+33)
    If the package providing the given solution is already tagged for
    removal (or at least for "not installing"), we can ignore this
    solution as a possibility, as it is not one. This means we can avoid
    exploring the option and potentially forward the protected flag
    further if that helps in reducing the possibilities to a single one.
* Delay removals due to Conflicts until Depends are resolved (David Kalnischkies, 2020-07-02; 2 files, -1/+75)
    Marking a package for removal is fine if we know that we have to
    remove that package, but if we are in an alternative branch we might
    not go this route in the end and hence have a package pointlessly
    marked for removal which isn't questioned later on. We check if we
    are allowed to remove that package to avoid working on the positive
    dependencies if not, but we mark it for removal only after all the
    other dependencies are successfully resolved. In an ideal world we
    would let the problemResolver do its job on them, but the resolver
    might decide against doing the removal, exploring another option
    like the next alternative instead, which might be a good idea but is
    not the behaviour we had before, so this is the best we can do for
    now without changing the resolver drastically.