path: root/test
Commit message  [Author, Date; Files changed, Lines removed/added]
* tests (retry-downloads): Avoid delay in second test  [Julian Andres Klode, 2021-07-29; 1 file, -1/+1]
| | | | This delay of 4+2+1=7 seconds is unnecessary.
* Enhance test to check time spent  [Julian Andres Klode, 2021-07-29; 1 file, -1/+17]
| | | | | | | | This is subject to clock skew, unfortunately, as we cannot read monotonic time in shell. We check for >=5s out of the 7s it should take to reduce the risk of skew a bit.
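A minimal sketch of the kind of wall-clock check described in the commit above, in plain shell since monotonic time is not available there; the command and the 5-second lower bound are illustrative, not the test suite's actual code:

    before="$(date +%s)"
    apt-get update                       # operation whose retry delays we expect to observe
    after="$(date +%s)"
    elapsed=$((after - before))
    # 4+2+1 = 7 seconds of backoff are expected; accept >= 5s to tolerate clock skew
    if [ "$elapsed" -lt 5 ]; then
        echo "retry delays finished suspiciously fast: ${elapsed}s" >&2
        exit 1
    fi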
* Add support for a maximum delay and testing of delay  [Julian Andres Klode, 2021-07-28; 1 file, -1/+12]
| | | | | | This is very basic support on the testing side: we just test the debug output, but not how long it actually took. It would be nice to really check the time.
* Merge branch 'fix/dpkgchroot' into 'main'  [Julian Andres Klode, 2021-07-05; 1 file, -0/+8]
|\ | | | | | | | | Restore dpkg::chroot-directory functionality See merge request apt-team/apt!178
| * Restore dpkg::chroot-directory functionality  [David Kalnischkies, 2021-06-10; 1 file, -0/+8]
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If we call dpkg inside a chroot we have to ensure that the temporary directory we construct to call dpkg --recursive is inside the chroot and that we strip the path to the chroot from the directory name we pass to dpkg. Note that the added test succeeds before and (hopefully) after as we can't really chroot here or fiddle with the needed settings as we are already setting up apt to work with a quasi-chroot. The test perhaps helps in ensuring we don't break it too much in the future though. (Broken five years (and one day) ago this seems to have an immense user base at the moment, but it might in the future via mmdebstrap) References: f495992428a396e0f98886c9a761a804aa161c68 Reported-By: Johannes Schauer Marin Rodrigues on IRC Tested-By: Johannes Schauer Marin Rodrigues
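For context, a rough sketch of how the restored option is typically used; the chroot path and package name are made up, and real setups usually point more of apt's directories into the chroot as well:

    # Run dpkg inside the chroot; apt has to create its temporary
    # --recursive directory below /srv/mychroot and strip that prefix
    # from the path it hands to dpkg.
    apt-get -o DPkg::Chroot-Directory=/srv/mychroot install somepackage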
* | Merge branch 'fix/sizesharing' into 'main'  [Julian Andres Klode, 2021-07-05; 16 files, -39/+165]
|\ \ | | | | | | | | | | | | Allow packages from volatile sources to be reinstalled See merge request apt-team/apt!177
| * | Test that tiny differences result in different versions  [David Kalnischkies, 2021-06-10; 1 file, -0/+98]
| | | | | | | | | | | | | | | | | | | | | | | | | | | Just because two packages have the same version number doesn't mean it is the same package. APT can detect rebuilds and other "inconsistencies", but we had no explicit test for it so far. It turned out to be the wrong track in this branch, but as I wrote it already, lets add it at least. Gbp-Dch: Ignore
| * | Give our test packages proper size information  [David Kalnischkies, 2021-06-10; 16 files, -41/+66]
| | | | | | | | | | | | | | | | | | | | | | | | Temporary hacks should be temporary, especially if they hide bugs. After fixing one in the previous commit this is just busy work to add download information to the places which check that output. Gbp-Dch: Ignore
| * | Store size from volatile sources for already installed versions  [David Kalnischkies, 2021-06-10; 1 file, -0/+3]
| | | | | | | | | | | | | | | | | | | | | | | | Volatile sources are parsed after the status file, so if we have a version already installed the size information is not stored, so that a reinstall of said version is refused claiming a broken repository. References: 1412cf51403286e9c040f9f86fd4d8306e62aff2
| * | Use full item description in broken repo error  [David Kalnischkies, 2021-06-10; 1 file, -1/+1]
| |/ | | | | | | | | | | | | | | The error says the repository is broken but doesn't mention which one it is. The item description gives us at least all the information, but is not as nicely formatted. As this message is not even marked for translation this is a rather temporary affair and we can survive without the eye candy for a while.
* / Check sources.list could be parsed before adding volatile files  [Julian Andres Klode, 2021-07-01; 1 file, -0/+15]
|/ | | | | | | We just used the pointer returned which might be nullptr, properly call BuildSourceList() and check the result first. Closes: #990518
* No URL decode and quoting support for Files in Sources  [David Kalnischkies, 2021-06-04; 1 file, -0/+10]
| | | | | | | | The code has existed since forever, but no other client supports this and specifications like debian-policy aren't asking for it either. What it does do is break where all others continue working, though: if the filename in fact includes URI-encoded bits (hopefully no quotes), which is rather unlikely, but nonetheless possible.
* Do not use filename of local sources in 'apt download'  [David Kalnischkies, 2021-06-04; 2 files, -6/+4]
| | | | | | | | | | If a source is not copying files to the destination the download code forces the copy – which in practice are local repositories accessed via file:/ – but in that process takes the filename the local repo used rather than the filename it e.g. advertised via --print-uris. A local repository could hence override a file in the current directory if you use 'apt download', which is a rather weak ability, but still.
* URI encode Filename field of Packages files (again)  [David Kalnischkies, 2021-06-04; 3 files, -7/+50]
| | | | | | | | | | | | | | Keeping URIs encoded in the acquire system depends on having them encoded in the first place. While many other places got the encoding, 2 out of 3 ArchiveURI implementations were missed, which are in practice responsible for nearly all of the URI building; it is just that index filenames do not contain characters to escape and the Filename fields in Packages files usually don't either. Usually. Except if you happen to have e.g. an epoch-featuring package with the colon encoded in the filename. On the upside, in most repositories the epoch isn't part of the filename. Reported-By: Johannes 'josch' Schauer on IRC References: e6c55283d235aa9404395d30f2db891f36995c49
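A made-up Packages stanza fragment illustrating the corner case: the colon of the epoch ends up in the pool filename and therefore has to be carried URI-encoded (presumably as %3a) through the acquire system:

    Package: foo
    Version: 1:2.0-1
    Filename: pool/main/f/foo/foo_1:2.0-1_amd64.deb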
* policy: Apply phasing to uninstalled packages too  [Julian Andres Klode, 2021-05-17; 1 file, -3/+3]
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If a package is not installed yet, we do need to apply phasing as we otherwise get into weird situations when installing packages: In the launchpad bug below, ubuntu-release-upgrader-core was installed, and hence the phasing for the upgrade to it was applied. However, ubuntu-release-upgrader-gtk was about to be installed - and hence the phasing did not apply, causing a version mismatch, because ubuntu-release-upgrader-gtk from -updates was used, but -core from release pocket. Sigh. An alternative approach to dealing with this issue could be to apply phasing to all packages within the same source package, which would work in most cases. However, there might be unforeseen side effects and it is of course possible to have = depends between source packages, such as -signed packages on the unsigned ones for bootloaders. This problem does not occur in the update-manager implementation of phased updates as update-manager only deals with upgrading packages, but does not install new packages and thus does not see that issue. APT however, has to apply phasing more broadly, as you can and often do install additional packages during upgrade, or upgrade packages during install commands, as both accept package list arguments and have the same code in the backend. LP: #1925745
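For reference, phasing is driven by the Phased-Update-Percentage field on a version, and this apt series has switches to opt in or out of phasing wholesale; a hedged example (option names as documented for this series, behaviour may differ elsewhere):

    # Install all phased updates regardless of the phasing percentage:
    apt-get upgrade -o APT::Get::Always-Include-Phased-Updates=true
    # Or hold back anything that is still phasing:
    apt-get upgrade -o APT::Get::Never-Include-Phased-Updates=true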
* Turn TLS handshake issues into transient errors  [Julian Andres Klode, 2021-05-12; 1 file, -0/+43]
| | | | | | | This makes them retriable, and brings them more into line with TCP, where handshake is also a transient error. LP: #1928100
* Temporarily Revert "2.3-only: Warn that the 0.1 protocol is deprecated"  [Julian Andres Klode, 2021-04-29; 1 file, -11/+7]
| | | | This reverts commit 64127478630b676838735b509fec5cdfa36874c8.
* Merge branch 'pu/upgradecounter' into 'main'  [Julian Andres Klode, 2021-04-29; 1 file, -2/+17]
|\ | | | | | | | | Count uninstallable packages in "not upgraded" See merge request apt-team/apt!169
| * Count uninstallable packages in "not upgraded"  [David Kalnischkies, 2021-04-25; 1 file, -2/+17]
| | If a first step of the solver can figure out that a package is uninstallable it might reset the candidate so that later steps are prevented from exploring this dead end. While that helps the resolver it can confuse the display of the found solution as this will include an incorrect count of packages not upgraded in this solution. It was possible before, but happens a fair bit more with the April/May resolver changes last year, so finally doing proper counting is a good idea. Sadly this is a bit harder than just getting the number first and then subtracting the packages we upgraded from it, as the user can influence candidates via the command line, and a package which could be upgraded but is removed instead shouldn't count as not upgraded, as we clearly did something with it. So we keep a list of packages instead of a number, which also helps in the upgrade cmds as those want to show the list. Closes: #981535
* | Merge branch 'pu/autoremove' into 'main'  [Julian Andres Klode, 2021-04-29; 7 files, -18/+150]
|\ \ | | | | | | | | | | | | Mark only provides from protected versioned kernel packages See merge request apt-team/apt!168
| * | Call MarkAndSweep only manually in apt-get for autoremove  [David Kalnischkies, 2021-04-26; 5 files, -21/+18]
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | An interactive tool like aptitude needs these flags current far more often than we do as a user can see them in apt only in one very well defined place – the autoremove display block – so we don't need to run it up to four times while a normal "apt install" is processed as that is just busywork. The effect on runtime is minimal, as a single run doesn't take too long anyhow, but it cuts down tremendously on debug output at the expense of requiring some manual handholding. This is opt-in so that aptitude doesn't need to change nor do we need to change our own tools like "apt list" where it is working correctly as intended. A special flag and co is needed as we want to prevent the ActionGroup inside pkgDepCache::Init to be inhibited already so we need to insert ourselves while the DepCache is still in the process of being built. This is also the reason why the debug output in some tests changed to all unmarked, but that is fine as the marking could have been already obsoleted by the actions taken, just inhibited by a proper action group.
| * | Reexplore providers of marked packages if some didn't satisfy before  [David Kalnischkies, 2021-04-26; 3 files, -3/+35]
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The autoremove algorithm would mark a package previously after exploring it once, but it could have been that it ignored some providers due to them not satisfying the (versioned) dependency. A later dependency which they might satisfy would encounter the package as already marked and hence doesn't explore the providers anymore leaving us with internal errors (as in the contrived new testcase). This is resolved by introducing a new flag denoting if we explored every provider already and only skip exploring if that is true, which sounds bad but is really not such a common occurrence that it seems noticeable in practice. It also helps us marking virtual packages as explored now which would previously be tried each time they are encountered mostly hiding this problem for the (far more common) fully virtual package.
| * | Mark only provides from protected versioned kernel packages  [David Kalnischkies, 2021-04-25; 1 file, -0/+103]
| |/ An out-of-tree kernel module which doesn't see many new versions can pile up a considerable number of packages if it is depended on via another package (e.g.: v4l2loopback-utils recommends v4l2loopback-modules), which in turn can prevent the old kernels from being removed if they happen to have a dependency on the images. To prevent this we check if a provider is a versioned kernel package (like an out-of-tree module) and if so check if that module package is part of the protected kernel set – if not it is probably good to go. We only do this if at least one provider is from a protected kernel though so that the dependency remains satisfied (this can happen e.g. if the module is currently not buildable against a protected kernel).
* / Allow superfluous commas in build-dependency lines  [David Kalnischkies, 2021-04-25; 1 file, -1/+2]
|/ | | | | | | This code can interact with handwritten files, which can have unneeded commas to make writing them easier. As dpkg allows it, we should do so as well. Reported-By: Arnaud Ferraris <arnaud.ferraris@gmail.com> References: https://lists.debian.org/debian-devel/2021/03/msg00101.html
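An illustrative, made-up build-dependency line of the kind that is now tolerated rather than rejected; note the doubled and the trailing comma:

    Build-Depends: debhelper-compat (= 13), , libexample-dev,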
* Support deconfiguring Essential packages  [Julian Andres Klode, 2021-04-23; 1 file, -23/+30]
| | | | | | | | | dpkg 1.20.8 also made --force-remove-essential optional for deconfiguring essential packages, so let's do this. Also extend the test case to make sure we actually pass auto-deconfigure and do not make any --remove calls, or pass --force-remove to dpkg.
* test/json: Make the test hook more reliable  [Julian Andres Klode, 2021-04-23; 1 file, -4/+11]
| | | | | | Ugh, this was super flaky under -j 16 and -j 4, each behaving in slightly different ways. This seems to be stable now. No real bug though, all behaviors were OK.
* 2.3-only: Warn that the 0.1 protocol is deprecated  [Julian Andres Klode, 2021-04-23; 1 file, -7/+11]
|
* json: Hook protocol 0.2 (added upgrade,downgrade,reinstall modes)  [Julian Andres Klode, 2021-04-23; 1 file, -15/+73]
| | | | | | | | | Hook protocol 0.2 makes the new fields we added mandatory, and replaces `install` mode with `upgrade`, `downgrade`, `reinstall` where appropriate. Hook negotiation is hacky, but it's the best we can do for now. Users are advised to upgrade to 0.2
* json: Add `package-list` and `statistics` install hooks  [Julian Andres Klode, 2021-04-23; 1 file, -0/+24]
| | | | This enables hooks to output additional information.
* upgrade: Add JSON hook support (AptCli::Hooks::Upgrade)  [Julian Andres Klode, 2021-04-23; 1 file, -4/+41]
|
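A rough sketch of how such a hook is wired up; the hook path and log file are invented, and a real hook also has to answer the version negotiation (0.1/0.2) described above instead of merely reading:

    #!/bin/sh
    # Registered via an apt.conf line such as:
    #   AptCli::Hooks::Upgrade:: "/usr/local/libexec/my-apt-hook";
    # apt passes an open file descriptor number in $APT_HOOK_SOCKET and speaks
    # the JSON hook protocol over it; this skeleton only records what apt sends.
    exec 3<&"$APT_HOOK_SOCKET" || exit 1
    cat <&3 >> /var/log/my-apt-hook.json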
* json: Add origins fields to version  [Julian Andres Klode, 2021-04-23; 1 file, -6/+10]
| | | | | | Provide access to the origins of a package, such that tools can display information about them; for example, you can write a hook counting security upgrades.
* test: Set -e in our test hook  [Julian Andres Klode, 2021-04-23; 1 file, -0/+1]
| | | | Gbp-Dch: ignore
* json: Encode NULL strings as null  [Julian Andres Klode, 2021-04-23; 1 file, -0/+8]
| | | | This is the only nullable thing we have here.
* json: Actually pop states  [Julian Andres Klode, 2021-04-23; 1 file, -0/+16]
| | | | | | | | | | | The JSON encoder only looked at the top state, but did not pop it, so if we nested objects, we got stuck in whatever the last state we pushed aside was, so in our example, we wrongly get a comma inserted _after_ key "b": {"a":[{}], "b":,[{}] }
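For comparison, a correct encoding of that nested example, once states are popped properly, would be:

    {"a":[{}],"b":[{}]}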
* json: Escape strings using \u escape sequences, add test  [Julian Andres Klode, 2021-04-23; 1 file, -0/+45]
| | | | | | | | This allows us to correctly encode strings containing quotation marks, escape characters and control characters. The test case is a bit nasty because it embeds private-cachefile.cc for linkage reasons.
* Automatically retry failed downloads 3 times  [Julian Andres Klode, 2021-04-15; 1 file, -0/+9]
| | | | | | | | | Enable the Acquire::Retries option by default, set to 3. This will help with slightly unreliable networking; future work is needed for adding backoff and SRV/IP rotation. LP: #1876035 Gbp-Dch: full
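The count stays configurable; a couple of hedged examples (the values and the conffile name are arbitrary):

    # Raise the retry count for a particularly flaky connection, one run only:
    apt-get update -o Acquire::Retries=5
    # Or restore the old no-retry behaviour persistently:
    echo 'Acquire::Retries "0";' > /etc/apt/apt.conf.d/99-no-retries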
* Error on packages without a Size field (option Acquire::AllowUnsizedPackages)  [Julian Andres Klode, 2021-04-13; 2 files, -0/+9]
| | | | | | | | | Repositories without Size information for packages are not proper and need fixing. This ensures people see an error in CI, and get notifications and hence the ability to fix it. It can be turned off by setting Acquire::AllowUnsizedPackages to true.
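Until such a repository is fixed, the error can be downgraded via the option named above; the package name here is a placeholder:

    apt-get install -o Acquire::AllowUnsizedPackages=true some-package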
* Fix downloads of unsized files that are largest in pipeline  [Julian Andres Klode, 2021-04-13; 1 file, -0/+38]
| | | | | | | | | The maximum request size is accidentally set to any sized file, so if an unsized file is present, and it turns out to be larger than the maximum size we set, we'd error out when checking if its size is smaller than the maximum request size. LP: #1921626
* Check for and discard expected warning from MaybeAddAuth  [Julian Andres Klode, 2021-03-31; 1 file, -0/+5]
| | | | | | | | | MaybeAddAuth() here tells us that it refused to use the credentials for an http source; but that caused the test suite to fail at a later stage because we checked if there were any errors/warning. Strangely, this is only triggered with LTO enabled. Actually check that the warning is being set and then reject it.
* Harden test for no new acquires after transaction abort  [David Kalnischkies, 2021-03-11; 1 file, -9/+3]
| If a transaction is doomed we want to gracefully shut down our zoo of worker processes. As explained in the referenced commit we do this by stopping the main process from handing out new work and ignoring the replies it gets from the workers, so that they eventually run out of work. We tested this previously by checking if a rred worker was given work items at all, but depending on how lucky the stars of the machine working on this are the worker would have already gotten work before the transaction was aborted – so we tried this 25 times in a row (f35601e5d2). No machine can be this lucky, right? Turns out the autopkgtest armhf machine is very lucky. I feel a bit sorry for feeding grep such a long "line" to work with, but it seems to work out. Porterbox amdahl (who is considerably less lucky; had to turn down to 1 try to get it to fail sometimes) is now happily running the test in an endless loop. Of course, I could have broken the test now, but it's still a rather generic grep (in some ways more generic even) and the main part of the testcase – the update process finishes and fails – is untouched. References: 38f8704e419ed93f433129e20df5611df6652620 Closes: #984966
* Ensure all index files sent custom tags to the methods  [David Kalnischkies, 2021-03-07; 1 file, -0/+10]
| | | | | | | | | | | The mirror method can distribute requests for files based on various metadata bits, but some – the main index files – weren't actually passing those on to the methods as advertised in the manpage. This is hidden both by mirror usually falling back to other sources which will eventually hit the right one and that if the repository does not support by-hash apt will automatically stick to the mirror which was used for the Release file.
* Start pdiff patching from the last possible starting point  [David Kalnischkies, 2021-03-07; 1 file, -0/+3]
| | | | | | | | | | | | | | | | | Especially in small sections of an archive it can happen that an index returns to a previous state (e.g. if a package was first added and then removed with no other changes happening in between). The result is that we have multiple patches which start from the same hash which if we perform clientside merging is no problem although not ideal as we perform needless work. For serverside merging it would not matter, but due to rred previously refusing to merge zero-size patches but dak ignoring failure letting it carry these size-zero patches until they naturally expire we run into a problem as these broken patches won't do and force us to fall back to downloading the entire index. By always starting from the last patch instead of the first with the starter hash we can avoid this problem and behave optimally in clientside merge cases, too.
* Rename pdiff merge patches only after they are all downloaded  [David Kalnischkies, 2021-03-07; 2 files, -8/+9]
| | | | | | | | | | | | | | The rred method expects the patches to have a certain name, which we have to rename the file to before calling the method, but by delaying the rename we ensure that if the download of one of them fails and a successful fallback occurs they are all properly cleaned up as no longer useful while in the error case the next apt run can potentially pick them up as already downloaded. Our test-pdiff-usage test was encountering this every other run, but did not fail as the check for unaccounted files in partial/ was wrapped in a subshell so that the failure produced failing output, but did not change the exit code.
* Allow merging with empty pdiff patches  [David Kalnischkies, 2021-03-06; 1 file, -2/+3]
| | | | | | | | There isn't a lot of sense in working on empty patches as they change nothing (quite literally), but they can be the result of merging multiple patches and so to not require our users to specifically detect and remove them, we can be nice and just ignore them instead of erroring out.
* regression fix: do require force-loopbreak for Conflicts  [Julian Andres Klode, 2021-03-01; 1 file, -15/+21]
| | | | | | | | Conflicts do require removing the package temporarily, so they really should not be used. We need to improve that eventually such that we can deconfigure packages when we have to remove their dependencies due to conflicts.
* Do not require force-loopbreak on Protected packages  [Julian Andres Klode, 2021-02-23; 1 file, -1/+56]
| | | | | | | | | dpkg will be changed in 1.20.8 to not require --force-remove for deconfiguration anymore, but we want to decouple our changes from the dpkg ones, so let's always pass --force-remove-protected when installing packages such that we can deconfigure protected packages. Closes: #983014
* Adjust loops to use size_t instead of int  [Julian Andres Klode, 2021-02-09; 1 file, -3/+3]
| | | | Gbp-Dch: ignore
* Fix test suite regression from StrToNum fixes  [Julian Andres Klode, 2021-02-09; 2 files, -56/+44]
| | | | | | | | | | | | We ignored the failure from strtoul() that those test cases had values out of range, hence they passed before, but now failed on 32-bit platforms because we use strtoull() and do the limit check ourselves. Move the tarball generator for test-github-111-invalid-armember to the createdeb helper, and fix the helper to set all the numbers for fields like uid to 0 instead of the maximum value the fields support (all 7s). Regression-Of: e0743a85c5f5f2f83d91c305450e8ba192194cd8
* Prevent temporary directory from triggering failure grepping  [David Kalnischkies, 2021-02-04; 1 file, -0/+1]
| | | | | | | | The grep for case-insensitive GPG finds also e.g. "/tmp/tmp.Kc5kKgPg0D" which is not the intention, so we simply eliminate the variation of the /tmp directory here from the output to prevent these false positives. Gbp-Dch: Ignore
* Avoid overstepping bounds in config file parsing  [David Kalnischkies, 2021-02-03; 1 file, -0/+8]
| | | | | | | Our configuration files are not security relevant, but having a parser which avoids crashing on them even if they are seriously messed up is not a bad idea anyway. It is also a good opportunity to brush up the code a bit avoiding a few small string copies with our string_view.