path: root/apt-pkg

* Fail ConfigDir reading if directory listing failed
  David Kalnischkies, 2021-02-03 (1 file, -1/+4)

  We were printing an error and hence have a non-zero exit code either
  way, but API-wise it makes sense to have this properly reported back
  to the caller to propagate it down the chain, e.g. while parsing
  #include stanzas.

* Free XXH3 state to avoid leak in cache hashing
  David Kalnischkies, 2021-02-03 (1 file, -1/+3)

  We do this once (usually), so the leak is not tremendously big, but
  it is detected as a leak by the fuzzer and trips it up.
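
  A minimal sketch of the create/free pairing the fix enforces, using
  the libxxhash streaming API (the surrounding apt code is elided):

      #include <xxhash.h>

      // Hash a buffer with the streaming XXH3 API. The state returned by
      // XXH3_createState() is heap-allocated and must be released with
      // XXH3_freeState() - the call this commit adds.
      static XXH64_hash_t HashBuffer(const void *data, size_t len)
      {
         XXH3_state_t *state = XXH3_createState();
         if (state == nullptr)
            return 0;
         XXH3_64bits_reset(state);
         XXH3_64bits_update(state, data, len);
         XXH64_hash_t const result = XXH3_64bits_digest(state);
         XXH3_freeState(state); // without this, every hashing run leaks the state
         return result;
      }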

* Guess compressor only if no AR member with exact name exists
  David Kalnischkies, 2021-02-02 (1 file, -25/+34)

  Explicitly opening a tar member is a bit harder than it needs to be,
  as you have to remove the compressor extension so that it can be
  guessed here, again potentially choosing the wrong member. That
  doesn't really matter for deb packages of course, as the member count
  is pretty low and strongly defined, but testing is easier this way.
  It also finally fixes an incorrectly formatted error message.

* Use 500 MB memory limit for xz/lzma decoding
  David Kalnischkies, 2021-02-02 (1 file, -15/+6)

  The buffers we feed in and read out are usually only a couple of
  kilobytes big, so allowing lzma to use an unlimited amount of memory
  is easy & okay, but not needed and confuses memory checkers, as it
  causes lzma to malloc a huge chunk of memory (which it will never
  use). So let's just use a "big enough" value instead. In exchange we
  simplify the decoder calling: as we were already using the
  auto-variant for xz, we can avoid the if-else and let liblzma decide
  what it decodes.
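
  Roughly what the simplified setup looks like with liblzma's
  lzma_auto_decoder (a sketch, not apt's actual wrapper code):

      #include <lzma.h>

      // Set up a decoder that autodetects .xz vs. legacy .lzma input and
      // caps liblzma's internal allocations at a "big enough" limit
      // instead of passing UINT64_MAX (unlimited).
      static bool OpenDecoder(lzma_stream &strm)
      {
         strm = lzma_stream(); // zero-initialized, same as LZMA_STREAM_INIT
         uint64_t const memlimit = 500ULL * 1024 * 1024; // ~500 MB
         // lzma_auto_decoder picks the container format itself, so the
         // caller no longer needs an xz-vs-lzma if-else.
         return lzma_auto_decoder(&strm, memlimit, 0) == LZMA_OK;
      }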

* Merge branch 'pu/include-all-translations' into 'master'
  Julian Andres Klode, 2021-01-27 (1 file, -1/+1)

  Include all translations when building the cache.
  See merge request apt-team/apt!156

* Include all translations when building the cache
  Julian Andres Klode, 2021-01-27 (1 file, -1/+1)

  We do download all translations we ever downloaded, but we don't add
  all of those to the cache, meaning that if we run update with LANG=C,
  it might still download your de_DE translation, but it won't insert
  it into the cache, causing your de_DE user to not get translated
  messages.

  LP: #1907850

* dpkg: fix passing readonly /dev/null fd as stdout/stderr
  Youfu Zhang, 2021-01-22 (1 file, -1/+1)

  The read-only /dev/null was duplicated to stdout and stderr, causing
  writes to those descriptors to fail:

      [pid 260] openat(AT_FDCWD, "/dev/null", O_RDONLY) = 7
      [pid 260] dup2(7, 0) = 0
      [pid 260] close(5) = 0
      [pid 260] dup2(6, 1) = 1
      [pid 260] dup2(7, 2) = 2
      [pid 260] write(2, "Chrooting into ", 15) = -1 EBADF (Bad file descriptor)
      [pid 260] chroot("/chroot/") = 0
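
  A hedged sketch of the bug and its fix in plain POSIX terms (the real
  change is in apt's dpkg invocation code; the helper name here is
  illustrative):

      #include <fcntl.h>
      #include <unistd.h>

      // Redirect stdout/stderr to /dev/null. Opening the fd O_RDONLY
      // (the bug) makes any later write(1, ...) or write(2, ...) fail
      // with EBADF: a descriptor used as stdout/stderr must be writable.
      static void SilenceOutput()
      {
         int const devnull = open("/dev/null", O_RDWR); // was O_RDONLY
         if (devnull == -1)
            return;
         dup2(devnull, STDOUT_FILENO);
         dup2(devnull, STDERR_FILENO);
         close(devnull);
      }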

* pkgcachegen: Avoid write to old cache for Version::Extra
  Julian Andres Klode, 2021-01-13 (1 file, -1/+2)

  Assigning the result of AllocateInMap directly to Ver->d caused
  Ver->d to be resolved first, and hence if Ver was remapped during the
  AllocateInMap, we were trying to assign to the old value.

  Closes: #980037
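
  The hazard in miniature, as a self-contained toy (the growable arena
  stands in for apt's dynamic mmap; all names are illustrative):

      #include <cstdint>
      #include <vector>

      // Toy arena that may reallocate ("remap") on growth.
      struct Arena
      {
         std::vector<uint8_t> mem;
         uint32_t Allocate(size_t n)
         {
            uint32_t const off = static_cast<uint32_t>(mem.size());
            mem.resize(mem.size() + n); // may move the whole buffer
            return off;
         }
         template <typename T> T *Ptr(uint32_t off)
         {
            return reinterpret_cast<T *>(mem.data() + off);
         }
      };

      int main()
      {
         Arena a;
         uint32_t const verOff = a.Allocate(sizeof(uint32_t));

         // BROKEN: '*a.Ptr<uint32_t>(verOff) = a.Allocate(16);' may
         // compute the destination pointer before Allocate() moves the
         // buffer, storing into freed memory.
         //
         // FIXED (what the commit does): allocate into a temporary
         // first, then resolve the pointer against the current mapping.
         uint32_t const d = a.Allocate(16);
         *a.Ptr<uint32_t>(verOff) = d;
      }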

* Call ischroot with -t
  Julian Andres Klode, 2021-01-11 (1 file, -0/+1)

  We interpreted "cannot detect chroot" as "not a chroot", but it's
  arguably the better idea to detect it as a chroot, to avoid new
  behavior from phased updates in situations where it's unclear (no
  /proc mounted, for example).

* kernels: Fix std::out_of_range if no kernels to protect
  Julian Andres Klode, 2021-01-11 (1 file, -1/+6)

  In case we did not find any kernels to protect, the regular
  expression will be empty, and trying to substr(1) it will fail.
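
  The failure mode and its guard in miniature (illustrative, not apt's
  exact code):

      #include <string>

      // Each protected kernel contributes "|<escaped-name>" to the
      // pattern, and substr(1) strips the leading '|'. On an empty
      // string, substr(1) throws std::out_of_range, so the no-kernels
      // case needs a guard.
      static std::string FinishPattern(std::string const &pattern)
      {
         if (pattern.empty())
            return "";             // nothing to protect, nothing to match
         return pattern.substr(1); // drop the leading alternation '|'
      }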

* Merge branch 'pu/small-fixes' into 'master'
  Julian Andres Klode, 2021-01-08 (2 files, -4/+4)

  Pu/small fixes.
  See merge request apt-team/apt!151

* kernels: remove spurious || false
  Julian Andres Klode, 2021-01-08 (1 file, -3/+3)

  Gbp-Dch: ignore

* Fix getMachineID copy-paste error
  Julian Andres Klode, 2021-01-08 (1 file, -1/+1)

  Gbp-Dch: ignore

* Implement update --error-on=any
  Julian Andres Klode, 2021-01-08 (1 file, -2/+19)

  People have been asking for a feature to error out on transient
  network errors for a while. This gives them one, while keeping the
  door open for other modes we need, such as --error-on=no-success,
  which we need to determine when to retry the daily update job.

  Closes: #594813 (and a whole bunch of duplicates...)

* Phase using source version to be binNMU-correct
  Julian Andres Klode, 2021-01-08 (1 file, -1/+1)

  If we have different binNMU versions on different architectures, we
  don't want madness to ensue. This is a change from how update-manager
  does things, as Ubuntu does not have binNMUs, but I believe it's the
  right thing to do for a generic solution.

* Add support for Phased-Update-Percentage
  Julian Andres Klode, 2021-01-08 (9 files, -5/+152)

  This adds support for Phased-Update-Percentage by pinning upgrades
  that are not to be installed down to 1 (a sketch of the per-machine
  decision follows this entry). The output of policy has been changed
  to add the level of phasing, and documentation has been improved to
  document how phased updates work.

  The patch detects if it is running in a chroot, and if so, always
  includes phased updates, restoring classic apt behavior to avoid
  behavioral changes on buildd chroots.

  Various options are added to control this all:

  * APT::Get::{Always,Never}-Include-Phased-Updates and their legacy
    update-manager equivalents to always or never include phased
    updates
  * APT::Machine-ID can be set to a UUID string to have all machines in
    a fleet phase the same
  * Dir::Etc::Machine-ID is weird in that its default is sort of like
    ../machine-id, but not really, as ../machine-id would look up
    $PWD/../machine-id and not relative to Dir::Etc; but it allows you
    to override the path to machine-id (as opposed to the value)
  * Dir::Bin::ischroot is the path to the ischroot(1) binary, which is
    used to detect whether we are running in a chroot
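
  A hedged sketch of how a percentage rollout can be made stable per
  machine and per update (the seeding and RNG details of apt's real
  implementation may differ):

      #include <random>
      #include <string>

      // Mix source package, source version, and machine-id into a seed,
      // so a machine always gets the same answer for a given update, but
      // different answers across updates, and a fleet phases evenly.
      static bool IsPhasedIn(std::string const &srcPkg,
                             std::string const &srcVer,
                             std::string const &machineId,
                             unsigned int percentage)
      {
         std::string const key = srcPkg + "-" + srcVer + "-" + machineId;
         std::seed_seq seed(key.begin(), key.end());
         std::minstd_rand rng(seed);
         std::uniform_int_distribution<unsigned int> dist(0, 99);
         return dist(rng) < percentage; // 0 never phases in, 100 always
      }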

* Merge branch 'pu/optional-immediate' into 'master'
  Julian Andres Klode, 2021-01-08 (1 file, -4/+12)

  Make immediate configuration optional.
  See merge request apt-team/apt!148

* Make immediate configuration optional
  Julian Andres Klode, 2021-01-08 (1 file, -4/+12)

  The benefit of immediate configuration is that Essential packages
  will be configured immediately, so if they wrongly do not work
  unconfigured, they won't fail later packages. However, we've reached
  the point where dependencies on the essential set are too complex for
  immediate configuration to always work, causing installations to
  error out at the end despite having succeeded, because we did not
  correctly return the error here and did not check for pending errors
  before running dpkg.

  Given that we check and configure any packages at the end that have
  not been configured yet, or fail if we can't configure them, making
  immediate configuration optional is the best way forward: it orders
  as it does now, but then does not spuriously fail after having
  successfully installed everything.

  Closes: #973305, #188161, #211075, #649588
  LP: #1871268

* Merge branch 'pu/depends' into 'master'
  Julian Andres Klode, 2021-01-07 (2 files, -0/+100)

  ?depends patterns and friends.
  See merge request apt-team/apt!146

* Implement ?reverse-depends/~R and friends
  Julian Andres Klode, 2020-12-27 (2 files, -0/+49)

  This was easy.

* patterns: Add dependency patterns ?depends, ?conflicts, etc.
  Julian Andres Klode, 2020-12-27 (2 files, -0/+51)

  These match the target package, not the target versions, which is
  slightly unfortunate but might make sense. For example,
  ?depends(?name(^libc6$)) matches every package that declares a
  Depends on libc6. Maybe we should add a variant that matches versions
  instead.

* Only keep up to 3 (not 4) kernels
  Julian Andres Klode, 2021-01-04 (1 file, -1/+1)

  This fixes a problem on Ubuntu systems where the /boot partition has
  been sized to manage 3 kernels, but does not really work with 4
  kernels, which was causing problems all over the place.

* Determine autoremovable kernels at run-time
  Julian Andres Klode, 2021-01-04 (4 files, -7/+228)

  Our kernel autoremoval helper script protects the currently booted
  kernel, but it only runs whenever we install or remove a kernel,
  causing it to protect the kernel that was booted at that point in
  time, which is not necessarily the same kernel as the one that is
  running right now.

  Reimplement the logic in C++ such that we can calculate it at
  run-time: provide a function to produce a regular expression that
  matches all kernels that need protecting, and change the default root
  set function in the DepCache to make use of that expression. Note
  that the code groups the kernels by version as before, and then marks
  all kernel packages with the same version.

  This optimized version inserts a virtual package $kernel into the
  cache when building it, to avoid having to iterate over all packages
  in the cache to find the installed ones, significantly improving
  performance at a minor cost when building the cache.

  LP: #1615381
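
  Building such a protection expression could look roughly like this
  (illustrative only; the real code also handles meta-packages and the
  $kernel virtual package):

      #include <regex>
      #include <string>
      #include <vector>

      // Produce one alternation matching every kernel package name that
      // belongs to a protected version, e.g. the booted and the latest.
      static std::regex ProtectedKernelsRegex(std::vector<std::string> const &versions)
      {
         std::string pattern;
         for (auto const &v : versions)
         {
            std::string escaped;
            for (char const c : v) // make the dots in "5.10.0-3" literal
               escaped += (c == '.' ? std::string("\\.") : std::string(1, c));
            if (!pattern.empty())
               pattern += "|";
            pattern += "^linux-(image|headers|modules)-" + escaped + "$";
         }
         if (pattern.empty())
            pattern = "$^"; // matches nothing: no kernels to protect
         return std::regex(pattern);
      }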

* depcache: Cache our InRootSetFunc
  Julian Andres Klode, 2021-01-04 (2 files, -8/+24)

  This avoids the cost of setting up the function every time we mark
  and sweep.

* Don't re-encode encoded URIs in pkgAcqFile
  David Kalnischkies, 2020-12-18 (1 file, -1/+2)

  This commit potentially breaks code feeding apt an encoded URI using
  a method which does not get URIs sent encoded. The webserverconfig
  requests in our tests are an example for this – but they only worked
  before if the server was expecting a double encoding, as that was
  what was happening to an encoded URI: so this was unlikely to work as
  expected in practice. Now with the new methods we can drop this
  double encoding and rely on the URI being passed properly (and
  without modification) between the layers, so that passing in encoded
  URIs should now work correctly.

* Keep URIs encoded in the acquire system
  David Kalnischkies, 2020-12-18 (9 files, -43/+140)

  We do not deal a lot with URIs which need encoding, but when we do,
  it is a pain that we store them decoded in the acquire system, as it
  means we have to decode and re-encode URIs eventually, which
  potentially gives us slightly different URIs. We see that in our own
  testing framework while setting up redirects, as the config options
  are effectively double-encoded and decoded to pass them around
  successfully, because otherwise %2f and / in a URI are treated the
  same.

  This commit adds the infrastructure for methods to opt into getting
  URIs sent in encoded form (and returning them to us in encoded form,
  too) so that we eventually do not have to touch the URIs, which is
  how it should be. This means, though, that we have to deal with
  methods which do not support this yet (aka: all at the moment), for
  which we decode and encode while communicating with them.
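
  Why a decode/re-encode round trip is lossy, as a small self-contained
  sketch (with a hypothetical minimal decoder, not apt's strutl
  helpers):

      #include <cassert>
      #include <string>

      // Minimal percent-decoder: turns "%2F" into "/".
      static std::string Decode(std::string const &s)
      {
         std::string out;
         for (size_t i = 0; i < s.size(); ++i)
            if (s[i] == '%' && i + 2 < s.size())
            {
               out += static_cast<char>(std::stoi(s.substr(i + 1, 2), nullptr, 16));
               i += 2;
            }
            else
               out += s[i];
         return out;
      }

      int main()
      {
         // "a%2Fb" is ONE path segment containing a slash; "a/b" is TWO
         // segments. Decoding collapses both to "a/b", so no re-encoder
         // can restore the original - hence: keep URIs encoded end-to-end.
         assert(Decode("a%2Fb") == "a/b" && Decode("a/b") == "a/b");
      }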

* Do not require libxxhash-dev for including pkgcachegen.h
  Julian Andres Klode, 2020-12-17 (1 file, -1/+3)

* Unroll pkgCache::sHash 8 times, break up the dependency
  Julian Andres Klode, 2020-12-15 (1 file, -2/+16)

  Unroll pkgCache::sHash 8 times and break up the dependency between
  the iterations by expanding the calculation H(n) = 33 * H(n-1) + c
  eight steps at once rather than performing it 8 times in sequence.
  This seems to yield about a 0.4% performance improvement.

  I tried unrolling by 4 and 2 bytes as well; those only have 3 ifs at
  the end rather than 1 small loop, but they were actually slower:
  potentially the code got too large and the cache went bonkers. I also
  tried unrolling 4 times instead of 8, thinking that smaller code
  might yield better results overall, but that was slower as well.
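
  The trick, sketched: expanding H(n) = 33 * H(n-1) + c(n) eight steps
  gives H(n+8) = 33^8 * H(n) + sum over i of 33^(7-i) * c(i), so the
  eight per-character multiplies use precomputed constants and do not
  depend on each other. A hedged illustration (not apt's exact code,
  which also normalizes case):

      #include <cstddef>
      #include <cstdint>

      // DJB-style hash, 8-way unrolled: the compiler folds each chain of
      // 33u factors into a single constant (mod 2^32), leaving independent
      // multiply-adds instead of one serial dependency per character.
      static uint32_t Hash(const char *s, size_t len)
      {
         uint32_t h = 5381;
         size_t i = 0;
         for (; i + 8 <= len; i += 8)
            h = h * (33u * 33u * 33u * 33u * 33u * 33u * 33u * 33u)
              + s[i]     * (33u * 33u * 33u * 33u * 33u * 33u * 33u)
              + s[i + 1] * (33u * 33u * 33u * 33u * 33u * 33u)
              + s[i + 2] * (33u * 33u * 33u * 33u * 33u)
              + s[i + 3] * (33u * 33u * 33u * 33u)
              + s[i + 4] * (33u * 33u * 33u)
              + s[i + 5] * (33u * 33u)
              + s[i + 6] * 33u
              + s[i + 7];
         for (; i < len; ++i) // leftover tail, at most 7 characters
            h = h * 33u + s[i];
         return h;
      }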

* Use XXH3 for cache, hash table hashing
  Julian Andres Klode, 2020-12-15 (3 files, -64/+20)

  XXH3 is faster than both our CRC32c implementation and the DJB hash
  for hash table hashing, so meh, let's switch to it.

* Raise APT::Cache-HashtableSize to 196613
  Julian Andres Klode, 2020-12-10 (1 file, -1/+1)

  We now have over 100k package names (my Ubuntu system has 125k
  already), so increase the hash table size to match. This will cost us
  about a MB in cache size, but gives a very nice speed-up somewhere
  around 3%-4% or so.

* CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB
  Julian Andres Klode, 2020-12-09 (1 file, -0/+10)

  The integer overflow was detected by DonKult, who added a check like
  this:

      (std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))

  which deals with the code as is, but is still a fairly big limit and
  could become fragile if we change the code. Let's limit our file
  sizes to 128 GiB, which should be sufficient for everyone.

  Original comment by DonKult: The code assumes that it can add
  sizeof(Block)-1 to the size of the item later on, but if we are close
  to a 64bit overflow this is not possible. Fixing this seems too
  complex compared to just ensuring there is enough room left, given
  that we will have a lot more problems the moment we will be acting on
  files that large; if the item is that large, the (valid) tar
  including it probably doesn't fit in 64bit either.
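
  The shape of such a guard, as a hedged sketch (names are
  illustrative):

      #include <cstdint>

      // Reject tar items above a hard cap far below the 64-bit ceiling,
      // so later arithmetic such as rounding the size up to the 512-byte
      // block boundary (Size + sizeof(Block) - 1) cannot overflow.
      static bool TarItemSizeOk(uint64_t itemSize)
      {
         constexpr uint64_t MaxItemSize = 128ULL * 1024 * 1024 * 1024; // 128 GiB
         return itemSize <= MaxItemSize;
      }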

* CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB
  Julian Andres Klode, 2020-12-09 (1 file, -0/+15)

  Like the code in arfile.cc, MemControlExtract also has buffer
  overflows in code allocating memory for parsing control files.
  Specify an upper limit of 64 MiB for control files to both protect
  against the Size overflowing (we allocate Size + 2 bytes), and
  protect a bit against control files consisting only of zeroes.

* tarfile: OOM hardening: Limit size of long names/links to 1 MiB
  Julian Andres Klode, 2020-12-09 (1 file, -1/+10)

  Tarballs have long names and long link targets structured by a
  special tar header with a GNU extension, followed by the actual
  content (padded to 512 bytes). Essentially, think of a name as a
  special kind of file. The limit of a file size in a header is 12
  bytes, aka 10**12 or 1 TB. While this works OK-ish for file content
  that we stream to extractors, we need to copy file names into memory,
  and this opens us up to an OOM DoS attack. Limit the file name size
  to 1 MiB, as libarchive does, to make things safer.

* CVE-2020-27350: arfile: Integer overflow in parsing
  Julian Andres Klode, 2020-12-09 (1 file, -1/+13)

  GHSL-2020-169: The first hunk adds a check that we have more bytes
  left to read in the file than the size of the member, ensuring that
  (a) the number is not negative, which caused the crash here, and (b)
  we similarly avoid other issues with trying to read too much data.

  GHSL-2020-168: Long file names are encoded by a special marker in the
  filename, with the real filename being part of what is normally the
  data. We did not check that the length of the file name is within the
  length of the member, which means that we got an overflow later when
  subtracting the length from the member size to get the remaining
  member size.

  The file createdeb-lp1899193.cc was provided by GitHub Security Lab
  and reformatted using apt coding style for inclusion in the test
  case; both of these issues have an automated test case in
  test/integration/test-ubuntu-bug-1899193-security-issues.

  LP: #1899193
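
  In outline, the two checks look like this (a hedged sketch with
  illustrative names):

      #include <cstdint>

      // GHSL-2020-169: a member cannot be larger than what remains of
      // the archive, which also rules out bogus huge sizes parsed from
      // the header. Assumes offset <= fileSize has been checked already.
      static bool MemberFits(uint64_t memberSize, uint64_t offset, uint64_t fileSize)
      {
         return memberSize <= fileSize - offset;
      }

      // GHSL-2020-168: a GNU long filename lives inside the member data,
      // so its length must fit in the member, or the later computation
      // 'memberSize - nameLen' underflows.
      static bool LongNameFits(uint64_t nameLen, uint64_t memberSize)
      {
         return nameLen <= memberSize;
      }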

* patterns: Terminate short pattern by ~ and !
  Julian Andres Klode, 2020-12-07 (1 file, -1/+4)

  This allows patterns like ~nalpha~nbeta and ~nalpha!~nbeta to work
  like they do in aptitude. Also add a comment to remind readers that
  everything in START should be in short too.

  Cc: stable >= 2.0

* HexDigest: Silence -Wstringop-overflow
  Julian Andres Klode, 2020-12-04 (1 file, -0/+1)

  Add an APT_ASSUME macro and silence -Wstringop-overflow in
  HexDigest(): the compiler does not know that the size of a hash is at
  most 512 bit, so tell it that it is. It otherwise thinks we might be
  doing a stack buffer overflow of the VLA:

      ../apt-pkg/contrib/hashes.cc: In function ‘std::string HexDigest(gcry_md_hd_t, int)’:
      ../apt-pkg/contrib/hashes.cc:415:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
        415 |    Result[(Size)*2] = 0;
            |    ~~~~~~~~~~~~~~~~~^~~
      ../apt-pkg/contrib/hashes.cc:414:9: note: at offset [-9223372036854775808, 9223372036854775807] to an object with size at most 4294967295 declared here
        414 |    char Result[((Size)*2) + 1];
            |         ^~~~~~

  Fix this by adding a simple assertion. This generates an extra two
  instructions in the normal code path, so it's not exactly super
  costly.
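
  One plausible shape for such a macro (a sketch, not necessarily apt's
  exact definition): check in debug builds, and hand the optimizer the
  invariant so the "impossible" store disappears from its analysis.

      #include <cassert>

      // Assert X in debug builds; in all builds, mark the failing path
      // unreachable so warnings and codegen based on it go away.
      // (__builtin_unreachable is a GCC/Clang builtin.)
      #define APT_ASSUME(X)              \
         do                              \
         {                               \
            assert(X);                   \
            if (!(X))                    \
               __builtin_unreachable();  \
         } while (0)

      // Usage in HexDigest()-like code: a digest is at most 512 bits,
      // i.e. Size <= 64 bytes, so Result[Size * 2] is always in bounds.
      //    APT_ASSUME(Size <= 64);
      //    char Result[(Size * 2) + 1];
      //    Result[Size * 2] = 0;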

* Do not immediately configure m-a: same packages in lockstep
  Julian Andres Klode, 2020-11-06 (1 file, -1/+2)

  In LP#835625 it was reported that apt did not unpack multi-arch
  packages in the correct order, and dpkg did not like that. The fix
  also made apt configure packages together, which is not strictly
  necessary. This turned out to cause issues now, because of
  dependencies on libc6:i386 that caused immediate configuration of it
  to not work.

  Work around the issue by not configuring multi-arch: same packages in
  lockstep if they have the immediate flag set. This will be the
  pseudo-essential set, and given how essential works, we mostly need
  the native arch to work correctly anyway.

  LP: #1871268
  Regression-Of: 30426f4822516bdd26528aa2e6d8d69c1291c8d3

* Do not produce late error if immediate configuration fails, just warn
  Julian Andres Klode, 2020-10-21 (1 file, -1/+1)

  We are seeing more and more installations fail due to immediate
  configuration issues related to libc6. Immediate configuration is
  supposed to ensure that an essential package is configured
  immediately, just in case some other package uses a part of the
  essential package that only works if that package is configured.

  This used to be a warning; it was turned into an error in some commit
  I can't remember right now, but importantly, the error missed a
  return, which means that ordering completed successfully and packages
  were being installed anyway. And after all that happened
  successfully, we'd print an error at the end and exit with an error
  code, which is not super useful.

  Revert the error back to a warning such that the behavior stays the
  same but we do not fail (unless we mess up ordering, which then gets
  caught by a consistency check later on).

  Closes: #953260
  Closes: #972552
  LP: #1871268

* acquire: Do not hide _error messages in Fail()
  Julian Andres Klode, 2020-08-11 (1 file, -11/+14)

  If we have errors pending, always log them with our failure message
  to provide more context.

* Default Acquire::AllowReleaseInfoChange::Suite to "true"
  Julian Andres Klode, 2020-08-10 (1 file, -1/+1)

  Closes: #931566

* Merge branch 'master' into 'master'
  Julian Andres Klode, 2020-08-04 (1 file, -1/+2)

  Support marking all newly installed packages as automatically
  installed.
  See merge request apt-team/apt!110

* Support marking all newly installed packages as automatically installed
  Nicolas Schier, 2020-06-08 (1 file, -1/+2)

  Add option '--mark-auto' to 'apt install' that marks all newly
  installed packages as automatically installed.

  Signed-off-by: Nicolas Schier <nicolas@fjasle.eu>

* Merge branch 'pu/less-slaves' into 'master'
  Julian Andres Klode, 2020-08-04 (1 file, -1/+1)

  Remove master/slave terminology.
  See merge request apt-team/apt!124

* Replace whitelist/blacklist with allowlist/denylist
  Julian Andres Klode, 2020-08-04 (1 file, -1/+1)

* Merge branch 'pu/apt-key-deprecated' into 'master'
  Julian Andres Klode, 2020-08-04 (1 file, -0/+3)

  Fully deprecate apt-key, schedule removal for Q2/2022.
  See merge request apt-team/apt!119

* Fully deprecate apt-key, schedule removal for Q2/2022
  Julian Andres Klode, 2020-05-06 (1 file, -0/+3)

  People are still using apt-key add and friends, despite that not
  being guaranteed to work. Let's tell them to stop doing so. We might
  still want a list command at a future point, but this needs deciding,
  and a blanket ban atm seems like a sensible step until we figure that
  out.

* Reorder config check before result looping for SRV parsing debug
  David Kalnischkies, 2020-07-02 (1 file, -11/+6)

  There is no need to iterate over all results if we will be doing
  nothing anyhow, as it isn't that common to have that debug option
  enabled.

* Add dependency points in the resolver also to providers
  David Kalnischkies, 2020-07-02 (1 file, -9/+40)

  We were traditionally adding points for some dependency types to the
  real package, but we should also do it for providers of that package
  to help the resolver, especially if the real package is for some
  reason not tagged for removal yet/anymore.

  While at it, we ensure that the points are only attributed once per
  package, as especially with versioned provides a package can nowadays
  provide another package many times and would hence acquire a lot of
  points.

* Filter out impossible solutions for protected propagation
  David Kalnischkies, 2020-07-02 (1 file, -3/+18)

  If the package providing the given solution is already tagged for
  removal (or at least for "not installing"), we can ignore this
  solution as a possibility, as it is not one. That means we can avoid
  exploring the option, and potentially forward the protected flag
  further if that helps in reducing the possibilities to a single one.

* Delay removals due to Conflicts until Depends are resolved
  David Kalnischkies, 2020-07-02 (2 files, -79/+101)

  Marking a package for removal is fine if we know that we have to
  remove that package, but if we are in an alternative branch we might
  not go this route in the end and hence have a package pointlessly
  marked for removal which isn't questioned later on.

  We check if we are allowed to remove that package to avoid working on
  the positive dependencies if not, but we mark it for removal only
  after all the other dependencies are successfully resolved.

  In an ideal world we would let the problemResolver do its job on such
  packages, but the resolver might decide against doing the removal and
  explore another option, like the next alternative. That might be a
  good idea, but it is not the behaviour we had before, so this is the
  best we can do for now without changing the resolver drastically.