* Fixup manual page docbook syntax (Julian Andres Klode, 2021-01-08; 2 files, -6/+8)

  No idea why we don't have a manual page syntax check (what prepare-release post-build does) in CI. Should fix that eventually.

  Gbp-Dch: ignore

* Merge branch 'pu/small-fixes' into 'master' (Julian Andres Klode, 2021-01-08; 2 files, -4/+4)

  Pu/small fixes

  See merge request apt-team/apt!151

| * kernels: remove spurious || false (Julian Andres Klode, 2021-01-08; 1 file, -3/+3)

  Gbp-Dch: ignore

| * Fix getMachineID copy-paste error (Julian Andres Klode, 2021-01-08; 1 file, -1/+1)

  Gbp-Dch: ignore

* | Merge branch 'pu/apt-update-error-modes' into 'master' (Julian Andres Klode, 2021-01-08; 5 files, -2/+31)

  Implement update --error-on=any

  See merge request apt-team/apt!150

| * Implement update --error-on=any (Julian Andres Klode, 2021-01-08; 5 files, -2/+31)

  People have been asking for a while for a way to error out on transient network errors. This provides one, while keeping the door open for other modes we may need later, such as --error-on=no-success, which we need to determine when to retry the daily update job.

  Closes: #594813 (and a whole bunch of duplicates...)

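  With this in place, an invocation like `apt-get update --error-on=any` should make even transient fetch failures fatal to the exit status, so scripts and timers can react to them; the exact semantics are those described in the merge request above.
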
* Merge branch 'pu/phased-updates' into 'master' (Julian Andres Klode, 2021-01-08; 15 files, -7/+469)

  Add support for Phased-Update-Percentage

  See merge request apt-team/apt!129

| * Phase using source version to be binNMU-correct (Julian Andres Klode, 2021-01-08; 1 file, -1/+1)

  If we have different binNMU versions on different architectures, we don't want madness to ensue. This is a change from how update-manager does things, as Ubuntu does not have binNMUs, but I believe it's the right thing to do for a generic solution.

| * Add support for Phased-Update-Percentage (Julian Andres Klode, 2021-01-08; 15 files, -7/+469)

  This adds support for Phased-Update-Percentage by pinning upgrades that are not to be installed down to 1. The output of policy has been changed to add the level of phasing, and documentation has been improved to document how phased updates work.

  The patch detects if it is running in a chroot, and if so, always includes phased updates, restoring classic apt behavior to avoid behavioral changes on buildd chroots. Various options are added to control all of this (a sketch of the phasing decision follows below):

  * APT::Get::{Always,Never}-Include-Phased-Updates and their legacy update-manager equivalents to always or never include phased updates
  * APT::Machine-ID can be set to a UUID string to have all machines in a fleet phase the same
  * Dir::Etc::Machine-ID is weird in that its default is sort of like ../machine-id, but not really, as ../machine-id would look up $PWD/../machine-id and not relative to Dir::Etc; but it allows you to override the path to machine-id (as opposed to the value)
  * Dir::Bin::ischroot is the path to the ischroot(1) binary, which is used to detect whether we are running in a chroot.

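  As a concrete sketch of the phasing decision implied above (illustrative only: the function, the string hashing and the separator are assumptions, not apt's actual implementation; what is grounded in the commits is that the machine ID and the source name/version determine the outcome, compared against the percentage):

    #include <cstddef>
    #include <functional>
    #include <string>

    // Illustrative sketch: is this machine inside the phased group?
    // Hashing machine ID + source name + source version makes the choice
    // deterministic per machine, and setting APT::Machine-ID to the same
    // UUID across a fleet makes the whole fleet phase identically.
    static bool IsPhasedIn(std::string const &MachineID,
                           std::string const &SourceName,
                           std::string const &SourceVersion,
                           unsigned int PhasedUpdatePercentage)
    {
       std::size_t const Seed = std::hash<std::string>{}(
           MachineID + "-" + SourceName + "-" + SourceVersion);
       return (Seed % 100) < PhasedUpdatePercentage;
    }

  An upgrade falling outside its group is then pinned down to priority 1, as described above, so it is held back rather than installed.
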
* Merge branch 'pu/autoremove-kernels-in-apt-only' into 'master' (Julian Andres Klode, 2021-01-08; 2 files, -4/+3)

  Only autoremove kernels in apt(8); respect --no-auto-remove

  See merge request apt-team/apt!149

| * Only autoremove kernels in apt(8); respect --no-auto-remove (Julian Andres Klode, 2021-01-08; 2 files, -4/+3)

  Automatically removing kernels in apt-get could be unexpected, so limit it to apt for now.

  To handle --no-auto-remove correctly, rewrite the hack that makes apt ignore APT::Get::AutomaticRemove options from config files so that it unsets the option instead. This means we can use FindB("APT::Get::AutomaticRemove", true) as the default for APT::Get::AutomaticRemove::Kernels and get the behavior we want: if you set --no-auto-remove, it is respected because that FindB returns false; if you don't set it, it will be true (see the sketch below).

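  A minimal sketch of that chained default (the wrapper function is hypothetical; the FindB call and option names are from the message above):

    #include <apt-pkg/configuration.h>

    // Hypothetical helper showing the chained FindB defaults.
    bool AutoRemoveKernels()
    {
       // false if --no-auto-remove was given, true if nothing was set
       bool const Default = _config->FindB("APT::Get::AutomaticRemove", true);
       // the kernel-specific option inherits that value unless set explicitly
       return _config->FindB("APT::Get::AutomaticRemove::Kernels", Default);
    }
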
* | Merge branch 'pu/optional-immediate' into 'master' (Julian Andres Klode, 2021-01-08; 1 file, -4/+12)

  Make immediate configuration optional

  See merge request apt-team/apt!148

| * Make immediate configuration optional (Julian Andres Klode, 2021-01-08; 1 file, -4/+12)

  The benefit of immediate configuration is that Essential packages are configured immediately, so if they wrongly do not work while unconfigured, they do not break later packages. However, we've reached the point where dependencies on the essential set are too complex for immediate configuration to always work, causing installations to error out at the end despite having succeeded, because we did not correctly return the error here and did not check for pending errors before running dpkg.

  Given that we check and configure at the end any packages that have not been configured yet, or fail if we can't configure them, making immediate configuration optional is the best way forward: it orders as it does now, but then does not spuriously fail after having successfully installed everything.

  Closes: #973305, #188161, #211075, #649588
  LP: #1871268

* | Merge branch 'pu/bump-codenames' into 'master' (Julian Andres Klode, 2021-01-07; 2 files, -6/+6)

  Bump codenames to bullseye/hirsute and adjust -security codename

  See merge request apt-team/apt!147

| * Bump codenames to bullseye/hirsute and adjust -security codename (Julian Andres Klode, 2021-01-07; 2 files, -6/+6)

  Closes: #969932

* | Merge branch 'pu/depends' into 'master' (Julian Andres Klode, 2021-01-07; 3 files, -0/+127)

  ?depends patterns and friends

  See merge request apt-team/apt!146

| * Implement ?reverse-depends/~R and friends (Julian Andres Klode, 2020-12-27; 3 files, -0/+57)

  This was easy.

| * woof (Julian Andres Klode, 2020-12-27; 1 file, -1/+1)

| * patterns: Add dependency patterns ?depends, ?conflicts, etc. (Julian Andres Klode, 2020-12-27; 3 files, -0/+70)

  These match the target package, not the target versions, which is slightly unfortunate but might make sense. Maybe we should add a variant that matches versions instead.

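  As an illustration of the new syntax (the concrete pattern is hypothetical usage, not from the commit): `apt list '?depends(?name(^debconf$))'` would select packages declaring a dependency on a package named debconf, while the `?reverse-depends`/`~R` form from the sibling commit walks the relation in the other direction.
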
* | Merge branch 'bash-compat' into 'master' (Julian Andres Klode, 2021-01-05; 1 file, -1/+1)

  Be compatible with Bash

  See merge request apt-team/apt!142

| * | Be compatible with Bash (Demi M. Obenour, 2020-12-28; 1 file, -1/+1)

  On many distributions, /bin/sh is Bash. Bash's `echo` builtin doesn't interpret escape sequences, so most tests fail. Fix this by removing the escape sequence.

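  For context (an illustration, not part of the commit): under dash, `echo 'a\nb'` interprets the `\n` and prints two lines, while Bash's `echo` prints the backslash sequence literally unless given `-e`, so tests relying on echo-level escapes behave differently when /bin/sh is Bash.
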
* | | Merge branch 'pu/kernel-autoremove' into 'master' (Julian Andres Klode, 2021-01-04; 10 files, -93/+284)

  Determine autoremovable kernels at run-time

  See merge request apt-team/apt!138

| * | | Only keep up to 3 (not 4) kernels (Julian Andres Klode, 2021-01-04; 1 file, -1/+1)

  This fixes a problem on Ubuntu systems where the /boot partition has been sized to hold 3 kernels, but does not really work with 4, which was causing problems all over the place.

| * | | Automatically remove unused kernels on dist-upgrade (Julian Andres Klode, 2021-01-04; 3 files, -2/+21)

  Kernels clutter /boot, and /boot is small, so we need to take extra care to remove kernels when possible.

| * | | Determine autoremovable kernels at run-time (Julian Andres Klode, 2021-01-04; 7 files, -83/+239)

  Our kernel autoremoval helper script protects the currently booted kernel, but it only runs whenever we install or remove a kernel, causing it to protect the kernel that was booted at that point in time, which is not necessarily the same kernel as the one that is running right now.

  Reimplement the logic in C++ such that we can calculate it at run-time: provide a function that produces a regular expression matching all kernels that need protecting, and change the default root set function in the DepCache to make use of that expression (see the sketch below). Note that the code groups the kernels by version as before, and then marks all kernel packages with the same version.

  This optimized version inserts a virtual package $kernel into the cache when building it, to avoid having to iterate over all packages in the cache to find the installed ones, significantly improving performance at a minor cost when building the cache.

  LP: #1615381

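  A rough sketch of such a regex producer (the package-name prefix, the escaping and the function itself are illustrative assumptions; the commit only states that a regular expression covering all protected kernels is generated):

    #include <string>
    #include <vector>

    // Illustrative: build one regular expression matching every kernel
    // package whose version must be protected from autoremoval.
    static std::string KernelProtectRegex(std::vector<std::string> const &Versions)
    {
       std::string Pattern = "^linux-image-(";
       bool First = true;
       for (auto const &Version : Versions)
       {
          if (!First)
             Pattern += '|';
          First = false;
          for (char const C : Version) // escape regex metacharacters
          {
             if (C == '.' || C == '+')
                Pattern += '\\';
             Pattern += C;
          }
       }
       Pattern += ")$";
       return Pattern;
    }

  Because the expression is produced when marking packages, it reflects the kernel running right now, not the one that was running at the last kernel installation.
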
| * | | depcache: Cache our InRootSetFunc (Julian Andres Klode, 2021-01-04; 2 files, -8/+24)

  This avoids the cost of setting up the function every time we mark and sweep.

* | | Merge branch 'http-to-https' into 'master' (Julian Andres Klode, 2021-01-01; 2 files, -16/+16)

  aptmethod: fix HTTP->HTTPS request sequences

  See merge request apt-team/apt!140

| * | | connect: use ServiceNameOrPort, not Port, as the cache key (Faidon Liambotis, 2020-12-23; 1 file, -4/+7)

  The "last connection" cache is currently stored and looked up on the combination of (LastHost, LastPort). However, these are not what the arguments to getaddrinfo() were on the first try: the call is to getaddrinfo(Host, ServiceNameOrPort, ...), i.e. with the port *or, if 0, the service name* (e.g. http).

  Effectively this means that the connection cache lookup for
    https://example.org/... (Host = example.org, Port = 0, Service = https)
  would end up matching the "last" connection of (if it existed)
    http://example.org/... (Host = example.org, Port = 0, Service = http)
  ...and thus performing a TLS request over an (unrelated) port 80 connection. Therefore, an HTTP request, followed by an (unrelated) HTTPS request to the same server, would always fail.

  Address this by using the ServiceNameOrPort, rather than Port, as the cache key.

| * | | connect: convert a C-style string to std::string (Faidon Liambotis, 2020-12-23; 1 file, -11/+8)

  Convert the fixed-size (300) char array "ServStr" to a std::string, and simplify the code by removing snprintfs in the process. While at it, rename it to the more aptly named "ServiceNameOrPort" and update the comment to reflect what this variable is meant to be.

| * | | basehttp: also consider Access when comparing a Server's URI (Faidon Liambotis, 2020-12-23; 1 file, -1/+1)

  ServerState->Comp() is used by the HTTP method's main loop to check whether a connection can be reused, or whether a new one is needed. Unfortunately, the current implementation only compares the Host and Port between the ServerState's internal URI and a new URI. However these are URIs, and therefore Port is 0 when a URI port is not specified, i.e. in the most common configurations.

  As a result, a ServerState for http://example.org/... will be reused for URIs of the form https://example.org/..., as both Host (example.org) and Port (0) match. In turn this means that GET requests will happen over port 80, in cleartext, even for those https URLs(!).

  URI acquires for an http URI and subsequently for an https one, in the same aptmethod session, do not typically happen with apt as the frontend, as apt opens a new pipe with the "https" aptmethod binary (nowadays a symlink to http), which is why this hasn't been a problem in practice and has eluded detection so far. It does happen in the wild with other frontends (e.g. reprepro), plus it is legitimately an odd and surprising behavior on apt's end.

  Therefore add a comparison for the URI's "Access" (= the scheme) in addition to Host and Port, to ensure that we're not reusing the same state for multiple different schemes (see the sketch below).

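  A minimal sketch of the fixed comparison (the free function is an assumption; the URI class with its Access/Host/Port members is apt's):

    #include <apt-pkg/strutl.h> // URI

    // Sketch: a cached ServerState may only be reused when the scheme
    // (Access) matches too, not just Host and Port.
    static bool CanReuse(URI const &Cached, URI const &Requested)
    {
       return Cached.Access == Requested.Access &&
              Cached.Host == Requested.Host &&
              Cached.Port == Requested.Port;
    }
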
* | | | Greek program translation update (Vangelis Skarmoutsos, 2020-12-31; 1 file, -367/+324)

  See merge request apt-team/apt!144

* | Release 2.1.15 (tag: 2.1.15) (Julian Andres Klode, 2020-12-27; 5 files, -6/+23)

* | German program translation update (Helge Kreutzmann, 2020-12-23; 1 file, -123/+71)

  Closes: #977938

* Merge branch 'pu/uriencode' into 'master' (Julian Andres Klode, 2020-12-18; 32 files, -121/+276)

  Use encoded URIs in the acquire system

  See merge request apt-team/apt!139

| * Don't re-encode encoded URIs in pkgAcqFile (David Kalnischkies, 2020-12-18; 2 files, -2/+3)

  This commit potentially breaks code feeding apt an encoded URI using a method which does not get URIs sent encoded. The webserverconfig requests in our tests are an example for this – but they only worked before if the server was expecting a double encoding, as that was what was happening to an encoded URI: so this was unlikely to work as expected in practice.

  Now with the new methods we can drop this double encoding and rely on the URI being passed properly (and without modification) between the layers, so that passing in encoded URIs should now work correctly.

| * Implement encoded URI handling in all methods (David Kalnischkies, 2020-12-18; 13 files, -37/+77)

  Every method opts in to getting the encoded URI passed along while keeping compat in case we are operated by an older acquire system. Effectively this is just a change for the http-based methods, as the others just decode the URI since they work with files directly.

| * Keep URIs encoded in the acquire system (David Kalnischkies, 2020-12-18; 16 files, -75/+174)

  We do not deal a lot with URIs which need encoding, but when we do, it is a pain that we store them decoded in the acquire system, as it means we have to decode and re-encode URIs eventually, which potentially gives us slightly different URIs. We see that in our own testing framework while setting up redirects, as the config options are effectively double-encoded and decoded to pass them around successfully, as otherwise %2f and / in a URI are treated the same.

  This commit adds the infrastructure for methods to opt into getting URIs sent in encoded form (and returning them to us in encoded form, too) so that we eventually do not have to touch the URIs at all, which is how it should be. This means though that we have to deal with methods who do not support this yet (aka: all of them at the moment), for which we decode and encode while communicating with them.

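  To make the %2f remark concrete: in a path, `a%2Fb` is one segment whose name contains a slash, while `a/b` is two segments; both decode to `a/b`, so once decoded the distinction is unrecoverable and re-encoding may produce a different URI than the one originally received.
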
| * Proper URI encoding for config requests to our test webserver (David Kalnischkies, 2020-12-18; 4 files, -14/+29)

  Our http method encodes the URI again, which results in the double encoding we have to unwrap in the webserver (we did already, but we now skip the filename handling which does the first decode).

* Do not require libxxhash-dev for including pkgcachegen.h (Julian Andres Klode, 2020-12-17; 1 file, -1/+3)

* Unroll pkgCache::sHash 8 times, break up dependency (Julian Andres Klode, 2020-12-15; 1 file, -2/+16)

  Unroll pkgCache::sHash 8 times and break up the dependency between the iterations by expanding the calculation H(n) = 33 * H(n-1) + c 8 times, rather than performing it 8 times in sequence (see the sketch below). This seems to yield about a 0.4% performance improvement.

  I tried unrolling 4 and 2 bytes as well, those only having 3 ifs at the end rather than 1 small loop; but that was actually slower - potentially the code got too large and the cache went bonkers.

  I also tried unrolling 4 times instead of 8, thinking that smaller code might yield better results overall then, but that was slower as well.

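  Expanding the recurrence eight steps gives H(n+8) = 33^8*H(n) + 33^7*c1 + 33^6*c2 + ... + 33*c7 + c8, where the eight byte terms are independent of each other. A self-contained sketch of that idea (a generic DJB-style string hash, not apt's exact sHash, which differs in details such as case handling):

    #include <cstddef>
    #include <cstdint>

    // Illustrative unrolled DJB-style hash: eight independent terms per
    // iteration instead of an eight-deep dependency chain.
    static uint32_t HashUnrolled(char const *S, size_t Len, uint32_t H = 5381)
    {
       // Powers of 33 modulo 2^32: 33^1 .. 33^8.
       constexpr uint32_t P1 = 33u, P2 = P1 * 33u, P3 = P2 * 33u, P4 = P3 * 33u,
                          P5 = P4 * 33u, P6 = P5 * 33u, P7 = P6 * 33u, P8 = P7 * 33u;
       size_t I = 0;
       for (; I + 8 <= Len; I += 8)
          H = H * P8
              + static_cast<unsigned char>(S[I]) * P7
              + static_cast<unsigned char>(S[I + 1]) * P6
              + static_cast<unsigned char>(S[I + 2]) * P5
              + static_cast<unsigned char>(S[I + 3]) * P4
              + static_cast<unsigned char>(S[I + 4]) * P3
              + static_cast<unsigned char>(S[I + 5]) * P2
              + static_cast<unsigned char>(S[I + 6]) * P1
              + static_cast<unsigned char>(S[I + 7]);
       for (; I < Len; ++I) // tail: the plain one-byte recurrence
          H = H * 33u + static_cast<unsigned char>(S[I]);
       return H;
    }
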
* Release 2.1.14 (tag: 2.1.14) (Julian Andres Klode, 2020-12-15; 5 files, -6/+13)

* Use XXH3 for cache, hash table hashing (Julian Andres Klode, 2020-12-15; 6 files, -64/+47)

  XXH3 is faster than both our CRC32c implementation as well as DJB hash for hash table hashing, so meh, let's switch to it.

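  For reference, a minimal usage sketch of libxxhash's one-shot 64-bit XXH3 API (the wrapper is hypothetical; XXH3_64bits is the library's):

    #include <cstddef>
    #include <cstdint>
    #include <xxhash.h> // libxxhash

    // Hypothetical wrapper: hash a buffer with the one-shot XXH3 API.
    static uint64_t HashBytes(void const *Data, size_t Len)
    {
       return XXH3_64bits(Data, Len);
    }
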
* test: fixup for hash table size increase (changed output order) (Julian Andres Klode, 2020-12-15; 5 files, -13/+12)

* Release 2.1.13 (tag: 2.1.13) (Julian Andres Klode, 2020-12-10; 5 files, -8/+32)

* Raise APT::Cache-HashtableSize to 196613 (Julian Andres Klode, 2020-12-10; 1 file, -1/+1)

  We now have over 100k package names, my Ubuntu system has 125k already, so increase the hash table size to match. This will cost us about a MB in cache size, but gives a very nice speed up, somewhere around 3%-4% or so.

* Merge branch 'pu/cve-2020-27350' (Julian Andres Klode, 2020-12-09; 6 files, -2/+400)

| * CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB (Julian Andres Klode, 2020-12-09; 3 files, -0/+17)

  The integer overflow was detected by DonKult who added a check like this:

    (std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))

  Which deals with the code as is, but also still is a fairly big limit, and could become fragile if we change the code. Let's limit our file sizes to 128 GiB, which should be sufficient for everyone.

  Original comment by DonKult: The code assumes that it can add sizeof(Block)-1 to the size of the item later on, but if we are close to a 64bit overflow this is not possible. Fixing this seems too complex compared to just ensuring there is enough room left, given that we will have a lot more problems the moment we act on files that large: if the item is that large, the (valid) tar including it probably doesn't fit in 64bit either.

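  The hazard and the cap in one compact, self-contained sketch (names are illustrative; the 512-byte block size is standard tar):

    #include <cstdint>

    constexpr uint64_t BlockSize = 512;                           // tar block
    constexpr uint64_t MaxItemSize = 128ULL * 1024 * 1024 * 1024; // 128 GiB

    // Rounding a size up to whole blocks wraps around near UINT64_MAX,
    // so reject oversized members before doing the arithmetic.
    static bool PaddedBlocks(uint64_t Size, uint64_t &Blocks)
    {
       if (Size > MaxItemSize)
          return false;                             // would risk the wrap below
       Blocks = (Size + BlockSize - 1) / BlockSize; // safe after the cap
       return true;
    }
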
| * CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB (Julian Andres Klode, 2020-12-09; 3 files, -0/+22)

  Like the code in arfile.cc, MemControlExtract also has buffer overflows in code allocating memory for parsing control files. Specify an upper limit of 64 MiB for control files, to both protect against the Size overflowing (we allocate Size + 2 bytes) and to protect a bit against control files consisting only of zeroes.

| * tarfile: OOM hardening: Limit size of long names/links to 1 MiB (Julian Andres Klode, 2020-12-09; 3 files, -2/+99)

  Tarballs have long names and long link targets structured by a special tar header with a GNU extension, followed by the actual content (padded to 512 bytes). Essentially, think of a name as a special kind of file.

  The limit of a file size in a header is 12 bytes, aka 10**12 or 1 TB. While this works OK-ish for file content that we stream to extractors, we need to copy file names into memory, and this opens us up to an OOM DoS attack.

  Limit the file name size to 1 MiB, as libarchive does, to make things safer.

| * CVE-2020-27350: arfile: Integer overflow in parsing (Julian Andres Klode, 2020-12-09; 4 files, -1/+263)

  GHSL-2020-169: This first hunk adds a check that we have more data left to read in the file than the size of the member, ensuring that (a) the number is not negative, which caused the crash here, and (b) that we similarly avoid other issues with trying to read too much data.

  GHSL-2020-168: Long file names are encoded by a special marker in the filename, with the real filename being part of what is normally the data. We did not check that the length of the file name is within the length of the member, which means that we got an overflow later when subtracting the length from the member size to get the remaining member size.

  The file createdeb-lp1899193.cc was provided by GitHub Security Lab and reformatted using apt coding style for inclusion in the test case; both of these issues have an automated test case in test/integration/test-ubuntu-bug-1899193-security-issues.

  LP: #1899193

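  The two checks in a compact illustrative form (names and structure are assumptions, not apt's actual arfile.cc code):

    #include <cstdint>

    // Validate an ar member header against what is actually left in the
    // archive before trusting any of its length fields.
    static bool ValidMember(uint64_t MemberSize, uint64_t LeftInFile,
                            uint64_t LongNameLength)
    {
       if (MemberSize > LeftInFile)     // GHSL-2020-169: claimed size must fit
          return false;
       if (LongNameLength > MemberSize) // GHSL-2020-168: name must fit in the
          return false;                 // member, else the subtraction wraps
       return true;
    }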