Commit message | Author | Date | Files | Lines
* Release 1.3~pre3 (tag: 1.3_pre3) | Julian Andres Klode | 2016-08-04 | 62 | -498/+2123
* ExecGPGV: Pass current config state to apt-key via temp file | Julian Andres Klode | 2016-08-03 | 1 | -0/+23
    Create a temporary configuration file with a dump of our configuration and pass that to apt-key.
    LP: #1607283
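
    A rough sketch of the idea under stated assumptions: dump the configuration to a temporary file and point the child process at it, here via the APT_CONFIG environment variable that apt tooling consults for its configuration file. The helper below is made up for illustration and is not ExecGPGV's actual code.

```cpp
#include <cstdlib>
#include <string>
#include <unistd.h>

// Hypothetical helper: write a configuration dump to a temp file and export
// its path so a child like apt-key sees the same configuration state.
static std::string WriteConfigDump(std::string const &dump)
{
   char path[] = "/tmp/apt.conf.XXXXXX";           // illustrative location only
   int const fd = mkstemp(path);
   if (fd == -1)
      return "";
   if (write(fd, dump.data(), dump.size()) != static_cast<ssize_t>(dump.size()))
   {
      close(fd);
      unlink(path);
      return "";
   }
   close(fd);
   setenv("APT_CONFIG", path, 1);                  // child reads its config from here
   return path;                                    // caller unlinks it after the child exits
}
```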
* ExecGPGV: Fork in all cases | Julian Andres Klode | 2016-08-03 | 1 | -43/+34
* ExecGPGV: Rework file removal on exit() | Julian Andres Klode | 2016-08-03 | 1 | -28/+23
    Create a local exiter object which cleans up files on exit.
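
    A minimal sketch of the scope-based cleanup idea, with hypothetical names rather than APT's actual code; how the real ExecGPGV arranges for this to run before exit() may well differ.

```cpp
#include <string>
#include <vector>
#include <unistd.h>

// Removes every registered temporary file when the object goes out of
// scope, so each early-return path cleans up without duplicated code.
class TempFileRemover
{
   std::vector<std::string> Files;
public:
   void Add(std::string const &Path) { Files.push_back(Path); }
   ~TempFileRemover()
   {
      for (auto const &F : Files)
         unlink(F.c_str());
   }
};
```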
* gpgv: Unlink the correct temp file in error case | Julian Andres Klode | 2016-08-03 | 1 | -4/+4
    Previously, when the data file could be created but the sig file could not, we would unlink sig, not data (and vice versa).
* apt-key: ignore any error produced by gpgconf --kill | David Kalnischkies | 2016-07-31 | 1 | -1/+1
    gpgconf wasn't always equipped with a --kill option, as highlighted by our testcases failing on Travis and co., which use a much older version of gpg2. As this is just for cleaning up slightly faster, we ignore any error such a call might produce and carry on. Use a recent enough gpg2 version if you need the immediate killing…
    Gbp-Dch: Ignore
    Reported-By: Travis CI
* apt-key: kill gpg-agent explicitly in cleanup | David Kalnischkies | 2016-07-31 | 1 | -1/+13
    apt-key has (usually) no secret key material, so it doesn't really need the agent at all, but newer gpgs insist on starting it anyhow. The agents die off rather quickly after the underlying home directory is cleaned up, but that is still not fast enough for tools like sbuild which want to unmount but can't, as the agent is still hanging onto a non-existent homedir.
    Reported-By: Johannes 'josch' Schauer on IRC
* prevent C++ locale number formatting in text APIs (try 2) | David Kalnischkies | 2016-07-30 | 3 | -4/+4
    Follow-up to b58e2c7c56b1416a343e81f9f80cb1f02c128e25. Still a regression of sorts of 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.
    Closes: 832044
* edsp: try to read responses even if writing failed | David Kalnischkies | 2016-07-29 | 1 | -15/+20
    If a solver/planner exits before apt is done writing, we will generate write errors. Solvers like 'dump' can be pretty quick in failing but still produce a valid EDSP error report which apt should read, parse and display instead of just discarding, even though we had write errors.
* if the FileFd failed already following calls should fail, too | David Kalnischkies | 2016-07-29 | 1 | -8/+10
    There is no point in trying to perform Write/Read on a FileFd which already failed as they aren't going to work as expected, so we should make sure that they fail early on and hard.
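
    A sketch of that "fail early and hard" guard, using a hypothetical wrapper class rather than APT's real FileFd interface.

```cpp
#include <cstddef>
#include <unistd.h>

// Once a call fails, every later Read/Write fails immediately instead of
// operating on a file descriptor in an unknown state.
class Fd
{
   int iFd;
   bool Failed = false;
public:
   explicit Fd(int fd) : iFd(fd) {}
   bool Write(void const *Buf, size_t Size)
   {
      if (Failed)
         return false;
      if (::write(iFd, Buf, Size) != static_cast<ssize_t>(Size))
      {
         Failed = true;            // doom all later calls
         return false;
      }
      return true;
   }
   bool Read(void *Buf, size_t Size)
   {
      if (Failed)
         return false;
      if (::read(iFd, Buf, Size) < 0)
      {
         Failed = true;
         return false;
      }
      return true;
   }
};
```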
* (error) va_list 'args' was opened but not closed by va_end() | David Kalnischkies | 2016-07-27 | 4 | -30/+26
    Reported-By: cppcheck
    Gbp-Dch: Ignore
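
    For illustration, the pattern the cppcheck message is about: every va_start() needs a matching va_end() on every path out of the function. The helper is hypothetical, not the code that was fixed.

```cpp
#include <cstdarg>
#include <cstdio>

// Formats into Buffer; note va_end is called before the only return.
static bool FormatInto(char *Buffer, size_t Size, char const *Fmt, ...)
{
   va_list args;
   va_start(args, Fmt);
   int const Res = vsnprintf(Buffer, Size, Fmt, args);
   va_end(args);                  // close the list before returning
   return Res >= 0 && static_cast<size_t>(Res) < Size;
}
```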
* eipp: avoid producing file warnings in simulation | David Kalnischkies | 2016-07-27 | 1 | -37/+33
    Simulations are frequently run by unprivileged users who naturally don't have the permissions to write to the default location for the EIPP file. Running in simulation mode doesn't mean we don't want to run the logging (EIPP runs the same regardless of a simulation or 'real' run), but showing the warnings is relatively pointless in the default setup, so if we would produce errors while performing a simulation we discard the warnings and carry on. Running apt with an external planner wouldn't have generated these messages btw.
    Closes: 832614
* rred: truncate result file before writing to it | David Kalnischkies | 2016-07-27 | 4 | -20/+37
    If another file in the transaction fails and hence dooms the transaction, we can end up in a situation in which a -patched file (= rred writes the result of the patching to it) remains in the partial/ directory. The next apt call will perform the rred patching again and write its result again to the -patched file, but instead of starting with an empty file as intended it will overwrite the content previously in the file. That has the same result if the new content happens to be longer than the old content, but if it isn't, parts of the old content remain in the file, which will pass verification as the new content written to it matches the hashes, and if the entire transaction passes the file will be moved to the lists/ directory where it might or might not trigger errors depending on whether the old content which remained forms a valid file together with the new content.
    This has no real security implications as no untrusted data is involved: the old content consists of a base file which passed verification and a bunch of patches which all passed multiple verifications as well, so the old content isn't controllable by an attacker and the new one isn't either (as the new content alone passes verification). So the best an attacker can do is letting the user run into the same issue as in the report.
    Closes: #831762
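
    The core of the fix can be sketched as opening the result file with truncation so leftovers from a previous, doomed transaction cannot survive behind the newly written content; APT itself goes through FileFd rather than a raw open(), so this is only an illustration.

```cpp
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Open (or create) the -patched result file, discarding any previous content.
static int OpenPatchResult(char const *Path)
{
   return open(Path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
}
```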
* http: skip requesting if pipeline is full | David Kalnischkies | 2016-07-27 | 1 | -0/+2
    The rewrite in 742f67eaede80d2f9b3631d8697ebd63b8f95427 is based on the assumption that the pipeline will always be at least one item short each time it is called, but the logs in #832113 suggest that this isn't always the case. I fail to see how at the moment, but the old implementation had this behavior, so restoring it can't really hurt, can it?
* use proper warning for automatic pipeline disable | David Kalnischkies | 2016-07-27 | 1 | -4/+1
    Also fixes the message itself to mention the correct option name, as noticed in #832113.
* verify hash of input file in rred | David Kalnischkies | 2016-07-26 | 2 | -19/+47
    We read the entire input file we want to patch anyhow, so we can also calculate the hash for that file and compare it with what we expected it to be. Note that this isn't really a security improvement as a) the file we patch is trusted and b) if the input is incorrect the result will hardly match, so this is just for failing slightly earlier with a more relevant error message (although in terms of rred the error is ignored and a complete download is attempted instead).
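
    A toy sketch of the "hash while reading" idea: since the whole input is read for patching anyway, it can be fed to a digest on the side and compared with the expected value, so a bad input fails early with a clearer message. The rolling "hash" below is a dummy, not the digest APT actually uses.

```cpp
#include <cstddef>
#include <cstdint>

// Dummy incremental digest standing in for a real hash implementation.
struct RollingDigest
{
   uint64_t State = 0;
   void Add(unsigned char const *Data, size_t Size)
   {
      for (size_t i = 0; i < Size; ++i)
         State = State * 131 + Data[i];
   }
};

// Compare after the last chunk: a mismatch is reported before any patch result is trusted.
static bool InputMatches(RollingDigest const &Digest, uint64_t Expected)
{
   return Digest.State == Expected;
}
```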
* call flush on the wrapped writebuffered FileFd | David Kalnischkies | 2016-07-23 | 1 | -2/+1
    The flush call is a no-op in most FileFd implementations, so this isn't as critical as it might sound, as the only non-trivial implementation is in the buffered writer, which tends not to be used to buffer another buffer…
* report progress for triggered actions | David Kalnischkies | 2016-07-22 | 1 | -15/+42
    APT doesn't know which packages will be triggered in the course of actions, so it can't plan to see them for progress beforehand, but if it sees that dpkg says a package was triggered we can add additional states. This is pretty much magic (after all it sets back the progress) and there are corner cases in which this will result in incorrect totals (packages in partial states may or may not lose trigger states), but the worst that can happen is that the progress is slightly incorrect and doesn't reach 100%. So be it: better than being stuck at 100% for a while because apt doesn't realize that a bunch of triggers still need to be processed.
* use a configurable location for apport report storage | David Kalnischkies | 2016-07-22 | 3 | -2/+7
    Hardcoding /var/crash means we can't test it properly and it isn't really our style.
* report progress for removing while purging pkgs | David Kalnischkies | 2016-07-22 | 1 | -20/+31
    The progress reporting for a package scheduled for purging only included the states dpkg passes through while actually purging the package. If the package was fully installed before, dpkg will first pass through all remove states before purging it, so in the interest of consistent reporting our progress reporting should do that, too.
* support dpkg debug mode in APT::StateChanges | David Kalnischkies | 2016-07-22 | 2 | -59/+121
    Gbp-Dch: Ignore
* create non-existent files in edit-sources with 644 instead of 640 | David Kalnischkies | 2016-07-22 | 2 | -1/+54
    If the sources file we want to edit doesn't exist yet, GetLock will create it with 640, which for a generic lockfile might be okay, but as this is a sources file more relaxed permissions are in order (and actually required), as otherwise it won't be readable for unprivileged users, causing warnings/errors in apt calls.
    Reported-By: J. Theede (musca) on IRC
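
    The permission distinction can be illustrated like this (a sketch, not apt's actual edit-sources code): a private lock file may be 0640, but a sources file must stay world-readable so unprivileged apt invocations can read it.

```cpp
#include <fcntl.h>
#include <sys/stat.h>

// Create the sources file world-readable (0644) instead of lock-file-private (0640).
static int CreateSourcesFile(char const *Path)
{
   return open(Path, O_RDWR | O_CREAT, 0644);
}
```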
* report warnings&errors consistently in edit-sources | David Kalnischkies | 2016-07-22 | 3 | -26/+42
    After editing the sources it is a good idea to (re)build the caches as they will be out of date; doing so also helps in reporting higher-level errors like duplicate sources.list entries instead of just general parsing errors as before.
* Turkish program translation update | Mert Dirik | 2016-07-22 | 1 | -82/+154
    Closes: 832039
* tests: avoid time-dependent rebuild of caches | David Kalnischkies | 2016-07-22 | 1 | -0/+4
    The test changes the sources.list, and the modification time of this file is considered while figuring out if the cache can be good. Usually this isn't an issue, but here the cache generation produces warnings which would then appear twice.
    Gbp-Dch: Ignore
* clean up default-stanzas from extended_states on write | David Kalnischkies | 2016-07-22 | 2 | -13/+22
    The existing cleanup was happening only for packages which had a status change (install -> uninstalled), which is the most frequent but not the only case: you can e.g. set autobits explicitly with apt-mark. This would leave stanzas in the states file declaring a package to be manually installed, which is the default value for a package not listed at all, so we can just as well drop it from the file.
* tests: skip over -flags for first option in autotests | David Kalnischkies | 2016-07-22 | 1 | -1/+9
    Otherwise calls like "apt -q install" end up calling "aptautotest_apt_q" instead of "aptautotest_apt_install".
    Gbp-Dch: Ignore
* support "install ./foo.changes"David Kalnischkies2016-07-229-22/+79
| | | | | | | | | | | | We support installing ./foo.deb (and ./foo.dsc for source) for a while now, but it can be a bit clunky to work with those directly if you e.g. build packages locally in a 'central' build-area. The changes files also include hashsums and can be signed, so this can also be considered an enhancement in terms of security as a user "just" has to verify the signature on the changes file then rather than checking all deb files individually in these manual installation procedures.
* allow arch=all to override No-Support-for-Architecture-all | David Kalnischkies | 2016-07-22 | 5 | -16/+68
    If a user explicitly requests the download of arch:all, apt shouldn't get in the way and perform its detection dance of whether arch:all packages are (also) in arch:any files or not. This e.g. allows setting arch=all on a source with such a field (or one which doesn't support all at all, but has the arch:all files, like Debian itself ATM) to get only the arch:all packages from there instead of behaving like a no-op.
    Reported-By: Helmut Grohne on IRC
* refactor plus/minus sources.list option handling | David Kalnischkies | 2016-07-19 | 1 | -85/+108
    Moving code around into some more dedicated methods, no effective code change itself.
    Gbp-Dch: Ignore
* don't hardcode /var/lib/dpkg/status as dir::state::status | David Kalnischkies | 2016-07-19 | 2 | -4/+25
    Theoretically it should be enough to change the Dir setting and have apt pick the dpkg/status file from that. Also, it should be consistently affected by RootDir. Neither was really the case though, so a user had to explicitly set it too (or ignore it and have or not have the expected side effects caused by it). This commit tries to guess the location of the dpkg/status file better by setting dir::state::status to a naive "../dpkg/status", except that this setting would be interpreted as relative to the CWD and not relative to the dir::state directory. Also, the status file isn't really relative to the state files apt has in /var/lib/apt/, as evident if we consider that apt/ could be a symlink to someplace else and "../dpkg" not affected by it, so what we do here is an explicit replace of apt/ with dpkg/ (similar to how we create directories if the path ends in apt/).
    As this is a change it has the potential to cause regressions insofar as the dpkg/status file of the "host" system is no longer used if you set a "chroot" system via the Dir setting. But that tends to be intended, and previously caused people to painfully figure out that they had to set this explicitly, so it now works more in terms of how the other Dir settings work (aka "as expected"). If using the host status file is really intended, it is in fact easier to set this explicitly than to set the new "magic" location explicitly.
* ensure Cnf::FindFile doesn't return files below /dev/null | David Kalnischkies | 2016-07-19 | 4 | -9/+50
    Very unlikely, but if the parent is /dev/null, the child empty and the grandchild a value, we returned /dev/null/value, which doesn't exist, so hardly a problem, but for best operability we should be consistent in our work and always return /dev/null.
* tests: activate dpkg multi-arch even if test is single arch | David Kalnischkies | 2016-07-15 | 2 | -33/+36
    Most tests are either multiarch, do not care for the specific architecture or do not interact with dpkg, so the only test really affected by this is test-external-installation-planner-protocol. But it is a general issue: while APT can be told to treat any architecture as native, dpkg has the native architecture hardcoded, so if we run tests we must make sure that dpkg knows about the architecture we will treat as "native" in apt, as otherwise dpkg will refuse to install packages of such an architecture. This reverts f883d2c3675eae2700e4cd1532c1a236cae69a4e as it complicates the test slightly for no practical gain after the generic fix.
* Use native arch in test-external-installation-planner-protocol | Julian Andres Klode | 2016-07-15 | 1 | -22/+23
    Hardcoding amd64 broke the tests.
* Release 1.3~pre2 (tag: 1.3_pre2) | Julian Andres Klode | 2016-07-08 | 18 | -18/+30
    Yes, we might still add new features to 1.3 or break some more stuff. Stay tuned!
* tests: fix external solver/planner directory setup | David Kalnischkies | 2016-07-08 | 1 | -10/+7
    The setup didn't prepare the directories as expected by newer versions of the external tests in an autopkgtest environment.
* add Testsuite-Triggers to tagfile-order | David Kalnischkies | 2016-07-08 | 1 | -0/+1
    Added in dpkg in commit 90324cfa942ba23d5d44b28b1087fbd510340502.
* Add kernels with "+" in the package name to APT::NeverAutoRemove | Andrew Patterson | 2016-07-08 | 2 | -4/+10
    Escape "+" in kernel package names when generating the APT::NeverAutoRemove list so it is not treated as a regular expression meta-character.
    [Changed by David Kalnischkies: let the test actually test the change]
    Closes: #830159
* Release 1.3~pre1 (tag: 1.3_pre1) | Julian Andres Klode | 2016-07-07 | 19 | -397/+652
* apt-key.8: Put (deprecated) into the term tag | Julian Andres Klode | 2016-07-07 | 1 | -1/+1
    Now the post-build script should no longer complain...
    Gbp-Dch: ignore
* keep trying with next if connection to a SRV host failed | David Kalnischkies | 2016-07-06 | 1 | -7/+23
    Instead of only trying the first host we get via SRV, we try them all as we are supposed to, and if that isn't working we try to connect to the host itself as if we hadn't seen any SRV records. This was already the intent of the old code, but it failed to hide earlier problems from the next call, which would then unconditionally fail, resulting in an all-around failure to connect. With proper stacking we can also keep the error messages of each call around (and in the order tried), so if the entire connection fails we can report all the things we have tried, while we discard the entire stack if something works out in the end.
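
    A sketch of the connect strategy with made-up types: walk every SRV target in order, then the original host, collecting error messages so a total failure can report everything that was tried while a success simply discards the stack.

```cpp
#include <string>
#include <vector>

struct Target { std::string Host; unsigned int Port; };

// TryConnect is any callable that attempts one connection and appends its
// error message to Errors on failure.
static bool ConnectWithSrvFallback(std::vector<Target> const &SrvTargets,
                                   Target const &Original,
                                   bool (*TryConnect)(Target const &, std::vector<std::string> &),
                                   std::vector<std::string> &Errors)
{
   for (auto const &T : SrvTargets)
      if (TryConnect(T, Errors))
         return true;                       // success: collected errors are irrelevant
   return TryConnect(Original, Errors);     // last resort: the host itself
}
```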
* report all instead of first error up the acquire chain | David Kalnischkies | 2016-07-06 | 2 | -4/+21
    If we don't give a specific error to report up, it is likely that all errors currently in the error stack are equally important, so reporting just one could turn out to be confusing, e.g. if name resolution failed in a SRV record list.
* don't change owner/perms/times through file:// symlinks | David Kalnischkies | 2016-07-06 | 7 | -23/+60
    If we have files in partial/ from a previous invocation or similar, those could be symlinks created by file:// sources. The code is expecting only real files though and happily changes owner, modification times and permissions on the file the symlink points to, which tends to be a file we have no business in touching in this way. Permissions of symlinks shouldn't be changed and changing the owner is usually pointless too, but just to be sure we pick the easy way out: use lchown and check for symlinks before chmod/utimes.
    Reported-By: Mattia Rizzolo on IRC
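
    The symlink-safe handling can be sketched as follows (a hypothetical helper, not the exact code in the file:// method): lchown never follows the link, and chmod/utimes are skipped for symlinks entirely since their mode and times are not meaningful.

```cpp
#include <sys/stat.h>
#include <unistd.h>

// Adjust ownership and mode of a file in partial/ without ever touching a
// symlink's target.
static bool FixupPartialFile(char const *Path, uid_t Uid, gid_t Gid, mode_t Mode)
{
   struct stat St;
   if (lstat(Path, &St) != 0)
      return false;
   if (lchown(Path, Uid, Gid) != 0)   // re-own the link itself, never its target
      return false;
   if (S_ISLNK(St.st_mode))
      return true;                    // leave the target's mode and times alone
   return chmod(Path, Mode) == 0;
}
```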
* tests: disable EIPP logging in test-compressed-indexes | David Kalnischkies | 2016-07-05 | 1 | -1/+2
    The test makes heavy use of disabling compression types which are usually available some way or another, like xz, which is how the EIPP logs are compressed by default. Instead of changing this test to change the filename according to the compression we want to test, we just disable EIPP logging for this test, as that is easier and has the same practical effect.
    Gbp-Dch: Ignore
* EIPP/EDSP log can't be written is a warning, not an error | David Kalnischkies | 2016-07-05 | 1 | -4/+28
    If other logs can't be written this is a warning too, so for consistency's sake translate the errors to warnings.
* report write errors in EDSP/EIPP properly back to caller | David Kalnischkies | 2016-07-05 | 1 | -6/+3
    Unlikely to happen in practice, and I wonder more how I could miss these in earlier reviews, but okay, let's fix it for consistency now.
* give a descriptive error for pipe tries with 'false' | David Kalnischkies | 2016-07-05 | 1 | -0/+3
    If libapt has builtin support for a compression type it will create a dummy compressor struct with the Binary set to 'false', as it will catch these before using the generic pipe implementation which uses the Binary. The catching happens based on configured Names though, so you can actually force apt to use the external binaries even if it would usually use the builtin support. That logic fails though if you don't happen to have these external binaries installed, as it will fall back to calling 'false', which will end in confusing 'Write error's. So, this is again something you only encounter in constructed testing.
    Gbp-Dch: Ignore
* don't add default compressors two times if disabled | David Kalnischkies | 2016-07-05 | 1 | -12/+15
    This is insofar pointless as the first match will deal with the extension, so we don't actually ever use these second instances; probably for the better, as most need arguments to behave as expected and, more importantly, the point of the exercise is disabling their use for testing purposes.
    Gbp-Dch: Ignore
* use the right key for compressor configuration dump | David Kalnischkies | 2016-07-05 | 1 | -2/+10
    The generated dump output is incorrect insofar as it uses the name as the key for this compressor, but they don't need to be equal, as is the case if you force some of the inbuilt ones to be disabled as our testing framework does at times. This is hidden from the changelog as nobody will actually notice, while describing it in a few words makes it sound like an important change…
    Git-Dch: Ignore
* avoid 416 response teardown binding to null pointer | David Kalnischkies | 2016-07-05 | 4 | -10/+12
    methods/http.cc:640:13: runtime error: reference binding to null pointer of type 'struct FileFd'
    This reference is never used in the cases where it is a nullptr, so the practical difference is non-existent, but it is still a bug.
    Reported-By: gcc -fsanitize=undefined
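
    For illustration of the bug class the sanitizer flags (types are stand-ins, not the http method's code): dereferencing a possibly null pointer to form a reference is undefined behaviour even if the reference is never read, so the pointer has to be checked first.

```cpp
struct FileHandle { int Fd = -1; };    // stand-in for the real FileFd

static void TearDown(FileHandle *MaybeNull)
{
   if (MaybeNull == nullptr)
      return;                          // never form *MaybeNull in this case
   FileHandle &Ref = *MaybeNull;       // safe: known non-null here
   Ref.Fd = -1;
}
```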