| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
* the good old 'simple' keyring format
* the ASCII-armored variant available since gpg 1.4
Not supported is the keybox format (new in gpg 2.1).
Closes: 844724
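To illustrate (this is not apt's actual detection code; the "KBXf" magic
check in particular is an assumption of mine), peeking at the first few
bytes is enough to tell the three formats apart:

  // Minimal sketch: classify a keyring file by its leading bytes.
  #include <fstream>
  #include <iostream>
  #include <string>

  enum class KeyFormat { Simple, Armored, Keybox, Unknown };

  static KeyFormat ClassifyKeyring(const std::string &path) {
     std::ifstream in(path, std::ios::binary);
     std::string head(32, '\0');
     in.read(&head[0], head.size());
     head.resize(in.gcount());
     // ASCII-armored keys start with the usual armor header line.
     if (head.compare(0, 5, "-----") == 0)
        return KeyFormat::Armored;
     // Keybox files carry a "KBXf" magic in their header blob (assumption).
     if (head.find("KBXf") != std::string::npos)
        return KeyFormat::Keybox;
     // Binary OpenPGP packets have the high bit set in the first octet.
     if (!head.empty() && (static_cast<unsigned char>(head[0]) & 0x80) != 0)
        return KeyFormat::Simple;
     return KeyFormat::Unknown;
  }

  int main(int argc, char **argv) {
     for (int i = 1; i < argc; ++i)
        std::cout << argv[i] << ": "
                  << static_cast<int>(ClassifyKeyring(argv[i])) << '\n';
  }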
|
|
|
|
|
|
|
|
|
|
|
|
| |
Having binary files in /etc is somewhat annoying – not that the armored
files are much better – but it is hard to keep tabs on which format a
file has ("simple" or "keybox"), and different GnuPG versions have
different default binary formats, which can be confusing for users to
work with (besides the fact that it is binary).
Adding support for this now will enable us, in some distant future, to
move to armored keys, much like we added trusted.gpg.d years before the
world picked it up.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We have been reporting warnings from apt-key this way since
29c590951f812d9e9c4f17706e34f2c3315fb1f6, so reporting errors seems like
a good addition. Most of those errors aren't really from apt-key itself,
though, but from the code setting up and actually calling it, which used
to just print to stderr, which might or might not intermix them with
(other) progress lines in update calls. Having them as proper error
messages in the system means that the errors are actually collected
later on for the error list instead of ending up with our relatively
generic – but in those cases bogus – hint regarding "is gpgv installed?".
The effective difference is minimal, as the errors mostly apply to
systems which have far worse problems than a not-so-nice-looking error
message, which makes this pretty hard to test – but at least now the
hint that your system is broken can be read in the proper order (i.e.
there aren't many valid cases in which the permissions of /tmp are
messed up…).
LP: #1522988
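For context, libapt collects such diagnostics on a global error stack
instead of writing to stderr directly; a minimal sketch of that pattern
(the failing check and the message wording are invented, but the _error
calls are libapt's real interface; build with -lapt-pkg):

  #include <apt-pkg/error.h>
  #include <unistd.h>

  static bool SetupTempDir(const char *dir) {
     if (access(dir, W_OK) != 0)
        // Instead of fprintf(stderr, ...): push a proper error so it is
        // shown after – not intermixed with – the progress lines.
        return _error->Error("Cannot write to temporary directory %s", dir);
     return true;
  }

  int main() {
     bool const okay = SetupTempDir("/tmp");
     _error->DumpErrors();      // collected messages are printed, in order
     return okay ? 0 : 1;
  }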
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We try to configure at the end all packages which need to be configured,
but that also applies to packages which weren't completely installed
(e.g. a maintainer script failed) and which we end up removing in this
interaction instead.
APT doesn't actually perform this explicit configure in the end, as it
is using "dpkg --configure --pending", but it does confuse the progress
report and potentially also hook scripts.
Regression-Of: 9ffbac99e52c91182ed8ff8678a994626b194e69
|
|
|
|
|
|
| |
dpkg stumbles over these (#844300), and so far we haven't made 'easier'
removes implicit and scheduled by dpkg by default, so we shouldn't push
the decision to dpkg in such cases either.
|
|
|
|
|
|
|
|
|
|
|
| |
Our old idea was to look for the first package which would be "touched"
and take this as the package dpkg is talking about, but that is
incorrect in complicated situations, like a package being upgraded
while multiple M-A:same siblings are installed.
As we use the progress report to decide what is still needed, we have
to be reasonably right about the package dpkg is talking about, so we
jump through quite a few hoops to get it.
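A rough sketch of that disambiguation (the "status: <pkg>:<arch>: <state>"
line format is dpkg's status-fd reporting; the planned-action list and the
matching rule here are simplified assumptions, not apt's real code):

  #include <iostream>
  #include <string>
  #include <vector>

  struct PlannedAction { std::string Name, Arch; };

  // Prefer an exact "name:arch" match over a bare name match.
  static const PlannedAction *MatchStatusLine(const std::string &line,
                       const std::vector<PlannedAction> &planned) {
     if (line.compare(0, 8, "status: ") != 0)
        return nullptr;
     std::string::size_type const end = line.find(": ", 8);
     if (end == std::string::npos)
        return nullptr;
     std::string const pkg = line.substr(8, end - 8); // "libfoo:amd64" or "libfoo"
     std::string::size_type const colon = pkg.rfind(':');
     std::string const name = (colon == std::string::npos) ? pkg : pkg.substr(0, colon);
     std::string const arch = (colon == std::string::npos) ? "" : pkg.substr(colon + 1);
     const PlannedAction *fallback = nullptr;
     for (auto const &p : planned) {
        if (p.Name != name)
           continue;
        if (arch.empty() == false && p.Arch == arch)
           return &p;        // unambiguous: name and architecture match
        if (fallback == nullptr)
           fallback = &p;    // first name-only hit, e.g. an M-A:same sibling
     }
     return fallback;
  }

  int main() {
     std::vector<PlannedAction> planned{{"libfoo", "amd64"}, {"libfoo", "i386"}};
     auto const *hit = MatchStatusLine("status: libfoo:i386: unpacked", planned);
     std::cout << (hit ? hit->Name + ":" + hit->Arch : std::string("no match")) << '\n';
  }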
|
|
|
|
|
|
|
|
|
|
|
|
| |
Given that we use the progress information to skip over actions dpkg
has already performed – like not purging a package which was already
removed and had no config files, or not acting on disappeared packages –
it is important that apt and dpkg agree on which states a package has
to pass through.
To ensure that we keep tabs on this in the future, a warning is added
at the end if apt hasn't seen all the actions it was supposed to see. I
can't wait for the first bug reporters to wonder about this…
|
|
|
|
|
|
|
|
|
| |
If a package is triggered, dpkg frequently issues two messages about
it, causing us to make a note about it both times, which messes up our
planned dpkg actions view. Adding these actions only if we have nothing
else planned fixes this and should still be correct, as those planned
actions will deal with the triggering just fine, and we avoid strange
problems like a package being triggered before it is removed…
|
|
|
|
|
|
|
|
|
|
|
| |
Our profile says we spend about 5% of the time transforming the
hex digits into the binary format used by HashsumValue, all just to
compare them against other strings. That makes no sense at all.
According to callgrind, this reduces the overall instruction
count from 5.3 billion to 5 billion in my example, which
roughly matches the 5%.
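The gist, as a simplified sketch (not the actual HashsumValue code): when
all we need is equality, comparing the hex strings case-insensitively is
enough, so decoding both sides to binary first is wasted work.

  #include <cassert>
  #include <cctype>
  #include <string>

  // Equality of two hex-encoded digests without a hex-to-binary round trip.
  static bool HexEqual(const std::string &a, const std::string &b) {
     if (a.size() != b.size())
        return false;
     for (std::string::size_type i = 0; i < a.size(); ++i)
        if (std::tolower(static_cast<unsigned char>(a[i])) !=
            std::tolower(static_cast<unsigned char>(b[i])))
           return false;
     return true;
  }

  int main() {
     assert(HexEqual("DEADBEEF", "deadbeef"));
     assert(HexEqual("deadbeef", "deadbee0") == false);
  }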
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Generating a string for each version we see is somewhat inefficient.
The problem here is that the Description tag names are longer than
15 bytes and thus require an allocation on the heap, which we should
avoid.
It seems reasonable that 20 characters works for all language codes
used for archive descriptions, but if not, there's a warning, so
we'll catch it.
This should improve performance by about 2%.
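A minimal sketch of the idea (names and the exact buffer size are
illustrative; libstdc++'s small-string optimisation tops out at 15
characters, which is what the 15-byte remark above refers to): compose
the "Description-<lang>" tag name in a fixed stack buffer instead of a
freshly allocated std::string per version.

  #include <cstdio>
  #include <iostream>

  // Returns the tag name in the caller-provided buffer; warns if the
  // language code is longer than the buffer allows.
  static const char *DescriptionTag(const char *lang, char (&buf)[32]) {
     if (std::snprintf(buf, sizeof(buf), "Description-%s", lang) >=
         static_cast<int>(sizeof(buf))) {
        std::cerr << "Language code '" << lang << "' is unexpectedly long\n";
        return "Description";
     }
     return buf;
  }

  int main() {
     char buf[32];
     std::cout << DescriptionTag("pt_BR", buf) << '\n';  // Description-pt_BR
  }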
|
|
|
|
|
|
|
| |
This has the effect of significantly reducing actual string
comparisons, and should improve the performance of FindGrp
a bit, although it's hardly measurable (callgrind says it
uses 10% fewer instructions now).
|
|
|
|
|
|
|
| |
Stop copying stuff and just feed the bytes one by one to the
newly created AddCRC16Byte. This improves the instruction count
for an update run from 720,850,121 to 455,801,749 according to
callgrind.
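For illustration, a byte-at-a-time CRC-16 update in the spirit of an
AddCRC16Byte helper – the polynomial and initial value shown are the
common reflected CCITT variant, not necessarily apt's exact parameters –
which lets callers hash a token while scanning it instead of copying it
into a temporary buffer first:

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  static uint16_t AddCRC16Byte(uint16_t crc, uint8_t byte) {
     crc ^= byte;
     for (int bit = 0; bit < 8; ++bit)
        crc = (crc & 1) ? (crc >> 1) ^ 0x8408 : (crc >> 1);
     return crc;
  }

  // Equivalent to hashing a copied buffer, but usable one byte at a time.
  static uint16_t CRC16(const char *data, size_t len, uint16_t crc = 0xFFFF) {
     for (size_t i = 0; i < len; ++i)
        crc = AddCRC16Byte(crc, static_cast<uint8_t>(data[i]));
     return crc;
  }

  int main() {
     const char *s = "Package";
     std::printf("%04x\n", CRC16(s, std::strlen(s)));
  }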
|
|
|
|
|
|
|
| |
This one has some obvious collisions for non-alphabetical characters –
some control characters also hash to the same values as digits, for
example – but we don't really have those, and these are hash functions
which are not collision-free to begin with.
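The folding in question is of the "| 0x20" kind (the hash mixing step
below is made up; only the folding trick matters): it lowercases ASCII
letters for free, but maps some control characters onto the same values
as digits.

  #include <cstdint>
  #include <cstdio>

  // Case-fold each byte with "| 0x20" before mixing it into the hash.
  static uint32_t AlphaHash(const char *text) {
     uint32_t hash = 5381;                     // mixing step is illustrative
     for (; *text != '\0'; ++text)
        hash = hash * 33 + (static_cast<uint8_t>(*text) | 0x20);
     return hash;
  }

  int main() {
     std::printf("'A'|0x20 = %#x ('a')\n", 'A' | 0x20);   // letters fold case
     std::printf("0x11|0x20 = %#x ('1')\n", 0x11 | 0x20); // collides with '1'
     std::printf("%u %u\n", AlphaHash("Package"), AlphaHash("PACKAGE"));
  }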
|
|
|
|
|
|
|
| |
We already have two stable series with major version 10, and
the next commits will introduce non-backportable performance
changes that affect the cache algorithms, so we need to bump
the major version now to prevent future problems.
|
|
|
|
|
|
| |
This basically gets rid of 40-50% of the hash table lookups,
making things a bit faster that way, and the profiles look
far cleaner.
|
|
|
|
|
| |
Introduce a new enum class and add functions that can do a lookup
with that enum class. This uses triehash.
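triehash generates the lookup as nested switches over the key's length
and its distinguishing characters. A hand-written miniature of that
shape, for a made-up subset of tags (the generated code covers the full
list):

  #include <iostream>
  #include <strings.h>

  enum class Key { Unknown, Package, Version, Description };

  static Key Lookup(const char *tag, size_t len) {
     switch (len) {
     case 7:
        switch (tag[0] | 0x20) {            // first character disambiguates
        case 'p':
           if (strncasecmp(tag + 1, "ackage", 6) == 0) return Key::Package;
           break;
        case 'v':
           if (strncasecmp(tag + 1, "ersion", 6) == 0) return Key::Version;
           break;
        }
        break;
     case 11:
        if (strncasecmp(tag, "Description", 11) == 0) return Key::Description;
        break;
     }
     return Key::Unknown;
  }

  int main() {
     std::cout << static_cast<int>(Lookup("Version", 7)) << '\n';      // 2
     std::cout << static_cast<int>(Lookup("Unknown-Tag", 11)) << '\n'; // 0
  }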
|
|\ |
|
|
|
|
|
| |
git-subtree-dir: triehash
git-subtree-split: 16f59e1320e6db18ba3b4269b7ca333b1463dd7b
|
|
|
|
|
| |
This allows us to add a perfect hash function to the tag file
without having to reimplement the methods a second time.
|
|
|
|
|
|
|
|
|
| |
Move the use of the AlphaHash to a new second hash table in
preparation for the arrival of the new perfect hash function.
With the new perfect hash function hashing most of the keys for
us, having 128 slots for a fallback hash function seems enough
and prevents us from wasting space.
|
| |
|
|
|
|
|
|
|
| |
No need to ask translators to deal with typo fixes in English text,
new items added to long-existing lists, and 'literals'.
Gbp-Dch: Ignore
|
|
|
|
|
|
|
| |
This also changes Acquire-By-Hash to be "yes" rather than "true", so it
is consistent with dak's output.
Closes: #272557
|
|
|
|
|
|
|
|
|
|
| |
We have the last Release file around for other checks, so it's trivial
to check whether the new Release file contains a new codename (e.g. the
user has "testing" in the sources and it flipped from stretch to
buster).
Such a change can be okay and expected, but it can also be a hint of
problems, so a warning if we see it happen seems okay. We can only
print it once anyhow, and frontends and co are likely to ignore/hide it.
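A minimal sketch of the check (function and message wording invented; the
point is just that the previously stored Release file provides the old
value to compare against):

  #include <iostream>
  #include <string>

  // Warn once when the codename behind e.g. "testing" flips between the
  // stored and the freshly fetched Release file.
  static void CheckCodenameChange(const std::string &dist,
                                  const std::string &oldCodename,
                                  const std::string &newCodename) {
     if (oldCodename.empty() || oldCodename == newCodename)
        return;
     std::cerr << "W: Repository '" << dist << "' changed its codename from '"
               << oldCodename << "' to '" << newCodename << "'\n";
  }

  int main() {
     CheckCodenameChange("testing", "stretch", "buster");
  }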
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A suite or codename entry in the Release file is checked against the
distribution field in the sources.list entry that led to the download of that
Release file. This distribution entry can contain slashes in the distribution
field:
deb http://security.debian.org/debian wheezy/updates main
However, the Release file may only contain "wheezy" in the Codename field and
not "wheezy/updates". So a transformation needs to take place that removes the
last / and everything that comes after it (e.g. "/updates"). This fails,
however, for valid cases like a reprepro snapshot where the given Codename
contains slashes but is perfectly fine and doesn't need to be transformed.
Since that transformation is essentially just a workaround for special cases
like the security repository, the literal Codename without any transformation
should be checked first, and only if it isn't valid should the dist be checked
against the transformed one.
This way special cases like security.debian.org are handled and reprepro
snapshots work too.
The initial patch was taken as inspiration to move the whole transformation
into CheckDist(), which makes this method more accepting & easier to use
(but according to codesearch.d.n we are the only users anyhow).
Thanks: Lukas Anzinger for initial patch
Closes: 644610
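The resulting order of checks, as a simplified sketch (the name mirrors
CheckDist, but the body is illustrative and only looks at the Codename):
accept the requested dist literally first, and only then fall back to the
variant with the last "/component" stripped.

  #include <cassert>
  #include <string>

  static bool CheckDist(const std::string &requested, const std::string &codename) {
     if (requested == codename)              // reprepro-style "a/b/c" codenames
        return true;
     std::string::size_type const slash = requested.rfind('/');
     if (slash == std::string::npos)
        return false;
     return requested.substr(0, slash) == codename; // "wheezy/updates" -> "wheezy"
  }

  int main() {
     assert(CheckDist("wheezy/updates", "wheezy"));               // security archive
     assert(CheckDist("snapshots/2016/11", "snapshots/2016/11")); // literal match wins
     assert(CheckDist("jessie", "wheezy") == false);
  }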
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
You can pretty much achieve the same with a local dummy package if you
want to, but libapt has a built-in setting for essential: "apt", which
can be overridden with this option as well – it could be helpful in
quick tests and whatnot, so adding this alternative shouldn't really
hurt much.
We aren't going to document it much though, as care must be taken with
regard to the binary caches: they aren't invalidated by config options
alone, so the effects of old settings could still be in them, similar
to the other already existing pkgCacheGen option(s).
Closes: 767891
Thanks: Anthony Towns for initial patch
|
|
|
|
| |
Suggested in #529794
|
|
|
|
|
|
|
|
|
|
| |
[Comment from committer:] I have the feeling that the issue itself has
been fixed for a while already, as nowadays we have test cases involving
a webserver closing the connection on error (look for "closeOnError")
and no even remotely recent reports about it, but moving the content
clearance above the failure report is a valid change and shouldn't hurt.
Closes: #465572
|
|
|
|
|
|
|
| |
apt tools do not really support these other variables, but tools apt
calls might, so let's play it safe and clean those up as needed.
Reported-By: Paul Wise (pabs) on IRC
|
|
|
|
|
|
|
|
|
|
|
| |
On Travis CI running tests with code coverage enabled sometimes
generates lines like:
profiling:/path/to/file.gcda:Merge mismatch for function 257
It would be nice if we could resolve this somehow as it garbles the
statistics, but until then it is far more annoying that this causes
test failures for no good reason.
Gbp-Dch: Ignore
|
|
|
|
| |
Reported-By: Christoph Berg (Myon) on IRC
|
|
|
|
|
|
|
|
|
| |
The 0.0.0.0:0 that Tor reports is pretty useless by itself, but even if
an IP were reported it is better to show the user, in the same error
message, the hostname we wanted the proxy to connect to. We improve upon
it further by looking for this bind address in particular and remapping
the error messages slightly to give users a better chance of figuring
out what went wrong. Upstream Tor can't do that as it would be
technically wrong.
|
|
|
|
|
|
|
|
| |
Some people do not recognize the field value with such an arcane name
and/or expect it to refer to something different (e.g. #839257).
We can't just rename it internally – the name is an avoidance strategy,
as such a field name existed previously with less clear semantics – but
we can spare the general public from this implementation detail.
|
|
|
|
|
|
|
| |
Sometimes you should really act upon your TODOs.
Especially if you have placed them directly in the code.
Closes: 841874
|
|
|
|
|
|
|
|
|
| |
We can't clean up the environment like e.g. sudo would do, as you
usually want the environment to "leak" into these helpers, but some
variables like HOME should really not still have the value of the root
user – it could confuse the helpers (USER) and HOME isn't accessible
anyhow.
Closes: 842877
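The shape of the fix, sketched (apt's sandbox user is "_apt"; the function
name and the exact set of variables reset here are simplified): look up
the sandbox user's passwd entry and overwrite the obviously wrong values
instead of wiping the whole environment.

  #include <cstdlib>
  #include <pwd.h>

  static void ResetUserEnvironment(const char *sandboxUser = "_apt") {
     struct passwd const *pw = getpwnam(sandboxUser);
     if (pw == nullptr)
        return;                              // keep the environment as-is
     // Everything else still "leaks" through to the helpers on purpose.
     setenv("HOME", pw->pw_dir, 1);
     setenv("USER", pw->pw_name, 1);
  }

  int main() {
     ResetUserEnvironment();
     return 0;
  }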
|
| |
|
|
|
|
|
|
| |
See also c5f22e483cc0f31f2938874370ac776e40792069
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These new enum values might cause "interesting" behaviour in tools not
expecting them – like an old apt thinking a Build-Conflicts-Arch is
some sort of Build-Depends – but that can't reasonably be avoided and
it affects only packages using B-D/C-A, so if there is any breakage the
tools can easily be adapted.
The APT_PKG_RELEASE number is increased so that libapt users can detect
the availability of these new enum fields via:
#if APT_PKG_ABI > 500 || (APT_PKG_ABI == 500 && APT_PKG_RELEASE >= 1)
Closes: #837395
|
|
|
|
|
|
|
|
|
|
| |
dpkg 1.18.11 includes:
* Do not log nor print duplicate dpkg removal action. We print
“Removing <package> (<version>)” lines and log remove action twice
when purging a package from frontends, because they usually first call
--remove and then --purge sequentially. When purging a package which is
already in config-files (i.e. it has been removed before), do not print
nor log the remove action.
|
|
|
|
|
|
|
|
|
| |
This unit runs unattended-upgrades. If apt itself is part of the
upgrade a restart of the unit will kill unattended-upgrades. This
will lead to an inconsistent dpkg status.
Closes: #841763
Thanks: Alexandre Detiste
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A user relying on the deprecated behaviour of apt-get to accept a source
with an unknown pubkey to install a package containing the key expects
that the following 'apt-get update' causes the source to be considered
trusted, but in case the source hadn't changed in the meantime this
wasn't happening: the source kept being untrusted until the Release file
was changed.
This only affects sources not using InRelease and only apt-get; the apt
binary downright refuses this course of action, but it is a common way
of adding external sources.
Closes: 838779
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In effect this is an extension of the 6-year-old commit
a8dfff90aa740889eb99d00fde5d70908d9fd88a which uses the autoremover to
remove packages from the solution again which are no longer needed to be
there. Commonly these are dependencies of packages we end up not
installing due to problem resolver decisions. Slightly less common is
the situation we deal with here: a package which we wanted to upgrade
sporting a new dependency, but which ended up being held back.
The problem is that any version of an installed reverse dependency can
bring back a "garbage" package – and we need to allow that, as there is
nothing inherently wrong with having garbage packages installed or with
upgrading them (which itself pulls in garbage dependencies), so just
blindly killing all new garbage packages would prevent the upgrade (and
actually generate errors). What we should be doing is looking only at
the version which will end up on the system, disregarding all old/new
reverse dependencies.
Reported-By: Stuart Prescott (themill) on IRC
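A toy model of the rule being changed (the types are invented for
illustration; libapt's real dependency cache is far more involved): when
deciding whether an installed reverse dependency keeps a garbage package
alive, consult only the version of that reverse dependency which will
actually end up on the system.

  #include <iostream>
  #include <string>
  #include <vector>

  struct Version { std::string Name; std::vector<std::string> Depends; };

  struct Package {
     Version Installed;
     Version Candidate;
     bool Upgrading = false;
     // The version that will be on the system once this run is done.
     const Version &TargetVersion() const { return Upgrading ? Candidate : Installed; }
  };

  static bool KeepsAlive(const Package &rdep, const std::string &garbagePkg) {
     // Looking at every version's dependencies instead would let the
     // held-back candidate drag its new dependency back in.
     for (auto const &dep : rdep.TargetVersion().Depends)
        if (dep == garbagePkg)
           return true;
     return false;
  }

  int main() {
     Package foo;
     foo.Installed = {"foo 1.0", {"libold"}};
     foo.Candidate = {"foo 2.0", {"libold", "libnew"}};
     foo.Upgrading = false;                   // foo is being held back
     std::cout << std::boolalpha << KeepsAlive(foo, "libnew") << '\n'; // false
  }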
|
|
|
|
| |
Closes: #840757
|
|
|
|
| |
Closes: #840552
|
|
|
|
| |
Gbp-Dch: Ignore
|
| |
|
| |
|
|
|
|
|
|
| |
We always want to run the codecov step, even if there are spurious
failures. We should really work around those failures more, though;
it is starting to annoy me.
|
|
|
|
| |
Closes: #838731
|