| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Especially in small sections of an archive it can happen that an index
returns to a previous state (e.g. if a package was first added and then
removed with no other changes happening in between). The result is that
we have multiple patches starting from the same hash, which is no
problem if we perform client-side merging, although not ideal as we
perform needless work.
For server-side merging it would not matter either, but rred previously
refused to merge zero-size patches while dak ignored that failure and
carried these size-zero patches along until they naturally expire, so we
run into a problem: these broken patches won't do and force us to fall
back to downloading the entire index. By always starting from the last
patch with the starter hash instead of the first we can avoid this
problem and behave optimally in client-side merge cases, too.
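A minimal sketch of the selection described above, with hypothetical names (not apt's actual code): when several patches share the same start hash, picking the last match skips the stale duplicates earlier in the list.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical representation of a pdiff index entry.
struct Patch { std::string StartHash; std::string Name; };

// Return the index of the *last* patch whose start hash matches the
// current index, or -1 if none does. Overwriting on every match means
// earlier (possibly broken, zero-size) duplicates are never selected.
int FindStartPatch(std::vector<Patch> const &patches, std::string const &currentHash) {
    int found = -1;
    for (std::size_t i = 0; i < patches.size(); ++i)
        if (patches[i].StartHash == currentHash)
            found = static_cast<int>(i);   // keep overwriting: last match wins
    return found;
}
```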
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The rred method expects the patches to have a certain name, so we have
to rename the files before calling the method. By delaying that rename
we ensure that if the download of one of them fails and a successful
fallback occurs, they are all properly cleaned up as no longer useful,
while in the error case the next apt run can potentially pick them up
as already downloaded.
Our test-pdiff-usage test was encountering this every other run, but did
not fail, as the check for unaccounted files in partial/ was wrapped in
a subshell so that the failure produced failing output but did not
change the exit code.
|
|
|
|
|
|
|
|
| |
There isn't a lot of sense in working on empty patches as they change
nothing (quite literally), but they can be the result of merging
multiple patches, so rather than requiring our users to specifically
detect and remove them, we can be nice and just ignore them instead of
erroring out.
|
|
|
|
|
| |
This reverts commit d96c9a0280bffcfb0f4a319e003e9af60c6cfaf1.
It is not correct for master.
|
|\
| |
| |
| | |
apt Debian release 2.2.1
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
Conflicts do require removing the package temporarily, so they really
should not be used.
We need to improve that eventually such that we can deconfigure packages
when we have to remove their dependencies due to conflicts.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This caused python-apt to unset the Python signal handler when running
update or install commands, breaking KeyboardInterrupt amongst possibly
other things.
We do not set those signal handlers in these functions, and the calling
functions restore the signal handlers to the previous ones.
LP: #1898026
|
| |
| |
| |
| | |
Closes: #983348
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
As user "DaOfficialRolex" on GitHub pointed out:
This is needed to allow APT on iOS to compile correctly. If not included, the following two errors happen while compiling APT.
~/apt/apt-pkg/contrib/configuration.cc:900:44: error: constexpr variable cannot have non-literal type 'const std::array<APT::StringView, 3>'
constexpr std::array<APT::StringView, 3> magicComments { "clear"_sv, "include"_sv, "x-apt-configure-index"_sv };
^
~/apt/apt-pkg/contrib/configuration.cc:900:44: error: implicit instantiation of undefined template 'std::__1::array<APT::StringView, 3>'
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/__tuple:219:64: note: template is declared here
template <class _Tp, size_t _Size> struct _LIBCPP_TEMPLATE_VIS array;
^
|
| | |
|
|/
|
|
|
|
|
|
|
| |
dpkg will be changed in 1.20.8 to not require --force-remove for
deconfiguration anymore, but we want to decouple our changes from the
dpkg ones, so let's always pass --force-remove-protected when installing
packages such that we can deconfigure protected packages.
Closes: #983014
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
std::regex pulls in about 50 weak symbols, which is complete and
utter madness, especially because we version all our symbols, so no
other library could ever reuse them.
Avoid using a regular expression here altogether: loop using
string::find_first_of() and insert backslashes with string::insert().
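A sketch of the regex-free approach described above (the function name is illustrative, not apt's):

```cpp
#include <string>

// Walk the string with find_first_of() and put a backslash before every
// special character with string::insert(); no std::regex involved.
std::string BackslashEscape(std::string s, std::string const &specials) {
    for (std::string::size_type pos = s.find_first_of(specials);
         pos != std::string::npos;
         pos = s.find_first_of(specials, pos + 2)) // +2 skips the escaped char
        s.insert(pos, 1, '\\');
    return s;
}
```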
|
| |
|
| |
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
|
|
|
|
|
|
|
|
| |
We ignored the failure from strtoul() that those test cases had values
out of range, hence they passed before, but now failed on 32-bit
platforms because we use strtoull() and do the limit check ourselves.
Move the tarball generator for test-github-111-invalid-armember to the
createdeb helper, and fix the helper to set all numeric fields like uid
to 0 instead of the maximum value the fields support (all 7s).
Regression-Of: e0743a85c5f5f2f83d91c305450e8ba192194cd8
|
| |
|
| |
|
|\
| |
| |
| |
| | |
Various patches uplifted from unfinished fuzzer branches
See merge request apt-team/apt!158
|
| |
| |
| |
| |
| |
| |
| |
| | |
The grep for case-insensitive GPG finds also e.g. "/tmp/tmp.Kc5kKgPg0D"
which is not the intention, so we simply eliminate the variation of the
/tmp directory here from the output to prevent these false positives.
Gbp-Dch: Ignore
|
| |
| |
| |
| |
| |
| | |
They result in what looks odd:
Abhängigkeitsbaum wird aufgebaut.… Fertig
Statusinformationen werden eingelesen.… Fertig
|
| |
| |
| |
| |
| |
| |
| | |
Depending on your configured sources 25 MB is hardly enough, so the mmap
housing the cache while it is built has to grow. Repeatedly. We can cut
down on the repeats by keeping a record of the size of the old cache,
assuming the sizes will remain roughly in the same ballpark.
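A sketch of the heuristic (the helper name and headroom factor are hypothetical): start the mmap at roughly the recorded size of the previous cache instead of always at the fixed default.

```cpp
#include <algorithm>
#include <cstddef>

// Pick an initial mmap size for the cache build: fall back to the old
// fixed default when no previous size was recorded, otherwise use the
// previous size plus a little headroom so the cache rarely has to grow.
std::size_t InitialCacheSize(std::size_t previousSize) {
    std::size_t const fallback = 25 * 1024 * 1024;  // the old fixed default
    if (previousSize == 0)                          // no record from a prior run
        return fallback;
    // indexes tend to grow slightly between runs, so add ~12% headroom
    return std::max(fallback, previousSize + previousSize / 8);
}
```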
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
APT tries to detect if applying patches is more costly than just
downloading the complete index by combining the sizes of the patches.
That is correct for client-side merging, but for server-side merging
we actually don't know if we will jump directly from old to current or
still have intermediate steps in between.
With this commit we assume it will be a jump from old to current though,
as that is what dak implements, and it seems reasonable that if you go
to the trouble of server-side merging, the server does the entire
merging in one file instead of leaving additional work for the client
to do.
Note that this just changes the heuristic to prevent apt from discarding
patches as uneconomic in the now more common single-merged-patch style;
it still supports multiple merged patches as before.
To resolve this cleanly we would need another field in the index file
declaring which hash we will arrive at if a patch is applied (or at
least a field differentiating between these merged-patch styles), but
that seems like overkill for now.
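The adjusted heuristic can be sketched like this (the shape is assumed for illustration, not apt's actual code): with server-side merging only the final jump is fetched, so only the last patch counts toward the cost.

```cpp
#include <cstdint>

// Decide whether fetching patches beats downloading the whole index.
// Client-side merging must download every patch in the chain; a
// server-side merged chain is assumed to be a single old-to-current jump.
bool PatchesAreEconomic(std::uint64_t indexSize, std::uint64_t sumPatchSizes,
                        std::uint64_t lastPatchSize, bool serverMerged) {
    std::uint64_t const cost = serverMerged ? lastPatchSize : sumPatchSizes;
    return cost < indexSize;
}
```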
|
| |
| |
| |
| |
| |
| |
| | |
We use the code in error messages, so at least for that edge case we
should ensure that it has sensible content. Note that the acquire
system aborts on non-sensible message content in SendMessage, so you
can't really exploit this.
|
| |
| |
| |
| |
| |
| |
| | |
The vararg API is a nightmare, as the symbols seem different on every
other arch, but more importantly SendMessage does a few checks on the
content of the message, and it is all output via C++ iostreams rather
than mixed into FILE*, which is handy for overriding the streams.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The rest of our code uses return-code errors and it isn't that nice to
crash the rred method on bad patches anyway, so we move to properly
reporting errors in our usual way, which incidentally also helps in
writing a fuzzer for it.
This is not really security relevant though, as the patches passed hash
verification while they were downloaded, so an attacker has to take over
a trusted repository first, which gives plenty of better attack options.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The undefined behaviour sanitizer complains with:
runtime error: addition of unsigned offset to 0x… overflowed to 0x…
Compilers and runtime do the right thing in any case and it is a
codepath that can (and ideally should) be avoided for speed reasons
alone, but fixing it can't hurt (too much).
|
| |
| |
| |
| |
| |
| |
| | |
Our configuration files are not security relevant, but having a parser
which avoids crashing on them even if they are seriously messed up is
not a bad idea anyway. It is also a good opportunity to brush up the
code a bit avoiding a few small string copies with our string_view.
|
| |
| |
| |
| |
| |
| |
| | |
strtoul(l) surprises us by parsing negative values, which should not
exist in the places where we use it, so we can just downright refuse
them rather than trying to work with them after they were promoted to
huge positive values.
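A sketch of the stricter parse (the function name is illustrative): reject a leading minus before calling strtoull(), which would otherwise accept "-1" and silently wrap it around to a huge positive value.

```cpp
#include <cerrno>
#include <cstdlib>

// Parse an unsigned decimal number, refusing negative input outright
// instead of letting strtoull() wrap it to a huge positive value.
bool StrToNum(const char *str, unsigned long long &result) {
    while (*str == ' ' || *str == '\t')  // strtoull skips this anyway
        ++str;
    if (*str == '-')                     // strtoull would accept it; we refuse
        return false;
    char *end = nullptr;
    errno = 0;
    result = std::strtoull(str, &end, 10);
    return end != str && errno == 0;     // reject empty parses and overflow
}
```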
|
| |
| |
| |
| |
| |
| | |
It isn't super likely that we will encounter such big words in the real
world, but we can return arbitrary lengths, so let's just do that, as it
also means we don't have to work with a second buffer.
|
| |
| |
| |
| |
| |
| | |
This has no attack surface though, as the loop ends very soon anyhow
and the method is only used while reading CD-ROM mountpoints, which
seems like a very unlikely attack vector…
|
| |
| |
| |
| |
| |
| |
| | |
For \xff and friends, which have the highest bit set and hence are
negative values on signed-char systems, the wrong encoding is produced
as we run into undefined behaviour accessing negative array indexes.
We can avoid this problem simply by using an unsigned data type.
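A sketch of the fix (the function name is illustrative): index the lookup table with an unsigned char so bytes like \xff stay 255 instead of becoming a negative array index on signed-char platforms.

```cpp
#include <string>

// Percent-encode every byte of the input as %XX. The loop variable is
// deliberately unsigned char: \xff is 255 here, never -1, so the hex
// table is always indexed in range.
std::string HexEncode(std::string const &in) {
    static char const hex[] = "0123456789abcdef";
    std::string out;
    for (unsigned char c : in) {       // unsigned: no negative indexes
        out.push_back('%');
        out.push_back(hex[c >> 4]);
        out.push_back(hex[c & 0xF]);
    }
    return out;
}
```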
|
| |
| |
| |
| |
| |
| |
| |
| | |
If the Configuration code calling this was any indication, it is hard to
use – and even that monster still caused heap-buffer-overflow errors –
so instead of trying to fix it, let's just use methods which are far
easier to use. The question why this is done at all remains, but is left
for another day as an exercise for the reader.
|
| |
| |
| |
| |
| | |
References: 2497198e9599a6a8d4d0ad08627bcfc7ea49c644
Gbp-Dch: Ignore
|
| |
| |
| |
| |
| |
| | |
We were printing an error and hence have non-zero exit code either way,
but API wise it makes sense to have this properly reported back to the
caller to propagate it down the chain e.g. while parsing #include stanzas.
|
| |
| |
| |
| |
| | |
We do this once (usually), so the leak isn't tremendously big, but it is
detected as a leak by the fuzzer and trips it up.
|
| |
| |
| |
| |
| | |
References: 1460eebf2abe913df964e031eff081a57f043697
Gbp-Dch: Ignore
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Explicitly opening a tar member is a bit harder than it needs to be, as
you have to remove the compressor extension so that it can be guessed
here again, potentially choosing the wrong member.
That doesn't really matter for deb packages of course, as the member
count is pretty low and strongly defined, but testing is easier this
way. It also finally fixes an incorrectly formatted error message.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The buffers we feed in and read out are usually a couple of kilobytes
big, so allowing lzma to use an unlimited amount of memory is easy &
okay, but not needed, and it confuses memory checkers as it will cause
lzma to malloc a huge chunk of memory (which it will never use).
So let's just use a "big enough" value instead.
In exchange we simplify the decoder calls: as we were already using the
auto-variant for xz, we can just avoid the if-else and let liblzma
decide what it decodes.
|
|\ \
| | |
| | |
| | |
| | | |
Show 'Done' always for 'Building dependency tree'
See merge request apt-team/apt!157
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
For years I subconsciously thought this is wrong but ignored it:
$ LANG=C apt install -s
Reading package lists... Done
Building dependency tree
Reading state information... Done
Then I noticed:
$ LANG=C apt install -s -o dir::state::extended_states=/dev/null
Reading package lists... Done
Building dependency tree... Done
That can't be! Then it really should be:
$ LANG=C apt install -s
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
This oddity seems to have been in since the introduction of the auto bit
in 2005, which makes it rather hard to solve in theory, but in practice
no front end seems to call the readStateFile method directly, so we
might actually be lucky.
The alternative would be to call Done in the calling method and again
at the end of readStateFile while dropping it from the current place,
but as that is more shuffling around it could be more upsetting for
other front ends. Not sure, but now that I have seen it, I want to have
it fixed one way or another… aptitude at least seems not to explode.
References: afb1e2e3bb580077c6c917e6ea98baad8f3c39b3
|
| |
| |
| |
| | |
Closes: #981883
|
|/
|
|
| |
Closes: #981885
|
|\
| |
| |
| |
| | |
Include all translations when building the cache
See merge request apt-team/apt!156
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
We do download all translations we have ever downloaded, but we don't
add all of them to the cache, meaning that if we run update with LANG=C,
it might still download your de_DE translation, but it won't insert it
into the cache, causing your de_DE user to not get translated messages.
LP: #1907850
|
|\ \
| | |
| | |
| | |
| | | |
vendor: Adjust Debian -security codename
See merge request apt-team/apt!155
|