| Commit message | Author | Age | Files | Lines |
|
| |
Our configuration files are not security relevant, but having a parser
which avoids crashing on them even if they are seriously messed up is
not a bad idea anyway. It is also a good opportunity to brush up the
code a bit, avoiding a few small string copies by using our string_view.
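For illustration, a minimal sketch (not the actual apt parser) of how slicing a line with std::string_view avoids the temporary std::string copies a substr()-based approach would make:
  #include <string_view>
  #include <utility>

  // Split "Key Value" into two views into the original buffer; string_view
  // only stores a pointer and a length, so no copies are allocated.
  static std::pair<std::string_view, std::string_view>
  SplitKeyValue(std::string_view line)
  {
     auto const pos = line.find(' ');
     if (pos == std::string_view::npos)
        return {line, {}};
     return {line.substr(0, pos), line.substr(pos + 1)};
  }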
|
| |
strtoul(l) surprises us by parsing negative values, which should not
exist in the places we use it to parse, so we can just outright refuse
them rather than trying to work with them after they have been promoted
to huge positive values.
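For illustration, a minimal sketch of such a refusal (the helper name is invented, not apt's actual parsing code):
  #include <cctype>
  #include <cerrno>
  #include <cstdlib>

  // Returns true on success; input with a leading minus sign is refused
  // outright instead of being wrapped around to a huge unsigned value.
  static bool StrToUnsignedLong(const char *s, unsigned long &out, int base = 10)
  {
     while (isspace(static_cast<unsigned char>(*s)))
        ++s;
     if (*s == '-')
        return false;
     char *end = nullptr;
     errno = 0;
     out = strtoul(s, &end, base);
     return errno == 0 && end != s;
  }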
|
| |
It isn't very likely that we will encounter such big words in the real
world, but we can return arbitrary lengths, so let's just do that, as it
also means we don't have to work with a second buffer.
|
| |
This has no attack surface though, as the loop is about to end very soon
anyhow and the method is only used while reading CD-ROM mountpoints,
which seems like a very unlikely attack vector…
|
| |
For \xff and friends, which have the highest bit set and are hence
negative values on systems where char is signed, the wrong encoding is
produced as we run into undefined behaviour by accessing negative array
indexes. We can avoid this problem simply by using an unsigned data type.
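To illustrate the failure mode with a sketch (not the apt code): indexing a lookup table with a plain char turns bytes >= 0x80 into negative indexes on signed-char platforms, while an unsigned type keeps them in range:
  #include <string>

  static std::string HexEncode(const std::string &in)
  {
     static const char digits[] = "0123456789abcdef";
     std::string out;
     for (unsigned char c : in)         // unsigned: always 0..255
     {
        out.push_back(digits[c >> 4]);  // with a signed char, \xff would be -1
        out.push_back(digits[c & 0xf]); // and digits[-1] is undefined behaviour
     }
     return out;
  }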
|
| |
If the Configuration code calling this was any indication, it is hard to
use – and even that monster still caused heap-buffer-overflow errors, so
instead of trying to fix it, let's just use methods which are far easier
to use. The question of why this is done at all remains, but is left for
another day as an exercise for the reader.
|
| |
We were printing an error and hence had a non-zero exit code either way,
but API-wise it makes sense to have this properly reported back to the
caller so it can be propagated down the chain, e.g. while parsing #include stanzas.
|
| |
The buffers we feed in and read out are usually a couple of kilobytes
big, so allowing lzma to use an unlimited amount of memory is easy &
okay, but it is not needed and it confuses memory checkers, as it causes
lzma to malloc a huge chunk of memory (which it will never use).
So let's just use a "big enough" value instead.
In exchange we simplify the decoder setup: we were already using the
auto-variant for xz, so we can just drop the if-else and let liblzma
decide what it decodes.
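A minimal sketch of such a decoder setup with liblzma (the concrete limit is illustrative, not necessarily the value picked here):
  #include <lzma.h>

  static bool SetupDecoder()
  {
     lzma_stream strm = LZMA_STREAM_INIT;
     // A finite "big enough" limit instead of UINT64_MAX keeps memory
     // checkers happy; lzma_auto_decoder() detects .xz and legacy .lzma
     // on its own, so no if-else between the formats is needed.
     const uint64_t memlimit = 1024ULL * 1024 * 1024; // 1 GiB, illustrative
     if (lzma_auto_decoder(&strm, memlimit, 0) != LZMA_OK)
        return false;
     // ... feed strm.next_in/avail_in and drain strm.next_out/avail_out ...
     lzma_end(&strm);
     return true;
  }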
|
| |
The integer overflow was detected by DonKult, who added a check like this:
(std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))
That deals with the code as it is, but it is still a fairly big limit and
could become fragile if we change the code. Let's limit our file sizes to
128 GiB, which should be sufficient for everyone.
Original comment by DonKult:
The code assumes that it can add sizeof(Block)-1 to the size of the item
later on, but if we are close to a 64bit overflow this is not possible.
Fixing this seems too complex compared to just ensuring there is enough
room left given that we will have a lot more problems the moment we will
be acting on files that large as if the item is that large, the (valid)
tar including it probably doesn't fit in 64bit either.
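A sketch of what such a hard cap amounts to (the names are illustrative, not the actual extracttar code):
  #include <cstdint>

  // Reject tar items larger than 128 GiB, so that later sizeof(Block)-style
  // additions to the size can never overflow a 64-bit counter.
  static bool ItemSizeIsSane(uint64_t itemSize)
  {
     const uint64_t MaxItemSize = 128ULL * 1024 * 1024 * 1024; // 128 GiB
     return itemSize <= MaxItemSize;
  }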
|
| |
Tarballs store long names and long link targets using a special tar
header with a GNU extension, followed by the actual content (padded to
512 bytes). Essentially, think of a name as a special kind of file.
The file size field in a header is 12 bytes, aka 10**12 or 1 TB. While
this works OK-ish for file content that we stream to extractors, we need
to copy file names into memory, and this opens us up to an OOM DoS
attack.
Limit the file name size to 1 MiB, as libarchive does, to make
things safer.
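A minimal sketch of the kind of cap described here (names and the read callback are invented for illustration):
  #include <cstdint>
  #include <string>

  // Only buffer a GNU long-name member if it is reasonably small; 1 MiB is
  // the same cap libarchive applies, anything bigger is refused instead of
  // being allocated.
  static bool ReadLongName(uint64_t memberSize, std::string &name,
                           bool (*readData)(char *to, uint64_t size))
  {
     const uint64_t MaxNameSize = 1024 * 1024; // 1 MiB
     if (memberSize > MaxNameSize)
        return false;
     name.resize(memberSize);
     return readData(&name[0], memberSize);
  }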
|
| |
GHSL-2020-169: This first hunk adds a check that there is more data
left to read in the file than the size of the member, ensuring (a) that
the number is not negative, which caused the crash here, and (b) that we
similarly avoid other issues with trying to read too much data.
GHSL-2020-168: Long file names are encoded by a special marker in the
filename and then the real filename is part of what is normally the
data. We did not check that the length of the file name is within the
length of the member, which means that we got an overflow later when
subtracting the length from the member size to get the remaining member
size.
The file createdeb-lp1899193.cc was provided by GitHub Security Lab and
reformatted using apt coding style for inclusion in the test case; both
of these issues have an automated test case in
test/integration/test-ubuntu-bug-1899193-security-issues.
LP: #1899193
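A rough sketch of the two bounds checks being described (the variable names are made up; the real checks live in apt's ar/deb member parsing):
  #include <cstdint>

  // GHSL-2020-169: a member must fit into what is left of the archive file.
  static bool MemberFitsInFile(uint64_t memberSize, uint64_t bytesLeftInFile)
  {
     return memberSize <= bytesLeftInFile;
  }

  // GHSL-2020-168: an encoded long file name must fit inside its member,
  // otherwise memberSize - nameLength underflows further down.
  static bool LongNameFitsInMember(uint64_t nameLength, uint64_t memberSize)
  {
     return nameLength <= memberSize;
  }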
|
| |
Add APT_ASSUME macro and silence -Wstringop-overflow in HexDigest()
The compiler does not know that the size of a hash is at most 512 bits,
so tell it that it is; it otherwise thinks we might be doing a stack
buffer overflow of the VLA:
../apt-pkg/contrib/hashes.cc: In function ‘std::string HexDigest(gcry_md_hd_t, int)’:
../apt-pkg/contrib/hashes.cc:415:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
415 | Result[(Size)*2] = 0;
| ~~~~~~~~~~~~~~~~~^~~
../apt-pkg/contrib/hashes.cc:414:9: note: at offset [-9223372036854775808, 9223372036854775807] to an object with size at most 4294967295 declared here
414 | char Result[((Size)*2) + 1];
| ^~~~~~
Fix this by adding a simple assertion. This generates an extra two
instructions in the normal code path, so it's not exactly super costly.
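For illustration, a typical shape of such an assume macro (a sketch, not necessarily the exact definition added here):
  // Tell the optimizer that Cond holds; in the normal code path this costs
  // only the check and a never-taken branch to the unreachable marker.
  #if defined(__GNUC__)
  #define APT_ASSUME(Cond) \
     do { if (!(Cond)) __builtin_unreachable(); } while (false)
  #else
  #define APT_ASSUME(Cond) do { (void) sizeof(Cond); } while (false)
  #endif

  // Usage sketch inside HexDigest-like code:
  //    APT_ASSUME(Size <= 512 / 8);
  //    char Result[(Size * 2) + 1];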
|
|\
| |
| |
| |
| | |
Remove master/slave terminology
See merge request apt-team/apt!124
|
| | |
|
|\ \
| |/
|/|
| |
| | |
Fully deprecate apt-key, schedule removal for Q2/2022
See merge request apt-team/apt!119
|
| |
| | |
People are still using apt-key add and friends, despite that not being
guaranteed to work. Let's tell them to stop doing so.
We might still want a list command at some future point, but this needs
deciding, and a blanket ban for now seems like a sensible step until we
have figured that out.
|
| |
| |
| |
| |
| | |
There is no need to iterate over all results if we will be doing nothing
anyhow, as it isn't that common to have that debug option enabled.
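In other words, a sketch of the shape of the change (the debug flag and names are invented):
  #include <iostream>
  #include <string>
  #include <vector>

  // Bail out before the loop when the (rarely enabled) debug option is off,
  // instead of iterating over every result just to do nothing with it.
  static void DumpResults(bool debugEnabled, const std::vector<std::string> &results)
  {
     if (!debugEnabled)
        return;
     for (const auto &r : results)
        std::cerr << "  result: " << r << '\n';
  }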
|
| |
| | |
The variable this is read into is named Junk, and it exists for use cases
like apt-ftparchive which just look at the item's metadata. So instead of
performing this chunked read for data nobody will process, we just tell
our FileFd to skip ahead (internally it might still loop over the data,
depending on which compressor is involved).
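Sketched with plain POSIX calls (apt's FileFd wraps this and also knows how to skip inside compressed streams), the difference is essentially:
  #include <algorithm>
  #include <unistd.h>

  // Old approach: read the data into a throwaway buffer, chunk by chunk.
  static bool SkipByReading(int fd, unsigned long long size)
  {
     char junk[32 * 1024];
     while (size != 0)
     {
        ssize_t const got = read(fd, junk, std::min<unsigned long long>(size, sizeof(junk)));
        if (got <= 0)
           return false;
        size -= static_cast<unsigned long long>(got);
     }
     return true;
  }

  // New approach: just move the file offset past data nobody will look at.
  static bool SkipBySeeking(int fd, unsigned long long size)
  {
     return lseek(fd, static_cast<off_t>(size), SEEK_CUR) != static_cast<off_t>(-1);
  }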
|
| |
| |
| |
| |
| |
| | |
With FileFd::Write we already have a helper for this situation, which we
can just make use of here instead of hoping for the best or rolling our
own solution.
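For reference, a sketch of what such a write-it-all helper typically does (plain POSIX here, not the FileFd implementation itself):
  #include <cerrno>
  #include <cstddef>
  #include <unistd.h>

  // Keep writing until everything is out, retrying on EINTR and short writes.
  static bool WriteAll(int fd, const char *from, size_t size)
  {
     while (size != 0)
     {
        ssize_t const done = write(fd, from, size);
        if (done < 0)
        {
           if (errno == EINTR)
              continue;
           return false;
        }
        from += done;
        size -= static_cast<size_t>(done);
     }
     return true;
  }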
|
| |
| | |
Our testcases had their own implementation of GetTempFile with the added
feature of a temporary file with a chosen suffix. Merging this into
GetTempFile lets us drop this duplicate and hence test more of our code
rather than testing our test helpers.
And then hashsums_test had another implementation… and extracttar wasn't
even trying to use a real tempfile… one GetTempFile to rule them all!
That also ensures that these tempfiles are created in a temporary
directory rather than the current directory, which is a nice touch, and
it tries a little harder to clean up those tempfiles.
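For illustration, one way a tempfile with a chosen suffix can be created (using glibc's mkstemps; whether GetTempFile does exactly this is an implementation detail):
  #include <cstdlib>
  #include <string>

  // Creates e.g. /tmp/apt-test.XXXXXX.deb and returns the open fd (or -1);
  // the XXXXXX part is replaced by mkstemps, the suffix is preserved.
  static int TempFileWithSuffix(const std::string &dir, const std::string &suffix,
                                std::string &pathOut)
  {
     pathOut = dir + "/apt-test.XXXXXX" + suffix;
     return mkstemps(&pathOut[0], static_cast<int>(suffix.size()));
  }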
|
| |
| |
| |
| |
| | |
Not all filesystems implement this feature in all versions of Linux, so
this open call can fail and we have to fall back to our old method.
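Assuming the feature in question is O_TMPFILE (the message does not name it), the fallback looks roughly like this sketch:
  #include <cstdlib>
  #include <fcntl.h>
  #include <string>

  // Prefer an anonymous, already-unlinked temp file; if the filesystem or
  // kernel does not support that, fall back to a classic named tempfile.
  static int OpenTempFile(const std::string &dir, std::string &pathOut)
  {
  #ifdef O_TMPFILE
     int const fd = open(dir.c_str(), O_RDWR | O_TMPFILE, 0600);
     if (fd != -1)
        return fd;
  #endif
     pathOut = dir + "/apt-tmp.XXXXXX";
     return mkstemp(&pathOut[0]);
  }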
|
|/
|
| |
(CVE-2020-3810)
When normalizing ar member names by removing trailing whitespace and
slashes, an out-of-bounds read can be caused if the ar member name
consists only of such characters, because the code did not stop at
position 0, but would wrap around and continue reading from the stack,
without any limit.
Add a check to abort if we reached the first character in the name,
effectively rejecting the use of names consisting just of slashes and
spaces.
Furthermore, certain error cases in arfile.cc and extracttar.cc included
member names in the output that were not checked at all and might hence
not be nul terminated, leading to further out-of-bounds reads.
Fixes Debian/apt#111
LP: #1878177
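A sketch of the kind of bounded trimming this describes (illustrative, not the literal arfile.cc code):
  #include <cstddef>

  // Strip trailing spaces and slashes from an ar member name, but never walk
  // past the first character: a name consisting only of such characters ends
  // up empty and can be rejected, instead of the loop reading before the
  // start of the buffer.
  static size_t TrimArMemberName(const char *name, size_t len)
  {
     while (len > 0 && (name[len - 1] == ' ' || name[len - 1] == '/'))
        --len;
     return len; // 0 means: reject this member name
  }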
|
| |
This matches the definitions used by dpkg.
Closes: #953527
|
|
|
|
|
| |
Extract the code, and reformat it with clang-format so we can
modify it.
|
| |
Showing a percentage for a timeout is pretty non-standard. Rework the
progress class so it can show absolute progress (currently hardcoded to
use seconds as the unit). If there is a timeout (i.e. if it is not the
maximum unsigned long long, -1llu), then show the timeout, otherwise
just count up seconds, e.g.
Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 33842 (apt)... 1/120s
or
Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 33842 (apt)... 1s
Also improve the error message to use "Waiting for cache lock: %s" instead of "... (%s)", as having
multiple sentences inside parentheses is super weird, as is having two closing parens.
We pass the information via _config, as that's reasonably easy and avoids
ABI hackage. It also provides an interesting debugging tool for other
kinds of progress.
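Schematically, the display decision boils down to something like this sketch (not the actual progress class):
  #include <cstdio>
  #include <string>

  // Render "elapsed/timeout s" when a finite timeout is configured and a
  // plain "elapsed s" otherwise.
  static std::string RenderLockProgress(unsigned long long elapsed,
                                        unsigned long long timeout)
  {
     char buf[64];
     if (timeout != static_cast<unsigned long long>(-1))
        std::snprintf(buf, sizeof(buf), "%llu/%llus", elapsed, timeout);
     else
        std::snprintf(buf, sizeof(buf), "%llus", elapsed);
     return buf;
  }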
|
| |
This improves the locking message, getting rid of useless details. If we
know which process is holding the lock, then the reason we could not get
the lock is obviously that it is being held by that process, so there is
no point telling people that the reason for not getting the lock is the
EAGAIN error and displaying its strerror().
|
| |
|
| |
|
| |
|
| |
|
| |
This is not supposed to be done this way, but frankly, since we
abstract away the backend, there's not much else we can do here.
Closes: #949074
|
| |
|
| |
Remove all code scheduled to be removed after 5.90, and fix files to
include the headers they previously got indirectly from hashes.h, which
used to include more headers.
|
| |
|
| |
This experiment did not turn out well, as some servers do not accept
credentials when none are expected and fail instead, so you cannot
mirror such a repository.
This reverts commit c2b9b0489538fed4770515bd8853a960b13a2618.
|
| |
While moving to a more stable clock in 79b61ae I typoed the microsecond
calculation part and copied it all over the place… Julian fixed the
first two instances in 089e6271 and Trent reported the apt-ftparchive
instances, leaving one instance in progress (invisible to the user though).
A bit ironic that in an attempt to stop "confusing (and amusing) users"
I managed to hide a typo doing just that for close to two years…
Sadly we can't really test this: while "apt-ftparchive generate /dev/null"
is a great interactive test, it is hard to teach our test framework that
the output is "reasonably below an hour" (usually 0s, but on busy test
systems it is perhaps longer…).
Thanks: Trent W. Buck for initial patch
Closes: #950776
References: 79b61ae7673eb6213493e2cb202f0d70c390932d,
089e627153781ae7c320a5a0724c6c70d684b689
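For reference, the calculation in question is the usual "split a steady_clock delta into seconds and leftover microseconds" one; a correct version looks roughly like this (illustrative, not the apt-ftparchive code):
  #include <chrono>
  #include <cstdio>

  static void PrintElapsed(std::chrono::steady_clock::time_point start)
  {
     auto const usec = std::chrono::duration_cast<std::chrono::microseconds>(
         std::chrono::steady_clock::now() - start).count();
     long long const sec = usec / 1000000;  // whole seconds
     long long const rest = usec % 1000000; // leftover microseconds
     std::printf("%llds %lldus\n", sec, rest);
  }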
|
|
|
|
| |
This allows us to define constexpr string view literals.
|
| |
Given that we have a maximum of 12 pools, and many more items to insert,
it does not make sense to have two branches in the hot path.
Move the search for an empty pool into the unlikely case
that no matching pool has been created yet - a condition
that is guaranteed to happen at most 12 times.
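A sketch of the restructuring (pool layout and names are invented for illustration):
  #include <array>

  struct Pool { unsigned long Flags = 0; bool Used = false; };

  static Pool *FindPool(std::array<Pool, 12> &pools, unsigned long flags)
  {
     // Hot path: a matching pool almost always exists already.
     for (auto &p : pools)
        if (p.Used && p.Flags == flags)
           return &p;
     // Cold path: claim an empty slot; this can happen at most 12 times.
     for (auto &p : pools)
        if (!p.Used)
        {
           p.Used = true;
           p.Flags = flags;
           return &p;
        }
     return nullptr;
  }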
|
|
| |
Commit 93f33052de84e9aeaf19c92291d043dad2665bbd restricted auth.conf
entries to only apply to https by default, but this was silent - there
was no information about why http sources with auth.conf entries
suddenly started failing. Add such information, and extend the test
case to cover it.
|
|
|
|
| |
Remove it everywhere, except where it is still needed.
|
| |
|
| |
This makes use of a function GetHashString() that returns the specific
hash string. We also need to implement another overload of Add() for
signed chars with sizes, so that existing users do not require a
reinterpret_cast everywhere.
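For illustration, the shape of such a convenience overload (a sketch, not the exact declaration used here):
  #include <cstddef>

  class Hashes
  {
     public:
     // The real worker (placeholder body for this sketch).
     bool Add(const unsigned char *data, size_t size) { (void)data; (void)size; return true; }

     // Convenience overload so callers with plain char buffers do not need a
     // reinterpret_cast at every call site.
     bool Add(const char *data, size_t size)
     {
        return Add(reinterpret_cast<const unsigned char *>(data), size);
     }
  };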
|
| |
Move APT_BUFFER_SIZE to macros.h and re-use it in hashes. This might
also speed things up; the motivation for using 64 KiB buffers in
fileutl.cc was precisely that, after all.
|
|
|
|
|
| |
Switch the code of the Hashes class to use libgcrypt, which allows
us to use hardware-accelerated implementations of SHA1 and friends.
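A minimal sketch of hashing through libgcrypt (illustrative use of the library, not apt's actual Hashes implementation):
  #include <cstdio>
  #include <gcrypt.h>

  static bool Sha256Hex(const void *data, size_t len, char out[65])
  {
     gcry_md_hd_t hd;
     if (gcry_md_open(&hd, GCRY_MD_SHA256, 0) != 0)
        return false;
     gcry_md_write(hd, data, len);
     const unsigned char *digest = gcry_md_read(hd, GCRY_MD_SHA256);
     for (unsigned int i = 0; i < gcry_md_get_algo_dlen(GCRY_MD_SHA256); ++i)
        std::snprintf(out + 2 * i, 3, "%02x", digest[i]);
     gcry_md_close(hd);
     return true;
  }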
|
|
|
|
| |
This ensures that we do not leak simple words like that.
|
|
|
|
|
| |
We don't use them: APT_CONST is APT_PURE now, and MAX/MIN/etc. are
available as proper templates in the C++ standard library.
|
| |
|
| |
Replace the macro with an anonymous struct that provides an
inline operator->() returning the _error pointer.
This change is ABI compatible, and the inline macro is not
exported. We should consider if we want to avoid the function
call and directly export the thread_local variable instead,
when we do break ABI.
Closes: #948338
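A sketch of the pattern (simplified; the real GlobalError type and its accessor differ in detail):
  // Each thread gets its own error object; "_error->..." keeps working
  // because the struct's operator->() hands out the thread-local instance.
  class GlobalError
  {
     public:
     bool Error(const char *msg) { (void)msg; return false; } // placeholder
  };

  static struct
  {
     inline GlobalError *operator->() const
     {
        thread_local GlobalError instance;
        return &instance;
     }
  } _error;

  // usage: _error->Error("something went wrong");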
|
| |
This avoids downgrade attacks where an attacker could inject
Location: http://private.example/
and then (having access to the raw data sent to private.example, for
example by opening a port there, or by sniffing network traffic) read the credentials
for the private repository.
Closes: #945911
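The essence of the protection is a check along these lines before credentials are attached to a redirect target (a sketch of the idea, not the actual redirect handling):
  #include <string>

  // Stored credentials may only follow a redirect if the target is still an
  // encrypted connection; a downgrade to plain http must not leak them.
  static bool MayForwardCredentials(const std::string &redirectScheme)
  {
     return redirectScheme == "https";
  }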
|
| |
Unused variable, std::algorithms instead of raw for-loops.
There should be no observable difference in behaviour.
Reported-By: cppcheck
Gbp-Dch: Ignore
|