| Commit message | Author | Age | Files | Lines |
| |
| |
It's interesting to the user to see the progress when it happens,
but arguably once it's done it is just visual clutter, so let's
not write newlines, and when we are done, instead of appending
"Done", let's just empty the line.
This requires some effort to keep apt-cdrom happy, which just writes
lines to stdout itself. Bad apt-cdrom. Maybe there is a better fix
for it, but this gets us going.
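As a rough illustration of the idea (the names and the message text here are made up, not apt's actual progress code): a one-line progress display can be redrawn with carriage returns and blanked at the end instead of printing "Done" on its own line.

```cpp
#include <cstddef>
#include <cstdio>
#include <string>

// Build the progress text; kept separate so it is easy to test.
static std::string ProgressLine(unsigned int const Percent)
{
   return "Reading package lists... " + std::to_string(Percent) + "%";
}

// Redraw in place: '\r' returns to column 0 without a newline.
static void ShowProgress(std::FILE *Out, unsigned int const Percent)
{
   std::fprintf(Out, "\r%s", ProgressLine(Percent).c_str());
   std::fflush(Out);
}

// On completion, overwrite the old text with spaces instead of
// appending "Done" - the line simply disappears.
static void ClearProgress(std::FILE *Out, std::size_t const Width)
{
   std::fprintf(Out, "\r%*s\r", static_cast<int>(Width), "");
   std::fflush(Out);
}
```

A tool like apt-cdrom that writes whole lines to stdout itself would have to clear the progress line first, which is the extra effort the message alludes to.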
|
| |
| |
While the code claims to handle it by just continuing the loop, the
looping condition will actually cause a break from the loop, failing the
interrupted write. The non-static FileFd::Write (and ::Read) deal with
this by setting acceptable values for the loop condition as well – but
for simplicity and consistency we can instead remove this extra loop
condition and perform the continue/break in the error handling more
explicitly.
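A generic sketch of the pattern described above (illustrative code, not FileFd's actual implementation): handle EINTR with an explicit continue and real errors with an explicit break, rather than encoding both in the loop condition.

```cpp
#include <cerrno>
#include <cstddef>
#include <unistd.h>

// Write the whole buffer, retrying interrupted writes explicitly.
static bool WriteAll(int const Fd, char const *Buf, std::size_t Size)
{
   while (Size != 0)
   {
      ssize_t const Written = write(Fd, Buf, Size);
      if (Written < 0)
      {
	 if (errno == EINTR)
	    continue; // interrupted before anything was written: retry
	 return false; // a real error: stop and report failure
      }
      Buf += Written;
      Size -= static_cast<std::size_t>(Written);
   }
   return true;
}
```

Keeping the loop condition to "bytes remain" and handling errno inside the body makes it obvious which errors retry and which abort.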
|
| |
| |
This was automated with sed and git-clang-format, and then I had to
fix up the top of policy.cc by hand as git-clang-format accidentally
indented it by two spaces.
|
| |\
| |
| |
| |
| | |
Prevent infinite loop in `ReadConfigFile`
See merge request apt-team/apt!314
|
| | |
| |
| |
| |
| | |
Break the loop on failure. Without this, the function
goes into an infinite loop if `FName` is a directory.
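The bug class generalizes beyond apt: a read loop that treats failure as "try again" never terminates on a stream that cannot make progress. A minimal sketch with hypothetical names, not the ReadConfigFile code itself:

```cpp
#include <cstdio>
#include <string>

// Read every line of F into Out. On a read failure we must break out:
// looping on failure would spin forever, e.g. if the path names a
// directory, where reads fail without ever reaching end-of-file.
static bool ReadAllLines(std::FILE *F, std::string &Out)
{
   char Buffer[256];
   while (std::fgets(Buffer, sizeof(Buffer), F) != nullptr)
      Out.append(Buffer);
   // fgets returned nullptr: EOF is success, a stream error is not.
   return std::feof(F) != 0;
}
```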
|
| |/
| |
Files with reserved extensions like .list, .sources, .conf,
and .pref should receive notices in their respective directories
even if they are directories.
|
| |
| |
This reverts commit 668451def296afeb0c358a7d80ff39dc546defab.
|
| |
| |
Initialize using gcrypt's GCRYCTL_NO_FIPS_MODE, available since
gcrypt version 1.10.0; otherwise apt aborts on FIPS-enabled systems.
|
| | |
| |
| |
In gcc-13, internal includes were reduced, exposing our laziness.
Reported-By: gcc-13
Gbp-Dch: Ignore
|
| |\
| |
| |
| |
| | |
apt-pkg/contrib/fileutl.h: Explicitly include sys/stat.h
See merge request apt-team/apt!255
|
| | |
| |
| |
| | |
This fixes compatibility with musl C library.
|
| |/
| |
This fixes the build on some architectures like mips:
progress.cc:125:31: error: non-constant-expression cannot be narrowed from type 'std::chrono::duration<long long>::rep' (aka 'long long') to '__time_t' (aka 'long') in initializer list [-Wc++11-narrowing]
struct timeval NowTime = { Now_sec.count(), Now_usec.count() };
Signed-off-by: Khem Raj <raj.khem@gmail.com>
|
| |
| |
Before this patch, the expression `Res - File.length()` that was
used as the length underflowed. It was very unlikely to cause any
problem given the saturating behavior of the std::string
constructor that's used.
Replacing `Res - File.length()` with `File.length() - Res` would
have worked, but omitting the last argument altogether invokes an
std::string constructor which does the right thing.
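A small sketch of the difference (the helper name is made up): constructing the tail of a string by omitting the length argument sidesteps the length arithmetic, and with it the underflow, entirely.

```cpp
#include <string>

// Everything from Pos to the end - no length to compute, so no way to
// get the subtraction backwards and underflow it.
static std::string TailFrom(std::string const &S,
			    std::string::size_type const Pos)
{
   return std::string(S, Pos); // same as S.substr(Pos)
}
```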
|
| |
| |
Some of our headers use APT_COMPILING_APT trickery to avoid exposing too
broadly details we don't want external clients to know and make use of.
The flip-side is that this can lead to different compilation units
seeing different definitions if they aren't all using the same config.
|
| |
| |
Our public interface hasn't used zlib for quite a while now, so let's drop
the last remnants as hopefully nobody depends on us bringing it in…
Unlike our own private lib, which we keep for transitive provision of unistd.h.
References: 680b916ce7203a40ebd0a3882b9a71ca77278a67
|
| |
| |
We abstract hashes a fair bit to be able to add new ones eventually,
which led us to building the field names on the fly. We can do better
though by keeping a central place for these names, too, which even
helps in reducing code as we don't need the MD5 → Files dance anymore.
|
| |
| |
The previous regime for the file was to sort it on insert, but that
changes the values in the generated enum, which is fine as long as we
only use it in libapt itself, but breaks other users.
The header was always intended to be private to apt itself, so we just
document this here now and lay the groundwork to have the file only
appended to in the future, so that it remains sufficiently ABI-stable
that we can use it outside the library in our apt tools.
We also remove some fields apt is unlikely to need or only uses in
certain cases outside of any (speed-)critical path, to have enough room
to add more fields soon, as currently we are limited to 128 fields at
most and it would be sad to use up that allowance entirely already.
|
| |
| |
Avoid misclassifying additional alphabetical characters from
certain locales as alpha and then sorting them by ASCII...
|
| | |
| |
| |
This avoids type errors with musl C library.
|
| |
| |
Some C libraries, e.g. musl, do not implement the new res_n* APIs,
so keep the old implementation as a fallback and check the __RES
version macro to determine the API level.
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Julian Andres Klode <julian.klode@canonical.com>
|
| |
| |
This caused python-apt to unset the Python signal handler when running
update or install commands, breaking KeyboardInterrupt amongst possibly
other things.
We do not set those signal handlers in these functions, and the calling
functions restore the signal handlers to the previous ones.
LP: #1898026
|
| |
| |
As user "DaOfficialRolex" on GitHub pointed out:
This is needed to allow APT on iOS to compile correctly. If it is not included, the two following errors happen while compiling APT:
~/apt/apt-pkg/contrib/configuration.cc:900:44: error: constexpr variable cannot have non-literal type 'const std::array<APT::StringView, 3>'
constexpr std::array<APT::StringView, 3> magicComments { "clear"_sv, "include"_sv, "x-apt-configure-index"_sv };
^
~/apt/apt-pkg/contrib/configuration.cc:900:44: error: implicit instantiation of undefined template 'std::__1::array<APT::StringView, 3>'
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/__tuple:219:64: note: template is declared here
template <class _Tp, size_t _Size> struct _LIBCPP_TEMPLATE_VIS array;
^
|
| |
| |
The vararg API is a nightmare as the symbols seem different on every
other arch, but more importantly SendMessage does a few checks on the
content of the message, and it is all output via C++ iostreams and not
mixed into FILE*, which is handy for overriding the streams.
|
| |
| |
The undefined behaviour sanitizer complains with:
runtime error: addition of unsigned offset to 0x… overflowed to 0x…
Compilers and runtime do the right thing in any case and it is a
codepath that can (and ideally should) be avoided for speed reasons
alone, but fixing it can't hurt (too much).
|
| |
| |
Our configuration files are not security relevant, but having a parser
which avoids crashing on them even if they are seriously messed up is
not a bad idea anyway. It is also a good opportunity to brush up the
code a bit, avoiding a few small string copies with our string_view.
|
| |
| |
strtoul(l) surprises us by parsing negative values, which should not
exist in the places where we use it to parse them, so we can just
downright refuse them rather than trying to work with them after they
have been promoted to huge positive values.
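To see the surprise: strtoul accepts a leading minus sign and negates the result in unsigned arithmetic. A hedged sketch of the refusal (the function name is illustrative, not apt's):

```cpp
#include <cctype>
#include <cstdlib>

// Parse a base-10 unsigned value, refusing negative input outright
// instead of letting strtoul wrap it to a huge positive number.
static bool StrToNum(char const *Str, unsigned long &Res)
{
   while (std::isspace(static_cast<unsigned char>(*Str)) != 0)
      ++Str;
   if (*Str == '-')
      return false; // strtoul would happily parse and negate this
   char *End = nullptr;
   Res = std::strtoul(Str, &End, 10);
   return End != Str; // at least one digit was consumed
}
```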
|
| |
| |
It isn't super likely that we will encounter such big words in the real
world, but we can return arbitrary lengths, so let's just do that as that
also means we don't have to work with a second buffer.
|
| |
| |
This has no attack surface though, as the loop is to end very soon anyhow
and the method is only used while reading CD-ROM mountpoints, which seems
like a very unlikely attack vector…
|
| |
| |
For \xff and friends, which have the highest bit set and hence are
negative values on signed-char systems, the wrong encoding is produced
as we run into undefined behaviour accessing negative array indexes.
We can avoid this problem simply by using an unsigned data type.
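The fix boils down to indexing with an unsigned byte. A standalone sketch, not the exact apt function:

```cpp
#include <string>

// Hex-encode one byte. Taking it as unsigned char keeps 0xFF at 255
// instead of -1, so the table lookups stay within bounds.
static std::string HexEncode(unsigned char const Byte)
{
   static char const Hex[] = "0123456789abcdef";
   std::string Out("%");
   Out += Hex[Byte >> 4];  // always 0..15
   Out += Hex[Byte & 0xf]; // always 0..15
   return Out;
}
```

With `char const Byte` instead, `Hex[Byte >> 4]` would read at index -8 for input `\xff` on a signed-char platform.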
|
| |
| |
If the Configuration code calling this was any indication, it is hard to
use – and even that monster still caused heap-buffer-overflow errors,
so instead of trying to fix it, let's just use methods which are far
easier to use. The question why this is done at all remains, but is left
for another day as an exercise for the reader.
|
| |
| |
We were printing an error and hence have non-zero exit code either way,
but API wise it makes sense to have this properly reported back to the
caller to propagate it down the chain e.g. while parsing #include stanzas.
|
| |
| |
The buffers we feed in and read out are usually a couple kilobytes big
so allowing lzma to use an unlimited amount of memory is easy & okay,
but not needed and confuses memory checkers as it will cause lzma to
malloc a huge chunk of memory (which it will never use).
So let's just use a "big enough" value instead.
In exchange we simplify the decoder calling as we were already using the
auto-variant for xz, so we can just avoid the if-else and let liblzma
decide what it decodes.
|
| |
| |
The integer overflow was detected by DonKult who added a check like this:
(std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))
Which deals with the code as is, but also still is a fairly big limit,
and could become fragile if we change the code. Let's limit our file
sizes to 128 GiB, which should be sufficient for everyone.
Original comment by DonKult:
The code assumes that it can add sizeof(Block)-1 to the size of the item
later on, but if we are close to a 64bit overflow this is not possible.
Fixing this seems too complex compared to just ensuring there is enough
room left given that we will have a lot more problems the moment we will
be acting on files that large as if the item is that large, the (valid)
tar including it probably doesn't fit in 64bit either.
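The shape of the guard, sketched (the 128 GiB figure comes from the commit; the function name is invented): bound the declared size up front so later header-block arithmetic cannot overflow.

```cpp
// Reject absurd member sizes before any later additions like
// `Size + sizeof(Block) - 1` can overflow a 64-bit counter.
static bool SaneMemberSize(unsigned long long const Size)
{
   unsigned long long const Limit = 128ULL * 1024 * 1024 * 1024; // 128 GiB
   return Size <= Limit;
}
```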
|
| |
| |
Tarballs have long names and long link targets structured by a
special tar header with a GNU extension followed by the actual
content (padded to 512 bytes). Essentially, think of a name as
a special kind of file.
The limit of a file size in a header is 12 bytes, aka 10**12
or 1 TB. While this works OK-ish for file content that we stream
to extractors, we need to copy file names into memory, and this
opens us up to an OOM DoS attack.
Limit the file name size to 1 MiB, as libarchive does, to make
things safer.
|
| |
GHSL-2020-169: The first hunk adds a check that we have more data
left to read in the file than the size of the member, ensuring that
(a) the number is not negative, which caused the crash here, and (b)
that we similarly avoid other issues with trying to read too
much data.
GHSL-2020-168: Long file names are encoded by a special marker in
the filename and then the real filename is part of what is normally
the data. We did not check that the length of the file name is within
the length of the member, which means that we got an overflow later
when subtracting the length from the member size to get the remaining
member size.
The file createdeb-lp1899193.cc was provided by GitHub Security Lab
and reformatted using apt coding style for inclusion in the test
case; both of these issues have an automated test case in
test/integration/test-ubuntu-bug-1899193-security-issues.
LP: #1899193
|
| |
| |
The compiler does not know that the size is small and thinks we might
be doing a stack buffer overflow of the vla:
Add APT_ASSUME macro and silence -Wstringop-overflow in HexDigest()
The compiler does not know that the size of a hash is at most 512 bit,
so tell it that it is.
../apt-pkg/contrib/hashes.cc: In function ‘std::string HexDigest(gcry_md_hd_t, int)’:
../apt-pkg/contrib/hashes.cc:415:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
415 | Result[(Size)*2] = 0;
| ~~~~~~~~~~~~~~~~~^~~
../apt-pkg/contrib/hashes.cc:414:9: note: at offset [-9223372036854775808, 9223372036854775807] to an object with size at most 4294967295 declared here
414 | char Result[((Size)*2) + 1];
| ^~~~~~
Fix this by adding a simple assertion. This generates an extra two
instructions in the normal code path, so it's not exactly super costly.
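The pattern in a standalone form (APT's real macro is APT_ASSUME; this MY_ASSUME variant is a sketch): promise the optimizer that a bound holds, so it stops reasoning about impossible offsets and the bogus warning disappears.

```cpp
#include <cassert>

// Tell the compiler a condition holds: in optimized builds this lets
// it drop impossible paths (and warnings about them).
#if defined(__GNUC__) || defined(__clang__)
#define MY_ASSUME(Cond) \
   do { if (!(Cond)) __builtin_unreachable(); } while (0)
#else
#define MY_ASSUME(Cond) assert(Cond)
#endif

// A digest is at most 512 bits = 64 bytes; with the assumption in
// place, `Size * 2` can no longer look unbounded to the compiler.
static unsigned int CheckedDigestSize(unsigned int const Size)
{
   MY_ASSUME(Size <= 512 / 8);
   return Size * 2 + 1; // room for the hex digits plus the terminator
}
```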
|
| |\
| |
| |
| |
| | |
Remove master/slave terminology
See merge request apt-team/apt!124
|
| | | |
| |\ \
| |/
|/|
| |
| | |
Fully deprecate apt-key, schedule removal for Q2/2022
See merge request apt-team/apt!119
|
| | |
| |
| |
| |
| |
| |
| |
| |
| | |
People are still using apt-key add and friends, despite that not
being guaranteed to work. Let's tell them to stop doing so.
We might still want a list command at a future point, but this
needs deciding, and a blanket ban atm seems like a sensible step
until we figure that out.
|
| | |
| |
| |
| |
| | |
There is no need to iterate over all results if we will be doing nothing
anyhow, as it isn't that common to have that debug option enabled.
|
| | |
| |
| |
| |
| |
| |
| |
| | |
The variable this is read into is named Junk, and it exists for use cases
like apt-ftparchive which just look at the item's metadata, so instead
of performing this chunked read for data nobody will process we just tell
our FileFd to skip ahead (internally it might still loop over the data
depending on which compressor is involved).
|
| | |
| |
| |
| |
| |
| | |
With FileFd::Write we already have a helper for this situation we can
just make use of here instead of hoping for the best or rolling our own
solution here.
|
| | |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Our testcases had their own implementation of GetTempFile with the
feature of a temporary file with a chosen suffix. Merging this into
GetTempFile lets us drop this duplicate and hence test our code more
rather than testing our helpers for the test implementation.
And then hashsums_test had another implementation… and extracttar wasn't
even trying to use a real tempfile… one GetTempFile to rule them all!
That also ensures that these tempfiles are created in a temporary
directory rather than the current directory, which is a nice touch, and
tries a little harder to clean up those tempfiles.
|
| | |
| |
| |
| |
| | |
Not all filesystems implement this feature in all versions of Linux,
so this open call can fail & we have to fallback to our old method.
|
| |/
| |
(CVE-2020-3810)
When normalizing ar member names by removing trailing whitespace
and slashes, an out-of-bounds read can be caused if the ar member
name consists only of such characters, because the code did not
stop at 0, but would wrap around and continue reading from the
stack, without any limit.
Add a check to abort if we reached the first character in the
name, effectively rejecting the use of names consisting just
of slashes and spaces.
Furthermore, certain error cases in arfile.cc and extracttar.cc have
included member names in the output that were not checked at all and
might hence not be nul-terminated, leading to further out-of-bounds reads.
Fixes Debian/apt#111
LP: #1878177
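An illustrative rewrite of the check with std::string (the real code works in place on a fixed char buffer inside the ar header): stop stripping at the first character, and reject a name made only of padding characters.

```cpp
#include <string>

// Strip trailing spaces and slashes from an ar member name; a name
// consisting of nothing else is rejected rather than read past its start.
static bool NormalizeArName(std::string &Name)
{
   std::string::size_type End = Name.size();
   while (End > 0 && (Name[End - 1] == ' ' || Name[End - 1] == '/'))
      --End;
   if (End == 0)
      return false; // only slashes and spaces: refuse the member
   Name.erase(End);
   return true;
}
```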
|
| |
| |
This matches the definitions used by dpkg.
Closes: #953527
|
| |
| |
Extract the code, and reformat it with clang-format so we can
modify it.
|