<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/apt-pkg/contrib, branch 2.1.18</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=2.1.18</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=2.1.18'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2020-12-09T16:30:43Z</updated>
<entry>
<title>CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB</title>
<updated>2020-12-09T16:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-12-05T18:55:30Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=df81895bce764dd02fbb4d67b92d28a730b5281f'/>
<id>urn:sha1:df81895bce764dd02fbb4d67b92d28a730b5281f</id>
<content type='text'>
The integer overflow was detected by DonKult who added a check like this:

(std::numeric_limits&lt;decltype(Itm.Size)&gt;::max() - (2 * sizeof(Block)))

This check deals with the code as it is, but it still allows a fairly
big limit and could become fragile if we change the code. Let's limit
our file sizes to 128 GiB, which should be sufficient for everyone.

Original comment by DonKult:

The code assumes that it can add sizeof(Block)-1 to the size of the item
later on, but if we are close to a 64bit overflow this is not possible.
Fixing this seems too complex compared to just ensuring there is enough
room left: by the time we act on files that large we will have far
bigger problems anyway, as a (valid) tar containing such an item
probably doesn't fit in 64bit either.
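A minimal sketch of the cap described above, in hypothetical Python
rather than apt's actual C++ (names are illustrative): reject the
declared size before any later arithmetic can overflow.

```python
# 128 GiB cap on tar items, checked before any size arithmetic
ITEM_LIMIT = 128 * 1024 ** 3

def check_item_size(size):
    """Raise if a tar item's declared size exceeds the 128 GiB cap."""
    if size > ITEM_LIMIT:
        raise ValueError("tar item larger than 128 GiB rejected")
    return size
```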
</content>
</entry>
<entry>
<title>tarfile: OOM hardening: Limit size of long names/links to 1 MiB</title>
<updated>2020-12-09T16:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-12-04T11:37:19Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=822db13d68658a1a20df2d19c688c18faa331616'/>
<id>urn:sha1:822db13d68658a1a20df2d19c688c18faa331616</id>
<content type='text'>
Tarballs encode long names and long link targets as a GNU extension:
a special tar header followed by the actual content (padded to
512 bytes). Essentially, think of a name as a special kind of file.

The file size field in a header is 12 bytes wide, allowing values up
to 10**12, i.e. 1 TB. While this works OK-ish for file content that we
stream to extractors, we need to copy file names into memory, and this
opens us up to an OOM DoS attack.

Limit the file name size to 1 MiB, as libarchive does, to make
things safer.
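As a hypothetical sketch (Python with illustrative names, not apt's
actual code), the hardening amounts to one bounds check before the
name is copied into memory:

```python
NAME_LIMIT = 1024 * 1024  # 1 MiB, the same limit libarchive applies

def check_long_name_size(size):
    """Raise before allocating memory for an oversized long name/link."""
    if size > NAME_LIMIT:
        raise ValueError("long name/link entry exceeds 1 MiB")
    return size
```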
</content>
</entry>
<entry>
<title>CVE-2020-27350: arfile: Integer overflow in parsing</title>
<updated>2020-12-09T16:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-10-19T11:22:33Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=d10c68d628fe5342d400a999a6d10c5c7c0cef41'/>
<id>urn:sha1:d10c68d628fe5342d400a999a6d10c5c7c0cef41</id>
<content type='text'>
GHSL-2020-169: The first hunk adds a check that there is more data
left to read in the file than the size of the member, ensuring that
(a) the number is not negative, which caused the crash here, and (b)
we similarly avoid other issues with trying to read too much data.

GHSL-2020-168: Long file names are encoded by a special marker in
the filename field, with the real filename stored in what is normally
the data. We did not check that the length of the file name is within
the length of the member, which meant we got an overflow later when
subtracting the name length from the member size to get the remaining
member size.
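Both checks can be sketched in a few lines of hypothetical Python
(illustrative names, not the actual C++ hunks):

```python
def check_member(member_size, bytes_left_in_file, name_len):
    """Validate an ar member header before reading from it."""
    # GHSL-2020-169: the member must fit in the rest of the file, which
    # also rules out negative or overflowing size values
    if member_size > bytes_left_in_file:
        raise ValueError("member size exceeds remaining file size")
    # GHSL-2020-168: an embedded long file name must fit in the member,
    # or subtracting it from the member size underflows
    if name_len > member_size:
        raise ValueError("file name longer than member")
    return member_size - name_len  # data remaining after the name
```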

The file createdeb-lp1899193.cc was provided by GitHub Security Lab
and reformatted to apt coding style for inclusion in the test case.
Both issues have an automated test case in
test/integration/test-ubuntu-bug-1899193-security-issues.

LP: #1899193
</content>
</entry>
<entry>
<title>HexDigest: Silence -Wstringop-overflow</title>
<updated>2020-12-04T22:16:04Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-12-04T22:16:04Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=d63772845a28a08ea9c812ad8ac281cf9e0ae12a'/>
<id>urn:sha1:d63772845a28a08ea9c812ad8ac281cf9e0ae12a</id>
<content type='text'>
The compiler does not know that the size is small and thinks we might
be overflowing the stack-allocated VLA:

    Add APT_ASSUME macro and silence -Wstringop-overflow in HexDigest()

    The compiler does not know that the size of a hash is at most 512 bit,
    so tell it that it is.

    ../apt-pkg/contrib/hashes.cc: In function ‘std::string HexDigest(gcry_md_hd_t, int)’:
    ../apt-pkg/contrib/hashes.cc:415:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
      415 |    Result[(Size)*2] = 0;
          |    ~~~~~~~~~~~~~~~~~^~~
    ../apt-pkg/contrib/hashes.cc:414:9: note: at offset [-9223372036854775808, 9223372036854775807] to an object with size at most 4294967295 declared here
      414 |    char Result[((Size)*2) + 1];
          |         ^~~~~~

Fix this by adding a simple assertion. This generates an extra two
instructions in the normal code path, so it's not exactly super costly.
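A Python analogue of that assertion (the real fix is the C++
APT_ASSUME macro; this merely illustrates the bound being asserted
before the buffer is written):

```python
def hex_digest(raw):
    """Render a raw digest as lowercase hex, asserting the size bound."""
    # a hash is at most 512 bit, i.e. 64 bytes
    assert 64 >= len(raw), "digest larger than 512 bit"
    return "".join("%02x" % b for b in raw)
```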
</content>
</entry>
<entry>
<title>Merge branch 'pu/less-slaves' into 'master'</title>
<updated>2020-08-04T10:12:30Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2020-08-04T10:12:30Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=11530bab64efd4b4fc46de7833533cea9c69f521'/>
<id>urn:sha1:11530bab64efd4b4fc46de7833533cea9c69f521</id>
<content type='text'>
Remove master/slave terminology

See merge request apt-team/apt!124</content>
</entry>
<entry>
<title>Replace whitelist/blacklist with allowlist/denylist</title>
<updated>2020-08-04T10:12:11Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-07-14T14:34:20Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=7d8bb855487d6821b0cd6bf5d2270ed8fda3d1a3'/>
<id>urn:sha1:7d8bb855487d6821b0cd6bf5d2270ed8fda3d1a3</id>
<content type='text'>
</content>
</entry>
<entry>
<title>Merge branch 'pu/apt-key-deprecated' into 'master'</title>
<updated>2020-08-04T10:07:10Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2020-08-04T10:07:10Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1afe7c8b874abb61cde591e0241b967ef1b99991'/>
<id>urn:sha1:1afe7c8b874abb61cde591e0241b967ef1b99991</id>
<content type='text'>
Fully deprecate apt-key, schedule removal for Q2/2022

See merge request apt-team/apt!119</content>
</entry>
<entry>
<title>Reorder config check before result looping for SRV parsing debug</title>
<updated>2020-07-02T16:57:11Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-06-30T08:11:09Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1bfc0907c987758529bcdc4ebfb34364702a2d8b'/>
<id>urn:sha1:1bfc0907c987758529bcdc4ebfb34364702a2d8b</id>
<content type='text'>
There is no need to iterate over all results if we will do nothing
with them anyhow, and it isn't that common to have this debug option
enabled.
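The reordering follows the usual pattern of testing a rarely-true flag
once, before the loop; a hypothetical Python sketch (illustrative
names, not apt's SRV code):

```python
def debug_srv_lines(results, debug_enabled):
    """Format SRV debug lines only when debugging is on, checked once."""
    if not debug_enabled:
        return []  # skip the loop entirely in the common case
    return ["SRV result: %s" % r for r in results]
```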
</content>
</entry>
<entry>
<title>Skip reading data from tar members if nobody will look at it</title>
<updated>2020-05-18T13:55:36Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-05-15T11:29:36Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=19790db8900bc9baac29cf58600152997a8ecef8'/>
<id>urn:sha1:19790db8900bc9baac29cf58600152997a8ecef8</id>
<content type='text'>
The variable this data is read into is named Junk, hinting that it
exists for use cases like apt-ftparchive which only look at an item's
metadata. So instead of performing a chunked read of data nobody will
process, we just tell our FileFd to skip ahead (internally it might
still loop over the data, depending on which compressor is involved).
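The real code tells its FileFd to skip ahead; this Python sketch shows
the same seek-instead-of-read idea on a plain file object:

```python
import io

def skip_member_data(stream, size):
    """Seek past member data nobody will process instead of reading it."""
    stream.seek(size, io.SEEK_CUR)
```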
</content>
</entry>
<entry>
<title>Properly handle interrupted write() call in ExtractTar</title>
<updated>2020-05-18T13:55:36Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-05-13T21:01:38Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=5534bb3ad346ef4435e6fd0fe326771a4bde16a1'/>
<id>urn:sha1:5534bb3ad346ef4435e6fd0fe326771a4bde16a1</id>
<content type='text'>
With FileFd::Write we already have a helper for this situation, which
we can make use of here instead of hoping for the best or rolling our
own solution.
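The pattern FileFd::Write implements is the classic retry loop around
short and interrupted writes; a hypothetical Python rendering (modern
Python already retries EINTR per PEP 475, so this is purely
illustrative):

```python
import os

def write_all(fd, data):
    """Write all of data, retrying short or interrupted writes."""
    view = memoryview(data)
    while len(view) > 0:
        try:
            written = os.write(fd, view)
        except InterruptedError:
            continue  # EINTR: nothing was written, try again
        view = view[written:]  # drop what the kernel accepted
```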
</content>
</entry>
</feed>
