<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/test/interactive-helper, branch 2.3.9</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=2.3.9</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=2.3.9'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2021-08-29T12:23:26Z</updated>
<entry>
<title>Increase recursion limits from 100 to 3000</title>
<updated>2021-08-29T12:23:26Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2021-08-29T11:50:31Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=5f6bbfa53c32ec30aff6a2bc8c412616049eab18'/>
<id>urn:sha1:5f6bbfa53c32ec30aff6a2bc8c412616049eab18</id>
<content type='text'>
If you install dpkg on an empty status file with all recommends and
suggests enabled, apt wants to install 4000+ packages, with the deepest
chain seemingly 236 steps long. And dpkg isn't even the worst (~259).

That is a problem as libapt has a hardcoded recursion limit for
MarkInstall and friends … set to 100. We are saved by the fact that
chains without suggests are much shorter (dpkg has 5, max seems ~43),
but I ignored Conflicts in these chains, which typically trigger
upgrades, so if two of the worst are chained together we still get
dangerously close to the limit.

So, let's just increase the limit into oblivion, as it is really just a
safety measure we should not be running into to begin with. MarkPackage
ran for years without it, after all. 3000 is picked as a number as nice
as any other, and because it is roughly half of the stack crashes I saw
previously in this branch.
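
A minimal sketch in Python (invented names, not libapt's C++ code) of
the kind of depth guard MarkInstall uses, with the raised limit:

```python
# Hypothetical depth guard over a plain dependency dict.
LIMIT = 3000   # raised from 100 by this commit

def mark_install(pkg, depends_on, depth=0):
    if depth > LIMIT:
        return False   # safety valve; we should never actually hit it
    for dep in depends_on.get(pkg, []):
        if not mark_install(dep, depends_on, depth + 1):
            return False
    return True

# A toy 236-step chain, like the deepest one dpkg pulls in, stays well
# clear of the new limit.
chain = {f"p{i}": [f"p{i+1}"] for i in range(236)}
print(mark_install("p0", chain))   # True
```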
</content>
</entry>
<entry>
<title>Adjust loops to use size_t instead of int</title>
<updated>2021-02-09T22:49:31Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2021-02-09T22:49:31Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ea114447ee44908c75f4ddc19a0521260832668d'/>
<id>urn:sha1:ea114447ee44908c75f4ddc19a0521260832668d</id>
<content type='text'>
Gbp-Dch: ignore
</content>
</entry>
<entry>
<title>Fix test suite regression from StrToNum fixes</title>
<updated>2021-02-09T22:33:47Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2021-02-09T22:29:05Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=6284c8221da94ab6b4262795e6a7990fc3655848'/>
<id>urn:sha1:6284c8221da94ab6b4262795e6a7990fc3655848</id>
<content type='text'>
We ignored the failure from strtoul() indicating that those test cases
had values out of range, hence they passed before, but now fail on
32-bit platforms because we use strtoull() and do the limit check
ourselves.
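
A hypothetical Python sketch (not apt's C++ StrToNum) of parsing with
an explicit range check instead of trusting the conversion to fail:

```python
# Emulates strtoull plus a manual limit check, as the fix does, so that
# out-of-range header fields fail on every platform, 32-bit included.
def str_to_num(text, maximum):
    """Parse an octal ar/tar header field, rejecting oversized values."""
    value = int(text.strip() or "0", 8)
    if value > maximum:
        raise ValueError(f"value {value} exceeds limit {maximum}")
    return value

print(str_to_num("0000644", 2**32 - 1))   # 420, a plausible mode field
```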

Move the tarball generator for test-github-111-invalid-armember to the
createdeb helper, and fix the helper to set all the numeric fields like
uid to 0 instead of the maximum value the fields support (all 7s).

Regression-Of: e0743a85c5f5f2f83d91c305450e8ba192194cd8
</content>
</entry>
<entry>
<title>Don't re-encode encoded URIs in pkgAcqFile</title>
<updated>2020-12-18T19:45:35Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-07-10T18:19:31Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=a5859bafdaa6bcf12934d0fb1715a5940965e13a'/>
<id>urn:sha1:a5859bafdaa6bcf12934d0fb1715a5940965e13a</id>
<content type='text'>
This commit potentially breaks code feeding apt an encoded URI using a
method which does not get URIs sent encoded. The webserverconfig
requests in our tests are an example of this – but they only worked
before if the server was expecting a double encoding, as that was what
was happening to an encoded URI: so they were unlikely to work as
expected in practice.

Now with the new methods we can drop this double encoding and rely on
the URI being passed properly (and without modification) between the
layers so that passing in encoded URIs should now work correctly.
</content>
</entry>
<entry>
<title>Proper URI encoding for config requests to our test webserver</title>
<updated>2020-12-18T18:02:05Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-07-08T15:51:40Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=97be873d782c5e9aaa8b4f4f4e6e18805d0fa51c'/>
<id>urn:sha1:97be873d782c5e9aaa8b4f4f4e6e18805d0fa51c</id>
<content type='text'>
Our http method encodes the URI again, which results in the double
encoding we have to unwrap in the webserver (we did already, but we now
skip the filename handling, which did the first decode).
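
A hypothetical illustration with Python's urllib (apt's http method is
C++, but the effect is the same):

```python
from urllib.parse import quote, unquote

uri = "/redirectme/a%20b"          # caller already encoded the space
resent = quote(uri)                # the http method encodes it again
print(resent)                      # '/redirectme/a%2520b'
print(unquote(unquote(resent)))    # two decodes recover '/redirectme/a b'
```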
</content>
</entry>
<entry>
<title>CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB</title>
<updated>2020-12-09T16:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-12-05T18:55:30Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=df81895bce764dd02fbb4d67b92d28a730b5281f'/>
<id>urn:sha1:df81895bce764dd02fbb4d67b92d28a730b5281f</id>
<content type='text'>
The integer overflow was detected by DonKult who added a check like this:

(std::numeric_limits&lt;decltype(Itm.Size)&gt;::max() - (2 * sizeof(Block)))

Which deals with the code as is, but is still a fairly big limit,
and could become fragile if we change the code. Let's limit our file
sizes to 128 GiB, which should be sufficient for everyone.

Original comment by DonKult:

The code assumes that it can add sizeof(Block)-1 to the size of the item
later on, but if we are close to a 64-bit overflow this is not possible.
Fixing this seems too complex compared to just ensuring there is enough
room left, given that we will have a lot more problems the moment we are
acting on files that large: if the item is that large, the (valid) tar
including it probably doesn't fit in 64 bits either.
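
A hypothetical Python sketch (invented names) of the bound being
enforced; the C++ code pads members to 512-byte blocks, which is where
the sizeof(Block)-1 addition comes from:

```python
BLOCK = 512                  # tar block size
LIMIT = 128 * 1024**3        # the 128 GiB cap this commit introduces

def item_size_ok(size):
    # With the cap in place, size + BLOCK - 1 can never wrap a 64-bit
    # counter, whatever the header claims.
    return size >= 0 and LIMIT >= size

print(item_size_ok(4096))            # True
print(item_size_ok(129 * 1024**3))   # False
```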
</content>
</entry>
<entry>
<title>CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB</title>
<updated>2020-12-09T16:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-12-05T19:17:56Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=0444f9dd52c2bc7bec315f6f1ecad76a30713fa0'/>
<id>urn:sha1:0444f9dd52c2bc7bec315f6f1ecad76a30713fa0</id>
<content type='text'>
Like the code in arfile.cc, MemControlExtract also has buffer
overflows in the code allocating memory for parsing control files.

Specify an upper limit of 64 MiB for control files to both protect
against the Size overflowing (we allocate Size + 2 bytes), and
protect a bit against control files consisting only of zeroes.
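
A hypothetical Python sketch of the check (the real code is C++ in
MemControlExtract):

```python
LIMIT = 64 * 1024**2   # 64 MiB cap for control members

def alloc_control(size):
    if size > LIMIT:
        raise ValueError("control member larger than 64 MiB")
    # The C++ side allocates Size + 2 bytes; without the cap a Size
    # near the type maximum would wrap that sum.
    return bytearray(size + 2)

print(len(alloc_control(1024)))   # 1026
```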
</content>
</entry>
<entry>
<title>tarfile: OOM hardening: Limit size of long names/links to 1 MiB</title>
<updated>2020-12-09T16:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-12-04T11:37:19Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=822db13d68658a1a20df2d19c688c18faa331616'/>
<id>urn:sha1:822db13d68658a1a20df2d19c688c18faa331616</id>
<content type='text'>
Tarballs have long names and long link targets structured by a
special tar header with a GNU extension followed by the actual
content (padded to 512 bytes). Essentially, think of a name as
a special kind of file.

The limit of a file size in a header is 12 bytes, aka 10**12
or 1 TB. While this works OK-ish for file content that we stream
to extractors, we need to copy file names into memory, and this
opens us up to an OOM DoS attack.

Limit the file name size to 1 MiB, as libarchive does, to make
things safer.
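
A hypothetical Python sketch (invented names) of reading a GNU long
name with the new cap applied before anything is copied into memory:

```python
NAME_LIMIT = 1024**2   # 1 MiB, matching libarchive's limit

def read_long_name(declared_size, read_block):
    if declared_size > NAME_LIMIT:
        raise ValueError("GNU long name exceeds 1 MiB")
    # The long-name payload is padded to 512-byte blocks like any
    # other file content in the tarball.
    padded = (declared_size + 511) // 512 * 512
    return read_block(padded)[:declared_size].rstrip(b"\0")

name = read_long_name(20, lambda n: b"a-rather-long-name".ljust(n, b"\0"))
print(name)   # b'a-rather-long-name'
```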
</content>
</entry>
<entry>
<title>CVE-2020-27350: arfile: Integer overflow in parsing</title>
<updated>2020-12-09T16:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-10-19T11:22:33Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=d10c68d628fe5342d400a999a6d10c5c7c0cef41'/>
<id>urn:sha1:d10c68d628fe5342d400a999a6d10c5c7c0cef41</id>
<content type='text'>
GHSL-2020-169: This first hunk adds a check that we have more data
left to read in the file than the size of the member, ensuring that
(a) the number is not negative, which caused the crash here, and (b)
we similarly avoid other issues with trying to read too much data.

GHSL-2020-168: Long file names are encoded by a special marker in
the filename and then the real filename is part of what is normally
the data. We did not check that the length of the file name is within
the length of the member, which means that we got an overflow later
when subtracting the length from the member size to get the remaining
member size.
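
The two checks can be sketched in Python (hypothetical names, not the
arfile.cc code itself):

```python
def check_member(remaining_in_file, member_size, name_len):
    # GHSL-2020-169: the member must fit in what is left of the
    # archive, which also rules out bogus huge or negative sizes.
    if member_size > remaining_in_file:
        raise ValueError("member size exceeds remaining archive data")
    # GHSL-2020-168: a long name stored in the data area must fit
    # inside the member, or member_size - name_len underflows.
    if name_len > member_size:
        raise ValueError("long file name exceeds member size")
    return member_size - name_len   # payload left after the name

print(check_member(4096, 100, 20))   # 80
```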

The file createdeb-lp1899193.cc was provided by GitHub Security Lab
and reformatted using apt coding style for inclusion in the test
case; both of these issues have an automated test case in
test/integration/test-ubuntu-bug-1899193-security-issues.

LP: #1899193
</content>
</entry>
<entry>
<title>aptwebserver: Rename slaves to workers</title>
<updated>2020-08-04T10:12:10Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-07-14T14:06:44Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=264137c679fb0d3c1f476dcb4ae207abc601b0b2'/>
<id>urn:sha1:264137c679fb0d3c1f476dcb4ae207abc601b0b2</id>
<content type='text'>
Apologies.
</content>
</entry>
</feed>
