| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
| |
This ensures that it compiles when the clang compiler is invoked with
-DFORTIFY_SOURCES=2.
Signed-off-by: Khem Raj <raj.khem@gmail.com>
|
|
|
|
|
|
|
| |
Some of our headers use APT_COMPILING_APT trickery to avoid exposing too
broadly details we don't want external clients to know about and make use of.
The flip side is that this can lead to different compilation units
seeing different definitions if they aren't all using the same config.
|
|
|
|
|
|
| |
Building the library just so we can build the helpers against it is not
only wasteful; since we are supposed to test the system anyway, we can use
that as an additional simple smoke test before the real testing starts.
|
|
|
|
|
|
|
| |
apt/test/interactive-helper/aptwebserver.cc: In function ‘std::string HTMLEncode(std::string)’:
error: variable ‘constexpr const std::array<std::array<const char*, 2>, 6> htmlencode’ has initializer but incomplete type
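The error stems from using std::array without pulling in its header; a
minimal sketch, assuming a lookup table shaped like the one the diagnostic
names (the entries here are illustrative, not the actual aptwebserver data):
#include <array>   // the missing include: without it std::array is an incomplete type

// hypothetical reconstruction of the table the diagnostic refers to
static constexpr std::array<std::array<const char *, 2>, 6> htmlencode{{
   {"&", "&amp;"}, {"<", "&lt;"}, {">", "&gt;"},
   {"\"", "&quot;"}, {"'", "&#x27;"}, {"/", "&#x2F;"},
}};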
Reported-By: Helmut Grohne on IRC
|
|
|
|
|
|
|
|
|
| |
RFC 7233 §3.2 specifies the If-Range comparison to be an exact match,
not a less-or-equal, which makes no sense in this context anyhow.
Our server exists only so we can write our tests against it, so this isn't
much of a practical issue. I did confirm with a crashing server that no test
(silently) depends on this or exhibits a different behaviour that isn't
explicitly checked for.
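A minimal sketch of an exact-match If-Range check; the function and
parameter names are illustrative, not the actual aptwebserver code:
#include <string>

// RFC 7233 §3.2: the If-Range validator has to match the current
// Last-Modified value exactly, not merely be "not newer"
static bool IfRangeMatches(std::string const &ifrange, std::string const &lastmodified)
{
   return ifrange == lastmodified;
}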
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If you install dpkg on an empty status file with all recommends and
suggests, apt wants to install 4000+ packages, with the deepest chain
seemingly being 236 steps long. And dpkg isn't even the worst (~259).
That is a problem as libapt has a hardcoded recursion limit for
MarkInstall and friends … set to 100. We are saved by the fact that
chains without suggests are much shorter (dpkg has 5, max seems ~43),
but I ignored Conflicts in these chains, which typically trigger
upgrades, so if two of the worst are chained together we still get
dangerously close to the limit.
So, let's just increase the limit into oblivion, as it is really just a
safety measure we should not be running into to begin with; MarkPackage
ran for years without it after all. 3000 is picked as a number as good as
any other and because it is roughly half of the stack crashes I saw
previously in this branch.
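A rough sketch of the kind of safety net this is, with hypothetical names –
the point is only that the guard should never fire in practice:
// purely illustrative depth guard, not the real MarkInstall signature
constexpr unsigned long MaxMarkDepth = 3000;

bool MarkDependenciesOf(/* pkgCache::PkgIterator const &Pkg, */ unsigned long const Depth)
{
   if (Depth > MaxMarkDepth)
      return false;            // bail out instead of smashing the stack
   // … walk dependencies and recurse with Depth + 1 …
   return true;
}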
|
|
|
|
| |
Gbp-Dch: ignore
|
|
|
|
|
|
|
|
|
|
|
|
| |
We ignored the failure from strtoul() indicating that those test cases had
values out of range, hence they passed before, but they now failed on 32-bit
platforms because we use strtoull() and do the limit check ourselves.
Move the tarball generator for test-github-111-invalid-armember to the
createdeb helper, and fix the helper to set all the numbers like uid and
friends to 0 instead of the maximum value the fields support (all 7s).
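A small sketch of the parsing pattern alluded to above – strtoull() plus an
explicit range check – with illustrative names rather than apt's actual helpers:
#include <cerrno>
#include <cstdlib>

// parse a numeric header field and reject out-of-range values explicitly
static bool ParseNumberField(const char *Str, unsigned long long &Res,
                             unsigned long long const Max)
{
   errno = 0;
   char *End = nullptr;
   unsigned long long const Num = strtoull(Str, &End, 10);
   if (End == Str || errno == ERANGE || Num > Max)
      return false;
   Res = Num;
   return true;
}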
Regression-Of: e0743a85c5f5f2f83d91c305450e8ba192194cd8
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit potentially breaks code feeding apt an encoded URI using a
method which does not get URIs sent encoded. The webserverconfig
requests in our tests are an example of this – but they only worked
before if the server was expecting a double encoding, as that was what
was happening to an encoded URI: so unlikely to work as expected in
practice.
Now with the new methods we can drop this double encoding and rely on
the URI being passed properly (and without modification) between the
layers so that passing in encoded URIs should now work correctly.
|
|
|
|
|
|
| |
Our http method encodes the URI again, which results in the double
encoding we have to unwrap in the webserver (we did already, but we now
skip the filename handling which did the first decode).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The integer overflow was detected by DonKult, who added a check like this:
(std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))
This deals with the code as is, but it is still a fairly big limit
and could become fragile if we change the code. Let's limit our file
sizes to 128 GiB, which should be sufficient for everyone.
Original comment by DonKult:
The code assumes that it can add sizeof(Block)-1 to the size of the item
later on, but if we are close to a 64bit overflow this is not possible.
Fixing this seems too complex compared to just ensuring there is enough
room left given that we will have a lot more problems the moment we will
be acting on files that large as if the item is that large, the (valid)
tar including it probably doesn't fit in 64bit either.
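A minimal sketch of the resulting guard, assuming the 128 GiB constant from
above and an illustrative helper name:
// refuse overly large items instead of reasoning about how close to the
// 64-bit boundary we may get once sizeof(Block)-1 is added later
constexpr unsigned long long MaxItemSize = 128ULL * 1024 * 1024 * 1024; // 128 GiB

bool ItemSizeIsSane(unsigned long long const ItemSize)
{
   return ItemSize <= MaxItemSize;
}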
|
|
|
|
|
|
|
|
|
| |
Like the code in arfile.cc, MemControlExtract also has buffer
overflows, in code allocating memory for parsing control files.
Specify an upper limit of 64 MiB for control files to both protect
against the Size overflowing (we allocate Size + 2 bytes), and
protect a bit against control files consisting only of zeroes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Tarballs have long names and long link targets structured by a
special tar header with a GNU extension followed by the actual
content (padded to 512 bytes). Essentially, think of a name as
a special kind of file.
The limit of a file size in a header is 12 bytes, aka 10**12
or 1 TB. While this works OK-ish for file content that we stream
to extractors, we need to copy file names into memory, and this
opens us up to an OOM DoS attack.
Limit the file name size to 1 MiB, as libarchive does, to make
things safer.
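A sketch of the described handling, with hypothetical names: the long name
is a member of its own, padded to the 512-byte block size, and is read into
memory only if it stays under the cap:
#include <string>

constexpr unsigned long long MaxNameSize = 1024 * 1024; // 1 MiB, as libarchive uses

bool ReadLongName(unsigned long long const Size, std::string &Name)
{
   if (Size > MaxNameSize)
      return false;                                     // avoid the OOM DoS
   unsigned long long const Padded = ((Size + 511) / 512) * 512;
   Name.reserve(Size);
   // … read Padded bytes from the archive, keep the first Size as the name …
   (void)Padded;
   return true;
}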
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
GHSL-2020-169: This first hunk adds a check that we have more of the
file left to read than the size of the member, ensuring that
(a) the number is not negative, which caused the crash here, and (b)
we similarly avoid other issues with trying to read too much data.
GHSL-2020-168: Long file names are encoded by a special marker in
the filename and then the real filename is part of what is normally
the data. We did not check that the length of the file name is within
the length of the member, which means that we got an overflow later
when subtracting the length from the member size to get the remaining
member size.
The file createdeb-lp1899193.cc was provided by GitHub Security Lab
and reformatted using apt coding style for inclusion in the test
case; both of these issues have an automated test case in
test/integration/test-ubuntu-bug-1899193-security-issues.
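A compact sketch of the two checks, using illustrative names rather than
the actual arfile.cc members:
// GHSL-2020-169: a member may not claim to be larger than what is left of
// the archive; GHSL-2020-168: an extended filename must fit into the member
bool MemberIsSane(unsigned long long const MemberSize,
                  unsigned long long const RemainingArchive,
                  unsigned long long const NameLength)
{
   if (MemberSize > RemainingArchive)
      return false;              // would otherwise try to read past the end
   if (NameLength > MemberSize)
      return false;              // would otherwise underflow the remaining size
   return true;
}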
LP: #1899193
|
|
|
|
| |
Apologies.
|
|
|
|
|
|
| |
We are converting to std::string anyway by passing to
istringstream, and this removes the need for .c_str()
in callers.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
The filename can be overridden, but sometimes it is useful to do it only
for the directory-part of the filename – e.g. if you want to let a flat
archive directory (like /var/cache/apt/archives) serve a pool-based
request like /pool/a/apt_version.deb.
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
| |
We sometimes autogenerate HTML pages, e.g. for listing files in a
directory or for various error codes. If this were a serious
webserver this would be a security problem (although a bit hard to
exploit), but as it is not shipped and is intended to be used only by our
testcases, the world hasn't ended & we can leave it out of the
changelog and fix it for brownie points.
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
| |
Browsing pages served via aptwebserver works better if we tell the
browser the Content-Type, which for this simple usecase we can just do by
guessing based on the file extension – and because hardcoding a list
would be boring we just reuse the mime.types data from mime-support if
available and allow it to be overridden by files and config.
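A rough sketch of the extension lookup, assuming the usual
"type ext1 ext2 …" layout of /etc/mime.types (names are illustrative):
#include <fstream>
#include <map>
#include <sstream>
#include <string>

std::map<std::string, std::string> LoadMimeTypes(std::string const &file = "/etc/mime.types")
{
   std::map<std::string, std::string> types;
   std::ifstream in(file);
   for (std::string line; std::getline(in, line);)
   {
      if (line.empty() || line[0] == '#')
	 continue;
      std::istringstream fields(line);
      std::string mimetype, extension;
      fields >> mimetype;
      while (fields >> extension)
	 types[extension] = mimetype;   // e.g. "html" → "text/html"
   }
   return types;
}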
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If multiple threads act on requests (like if the connection comes from a
webbrowser) a thread might request the supported compressors while
another thread is still working on creating the list to be stored in the
static cache variable.
As the price to pay for atomics and co seems too high for the fringe
usecase of manually using aptwebserver, the patch just makes a call to
generate the list while still single-threaded.
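An illustrative sketch of the pattern (not the actual aptwebserver
symbols): a lazily filled static cache plus one eager call before any
worker thread exists:
#include <string>
#include <vector>

// a lazily filled cache without locking is fine for single-threaded use,
// but races once worker threads call it; warm it up early instead
static std::vector<std::string> compressor_cache;

std::vector<std::string> const &SupportedCompressors()
{
   if (compressor_cache.empty())                   // unsynchronized check
      compressor_cache = {"gzip", "bzip2", "xz"};  // hypothetical content
   return compressor_cache;
}

int main()
{
   SupportedCompressors();   // populate the cache before spawning threads
   // … accept connections and hand them off to threads …
}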
Gbp-Dch: Ignore
|
|
|
|
|
|
|
| |
Moving the Retry-implementation from individual items to the worker
implementation not only gives every file retry capability instead of
just a selected few but also avoids needing to implement it in each item
(incorrectly).
|
|
|
|
|
|
|
|
|
|
| |
A va_copy call needs to be closed in all branches with va_end, so these
functions would need to be reworked slightly, but we don't actually need
to copy the va_list as we don't work on it ourselves, we just push it
forward – so drop the copy and everyone is happy.
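A minimal sketch of the forwarding pattern meant here, with hypothetical
function names:
#include <cstdarg>
#include <cstdio>

// the va_list is only handed on to a v*-function, so no va_copy
// (and therefore no matching va_end) is needed here
void LogV(const char *format, va_list args)
{
   vfprintf(stderr, format, args);
}

void Log(const char *format, ...)
{
   va_list args;
   va_start(args, format);
   LogV(format, args);
   va_end(args);
}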
Reported-By: cppcheck
Gbp-Dch: Ignore
|
|
|
|
| |
This makes the code easier to read.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This makes it easier to see which header includes what.
The changes were done by running
git grep -l '#\s*include' \
| grep -E '.(cc|h)$' \
| xargs sed -i -E 's/(^\s*)#(\s*)include/\1#\2 include/'
To modify all include lines by adding a space, and then running
./git-clang-format.sh.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
An unknown code should be handled the same as the x00 code of this
group, but for redirections we used to treat 300 (and a few others)
as an error while unknown codes were considered redirections.
Instead we now check explicitly for the redirection codes we support for
redirecting (and add the 308 defined in RFC 7538) to avoid future
problems if new 3xx codes are added expecting certain behaviours.
Potentially strange would have been e.g. "305 Use Proxy" sending a
Location for the proxy to use – which wouldn't have worked and would have
resulted in an error anyhow, but probably would have confused users in
the process.
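A sketch of what such an explicit whitelist can look like (illustrative,
not the actual method code):
// only act on redirect codes we actually support; everything else in the
// 3xx group falls through to the generic handling of the group
bool IsRedirectWeFollow(int const code)
{
   switch (code)
   {
      case 301: case 302: case 303: case 307:
      case 308:   // RFC 7538 Permanent Redirect
	 return true;
      default:    // e.g. 300 Multiple Choices, 305 Use Proxy, future codes
	 return false;
   }
}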
|
|
|
|
| |
Gbp-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
|
| |
The test test-handle-redirect-as-used-mirror-change serves multiple
clients at the same time, so the order of the output is undefined and
once in a while the two clients will intermix their lines, causing the
grep we perform on it later to fail and hence our tests to fail.
Solved by introducing client-specific logfiles, which we all grep,
sorting the result to make the outcome more stable.
Git-Dch: Ignore
|
|
|
|
|
|
|
|
|
| |
We can't clean up the environment like e.g. sudo would do, as you usually
want the environment to "leak" into these helpers, but some variables
like HOME should really not still have the value of the root user – it
could confuse the helpers (USER) and HOME isn't accessible anyhow.
Closes: 842877
|
|
|
|
|
|
|
| |
We need to ignore messages from gcov. All those messages
start with profiling: and are printed using vfprintf(), so
the only thing we can do is add a library overriding those
functions and link apt-pkg against it.
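A rough sketch of the interposition idea – compiled into a small shared
library, it swallows the gcov notes and forwards everything else to libc
(simplified; the real override covers more than this one function):
#include <cstdarg>
#include <cstdio>
#include <cstring>
#include <dlfcn.h>

extern "C" int vfprintf(FILE *stream, const char *format, va_list ap)
{
   if (strncmp(format, "profiling:", strlen("profiling:")) == 0)
      return 0;   // drop gcov's "profiling: …" notes
   // forward everything else to the real libc vfprintf
   auto real = reinterpret_cast<int (*)(FILE *, const char *, va_list)>(
      dlsym(RTLD_NEXT, "vfprintf"));
   return real(stream, format, ap);
}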
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Several modules use std::array without including the
array header. Bad modules.
Some modules use STDOUT_FILENO and friends, or close()
without including unistd.h, where they are defined.
One module also uses WIFEXITED() without including
sys/wait.h.
Finally, environ is not specified to be declared in unistd.h. We
are required to declare it ourselves according to POSIX, so let's
do that.
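For reference, a minimal sketch of the includes and the declaration in
question:
#include <array>      // std::array
#include <unistd.h>   // STDOUT_FILENO and friends, close()
#include <sys/wait.h> // WIFEXITED() and friends

// POSIX does not require unistd.h to declare environ,
// so the using code declares it itself
extern char **environ;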
|
|
|
|
|
|
| |
If the server told us in a previous request that it doesn't support
ranges in bytes – via an Accept-Ranges header not listing bytes – we
don't try to formulate requests using Ranges.
|
|
|
|
|
|
|
|
|
|
|
|
| |
It seems completely pointless from a server POV to send empty header
fields, so most of them don't do it (simply proven by this limitation
existing since day one) – but it is technically allowed by the RFC as
the surrounding whitespace is optional, and GitHub seems to like sending
"X-Geo-Block-List:\r\n" recently (bug reports in other http
clients indicate July), at least sometimes, as the reporter claims to have
seen it on https only even though it can happen with both.
Closes: 834048
|
|
|
|
| |
Bye, bye, old friend.
|
|
|
|
|
|
| |
This early support seems a bit hacky, but it's a hard switch: The
integration tests do not understand the old build system anymore
afterwards. I don't really like that.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
All apt versions support numeric as well as 3-character timezones just
fine and it's actually hard to write code which doesn't "accidentally"
accept them. So why change? Documenting the Date/Valid-Until fields in
the Release file is easy to do in terms of referencing the
datetime format used e.g. in the Debian changelogs (policy §4.4). This
format specifies only the numeric timezones though, not the nowadays
obsolete 3-character ones, so in the interest of least surprise we should
use the same format even though it carries a small risk of regression
in other clients (which encounter repositories created with
apt-ftparchive).
In case it really regresses in practice, the hidden option
-o APT::FTPArchive::Release::NumericTimezone=0
can be used to go back to good old UTC as timezone.
The EDSP and EIPP protocols use this 'new' format; the text interface
used to communicate with the acquire methods does not, for compatibility
reasons, even if none of our methods would be affected and I doubt any
other would (in these instances the timezone is 'GMT' as that is what
HTTP/1.1 requires). Note that this is only true for apt talking to
methods; (libapt-based) methods talking to apt will respond with the
'new' format. It is therefore strongly advised to support both in
method input as well.
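For illustration, a minimal sketch of producing such a numeric-timezone
datetime (always UTC, so the offset is simply +0000); this is not the
actual apt-ftparchive code:
#include <cstdio>
#include <ctime>

int main()
{
   time_t const now = time(nullptr);
   struct tm utc;
   gmtime_r(&now, &utc);
   char date[64];
   strftime(date, sizeof(date), "%a, %d %b %Y %H:%M:%S +0000", &utc);
   printf("Date: %s\n", date);   // e.g. "Date: Thu, 07 Jul 2016 10:00:00 +0000"
   return 0;
}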
|
|
|
|
|
| |
Seen first in #826783, but as this buglog also shows leaked uncompressed
files, we don't close it just yet.
|
|
|
|
|
|
|
|
| |
Breaking here lets our handler die, which a client will fix by
reconnecting… but that eats time needlessly and is simply the wrong
handling, too.
Git-Dch: Ignore
|
|
|
|
|
|
|
|
| |
Iceweasel^WFirefox complains about the missing encoding in its console,
which can be a bit annoying in interactive sessions. Fixing these
issues has no effect on apt itself, but it helps the testers.
Git-Dch: Ignore
|
|
|
|
| |
Git-Dch: Ignore
|
|
|
|
|
|
| |
This allows running tests in parallel.
Git-Dch: Ignore
|
|
|
|
|
|
|
|
|
| |
C++11 adds the 'override' specifier to mark that a method is overriding
a base class method and to error out if it doesn't. We hide it behind the
APT_OVERRIDE macro to ensure that we keep compiling under pre-C++11
standards.
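A rough sketch of what such a macro amounts to (the actual definition
lives in apt's headers and may differ in detail):
#if __cplusplus >= 201103L
#define APT_OVERRIDE override
#else
#define APT_OVERRIDE /* no-op for pre-C++11 compilers */
#endif

struct Base { virtual ~Base() {} virtual void Run() {} };
struct Derived : public Base { void Run() APT_OVERRIDE {} };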
Reported-By: clang-modernize -add-override -override-macros
Git-Dch: Ignore
|
|\
| |
| |
| |
| |
| |
| |
| |
| | |
Conflicts:
apt-pkg/pkgcache.h
debian/changelog
methods/https.cc
methods/server.cc
test/integration/test-apt-download-progress
|
| |
| |
| |
| |
| |
| |
| | |
Add a regression test that reproduces the hang of apt when a
partial file is present.
Git-Dch: ignore
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The apt http code parses Content-Length and Content-Range. For
both headers the variable "Size" is used, and the semantic of
this Size is the total file size. However, Content-Length is not
the entire file size for partial file requests. For servers that
send the Content-Range header first and then the Content-Length
header this can lead to clobbering of Size so that it is less than
the real file size. This may lead to a subsequent passing of a
negative number into the CircleBuf, which leads to an endless
loop that writes data.
Thanks to Anton Blanchard for the analysis and initial patch.
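A small sketch of the distinction that matters here, with illustrative
names: the total file size comes from the part after the '/' in
Content-Range, and a later Content-Length (which only covers the
transferred part) must not overwrite it:
#include <cstdio>

bool ParseTotalSize(const char *contentRange, unsigned long long &TotalSize)
{
   unsigned long long start, end, total;
   // "Content-Range: bytes <start>-<end>/<total>" (value part only)
   if (sscanf(contentRange, "bytes %llu-%llu/%llu", &start, &end, &total) != 3)
      return false;
   TotalSize = total;
   return true;
}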
LP: #1445239
|
| |
| |
| |
| |
| |
| |
| | |
If we have the expected hashes we can use them to check whether the file
we have in partial, for which we got a 416, is the expected file. We
detected this via equal size before, but not every server sends a good
Content-Range header with a 416 response.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If we get an IMSHit for the Transaction-Manager (= the InRelease file or,
as its still-supported fallback, the Release + Release.gpg combo) we can
assume that every file we would queue based on this manager, but already
have locally, is current and hence would get an IMSHit, too. We therefore
save us and the server the trouble and skip the queuing in this case.
Besides speeding up repetitive executions of 'apt-get update' this way, we
also avoid hitting hashsum errors if the indexes are in fact already
updated, but the Release file isn't yet, as is the case on well-behaving
mirrors where the Release file is updated last.
The implementation is a bit harder than the theory makes it sound as we
still have to keep reverifying the Release files (e.g. to detect now-expired
ones to avoid an attacker being able to silently keep us stale) and have to
handle cases in which the Release file hits, but some indexes aren't
present (e.g. the user added a new foreign architecture).
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Servers which advertise that they close the connection get the 'Closes'
encoding flag, but this conflicts with servers which respond with a
transfer-encoding (e.g. chunked encoding), as it is saved in the same flag.
We have a better flag for the keep-alive (or not) of the connection
anyway, so we check this instead of the encoding.
In practice this is not much of a problem as the real servers we talk to
are HTTP/1.1 servers (with keep-alive) and there isn't much point in doing
chunked encoding if you are going to close anyway, but our simple
testserver stumbles over this if pressed, and it's a bit cleaner, too.
Git-Dch: Ignore
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Bug #778375 uncovered that https wasn't properly integrated in the class
family tree of http as it was supposed to be, leading to a NULL pointer
dereference. Fixing this 'properly' was deemed too much diff for
practically no gain that late in the release, so commit
0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while
this commit here fixes the cause plus adds a test.
|