path: root/test/interactive-helper
* interactive-helper: Undefine _FORTIFY_SOURCE (Khem Raj, 2022-09-16; 1 file, -0/+1)
  This ensures that it compiles when the clang compiler is passing -D_FORTIFY_SOURCE=2.
  Signed-off-by: Khem Raj <raj.khem@gmail.com>
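  A minimal sketch of the guard pattern (the helper's actual code may differ): drop any fortify
  level injected via the compiler command line before it can clash with the unit's own settings.

      // Neutralise e.g. -D_FORTIFY_SOURCE=2 from CFLAGS before any
      // libc headers are pulled in.
      #ifdef _FORTIFY_SOURCE
      #undef _FORTIFY_SOURCE
      #endif
      #include <cstdio>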
* Include our config.h in all C++ files to avoid ODR violations (David Kalnischkies, 2022-05-07; 2 files, -0/+4)
  Some of our headers use APT_COMPILING_APT trickery to avoid exposing too broadly details we
  don't want external clients to know and make use of. The flip side is that this can lead to
  different compilation units seeing different definitions if they aren't all using the same
  config.
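  A sketch of the failure mode this rule prevents (the class here is hypothetical, the macro is
  the one named above): every translation unit must agree on whether APT_COMPILING_APT is set,
  or two units see two different definitions of the same type.

      #include <config.h>   // defines APT_COMPILING_APT for internal units

      class Example          // a header also shipped to external clients
      {
         public:
         void PublicApi();
      #ifdef APT_COMPILING_APT
         void InternalDetail();   // two TUs disagreeing on this = ODR violation
      #endif
      };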
* Link interactive helpers against system libapt for autopkgtest (David Kalnischkies, 2022-05-07; 1 file, -7/+20)
  Building the library just so we can build the helpers against it is not only wasteful: as we
  are supposed to test the system anyhow, we can use its library, which doubles as an additional
  simple smoke test before the real testing starts.
* Fix build failure with gcc-12 due to missing include (David Kalnischkies, 2022-03-21; 1 file, -0/+1)
  apt/test/interactive-helper/aptwebserver.cc: In function ‘std::string HTMLEncode(std::string)’:
  error: variable ‘constexpr const std::array<std::array<const char*, 2>, 6> htmlencode’ has
  initializer but incomplete type
  Reported-By: Helmut Grohne on IRC
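  The fix is the one-line include; a sketch of the context, with the table contents assumed from
  the error message above:

      #include <array>   // gcc-12 no longer provides std::array transitively

      static constexpr std::array<std::array<const char *, 2>, 6> htmlencode = {{
         { "&", "&amp;" }, { "<", "&lt;" }, { ">", "&gt;" },
         { "\"", "&quot;" }, { "'", "&#x27;" }, { "/", "&#x2F;" },
      }};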
* Use exact If-Range match in our test webserver (David Kalnischkies, 2021-09-16; 1 file, -1/+1)
  RFC 7233 §3.2 specifies the If-Range comparison to be an exact match, not a less-or-equal,
  which makes no sense in this context anyhow. Our server exists only so we can write our tests
  against it, so this isn't much of a practical issue. I did confirm with a crashing server that
  no test (silently) depends on this or exhibits a different behaviour not explicitly checked
  for.
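  A sketch of the semantic change (names hypothetical; the old comparison direction is inferred):

      #include <ctime>

      // RFC 7233 §3.2: the If-Range validator has to match exactly before a
      // partial response may be sent.
      static bool SendPartialContent(time_t ifrange, time_t lastmodified)
      {
         return ifrange == lastmodified;   // was effectively: lastmodified <= ifrange
      }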
* Increase recursion limits from 100 to 3000 (David Kalnischkies, 2021-08-29; 2 files, -1/+74)
  If you install dpkg on an empty status file with all recommends and suggests, apt wants to
  install 4000+ packages, the deepest chain seemingly being 236 steps long – and dpkg isn't even
  the worst (~259). That is a problem as libapt has a hardcoded recursion limit for MarkInstall
  and friends, set to 100. We are saved by the fact that chains without suggests are much shorter
  (dpkg has 5, the maximum seems to be ~43), but I ignored Conflicts in these chains, which
  typically trigger upgrades, so if two of the worst are chained together we still get
  dangerously close to the limit.
  So, let's just increase the limit into oblivion, as it is really just a safety measure we
  should not be running into to begin with – MarkPackage was running for years without it, after
  all. 3000 is picked as nice a number as any other, and because it is roughly half of the stack
  crashes I saw previously in this branch.
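  A sketch of the safety net in question (signature heavily simplified):

      static const unsigned long RecursionLimit = 3000;   // was 100

      static bool MarkInstallSketch(unsigned long Depth)
      {
         if (Depth > RecursionLimit)
            return false;   // bail out instead of smashing the stack
         // … recurse into dependencies with Depth + 1 …
         return true;
      }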
* Adjust loops to use size_t instead of int (Julian Andres Klode, 2021-02-09; 1 file, -3/+3)
  Gbp-Dch: ignore
* Fix test suite regression from StrToNum fixes (Julian Andres Klode, 2021-02-09; 1 file, -0/+42)
  We ignored the failure from strtoul() that those test cases had values out of range, hence they
  passed before, but now failed on 32-bit platforms because we use strtoull() and do the limit
  check ourselves.
  Move the tarball generator for test-github-111-invalid-armember to the createdeb helper, and
  fix the helper to set all the numeric fields like uid to 0 instead of the maximum value the
  fields support (all 7s).
  Regression-Of: e0743a85c5f5f2f83d91c305450e8ba192194cd8
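  A sketch of the stricter conversion (names hypothetical): strtoull() everywhere, with the range
  check done explicitly so out-of-range input fails on 32-bit platforms as well.

      #include <cerrno>
      #include <cstdlib>

      static bool StrToNumSketch(const char *Str, unsigned long long &Res,
                                 unsigned long long Max)
      {
         errno = 0;
         char *End = nullptr;
         unsigned long long Num = strtoull(Str, &End, 10);
         if (errno == ERANGE || End == Str || Num > Max)
            return false;
         Res = Num;
         return true;
      }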
* Don't re-encode encoded URIs in pkgAcqFile (David Kalnischkies, 2020-12-18; 1 file, -1/+1)
  This commit potentially breaks code feeding apt an encoded URI using a method which does not
  get URIs sent encoded. The webserverconfig requests in our tests are an example of this – but
  they only worked before if the server was expecting a double encoding, as that was what
  happened to an encoded URI: so this was unlikely to work as expected in practice.
  Now with the new methods we can drop this double encoding and rely on the URI being passed
  properly (and without modification) between the layers, so that passing in encoded URIs should
  work correctly.
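  To illustrate the double encoding with a toy encoder (not apt's actual code):

      #include <string>

      // Minimal percent-encoder escaping only '%' and ' '.
      static std::string Encode(const std::string &In)
      {
         std::string Out;
         for (const char C : In)
         {
            if (C == ' ') Out += "%20";
            else if (C == '%') Out += "%25";
            else Out += C;
         }
         return Out;
      }
      // Encode("file name")   == "file%20name"
      // Encode("file%20name") == "file%2520name"  – the double encoding fixed here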
* Proper URI encoding for config requests to our test webserver (David Kalnischkies, 2020-12-18; 1 file, -4/+10)
  Our http method encodes the URI again, which results in the double encoding we have to unwrap
  in the webserver (we did already, but we skip the filename handling now, which does the first
  decode).
* CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB (Julian Andres Klode, 2020-12-09; 1 file, -0/+4)
  The integer overflow was detected by DonKult, who added a check like this:

      (std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))

  which deals with the code as is, but is still a fairly big limit and could become fragile if we
  change the code. Let's limit our file sizes to 128 GiB, which should be sufficient for
  everyone.
  Original comment by DonKult: The code assumes that it can add sizeof(Block)-1 to the size of
  the item later on, but if we are close to a 64-bit overflow this is not possible. Fixing this
  seems too complex compared to just ensuring there is enough room left, given that we will have
  a lot more problems the moment we act on files that large: if the item is that large, the
  (valid) tar including it probably doesn't fit in 64 bit either.
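  A sketch of the hardening (names hypothetical): later code adds up to sizeof(Block) - 1 to the
  item size, so the cap keeps that addition far away from the 64-bit limit.

      #include <cstdint>

      static bool TarItemSizeOk(uint64_t Size)
      {
         const uint64_t Limit = 128ULL * 1024 * 1024 * 1024;   // 128 GiB
         return Size <= Limit;
      }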
* CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB (Julian Andres Klode, 2020-12-09; 1 file, -0/+4)
  Like the code in arfile.cc, MemControlExtract also has buffer overflows in code allocating
  memory for parsing control files. Specify an upper limit of 64 MiB for control files to both
  protect against the Size overflowing (we allocate Size + 2 bytes) and protect a bit against
  control files consisting only of zeroes.
* tarfile: OOM hardening: Limit size of long names/links to 1 MiB (Julian Andres Klode, 2020-12-09; 1 file, -1/+83)
  Tarballs have long names and long link targets structured by a special tar header with a GNU
  extension followed by the actual content (padded to 512 bytes). Essentially, think of a name as
  a special kind of file. The limit of a file size in a header is 12 bytes, aka 10**12 or 1 TB.
  While this works OK-ish for file content that we stream to extractors, we need to copy file
  names into memory, and this opens us up to an OOM DoS attack. Limit the file name size to
  1 MiB, as libarchive does, to make things safer.
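  A sketch of the cap (names hypothetical): unlike regular file content, which is streamed, a
  longname/longlink member has to be copied into memory, so it gets a tight limit of its own.

      static bool LongNameSizeOk(unsigned long long NameSize)
      {
         const unsigned long long Limit = 1024 * 1024;   // 1 MiB, as libarchive uses
         return NameSize <= Limit;
      }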
* CVE-2020-27350: arfile: Integer overflow in parsing (Julian Andres Klode, 2020-12-09; 2 files, -0/+237)
  GHSL-2020-169: This first hunk adds a check that we have more files left to read in the file
  than the size of the member, ensuring that (a) the number is not negative, which caused the
  crash here, and (b) we similarly avoid other issues with trying to read too much data.
  GHSL-2020-168: Long file names are encoded by a special marker in the filename, and then the
  real filename is part of what is normally the data. We did not check that the length of the
  file name is within the length of the member, which means that we got an overflow later when
  subtracting the length from the member size to get the remaining member size.
  The file createdeb-lp1899193.cc was provided by GitHub Security Lab and reformatted using apt
  coding style for inclusion in the test case; both of these issues have an automated test case
  in test/integration/test-ubuntu-bug-1899193-security-issues.
  LP: #1899193
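  Sketches of the two checks (names hypothetical):

      // GHSL-2020-169: a member must fit into what is left of the archive,
      // which also rules out sizes that went negative during parsing.
      static bool MemberFits(unsigned long long MemberSize, unsigned long long LeftInFile)
      {
         return MemberSize <= LeftInFile;
      }

      // GHSL-2020-168: an extended filename is stored in the member data, so
      // its length must not exceed the member size, or the later subtraction
      // "member size - name length" underflows.
      static bool NameFits(unsigned long long NameLength, unsigned long long MemberSize)
      {
         return NameLength <= MemberSize;
      }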
* aptwebserver: Rename slaves to workers (Julian Andres Klode, 2020-08-04; 1 file, -4/+5)
  Apologies.
* RFC1123StrToTime: Accept const std::string& as first argument (Julian Andres Klode, 2019-06-17; 1 file, -2/+2)
  We are converting to std::string anyway by passing to istringstream, and this removes the need
  for .c_str() in callers.
* Merge libapt-inst into libapt-pkg (Julian Andres Klode, 2019-05-06; 1 file, -2/+2)
* Fix various typos in the documentation (Jakub Wilk, 2019-02-10; 1 file, -1/+1)
* Allow to override the directory of a request in aptwebserver (David Kalnischkies, 2018-11-25; 1 file, -2/+9)
  The filename can be overridden, but sometimes it is useful to do it only for the directory part
  of the filename – e.g. if you want to let a flat archive directory (like
  /var/cache/apt/archives) serve a pool-based request like /pool/a/apt_version.deb.
  Gbp-Dch: Ignore
* aptwebserver: Prevent XSS in debug and file listing (David Kalnischkies, 2018-11-25; 1 file, -24/+36)
  We sometimes autogenerate HTML pages, e.g. for listing files in a directory or for various
  error codes. If this were a serious webserver, this would be a security problem (although a bit
  hard to exploit), but as it is not shipped and is intended to be used by our testcases only,
  the world hasn't ended and we can ignore it for the changelog and fix it for brownie points.
  Gbp-Dch: Ignore
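  A sketch of the mitigation (the real HTMLEncode() uses the lookup table shown in the gcc-12
  entry above): escape untrusted strings before embedding them in generated pages.

      #include <string>

      static std::string HTMLEncodeSketch(const std::string &In)
      {
         std::string Out;
         for (const char C : In)
            switch (C)
            {
               case '&': Out += "&amp;"; break;
               case '<': Out += "&lt;"; break;
               case '>': Out += "&gt;"; break;
               case '"': Out += "&quot;"; break;
               case '\'': Out += "&#x27;"; break;
               case '/': Out += "&#x2F;"; break;
               default: Out += C;
            }
         return Out;
      }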
* aptwebserver: Guess Content-Type from filename extension (David Kalnischkies, 2018-11-25; 1 file, -1/+85)
  Browsing pages served via aptwebserver works better if we tell the browser the Content-Type,
  which for this simple usecase we can do just by guessing based on the file extension – and
  because hardcoding a list would be boring, we just reuse the mime.types data from mime-support
  if available and allow it to be overridden by files and config.
  Gbp-Dch: Ignore
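  A hypothetical sketch of the lookup: extension to Content-Type with a generic fallback; the
  real server seeds such a table from mime.types and lets files/config override it.

      #include <map>
      #include <string>

      static std::string GuessContentType(const std::string &File,
                                          const std::map<std::string, std::string> &Types)
      {
         const auto Dot = File.find_last_of('.');
         if (Dot != std::string::npos)
         {
            const auto Type = Types.find(File.substr(Dot + 1));
            if (Type != Types.end())
               return Type->second;
         }
         return "application/octet-stream";   // generic fallback
      }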
* aptwebserver: Prefetch compressors to avoid thread crashes (David Kalnischkies, 2018-08-19; 1 file, -0/+1)
  If multiple threads act on requests (like if the connection comes from a webbrowser), a thread
  might request the supported compressors while another thread is still working on creating the
  list to be stored in the static cache variable. As the price to pay for atomics and co seems
  too high for the fringe usecase of manual usage of aptwebserver, the patch just makes a call to
  generate the list while still single-threaded.
  Gbp-Dch: Ignore
* implement Acquire::Retries support for all items (David Kalnischkies, 2017-12-13; 1 file, -1/+16)
  Moving the Retry implementation from individual items to the worker implementation not only
  gives every file retry capability instead of just a selected few, but also avoids needing to
  implement it in each item (incorrectly).
* remove pointless va_copy to avoid cleanup dance (David Kalnischkies, 2017-10-05; 1 file, -18/+12)
  A va_copy call needs to be closed in all branches with va_end, so these functions would need to
  be reworked slightly – but we don't actually need to copy the va_list, as we don't work on it,
  we just push it forward; so drop the copy and everyone is happy.
  Reported-By: cppcheck
  Gbp-Dch: Ignore
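  A sketch of the resulting shape (names hypothetical): the list is forwarded once and never
  reused, so one va_start/va_end pair suffices and no branch needs a va_copy cleanup.

      #include <cstdarg>
      #include <cstdio>

      static void LogV(const char *Format, va_list Args)
      {
         vfprintf(stderr, Format, Args);
      }

      static void Log(const char *Format, ...)
      {
         va_list Args;
         va_start(Args, Format);
         LogV(Format, Args);   // just pushed forward, never read again here
         va_end(Args);
      }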
* Use C++11 threading support instead of pthread (Julian Andres Klode, 2017-07-20; 1 file, -1/+0)
  This makes the code easier to read.
* Reformat and sort all includes with clang-format (Julian Andres Klode, 2017-07-12; 7 files, -16/+16)
  This makes it easier to see which header includes what. The changes were done by running

      git grep -l '#\s*include' \
         | grep -E '.(cc|h)$' \
         | xargs sed -i -E 's/(^\s*)#(\s*)include/\1#\2 include/'

  to modify all include lines by adding a space, and then running ./git-clang-format.sh.
* deal with 3xx httpcodes as required by HTTP/1.1 spec (David Kalnischkies, 2017-06-26; 1 file, -0/+1)
  An unknown code should be handled the same as the x00 code of its group, but for redirections
  we used to treat 300 (and a few others) as an error, while unknown codes were considered
  redirections. Instead we now check explicitly for the redirection codes we support for
  redirecting (and add the 308 defined in RFC 7538) to avoid future problems if new 3xx codes are
  added expecting certain behaviours. Potentially strange would have been e.g. "305 Use Proxy"
  sending a Location for the proxy to use – which wouldn't have worked and would have resulted in
  an error anyhow, but probably confused users in the process.
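  A sketch of the explicit check (names hypothetical): only known redirect codes redirect;
  everything else in the 3xx group falls back to the x00 behaviour.

      static bool IsSupportedRedirect(int Code)
      {
         switch (Code)
         {
            case 301: case 302: case 303: case 307:
            case 308:   // RFC 7538
               return true;
            default:
               return false;
         }
      }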
* Refactor to avoid loop/dangling gcc warnings (David Kalnischkies, 2017-06-26; 1 file, -1/+1)
  Gbp-Dch: Ignore
* optionally write aptwebserver log to client-specific files (David Kalnischkies, 2016-11-25; 2 files, -81/+136)
  The test test-handle-redirect-as-used-mirror-change serves multiple clients at the same time,
  so the order of the output is undefined and once in a while the two clients will intermix their
  lines, causing the grep we perform on it later to fail and making our tests fail. Solved by
  introducing client-specific logfiles, which we all grep and whose results we sort to make the
  outcome more stable.
  Git-Dch: Ignore
* reset HOME, USER(NAME), TMPDIR & SHELL in DropPrivileges (David Kalnischkies, 2016-11-09; 2 files, -0/+29)
  We can't clean up the environment like e.g. sudo would do, as you usually want the environment
  to "leak" into these helpers, but some variables like HOME should really not still carry the
  value of the root user – it could confuse the helpers (USER), and HOME isn't accessible anyhow.
  Closes: 842877
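  A sketch of the sanitising (values illustrative, not necessarily what DropPrivileges sets):

      #include <cstdlib>

      static void SanitizeEnvironmentSketch()
      {
         setenv("HOME", "/nonexistent", 1);   // root's $HOME is unreachable as _apt anyhow
         setenv("USER", "_apt", 1);
         setenv("USERNAME", "_apt", 1);
         setenv("SHELL", "/bin/sh", 1);
         unsetenv("TMPDIR");                  // root's private tmpdir would be unusable
      }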
* Coverage: Do not print messages from gcov (Julian Andres Klode, 2016-09-11; 2 files, -0/+53)
  We need to ignore messages from gcov. All those messages start with profiling: and are printed
  using vfprintf(), so the only thing we can do is add a library overriding those functions and
  link apt-pkg against it.
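  A hypothetical sketch of such an interposer (g++ defines _GNU_SOURCE, which RTLD_NEXT needs):

      #include <cstdarg>
      #include <cstdio>
      #include <cstring>
      #include <dlfcn.h>

      extern "C" int vfprintf(FILE *Stream, const char *Format, va_list Ap)
      {
         if (strncmp(Format, "profiling:", strlen("profiling:")) == 0)
            return 0;   // swallow gcov's noise
         static const auto Real = reinterpret_cast<int (*)(FILE *, const char *, va_list)>(
            dlsym(RTLD_NEXT, "vfprintf"));
         return Real(Stream, Format, Ap);
      }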
* Add missing includes and external definitions (Julian Andres Klode, 2016-08-26; 1 file, -0/+1)
  Several modules use std::array without including the array header. Bad modules. Some modules
  use STDOUT_FILENO and friends, or close(), without including unistd.h, where they are defined.
  One module also uses WIFEXITED() without including sys/wait.h.
  Finally, environ is not specified to be defined in unistd.h. We are required to define it
  ourselves according to POSIX, so let's do that.
* don't send Range requests if we know they're not accepted (David Kalnischkies, 2016-08-16; 1 file, -0/+9)
  If the server told us in a previous response that it doesn't support Ranges with bytes – via an
  Accept-Ranges header missing "bytes" – we don't try to formulate requests using Ranges anymore.
* http(s): allow empty values for header fields (David Kalnischkies, 2016-08-13; 1 file, -0/+3)
  It seems completely pointless from a server POV to send empty header fields, so most servers
  don't do it (simply proven by this limitation existing since day one) – but it is technically
  allowed by the RFC, as the surrounding whitespaces are optional, and GitHub seems to like
  sending "X-Geo-Block-List:\r\n" since recently (bug reports in other http clients indicate
  July), at least sometimes; the reporter claims to have seen it on https only, even though it
  can happen with both.
  Closes: 834048
* Get rid of the old buildsystem (Julian Andres Klode, 2016-08-10; 1 file, -53/+0)
  Bye, bye, old friend.
* CMake: Switch integration tests and travis over (Julian Andres Klode, 2016-08-06; 1 file, -0/+10)
  This early support seems a bit hacky, but it's a hard switch: the integration tests do not
  understand the old build system anymore afterwards. I don't really like that.
* use +0000 instead of UTC by default as timezone in output (David Kalnischkies, 2016-07-02; 1 file, -3/+3)
  All apt versions support numeric as well as 3-character timezones just fine, and it's actually
  hard to write code which doesn't "accidentally" accept both. So why change? Documenting the
  Date/Valid-Until fields in the Release file is easy to do in terms of referencing the datetime
  format used e.g. in Debian changelogs (policy §4.4). This format specifies only the numeric
  timezones though, not the nowadays obsolete 3-character ones, so in the interest of least
  surprise we should use the same format, even though it carries a small risk of regression in
  other clients (which encounter repositories created with apt-ftparchive).
  In case it really regresses in practice, the hidden option
  -o APT::FTPArchive::Release::NumericTimezone=0 can be used to go back to good old UTC as
  timezone.
  The EDSP and EIPP protocols use this 'new' format; the text interface used to communicate with
  the acquire methods does not, for compatibility reasons, even if none of our methods would be
  affected, and I doubt any other would be (in these instances the timezone is 'GMT', as that is
  what HTTP/1.1 requires). Note that this is only true for apt talking to methods; (libapt-based)
  methods talking to apt will respond with the 'new' format. It is therefore strongly advised to
  support both in method input, too.
* don't leak an FD in lz4 (de)compression (David Kalnischkies, 2016-06-10; 2 files, -0/+49)
  Seen first in #826783, but as this buglog also shows leaked uncompressed files, we don't close
  it just yet.
* webserver: 416 errors aren't closing connections (David Kalnischkies, 2016-04-13; 1 file, -1/+1)
  Breaking here lets our handler die, which a client will fix by reconnecting… but that eats time
  needlessly and is simply the wrong handling, too.
  Git-Dch: Ignore
* aptwebserver: fix html validation issues (David Kalnischkies, 2016-03-14; 1 file, -5/+5)
  Iceweasel^WFirefox complains about the missing encoding in its console, which can be a bit
  annoying in interactive sessions, so fixing these issues has no effect on apt itself, but on
  the testers.
  Git-Dch: Ignore
* test all redirection codes work as expected (David Kalnischkies, 2016-01-31; 1 file, -1/+1)
  Git-Dch: Ignore
* tests: don't use hardcoded port for http and https (David Kalnischkies, 2015-09-15; 1 file, -3/+30)
  This allows running tests in parallel.
  Git-Dch: Ignore
* add c++11 override marker to overridden methods (David Kalnischkies, 2015-08-10; 1 file, -1/+1)
  C++11 adds the 'override' specifier to mark that a method is overriding a base class method and
  to error out if not. We hide it in the APT_OVERRIDE macro to ensure that we keep compiling in
  pre-C++11 standards.
  Reported-By: clang-modernize -add-override -override-macros
  Git-Dch: Ignore
* Merge branch 'debian/sid' into debian/experimental (Michael Vogt, 2015-05-22; 1 file, -3/+5)
  Conflicts:
    apt-pkg/pkgcache.h
    debian/changelog
    methods/https.cc
    methods/server.cc
    test/integration/test-apt-download-progress
* Add regression test for LP: #1445239 (Michael Vogt, 2015-05-22; 1 file, -0/+2)
  Add a regression test that reproduces the hang of apt when a partial file is present.
  Git-Dch: ignore
* Fix endless loop in apt-get update that can cause disk fillup (Michael Vogt, 2015-05-22; 1 file, -3/+3)
  The apt http code parses Content-Length and Content-Range. For both, the variable "Size" is
  used, and the semantic of this Size is the total file size. However, Content-Length is not the
  entire file size for partial file requests. For servers that send the Content-Range header
  first and then the Content-Length header, this can lead to the clobbering of Size so that it is
  less than the real file size. This may lead to a subsequent passing of a negative number into
  the CircleBuf, which leads to an endless loop that writes data.
  Thanks to Anton Blanchard for the analysis and initial patch.
  LP: #1445239
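  A sketch of the distinction (names hypothetical): only Content-Range carries the total size; a
  206's Content-Length is merely the length of the transmitted part and must not shrink Size.

      #include <cstdio>

      static bool ParseContentRange(const char *Val, unsigned long long &TotalFileSize)
      {
         unsigned long long Start, End, Total;
         if (sscanf(Val, "bytes %llu-%llu/%llu", &Start, &End, &Total) != 3)
            return false;
         TotalFileSize = Total;   // e.g. "bytes 500-999/2000" -> 2000
         return true;
      }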
* detect 416 complete file in partial by expected hash (David Kalnischkies, 2015-05-12; 1 file, -3/+6)
  If we have the expected hashes, we can use them to check whether the file we have in partial/
  that we got a 416 for is the expected file. We detected this with same-size before, but not
  every server sends a good Content-Range header with a 416 response.
* a hit on Release files means the indexes will be hits too (David Kalnischkies, 2015-04-19; 1 file, -3/+6)
  If we get an IMSHit for the Transaction-Manager (= the InRelease file or, as its still supported
  fallback, the Release + Release.gpg combo), we can assume that every file we would queue based
  on this manager that we already have locally is current and hence would get an IMSHit too. We
  therefore save ourselves and the server the trouble and skip the queuing in this case.
  Besides speeding up repetitive executions of 'apt-get update', this way we also avoid hitting
  hashsum errors if the indexes are in fact already updated but the Release file isn't yet, as is
  the case on well-behaved mirrors, since the Release file is updated last.
  The implementation is a bit harder than the theory makes it sound, as we still have to keep
  reverifying the Release files (e.g. to detect now-expired ones, to avoid an attacker being able
  to silently stale us) and have to handle cases in which the Release file hits, but some indexes
  aren't present (e.g. the user added a new foreign architecture).
* handle servers closing encoded connections correctly (David Kalnischkies, 2015-04-19; 1 file, -7/+21)
  Servers that advertise that they close the connection get the 'Closes' encoding flag, but this
  conflicts with servers that respond with a Transfer-Encoding (e.g. chunked), as it is saved in
  the same flag. We have a better flag for the keep-alive (or not) of the connection anyway, so
  we check this instead of the encoding.
  This is in practice not much of a problem, as the real servers we talk to are HTTP/1.1 servers
  (with keep-alive) and there isn't much point in doing chunked encoding if you are going to
  close anyway – but our simple testserver stumbles over this if pressed, and it's a bit cleaner,
  too.
  Git-Dch: Ignore
* derive more of https from http method (David Kalnischkies, 2015-03-16; 1 file, -44/+44)
  Bug #778375 uncovered that https wasn't properly integrated into the class family tree of http
  as it was supposed to be, leading to a NULL pointer dereference. Fixing this 'properly' was
  deemed too much diff for practically no gain that late in the release, so commit
  0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while this commit here fixes
  the cause, plus adds a test.