| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
We are converting to std::string anyway by passing to
istringstream, and this removes the need for .c_str()
in callers.
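As a rough illustration (function name invented, not the one touched here):
once the parameter is a std::string, the istringstream construction is direct
and callers no longer need .c_str():

  #include <sstream>
  #include <string>

  // before: TokenizeNumbers(const char *Data) forced callers to pass Str.c_str()
  static void TokenizeNumbers(std::string const &Data)
  {
     std::istringstream ss(Data);   // wants a std::string anyway
     int Value;
     while (ss >> Value)
     {
        // use Value …
     }
  }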
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As a follow-up for CVE-2019-3462, add checks similar to those
for redirects to the central SendMessage() function. The checks
are a bit more relaxed for values - they may include newlines
and unicode characters (newlines get rewritten, so they are safe).
For keys and the message header, the checks are far more strict:
they may only contain alphanumeric characters, the hyphen-minus,
and the horizontal space.
In case the method tries to send anything else, we construct a
legal 400 URI Failure response and send that. We specifically do
not include the item URI, in case it has been compromised (that
would cause infinite recursion).
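A minimal sketch of the kind of check described above (helper names invented,
not the actual SendMessage() internals):

  #include <algorithm>
  #include <cctype>
  #include <string>

  // keys and the message header: alphanumeric, '-' and ' ' only
  static bool IsSafeKey(std::string const &Key)
  {
     return std::all_of(Key.begin(), Key.end(), [](unsigned char const c) {
        return std::isalnum(c) != 0 || c == '-' || c == ' ';
     });
  }

  // values: newlines are rewritten into continuation lines so they
  // cannot terminate the message or start a new field
  static std::string SanitizeValue(std::string Value)
  {
     for (auto pos = Value.find('\n'); pos != std::string::npos; pos = Value.find('\n', pos + 2))
        Value.replace(pos, 1, "\n ");
     return Value;
  }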
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This fixes a security issue that can be exploited to inject arbitrary debs
or other files into a signed repository as follows:
(1) Server sends a redirect to somewhere%0a<headers for the apt method> (where %0a is
\n encoded)
(2) apt method decodes the redirect (because the method encodes the URLs before
sending them out), writing something like
somewhere\n
<headers>
into its output
(3) apt then uses the headers injected for validation purposes.
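In essence the fix has to make sure a decoded redirect target can never smuggle
a line break into the method output; a minimal sketch of such a check (not the
shipped patch):

  #include <string>

  // reject any Location value that could inject additional
  // "Header: value" lines after decoding
  static bool SafeRedirectTarget(std::string const &NewURI)
  {
     return NewURI.find('\n') == std::string::npos &&
            NewURI.find('\r') == std::string::npos;
  }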
Regression-Of: c34ea12ad509cb34c954ed574a301c3cbede55ec
LP: #1812353
|
|
|
|
| |
Prompted-by: Jakub Wilk <jwilk@debian.org>
|
|
|
|
|
|
| |
Allowing a method to request work from other methods is a powerful
capability which could be misused or exploited, so to slightly limit
the surface let methods opt in to this capability on startup.
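The opt-in naturally belongs into the 100 Capabilities answer a method emits on
startup; the field name below is purely illustrative:

  100 Capabilities
  Version: 1.2
  AuxRequests: true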
|
|
|
|
|
|
| |
The format isn't too hard to get right, but it gets funny with multiline
fields (which we don't really have yet) and it's just easier to deal with
it once and for all in one place which can be reused for more messages later.
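A rough sketch of such a one-stop message writer (names invented); the point is
that continuation lines for multiline fields are handled in exactly one place:

  #include <ostream>
  #include <string>
  #include <utility>
  #include <vector>

  using Fields = std::vector<std::pair<std::string, std::string>>;

  static void WriteMessage(std::ostream &Out, std::string const &Header, Fields const &F)
  {
     Out << Header << '\n';
     for (auto const &KV : F)
     {
        std::string Value = KV.second;
        // turn embedded newlines into continuation lines
        for (auto pos = Value.find('\n'); pos != std::string::npos; pos = Value.find('\n', pos + 2))
           Value.replace(pos, 1, "\n ");
        Out << KV.first << ": " << Value << '\n';
     }
     Out << '\n';   // empty line terminates the message
  }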
|
|
|
|
|
|
|
| |
This avoids running the Proxy-Auto-Detect script inside the
untrusted (well, less trusted for now) sandbox. This will allow
us to restrict the http method from fork()ing or exec()ing via
seccomp.
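With the script out of the way, the method can later be confined with a filter
along these lines (libseccomp sketch, far less complete than a real policy):

  #include <seccomp.h>
  #include <cerrno>

  // deny process creation and program execution with EPERM, allow the rest
  static bool DropForkExec()
  {
     scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
     if (ctx == nullptr)
        return false;
     bool const ok =
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fork), 0) == 0 &&
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(vfork), 0) == 0 &&
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(execve), 0) == 0 &&
        seccomp_load(ctx) == 0;
     seccomp_release(ctx);
     return ok;
  }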
|
|
|
|
|
|
|
|
|
| |
This isn't really used by the acquire system at all at the moment and
the only method potentially sending this information is file://, but
that used to work correctly before it was broken in 2013, so better fix
it now and worry about maybe using the data some day later.
Regression-Of: b3501edb7091ca3aa6c2d6d96dc667b8161dd2b9
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This makes it easier to see which headers include what.
The changes were done by running
git grep -l '#\s*include' \
| grep -E '\.(cc|h)$' \
| xargs sed -i -E 's/(^\s*)#(\s*)include/\1#\2 include/'
to modify all include lines by adding a space, and then running
./git-clang-format.sh.
|
|
|
|
|
|
|
|
| |
Most of them are in (old) code comments. For the two instances of user-visible
string changes, the po files of the manpages are fixed up as well.
Gbp-Dch: Ignore
Reported-By: spellintian
|
|
|
|
|
|
|
| |
If we don't give a specific error to report up, it is likely that all
errors currently in the error stack are equally important, so reporting
just one could turn out to be confusing, e.g. if name resolution failed
in a SRV record list.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
All apt versions support numeric as well as 3-character timezones just
fine and it's actually hard to write code which doesn't "accidentally"
accept them. So why change? Documenting the Date/Valid-Until fields in
the Release file is easy to do in terms of referencing the
datetime format used e.g. in the Debian changelogs (policy §4.4). This
format specifies only the numeric timezones though, not the nowadays
obsolete 3-character ones, so in the interest of least surprise we should
use the same format even though it carries a small risk of regression
in other clients (which encounter repositories created with
apt-ftparchive).
In case it really regresses in practice, the hidden option
-o APT::FTPArchive::Release::NumericTimezone=0
can be used to go back to good old UTC as timezone.
The EDSP and EIPP protocols use this 'new' format; the text interface
used to communicate with the acquire methods does not, for compatibility
reasons, even if none of our methods would be affected and I doubt any
other would (in these instances the timezone is 'GMT' as that is what
HTTP/1.1 requires). Note that this is only true for apt talking to
methods; (libapt-based) methods talking to apt will respond with the
'new' format. It is therefore strongly advised to support both in
method input as well.
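For illustration (date and path made up), the same Date field in both styles,
and how to get the old output back:

  Date: Fri, 01 Jul 2016 12:00:00 +0000   (numeric timezone, new default)
  Date: Fri, 01 Jul 2016 12:00:00 UTC     (3-character timezone, old output)

  apt-ftparchive -o APT::FTPArchive::Release::NumericTimezone=0 release dists/unstable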
|
|
|
|
|
|
|
|
|
|
|
| |
Setting the C++ locale via std::locale::global(std::locale("")); (which
would otherwise default to the classic C locale, i.e. unaffected by
setlocale) affects the formatting of numeric types in IO streams, which
for output meant for humans is perfectly sensible, but it breaks our many
text interfaces used and parsed by us and others who do not expect the
numbers to be formatted.
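A small demonstration of the effect (assuming an environment locale with digit
grouping such as en_US.UTF-8); streams created after the global() call pick up
the grouping:

  #include <iostream>
  #include <locale>
  #include <sstream>

  int main()
  {
     std::ostringstream plain;
     plain << 1234567;                       // classic "C" locale: "1234567"

     std::locale::global(std::locale(""));   // e.g. en_US.UTF-8 from the environment
     std::ostringstream grouped;
     grouped << 1234567;                     // may now be "1,234,567"

     std::cout << plain.str() << " vs " << grouped.str() << std::endl;
  }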
Closes: #825396
|
|
|
|
|
| |
Reported-By: cppcheck
Git-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Doing this disables the implicit copy assignment operator (among others),
which would cause havoc if used on these classes as it would just copy the
pointer, not the data the d-pointer points to. For most of the classes
we don't need a copy assignment operator anyway, and in many classes it
was broken before as many contain a pointer of some sort.
Only for our Cacheset container interfaces do we define an explicit copy
assignment operator, which could later be implemented to copy the data
from one d-pointer to the other if we need it.
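The pattern in a nutshell (illustrative class, not one of the real ones):

  class Example
  {
     void * const d;   // opaque private data ("d-pointer"); the const member
                       // already suppresses the implicit copy assignment operator
  public:
     Example();
     ~Example();
     // only the container interfaces that really need it declare an explicit
     //    Example &operator=(Example const &Other);
     // which would have to deep-copy what d points to
  };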
Git-Dch: Ignore
|
|
|
|
|
|
|
|
| |
To have a chance of keeping the ABI for a while we need all three to team
up. If one of them is missing we might lose, so ensuring that they are
available is a very tedious but needed task once in a while.
Git-Dch: Ignore
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
rred is responsible for unpacking and reading the patch files in one go,
but we currently only have hashes for the uncompressed patch files, so
the handler read the entire patch file before dispatching it to the
worker, which would read it again – both with an implicit uncompress.
Worse, while the workers operate in parallel the handler is the central
orchestration unit, so having it busy with work means the workers do
(potentially) nothing.
This means rred is working with 'untrusted' data, which is bad. Yet,
having the unpack in the handler meant that the untrusted uncompress was
done as root, which isn't better either. Now, we have it at least
contained in a binary which we can harden a bit better. In the long run,
we want hashes for the compressed patch files though, to be safe.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Having every item carry its own code to verify the file(s) it handles
is an error-prone process and easy to break, especially if items move
through various stages (download, uncompress, patching, …). With a giant
rework we centralize (most of) the verification to have a better
enforcement rate and (hopefully) less chance for bugs, but it breaks the
ABI bigtime in exchange – and as we break it anyway, it is broken even
harder.
It shouldn't affect most frontends as they don't deal with the acquire
system at all or implement their own items, but some do and will need to
be patched (might be an opportunity to use apt on-board material).
The theory is simple: Items implement methods to decide if hashes need to
be checked (in this stage) and to return the expected hashes for this
item (in this stage). The verification itself is done in worker message
passing, which has the benefit that a hashsum error is now a proper error
for the acquire system rather than a Done() which is later revised to a
Failed().
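Schematically, an item only has to answer two questions and the worker message
passing does the comparison; the method names follow the description above but
should be read as a sketch, not the real pkgAcquire::Item:

  #include <apt-pkg/hashes.h>

  class Item
  {
  public:
     virtual ~Item() = default;
     // does this stage need checking at all?
     virtual bool HashesRequired() const { return true; }
     // what do we expect the file to hash to in this stage?
     virtual HashStringList GetExpectedHashes() const = 0;
  };
  // worker side: a mismatch between the hashes computed while downloading
  // and GetExpectedHashes() is reported as a proper failure right away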
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Reimplementing an inline method is opening a can of worms we don't want
to open if we ever want to use a d-pointer in those classes, so we do the
only thing which can save us from hell: move the destructors into the cc
sources and we are good.
Technically not an ABI break as the methods, inline or not, do the same
(nothing), so a program compiled against the old version still works
with the new version (besides, this version is still in experimental,
so nothing has really been built against this library anyway).
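In other words (illustrative names): the header only declares the destructor
and the out-of-line definition, however empty, lives in the .cc file where a
d-pointer can later be deleted:

  // example.h
  class Example
  {
     void *d;
  public:
     ~Example();            // declared only, no longer the inline ~Example() {}
  };

  // example.cc
  Example::~Example() {}    // future home of e.g. the delete for d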
Git-Dch: Ignore
|
|\
| |
| |
| | |
debian/experimental
|
| |\
| | |
| | |
| | | |
feature/expected-size
|
| | | |
|
| |\ \ |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
This ensures that we can stop downloading if the server sends
too much data by accident (or with malicious intent).
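Conceptually (names invented) the receive loop can now bail out early:

  // abort the transfer once more data arrived than the acquire
  // system announced; 0 means the size is unknown
  static bool TooMuchData(unsigned long long const Received,
                          unsigned long long const Expected)
  {
     return Expected != 0 && Received > Expected;
  }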
|
|\ \ \ \
| | |_|/
| |/| | |
|
| | |/
| |/|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
On travis-ci connect.cc detects a rotation, triggering it to store the IP
which is later appended to the error message. That is all nice and
great if we deal with a real server, but in the testcases it just
triggers failures as the strings do not match.
Git-Dch: Ignore
|
|/ /
| |
| |
| | |
Git-Dch: ignore
|
|/ |
|
|
|
|
|
|
| |
Now that we have all hashes in the acquire system, pass the info down to
the methods so that they can use it in the request and/or to precheck the
response.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It is not very extensible to have the supported hashes hardcoded
everywhere, especially if they are part of virtual method names.
It is also possible that a method does not support the 'best' hash
(yet), so we might end up not being able to verify a file even though we
have a common subset of supported hashes. And those are just two of the
cases in which it is handy to have a more dynamic selection.
The downside is that this is a MAJOR API break, but the HashStringList
has a string constructor for compatibility, so with a bit of luck the
few frontends playing with the acquire system directly are okay.
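Roughly how such a list is meant to be used (sketch; exact member names may
differ from the final HashStringList API):

  #include <apt-pkg/hashes.h>
  #include <string>

  static bool CheckFile(std::string const &Path)
  {
     // collect whatever hashes we happen to know about the file …
     HashStringList Expected;
     Expected.push_back(HashString("SHA256", "0123..."));
     Expected.push_back(HashString("MD5Sum", "abcd..."));
     // … and let the list decide which one to verify with, instead of
     // hardcoding a single algorithm in virtual method names
     return Expected.usable() && Expected.VerifyFile(Path);
  }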
|
|
|
|
|
|
|
|
| |
Besides being a bit cleaner, it hopefully also resolves the oddball problems
I have with high levels of parallel jobs.
Git-Dch: Ignore
Reported-By: iwyu (include-what-you-use)
|
|
|
|
|
|
|
| |
- handle redirections in the worker with the right method instead of
in the method the redirection occurred in (Closes: #668111)
* methods/http.cc:
- forbid redirects to change protocol
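The protocol restriction boils down to a check like this before following a
Location header (sketch, not the actual http.cc code):

  #include <string>

  // only follow a redirect that keeps the URI scheme, so e.g. an
  // https request cannot be silently downgraded to http
  static bool SameScheme(std::string const &From, std::string const &To)
  {
     auto const f = From.find("://");
     auto const t = To.find("://");
     return f != std::string::npos && t != std::string::npos &&
            From.compare(0, f, To, 0, t) == 0;
  }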
|
|
|
| |
- factor out into private Dequeue() to fix access to deleted pointer
|
|\ |
|
| | |
|
|\ \ |
|
| |\| |
|
| | |
| | |
| | |
| | |
| | |
| | | |
done on the micro-optimization level, so let's fix them:
(performance) Possible inefficient checking for emptiness.
(performance) Prefer prefix ++/-- operators for non-primitive types.
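The two warning classes boil down to patterns like these:

  #include <string>
  #include <vector>

  static void Examples(std::string const &Name, std::vector<int> const &List)
  {
     // (performance) Possible inefficient checking for emptiness
     if (Name.length() > 0) { /* before */ }
     if (Name.empty() == false) { /* what cppcheck prefers */ }

     // (performance) Prefer prefix ++/-- operators for non-primitive types
     for (auto I = List.begin(); I != List.end(); I++) { /* before: copies the iterator */ }
     for (auto I = List.begin(); I != List.end(); ++I) { /* after */ }
  }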
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
is redundant in Redirect() as we can't reach the code with null anyway
[apt-pkg/acquire-method.cc:433]: (error) Possible null pointer dereference:
Queue - otherwise it is redundant to check if Queue is null at line 425
|
|/ / |
|
|\ \
| |/
|/| |
|
| | |
|
| | |
|
| |
| |
| |
| | |
- write directly to stdout instead of creating the message in
memory first, to avoid hitting limits
|
| | |
|
|/
|
| |
- when downloading data, show the mirror being used
|
|\ |
|
| | |
|
| |
| |
| |
| | |
of this item is ok and does not need to be tried on all mirrors
|
|\| |
|