| Commit message | Author | Age | Files | Lines |
| |
In LP#835625, it was reported that apt did not unpack multi-arch
packages in the correct order, and dpkg did not like that. The fix
also made apt configure packages together, which is not strictly
necessary.
This has now turned out to cause issues, because dependencies on
libc6:i386 caused immediate configuration of that package to not
work.
Work around the issue by not configuring Multi-Arch: same packages
in lockstep if they have the immediate flag set. This will be the
pseudo-essential set, and given how essential works, we mostly need
the native arch to work correctly anyway.
LP: #1871268
Regression-Of: 30426f4822516bdd26528aa2e6d8d69c1291c8d3
|
| |
That mostly means deleting symbols which went private or have
disappeared and were previously compiler artefacts.
|
| |
The versions "needing" these fixes are at least five years old, so in an
effort to save massive amounts of runtime and disk space (on aggregate at
least) we can drop these lines.
Reported-By: lintian maintainer-script-supports-ancient-package-version
|
| |
Reported-By: dh_missing
|
| |
| CMake Warning (dev) at /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
| The package name passed to `find_package_handle_standard_args` (Berkeley)
| does not match the name of the calling package (BerkeleyDB). This can lead
| to problems in calling code that expects `find_package` result variables
| (e.g., `_FOUND`) to follow a certain pattern.
| Call Stack (most recent call first):
| CMake/FindBerkeleyDB.cmake:57 (find_package_handle_standard_args)
| CMakeLists.txt:83 (find_package)
| This warning is for project developers. Use -Wno-dev to suppress it.
And indeed, we checked for BERKLEY_DB_FOUND, which was never defined,
so our HAVE_BDB was not set – but since HAVE_BDB is never used, this
went unnoticed.
|
| |
Closes: #968414
|
| |
mirror.fail points to porn now apparently.
Cc: stable
|
| |
We accidentally excluded virtual packages by excluding every
group that had a package, but where the package had no versions.
Rewrite the code so the lookup consistently uses VersionList()
instead of FirstVersion and FindPkg("any") - those are all the
same, and this is easier to read.
|
| |
We passed "false" instead of false, and that got converted to bool
as true, because a string literal decays to a non-null pointer.
LP: #1876495
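The pitfall can be reproduced in a couple of lines (the function name here is made up for illustration): in C++, a string literal decays to a non-null `const char*`, which implicitly converts to `true` wherever a `bool` parameter is expected.

```cpp
#include <cassert>

// Illustrative stand-in for a function taking a bool flag, like the call
// site the commit fixed. Passing the string literal "false" does not pass
// the value false: the literal decays to a non-null const char*, and that
// pointer implicitly converts to bool as true.
bool flag_as_received(bool Verify) { return Verify; }
```

Compilers can warn about this conversion (e.g. Clang's -Wstring-conversion), which is one way such bugs get caught.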
|
| |
We are seeing more and more installations fail due to immediate
configuration issues related to libc6. Immediate configuration is
supposed to ensure that an essential package is configured immediately,
just in case some other packages use a part of the essential package
that only works if that package is configured.
This used to be a warning; it was turned into an error in some commit I
can't remember right now. Importantly, though, the error was missing a
return, which means that ordering completed successfully and packages
were installed anyway; and after all that happened successfully, we'd
print an error at the end and exit with an error code, which is not
super useful.
Revert the error back to a warning such that the behavior stays the same,
but we do not fail (unless we mess up ordering, which then gets caught by
a consistency check later on).
Closes: #953260
Closes: #972552
LP: #1871268
|
| |
Closes: #970037
[jak: Fix typo extended_status -> extended_states]
|
| |
Closes: #969086
|
| |
|
|\
| |
| |
| |
| | |
Add better acquire debugging support
See merge request apt-team/apt!130
|
| |
| | |
The old code was fairly confusing, and contradictory. Notably, the
second `if` also only applied to the Data state, whereas we already
terminated the Data state earlier. This was bad.
The else fallback applied in three cases:
(1) We reached our limit
(2) We are Persistent
(3) We are headers
Now, it always failed as a transient error if it had
nothing left in the buffer. BUT: Nothing left in the buffer
is the correct thing to happen if we were fetching content.
Checking all combinations for the flags, we can compare the results
of Die() between 2.1.7 - the last "known-acceptable-ish" version
and this version:
2.1.7 this
Data !Persist !Space !Limit OK (A) OK
Data !Persist !Space Limit OK (A) OK
Data !Persist Space !Limit OK (C) OK
Data !Persist Space Limit OK OK
Data Persist !Space !Limit ERR ERR *
Data Persist !Space Limit OK (B) OK
Data Persist Space !Limit ERR ERR
Data Persist Space Limit OK OK
=> Data connections are OK if they have not reached their limit,
or are persistent (in which case they'll probably be chunked)
Header !Persist !Space !Limit ERR ERR
Header !Persist !Space Limit ERR ERR
Header !Persist Space !Limit OK OK
Header !Persist Space Limit OK OK
Header Persist !Space !Limit ERR ERR
Header Persist !Space Limit ERR ERR
Header Persist Space !Limit OK OK
Header Persist Space Limit OK OK
=> Common scheme here is that header connections are fine if they have
read something into the input buffer (Space). The rest does not matter.
(A) Non-persistent connections with !space always enter the else clause, hence success
(B) no Space means we enter the if/else, we go with else because IsLimit(), and we succeed because we don't have space
(C) Having space we do enter the while (WriteSpace()) loop, but we never reach IsLimit(),
hence we fall through. Given that our connection is not persistent, we fall through to the
else case, and there we win because we have data left to write.
|
| |
| | |
We do not want to end up in a code path, while reading content
from the server, where we have local data left to write, which
can happen if a previous read included both headers and content.
Restructure Flush() to accept a new argument allowing incomplete
flushes (which do not match our limit), so that it can flush as
far as possible, and modify Go() to use that before and after
reading from the server.
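The new calling convention can be sketched as follows (a toy model with invented names, not apt's actual buffer class): Flush() takes a flag that permits stopping short of the configured limit, writing out as much as the receiver accepts.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Toy model of a buffer with an incomplete-flush mode, mirroring the
// new Flush() argument described above. Names are illustrative.
struct OutBuffer {
   std::string Data;
   std::size_t Limit = 0;   // how many bytes the receiver accepts right now

   // Writes what it can. Reports success if the buffer ended up empty,
   // or if an incomplete flush was explicitly permitted.
   bool Flush(bool AllowIncomplete) {
      std::size_t Write = Data.size() < Limit ? Data.size() : Limit;
      Data.erase(0, Write);   // pretend these bytes went out to the file
      return Data.empty() || AllowIncomplete;
   }
};
```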
|
| |
| |
| |
| | |
This causes some more issues, really.
|
| |
| |
| |
| |
| | |
We have successfully finished reading data if our buffer is empty,
so we don't need to do any further checks.
|
|/
|
|
|
| |
If we have errors pending, always log them with our failure
message to provide more context.
|
|\
| |
| |
| |
| | |
Default Acquire::AllowReleaseInfoChange::Suite to "true"
See merge request apt-team/apt!128
|
|/
|
|
| |
Closes: #931566
|
| |
|
|\
| |
| |
| |
| | |
http: Fix infinite loop on read errors
See merge request apt-team/apt!126
|
| |
| | |
While we fixed the infinite retrying earlier, we still have
problems if we retry in the middle of a transfer: we might
end up resuming downloads that are already done and read
more than we should (removing the IsOpen() check so that
it always retries makes test-ubuntu-bug-1098738-apt-get-source-md5sum
fail with wrong file sizes).
I think the retrying was added to fix up pipelining mess-ups,
but we have better solutions now, so let's get rid of it
until we have implemented this properly.
|
| |
| | |
When we failed after a retry, we only communicated failure as
transient, but this seems wrong, especially given that the code
now always triggers a retry when Die() is called, as Die() closes
the server fd.
Instead, remove the error handling in that code path, and reuse
the existing fatal-ish error code handling path.
|
| |
| | |
If there was a transient error and the server fd was closed, the
code would infinitely retry - it never reached FailCounter >= 2
because it falls through to the end of the loop, which sets
FailCounter = 0.
Add a continue just like the DNS rotation code has, so that the
retry actually fails after 2 attempts.
Also rework the error logic to forward the actual error message.
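The counting bug can be modeled in a few lines (names are illustrative, not apt's actual loop): without the `continue`, control falls through to the reset at the bottom of the loop, so the counter never reaches the give-up threshold.

```cpp
#include <cassert>

// Minimal model of the retry loop described above. Without the
// `continue`, falling through to the end of the loop resets FailCounter
// to 0, so `FailCounter >= MaxFailures` is never reached and a closed
// server fd retries forever. The fix: `continue` after counting.
int AttemptsUntilGiveUp(int MaxFailures) {
   int FailCounter = 0;
   int Attempts = 0;
   while (true) {
      ++Attempts;
      bool TransientError = true;   // simulate the server fd closing every time
      if (TransientError) {
         if (++FailCounter >= MaxFailures)
            break;                  // give up after MaxFailures attempts
         continue;                  // the previously missing continue
      }
      FailCounter = 0;              // success path resets the counter
   }
   return Attempts;
}
```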
|
|/
|
|
| |
See merge request !127 for more information.
|
| |
|
|\
| |
| |
| |
| | |
Pu/http fixes 2
See merge request apt-team/apt!125
|
| |
| | |
We only add the file to the select() call if we have data to
write to it prior to the select() call. This is problematic:
Assuming we enter Go() with no data to write to the file,
but we read some from the server as well as an EOF, we end
up not writing it to the file because we did not add the file
to the select.
We can't always add the file to the select(), because it's
basically always ready and we don't want to wake up if we
don't have anything to read or write.
So for a solution, let's just always write data to the file
if there's data to write to it. If some gets leftover, or if
some was already present when we started Go(), it will still
be added to the select() call and unblock it.
Closes: #959518
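The stranded-data scenario can be modeled at the logic level (invented names; a sketch, not apt's Go() implementation): once pending data is always drained to the file in the same iteration, bytes that arrive together with EOF are no longer lost.

```cpp
#include <cassert>
#include <string>

// Logic-level model of the Go() loop change. Previously the file fd was
// only added to select() when data was already pending before the call,
// so data read together with EOF could be stranded in the buffer.
struct GoLoop {
   std::string FileBuffer;     // data waiting to be written to the file
   std::string FileContents;   // what actually reached the file

   // One iteration: take whatever the server produced (possibly together
   // with EOF), then always drain the buffer to the file, as the fix does.
   bool Iterate(const std::string &FromServer, bool ServerEof) {
      FileBuffer += FromServer;
      FileContents += FileBuffer;   // always write pending data
      FileBuffer.clear();
      return !ServerEof;            // false once the server is done
   }
};
```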
|
|\ \
| | |
| | |
| | |
| | | |
Support marking all newly installed packages as automatically installed
See merge request apt-team/apt!110
|
| | |
| | | |
Add option '--mark-auto' to 'apt install' that marks all newly installed
packages as automatically installed.
Signed-off-by: Nicolas Schier <nicolas@fjasle.eu>
|
|\ \ \
| | | |
| | | |
| | | |
| | | | |
Remove master/slave terminology
See merge request apt-team/apt!124
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | | |
Apologies.
|
| | | |
| | | |
| | | |
| | | | |
Sorry!
|
|\ \ \ \
| |_|_|/
|/| | |
| | | |
| | | | |
Fully deprecate apt-key, schedule removal for Q2/2022
See merge request apt-team/apt!119
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Maintainer scripts that need to use apt-key del might as well
depend on gpg, they don't need the full gnupg suite.
|
| | | |
| | | | |
People are still using apt-key add and friends, despite that not
being guaranteed to work. Let's tell them to stop doing so.
We might still want a list command at a future point, but this
needs deciding, and a blanket ban atm seems like a sensible step
until we figure that out.
|
|\ \ \ \
| |_|/ /
|/| | |
| | | |
| | | | |
Pu/http fixes
See merge request apt-team/apt!122
|
| | | |
| | | | |
Instead of reading the data early, disable the timeout for the
select() call and read the data later. Also, change Read() to
call read() only once when draining the buffer in such instances.
We could optimize this to call read() multiple times if there
is also pending stuff on the socket, but that is slightly more
complex and should not provide any benefits.
|
| | | |
| | | |
| | | |
| | | |
| | | | |
The error handling in Die() that's supposed to add useful error
messages is not super useful here.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This avoids a case where we read data, then write to the server
and only then realize the connection was closed. It is somewhat
slower, though.
|
| | | |
| | | | |
By changing the buffer implementation to return true if it
read or wrote something, even on EOF, we should not have a
need to flush the buffer in Die() anymore - we should only
be calling Die() if the buffer is empty now.
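The changed contract can be sketched like this (illustrative code, not apt's buffer implementation): a read that transfers bytes reports success even when it also hits EOF, so callers only treat zero progress as the error case that warrants Die().

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Sketch of the changed buffer contract: report true whenever bytes were
// transferred, even if this read also reached EOF; only "no progress at
// all" is reported as false (the genuine end-of-stream / Die() case).
bool ReadFromSource(std::string &Buffer, const std::string &Source,
                    std::size_t &Offset) {
   if (Offset >= Source.size())
      return false;               // nothing transferred: genuine EOF
   Buffer += Source.substr(Offset);
   Offset = Source.size();
   return true;                   // made progress, even if we hit EOF
}
```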
|