<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/apt-pkg/acquire-worker.cc, branch 1.2.11</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=1.2.11</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=1.2.11'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2016-04-07T11:48:31Z</updated>
<entry>
<title>stop handling items in doomed transactions</title>
<updated>2016-04-07T11:48:31Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-04-05T23:08:57Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=38f8704e419ed93f433129e20df5611df6652620'/>
<id>urn:sha1:38f8704e419ed93f433129e20df5611df6652620</id>
<content type='text'>
With the previous commit we track the state of transactions, so we can
now use our knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).

This is needed as multiple independent processes interact here, so there
is no simple immediate full-engine stop, and it would also be bad to
teach each and every item how to check whether its manager has failed
and what to do in that case.

In the pdiff case, which (potentially) deals with many items during its
lifetime, a hashsum mismatch in another file can, for example, abort the
transaction to which the file we are trying to patch belongs. This
causes some of the items (which are already done) to be aborted with it,
but items still in the process of acquisition continue processing and
will later try to use all the items together, failing in strange ways
because the cleanup has already happened.

The chosen solution is instead to dry up the communication channels: new
requests for data acquisition are ignored, requests which are not yet
assigned to a queue are cancelled, and Done/Failed is no longer called
on items. This means that already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further, so the next update command
will pick them up and put them to good use while the current process
fails updating (for this transaction group) in an orderly fashion.
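
The draining idea can be pictured with a tiny stand-alone sketch (the
class and string ids here are illustrative only; apt's real code tracks
state on the transaction-manager level in C++): once a transaction is
recorded as aborted, messages for its items are simply dropped instead
of dispatched.

```python
# Illustrative model only: apt's real code tracks state on the C++
# transaction-manager level, not via string ids as done here.
class TransactionTracker:
    def __init__(self):
        self.aborted = set()   # ids of transactions closed via abort

    def abort(self, transaction_id):
        self.aborted.add(transaction_id)

    def should_process(self, transaction_id):
        # Data messages for a doomed transaction are ignored: the
        # channel "dries up" instead of every item handling the failure.
        return transaction_id not in self.aborted
```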

Closes: 817240
Thanks: Barr Detwix &amp; Vincent Lefevre for log files
</content>
</entry>
<entry>
<title>Use descriptive URIs in 104 Warning messages</title>
<updated>2016-03-16T17:34:47Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2016-03-16T17:31:26Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=421807e1d38c58b776be0b20faed94c5316d38eb'/>
<id>urn:sha1:421807e1d38c58b776be0b20faed94c5316d38eb</id>
<content type='text'>
This makes the new GPG-related warnings much nicer to read; for example,
the second line here replaces the first one:

W: gpgv:/var/lib/apt/lists/example.com_dists_stable_InRelease: Weak ...
W: http://example.com/dists/stable/InRelease: Weak ...
</content>
</entry>
<entry>
<title>apt-pkg/acquire-worker.cc: Introduce 104 Warning message</title>
<updated>2016-03-15T11:33:21Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2016-03-15T10:40:10Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=8c9b7725c3d89461e78061aff4bc644cdb237fe7'/>
<id>urn:sha1:8c9b7725c3d89461e78061aff4bc644cdb237fe7</id>
<content type='text'>
This can be used by workers to send warnings to the main
program. The messages will be passed to _error-&gt;Warning()
by APT with the URI prepended.

We are not going to make that really public now, as the
interface might change a bit.
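
As a rough sketch of the message flow (the helper name is hypothetical;
only the "URI prepended" behaviour comes from the text above): the
worker sends a 104 status line carrying a warning text, and the main
program turns it into a warning string led by the item's URI.

```python
def format_worker_warning(uri, message):
    # Hypothetical helper: build the text handed to apt's warning
    # channel by prepending the item's URI to the worker's message.
    return uri + ": " + message
```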
</content>
</entry>
<entry>
<title>act on various suggestions from cppcheck</title>
<updated>2016-01-26T14:32:15Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-25T21:13:52Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=2651f1c071927b7fc440ec7a638ecad7ccf04a2e'/>
<id>urn:sha1:2651f1c071927b7fc440ec7a638ecad7ccf04a2e</id>
<content type='text'>
Reported-By: cppcheck
Git-Dch: Ignore
</content>
</entry>
<entry>
<title>do not use _apt for file/copy sources if it isn't world-accessible</title>
<updated>2015-11-19T15:46:29Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-11-18T18:31:40Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=514a25cbcd2babb2a9c4485fc7b9a4256b7f6ff3'/>
<id>urn:sha1:514a25cbcd2babb2a9c4485fc7b9a4256b7f6ff3</id>
<content type='text'>
In 0940230d we started dropping privileges for file (and a bit later for
copy, too) with the intent of making this uniform for all methods. The
commit message says that the source will likely fail based on the
compressors already – and there isn't much secret in the repository
content. After all, once apt has run the update everyone can access the
content via apt anyway…

There are sources, though, which worked before – mostly single-deb ones
(and those with the uncompressed files available). The first case may be
especially surprising for users, so instead of failing, apt now detects
that it can't access a source as _apt and in that case doesn't drop
privileges (for all sources!) – but we limit this to file/copy, so an
uncompress step which might be needed will still fail – but that failed
before this regression, too.

We display a notice about this, mostly so that if it still fails (e.g.
compressed) the user has some idea what is wrong.
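
The accessibility test can be sketched with a plain stat() call (the
function name is illustrative, not apt's actual helper): privileges are
only dropped if the unprivileged _apt user could read the file anyway.

```python
import os

def world_readable(path):
    # A file/copy source is only fetched as the unprivileged user if
    # "others" hold the read bit; that bit is the 4 in the last octal
    # digit of the file mode.
    try:
        mode = os.stat(path).st_mode
    except OSError:
        return False        # cannot even stat it: treat as inaccessible
    return mode % 8 in (4, 5, 6, 7)
```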

Closes: 805069
</content>
</entry>
<entry>
<title>wrap every unlink call to check for != /dev/null</title>
<updated>2015-11-04T17:42:28Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-11-02T17:49:52Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ce1f3a2c616b86da657c1c796efa5f4d18c30c39'/>
<id>urn:sha1:ce1f3a2c616b86da657c1c796efa5f4d18c30c39</id>
<content type='text'>
Unlinking /dev/null is bad; we shouldn't do that. Also, we should print
at least a warning if we tried to unlink a file but didn't manage to
pull it off (ignoring the cases where the file is /dev/null or doesn't
exist in the first place).

This was triggered by a relatively unlikely-to-cause-problems spot in
pkgAcquire::Worker::PrepareFiles, which, while handling temporary
uncompressed files (which are set to be kept compressed), would figure
out that two files are the same and prepare for sharing by deleting
them. Bad move. That also shows why not printing a warning is a bad
idea, as this hid the error in non-root test runs.
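
A minimal version of such a wrapper could look like this (the name and
exact signature are assumptions for this sketch): it refuses to touch
/dev/null, ignores already-missing files, and warns on every other
unlink failure.

```python
import errno
import os
import sys

def remove_file(caller, path):
    # Sketch of the wrapper: never unlink /dev/null, stay quiet if the
    # file is already gone, and warn on any other unlink failure.
    if path == "" or path == "/dev/null":
        return True
    try:
        os.unlink(path)
    except OSError as err:
        if err.errno == errno.ENOENT:
            return True     # already gone: not an error
        sys.stderr.write(caller + ": unlinking " + path + " failed\n")
        return False
    return True
```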

Git-Dch: Ignore
</content>
</entry>
<entry>
<title>add ConnectionTimedOut to transient failreasons list</title>
<updated>2015-11-04T17:04:01Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-10-12T14:48:59Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9f301e0f2d828d662bd67da2be9d8f227caadd07'/>
<id>urn:sha1:9f301e0f2d828d662bd67da2be9d8f227caadd07</id>
<content type='text'>
All other reasons from methods/connect.cc were already included.

Git-Dch: Ignore
</content>
</entry>
<entry>
<title>use std-algorithms instead of manual loops to avoid overflow warning</title>
<updated>2015-09-14T13:22:18Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-09-11T18:53:07Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ae732225ec2fa0d7434c9f40a92ced8683752211'/>
<id>urn:sha1:ae732225ec2fa0d7434c9f40a92ced8683752211</id>
<content type='text'>
Reported-By: gcc
Understandable: no
Git-Dch: Ignore
</content>
</entry>
<entry>
<title>use unusable-for-security hashes for integrity checks</title>
<updated>2015-09-01T12:19:44Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-09-01T11:58:00Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=63d609985eb7eefa5f2332bfe4fab96f017760a1'/>
<id>urn:sha1:63d609985eb7eefa5f2332bfe4fab96f017760a1</id>
<content type='text'>
We want to declare some hashes as not enough for security, so that a
user will need --allow-unauthenticated or similar to get data secured
only by those hashes, but we can still use these hashes for integrity
checks if we got them.
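
The split can be modelled in a few lines (the function names and the
exact weak set are assumptions for this sketch, not apt's API): a hash
type may be rejected for authentication while still being accepted for
corruption checks.

```python
WEAK_HASHES = {"MD5Sum", "SHA1"}   # sketch only: the weak set grows over time

def usable_for_security(hash_type):
    # Too weak to authenticate a download on its own.
    return hash_type not in WEAK_HASHES

def usable_for_integrity(hash_type):
    # Any hash we can compute still detects accidental corruption.
    return True
```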
</content>
</entry>
<entry>
<title>correct 'apt update' download summary line</title>
<updated>2015-08-27T09:27:43Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-08-21T22:10:08Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=b6a0018e1c4bb22132e0316a81b7a455c6333cf1'/>
<id>urn:sha1:b6a0018e1c4bb22132e0316a81b7a455c6333cf1</id>
<content type='text'>
Fetched() was reported for mostly nothing, while it should be called for
files fetched from non-local sources (e.g. http, but not file or xz).
Previously this was called from an acquire item, but it was moved to the
acquire worker instead to avoid having it (re)implemented in all items –
the checks there were faulty, though.
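
The intended check can be sketched by looking at the URI scheme (the
helper name and the scheme list are assumptions for illustration): only
transfers from non-local methods should count towards the Fetched()
total.

```python
LOCAL_METHODS = {"file", "copy", "gzip", "bzip2", "xz"}   # illustrative list

def counts_as_fetched(uri):
    # Bytes only count towards the download summary if the item came
    # over the network, not from a local method or a decompressor.
    scheme, sep, _ = uri.partition(":")
    if sep == "":
        return False
    return scheme not in LOCAL_METHODS
```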
</content>
</entry>
</feed>
