<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/apt-pkg/acquire.cc, branch 1.2.10</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=1.2.10</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=1.2.10'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2016-03-16T16:52:40Z</updated>
<entry>
<title>Get accurate progress reporting in apt update again</title>
<updated>2016-03-16T16:52:40Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2016-03-15T13:50:37Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=fb193b1cd43f0e8c3b7e5f69f183b9abe7e83761'/>
<id>urn:sha1:fb193b1cd43f0e8c3b7e5f69f183b9abe7e83761</id>
<content type='text'>
For the non-pdiff case, we can have accurate progress
reporting because after fetching the {,In}Release files we know
how many IndexFiles will be fetched and what size they have.

Therefore, init the filesize early (in pkgAcqIndex::Init) and
ensure that Acquire::Pulse() takes already downloaded bits into
account when calculating the progress.

Also improve debug output of Debug::acquire::progress
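
A minimal sketch of the calculation (illustrative names only, not
apt's actual members):

    #include <cstdint>
    #include <vector>

    struct Item { uint64_t FileSize; uint64_t PartialSize; bool Done; };

    // Items fetched completely count with their full size, so the
    // percentage no longer jumps around as items leave the queue.
    double Pulse(std::vector<Item> const &Items)
    {
       uint64_t Total = 0, Current = 0;
       for (auto const &I : Items)
       {
          Total += I.FileSize;
          Current += I.Done ? I.FileSize : I.PartialSize;
       }
       return Total == 0 ? 100.0 : (100.0 * Current) / Total;
    }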
</content>
</entry>
<entry>
<title>always download changelogs into /tmp first</title>
<updated>2016-02-11T22:13:47Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-02-11T21:54:49Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=6fd4b4c0b693b52cb8b593b76e5b60f77e500454'/>
<id>urn:sha1:6fd4b4c0b693b52cb8b593b76e5b60f77e500454</id>
<content type='text'>
pkgAcqChangelog defaults to downloading a changelog into a
temporary directory (inside /tmp, not /tmp directly), which is
cleaned up on shutdown. This can be overridden to store the
changelog more permanently – but that carries a permission
problem.

For changelogs we can 'easily' solve this by always downloading
to a temporary directory and only moving the file out of there
once done.
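
A sketch of that pattern, with hypothetical helper names (the real
logic lives in pkgAcqChangelog):

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    bool FetchChangelog(std::string const &FinalPath)
    {
       // private directory below /tmp, writable by the sandbox user
       char TmpDir[] = "/tmp/apt-changelog-XXXXXX";
       if (mkdtemp(TmpDir) == nullptr)
          return false;
       std::string const TmpFile = std::string(TmpDir) + "/changelog";
       // ... fetch into TmpFile here ...
       // move it out of the temporary directory only on success; note
       // that rename(2) fails across filesystems, so a real version
       // needs a copy fallback if /tmp is a separate mount
       return std::rename(TmpFile.c_str(), FinalPath.c_str()) == 0;
    }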
</content>
</entry>
<entry>
<title>revert file-hash based action-merging in acquire</title>
<updated>2016-01-15T01:45:35Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-15T01:45:35Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=feb674aba51dcb26f5281b5b38fbc5893f757170'/>
<id>urn:sha1:feb674aba51dcb26f5281b5b38fbc5893f757170</id>
<content type='text'>
Introduced in 9d2a8a7388cf3b0bbbe92f6b0b30a533e1167f40, apt tries
to merge actions like downloading the same file (as judged by
hashes) into doing it only once. The implementation was very
simple in that it does no planning at all. It turns out that this
works just fine 90% of the time, but it has issues in more
complicated situations in which items can be in different stages,
downloading different files and potentially emitting the "wrong"
hash – e.g. while pdiffs are worked on we might end up copying
the patch instead of the result file, giving us very strange
errors in return. Reverting the change until we can implement a
better planning solution seems to be the best course of action,
even if it's sad.
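
Roughly the reverted idea in miniature (illustrative types, not
apt's classes):

    #include <map>
    #include <string>
    #include <vector>

    struct Item { std::string ExpectedHash; /* ... */ };

    // all items waiting for a file with this hash
    std::map<std::string, std::vector<Item *>> Transactions;

    void Enqueue(Item *I)
    {
       auto &Waiters = Transactions[I->ExpectedHash];
       Waiters.push_back(I);
       if (Waiters.size() == 1)
          return; // first item really downloads the file
       // later items just copy the result of the first one.
       // The flaw: a multi-stage item (e.g. pdiff) emits hashes for
       // intermediate files too, so a waiter can end up copying a
       // patch instead of the final file.
    }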

Closes: 810046
</content>
</entry>
<entry>
<title>acquire: Allow parallelizing methods without hosts</title>
<updated>2016-01-07T16:31:24Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2016-01-07T16:06:55Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=b89cd2e36f9696ccd56167593c792116ae4fc97f'/>
<id>urn:sha1:b89cd2e36f9696ccd56167593c792116ae4fc97f</id>
<content type='text'>
The maximum parallelization soft limit is the number of CPU
cores * 2 on systems defining _SC_NPROCESSORS_ONLN. The hard
limit in all cases is Acquire::QueueHost::Limit.
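
A sketch of that limit computation (the fallback value here is an
assumption, not apt's):

    #include <algorithm>
    #include <unistd.h>

    long MaxParallel(long HardLimit /* Acquire::QueueHost::Limit */)
    {
       long Soft = 10; // assumed fallback without _SC_NPROCESSORS_ONLN
    #ifdef _SC_NPROCESSORS_ONLN
       long const Cores = sysconf(_SC_NPROCESSORS_ONLN);
       if (Cores > 0)
          Soft = Cores * 2;
    #endif
       return std::min(Soft, HardLimit);
    }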
</content>
</entry>
<entry>
<title>Use 0llu instead of 0ull in one place too</title>
<updated>2015-12-07T13:45:52Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2015-12-07T13:45:52Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=e551e1237da3cbba71f08f17dc57b07832b8d9ac'/>
<id>urn:sha1:e551e1237da3cbba71f08f17dc57b07832b8d9ac</id>
<content type='text'>
Gbp-Dch: ignore
</content>
</entry>
<entry>
<title>Avoid overflow when summing up file sizes</title>
<updated>2015-12-07T13:44:15Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2015-12-07T13:42:25Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=5a97834817dd43b7833881f38f512a9f2fdac8a9'/>
<id>urn:sha1:5a97834817dd43b7833881f38f512a9f2fdac8a9</id>
<content type='text'>
We need to pass 0llu instead of 0 as the init value, otherwise
std::accumulate will calculate with ints.
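
A minimal illustration of the pitfall:

    #include <cstdint>
    #include <numeric>
    #include <vector>

    int main()
    {
       std::vector<uint64_t> Sizes = {3000000000u, 3000000000u};
       // Bad: the int literal 0 makes the accumulator an int, so every
       // partial sum gets truncated back to int:
       //   std::accumulate(Sizes.begin(), Sizes.end(), 0);
       // Good: 0llu keeps the accumulator unsigned long long.
       auto const Total = std::accumulate(Sizes.begin(), Sizes.end(), 0llu);
       return Total == 6000000000llu ? 0 : 1;
    }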

Reported-by: Raphaël Hertzog
</content>
</entry>
<entry>
<title>Check if the Apt::Sandbox::User exists in CheckDropPrivsMustBeDisabled()</title>
<updated>2015-11-27T11:29:22Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2015-11-27T11:29:22Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ef39c148105cf30aea822022a5f41a120897cc65'/>
<id>urn:sha1:ef39c148105cf30aea822022a5f41a120897cc65</id>
<content type='text'>
If it does not exist, disable privilege dropping as there is
nothing we can drop to. This will unblock people with special
chroots or systems that deleted the "_apt" user.
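
A sketch of such a check (hypothetical helper name):

    #include <pwd.h>
    #include <string>

    bool SandboxUserExists(std::string const &User)
    {
       // getpwnam returns nullptr if the user is not known, e.g. in
       // special chroots or after "_apt" was deleted
       return getpwnam(User.c_str()) != nullptr;
    }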

Closes: #806406
</content>
</entry>
<entry>
<title>Deal with killed acquire methods properly instead of hanging</title>
<updated>2015-11-27T11:10:57Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2015-11-27T11:07:48Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=a416a90b631a430306df6ed3fa1d4f3a1bdc7949'/>
<id>urn:sha1:a416a90b631a430306df6ed3fa1d4f3a1bdc7949</id>
<content type='text'>
This fixes a regression caused by commit
95278287f4e1eeaf5d96749d6fc9bfc53fb400d0
which moved the error detection of RunFds() later into the loop.
However, this broke detecting issues like dead acquire methods.
Instead of relying on the global error state (which is bad)
we now pass a boolean value back from RunFds() and break on
false.
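
In outline (illustrative stand-ins, not the real mainloop):

    #include <sys/select.h>

    // returns false if a method process turned out to be dead
    static bool RunFds(fd_set *RSet, fd_set *WSet)
    {
       // ... dispatch ready file descriptors, notice closed pipes ...
       (void)RSet; (void)WSet;
       return true;
    }

    static bool Run()
    {
       fd_set RSet, WSet;
       for (;;)
       {
          FD_ZERO(&RSet);
          FD_ZERO(&WSet);
          // ... FD_SET the method fds, select(2), check for done ...
          if (RunFds(&RSet, &WSet) == false)
             return false; // a method died: error out instead of hanging
       }
    }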

Closes: #806406
</content>
</entry>
<entry>
<title>ignore lost+found in private directory cleanup</title>
<updated>2015-11-19T16:56:07Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-11-19T15:19:15Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=6aef1942f441e6e667982b92802907026d8cc7c6'/>
<id>urn:sha1:6aef1942f441e6e667982b92802907026d8cc7c6</id>
<content type='text'>
In ce1f3a2c we started warning when unlinking fails, which we
consistently do for directories. That isn't a problem as
directories usually aren't in the places we want to clean up –
with the potential exception of "lost+found", so let's ignore it
like we ignore our own partial/ subdirectory.
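
The cleanup in outline (hypothetical helper, simplified error
handling):

    #include <cstdio>
    #include <cstring>
    #include <dirent.h>
    #include <string>

    void CleanDir(std::string const &Dir)
    {
       DIR *const D = opendir(Dir.c_str());
       if (D == nullptr)
          return;
       for (dirent *E = readdir(D); E != nullptr; E = readdir(D))
       {
          if (strcmp(E->d_name, ".") == 0 || strcmp(E->d_name, "..") == 0 ||
              strcmp(E->d_name, "partial") == 0 ||
              strcmp(E->d_name, "lost+found") == 0)
             continue; // expected directories: skip without warning
          // a real version warns if this fails
          std::remove((Dir + "/" + E->d_name).c_str());
       }
       closedir(D);
    }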

Closes: 805424
</content>
</entry>
<entry>
<title>do not use _apt for file/copy sources if it isn't world-accessible</title>
<updated>2015-11-19T15:46:29Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-11-18T18:31:40Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=514a25cbcd2babb2a9c4485fc7b9a4256b7f6ff3'/>
<id>urn:sha1:514a25cbcd2babb2a9c4485fc7b9a4256b7f6ff3</id>
<content type='text'>
In 0940230d we started dropping privileges for file (and a bit
later for copy, too) with the intent of making this uniform for
all methods. The commit message says that such sources will
likely fail already based on the compressors – and there isn't
much secret in the repository content. After all, after apt has
run the update everyone can access the content via apt anyway…

There are sources though which worked before, mostly single-deb
ones (and those with the uncompressed files available).
The first kind might be especially surprising for users, so
instead of failing, we make apt detect that it can't access a
source as _apt and if so not drop privileges (for all sources!) –
but we limit this to file/copy, so an uncompress step which might
be needed will still fail – but that failed before this
regression too.

We display a notice about this, mostly so that if it still fails (e.g.
compressed) the user has some idea what is wrong.
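
The detection amounts to a permission check along these lines
(hypothetical helper name):

    #include <string>
    #include <sys/stat.h>

    // a file/copy source is only fetched as _apt if everyone could
    // read it anyway
    bool WorldReadable(std::string const &Path)
    {
       struct stat St;
       if (stat(Path.c_str(), &St) != 0)
          return false;
       return (St.st_mode & S_IROTH) != 0;
    }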

Closes: 805069
</content>
</entry>
</feed>
