<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/apt-pkg/acquire.cc, branch 1.3_exp1</title>
<subtitle>Debian's commandline package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=1.3_exp1</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=1.3_exp1'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2016-05-07T12:44:53Z</updated>
<entry>
<title>delay progress until Release files are downloaded</title>
<updated>2016-05-07T12:44:53Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-05-07T12:44:53Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1eba782fc3c55528a4da14d79e114874b9299453'/>
<id>urn:sha1:1eba782fc3c55528a4da14d79e114874b9299453</id>
<content type='text'>
Progress reporting used an "upper bound" on files we might get, except
that this wasn't correct in case pdiffs entered the picture. So instead
of calculating a value which is perhaps incorrect, we just accept that
we can't tell how many files we are going to download and stay at
0% until we know. Additionally, if we have pdiffs we wait until we have
these (sub)index files, too.

That could all be done better by downloading all Release files first and
planning with them in hand accordingly, but one step at a time.
</content>
</entry>
<entry>
<title>make random acquire queues work less random</title>
<updated>2016-04-25T13:35:52Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-04-06T10:50:26Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=4aa6ebf6d78131416ef173b1ce472f014da25136'/>
<id>urn:sha1:4aa6ebf6d78131416ef173b1ce472f014da25136</id>
<content type='text'>
Queues feeding workers like rred are created in a random pattern to get
a few of them to run in parallel – but if we already have an idle queue
we don't need to assign the job to a (potentially new) random queue.
That saves us the (arguably small) overhead of starting up a new queue,
avoids adding jobs to an already busy queue while others idle and as
a bonus reduces the size of debug logs a bit.

We also keep starting new queues now until we reach our limit before
we assign work at random to them, which should give us a more effective
utilisation overall compared to potentially adding work to busy queues
while we haven't reached our queue limit yet.
</content>
</entry>
<entry>
<title>Get accurate progress reporting in apt update again</title>
<updated>2016-03-16T16:52:40Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2016-03-15T13:50:37Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=fb193b1cd43f0e8c3b7e5f69f183b9abe7e83761'/>
<id>urn:sha1:fb193b1cd43f0e8c3b7e5f69f183b9abe7e83761</id>
<content type='text'>
For the non-pdiff case, we have can have accurate progress
reporting because after fetching the {,In}Release files we know
how many IndexFiles will be fetched and what size they have.

Therefore init the filesize early (in pkgAcqIndex::Init) and
ensure that in Acquire::Pulse() looks at already downloaded
bits when calculating the progress in Acquire::Pulse.

Also improve debug output of Debug::acquire::progress
</content>
</entry>
<entry>
<title>always download changelogs into /tmp first</title>
<updated>2016-02-11T22:13:47Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-02-11T21:54:49Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=6fd4b4c0b693b52cb8b593b76e5b60f77e500454'/>
<id>urn:sha1:6fd4b4c0b693b52cb8b593b76e5b60f77e500454</id>
<content type='text'>
pkgAcqChangelog has the default behaviour of downloading a changelog to
a temporary directory (inside /tmp, not /tmp directly), which is cleaned
up on shutdown. This can be overridden to store the changelog more
permanently – but that carries a permission problem.

For changelogs we can 'easily' solve this by always downloading to a
temporary directory and only moving the file out of there on done.
</content>
</entry>
<entry>
<title>revert file-hash based action-merging in acquire</title>
<updated>2016-01-15T01:45:35Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-01-15T01:45:35Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=feb674aba51dcb26f5281b5b38fbc5893f757170'/>
<id>urn:sha1:feb674aba51dcb26f5281b5b38fbc5893f757170</id>
<content type='text'>
Introduced in 9d2a8a7388cf3b0bbbe92f6b0b30a533e1167f40, apt tries to
merge actions like downloading the same (as judged by hashes) file
into doing it once. The implementation was very simple in that it does
no planning at all. It turns out that this works fine 90% of the time,
but it has issues in more complicated situations in which items can be
in different stages downloading different files, emitting potentially
the "wrong" hash – like while pdiffs are worked on we might end up
copying the patch instead of the result file, giving us very strange
errors in return. Reverting the change until we can implement a better
planning solution seems to be the best course of action, even if it's sad.

Closes: 810046
</content>
</entry>
<entry>
<title>acquire: Allow parallelizing methods without hosts</title>
<updated>2016-01-07T16:31:24Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2016-01-07T16:06:55Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=b89cd2e36f9696ccd56167593c792116ae4fc97f'/>
<id>urn:sha1:b89cd2e36f9696ccd56167593c792116ae4fc97f</id>
<content type='text'>
The maximum parallelization soft limit is the number of CPU
cores * 2 on systems defining _SC_NPROCESSORS_ONLN. The hard
limit in all cases is Acquire::QueueHost::Limit.
</content>
</entry>
<entry>
<title>Use 0llu instead of 0ull in one place too</title>
<updated>2015-12-07T13:45:52Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2015-12-07T13:45:52Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=e551e1237da3cbba71f08f17dc57b07832b8d9ac'/>
<id>urn:sha1:e551e1237da3cbba71f08f17dc57b07832b8d9ac</id>
<content type='text'>
Gbp-Dch: ignore
</content>
</entry>
<entry>
<title>Avoid overflow when summing up file sizes</title>
<updated>2015-12-07T13:44:15Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2015-12-07T13:42:25Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=5a97834817dd43b7833881f38f512a9f2fdac8a9'/>
<id>urn:sha1:5a97834817dd43b7833881f38f512a9f2fdac8a9</id>
<content type='text'>
We need to pass 0llu instead of 0 as the init value, otherwise
std::accumulate will calculate with ints.

Reported-by: Raphaël Hertzog
</content>
</entry>
<entry>
<title>Check if the Apt::Sandbox::User exists in CheckDropPrivsMustBeDisabled()</title>
<updated>2015-11-27T11:29:22Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2015-11-27T11:29:22Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ef39c148105cf30aea822022a5f41a120897cc65'/>
<id>urn:sha1:ef39c148105cf30aea822022a5f41a120897cc65</id>
<content type='text'>
If it does not exist, disable priv dropping as there is nothing
we can drop to. This will unblock people with special chroots
or systems that deleted the "_apt" user.

Closes: #806406
</content>
</entry>
<entry>
<title>Deal with killed acquire methods properly instead of hanging</title>
<updated>2015-11-27T11:10:57Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2015-11-27T11:07:48Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=a416a90b631a430306df6ed3fa1d4f3a1bdc7949'/>
<id>urn:sha1:a416a90b631a430306df6ed3fa1d4f3a1bdc7949</id>
<content type='text'>
This fixes a regression caused by commit
95278287f4e1eeaf5d96749d6fc9bfc53fb400d0
which moved the error detection of RunFds() later into the loop.
However, this broke detecting issues like dead acquire methods.
Instead of relying on the global error state (which is bad)
we now pass a boolean value back from RunFds() and break on
false.

Closes: #806406
</content>
</entry>
</feed>
