<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/methods/http.cc, branch 1.2.8</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=1.2.8</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=1.2.8'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2015-09-14T13:22:18Z</updated>
<entry>
<title>fix two memory leaks reported by gcc</title>
<updated>2015-09-14T13:22:18Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-09-11T19:02:19Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=830a1b8c9e9a26dc1101167ac66a75c444902c4d'/>
<id>urn:sha1:830a1b8c9e9a26dc1101167ac66a75c444902c4d</id>
<content type='text'>
Reported-By: gcc -fsanitize=address -fno-sanitize=vptr
Git-Dch: Ignore
</content>
</entry>
<entry>
<title>Merge branch 'debian/sid' into debian/experimental</title>
<updated>2015-05-22T15:01:03Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2015-05-22T15:01:03Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=4fc6b7570c3e97b65c118b58cdf6729fa94c9b03'/>
<id>urn:sha1:4fc6b7570c3e97b65c118b58cdf6729fa94c9b03</id>
<content type='text'>
Conflicts:
	apt-pkg/pkgcache.h
	debian/changelog
	methods/https.cc
	methods/server.cc
	test/integration/test-apt-download-progress
</content>
</entry>
<entry>
<title>Fix endless loop in apt-get update that can cause disk fillup</title>
<updated>2015-05-22T13:28:53Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2015-05-22T13:28:53Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ceafe8a6edc815df2923ba892894617829e9d3c2'/>
<id>urn:sha1:ceafe8a6edc815df2923ba892894617829e9d3c2</id>
<content type='text'>
The apt http code parses Content-Length and Content-Range. For
both headers the same variable "Size" is used, and the semantic of
this Size is the total file size. However, Content-Length is not
the entire file size for partial file requests. For servers that
send the Content-Range header first and then the Content-Length
header this can lead to clobbering of Size so that it is less than
the real file size. This may subsequently pass a negative number
into the CircleBuf, which leads to an endless loop that writes
data.
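
A hedged, self-contained sketch of the failure mode (names and the
parsing are illustrative, not the actual code in methods/http.cc):

    #include &lt;cstdio&gt;
    #include &lt;cstring&gt;

    static long long Size = 0;     // intended semantic: total file size
    static long long StartPos = 0; // resume offset of a ranged request

    static void ParseHeader(const char *Tag, const char *Val)
    {
       // Content-Range ("bytes 900-999/1000") carries the total size,
       // Content-Length only the length of the partial body (100 here).
       if (strcmp(Tag, "Content-Range") == 0)
          sscanf(Val, "bytes %lld-%*lld/%lld", &amp;StartPos, &amp;Size);
       else if (strcmp(Tag, "Content-Length") == 0)
          sscanf(Val, "%lld", &amp;Size); // clobbers the total if it comes last
    }

    int main()
    {
       ParseHeader("Content-Range", "bytes 900-999/1000");
       ParseHeader("Content-Length", "100");
       // "bytes still to write" goes negative, so the write loop
       // around the buffer never terminates
       printf("Size - StartPos = %lld\n", Size - StartPos); // -800
    }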

Thanks to Anton Blanchard for the analysis and initial patch.

LP: #1445239
</content>
</entry>
<entry>
<title>calculate hashes while downloading in https</title>
<updated>2015-04-18T23:13:09Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-04-11T08:23:52Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=34faa8f7ae2526f46cd1f84bb6962ad06d841e5e'/>
<id>urn:sha1:34faa8f7ae2526f46cd1f84bb6962ad06d841e5e</id>
<content type='text'>
We do this in HTTP already to give the CPU some exercise while the disk
is heavily spinning (or flashing?) to store the data, avoiding the need
to reread the entire file later on to calculate the hashes – which
happens outside of the eyes of progress reporting, so you might have
ended up with a bunch of https workers 'stuck' at 100% while they were
busy calculating hashes.

This is a bummer for everyone using apt as a connection speedtest, as
the https method now appears slower (not really: it just isn't
reporting done too early anymore).
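
A hedged sketch of the shape of the change as a libcurl write
callback (struct and member names are illustrative; the real code
lives in methods/https.cc):

    #include &lt;apt-pkg/fileutl.h&gt;
    #include &lt;apt-pkg/hashes.h&gt;

    struct DownloadState
    {
       Hashes Hash; // incremental digest contexts
       FileFd File; // destination file in partial/
    };

    // curl hands the body over in chunks; feed every chunk to the
    // hash contexts while it is written out, so the hashes are
    // finished the moment the download is - no second pass needed.
    static size_t WriteData(char *Data, size_t Blocks, size_t BlockSize,
                            void *UserPtr)
    {
       DownloadState *S = (DownloadState *)UserPtr;
       size_t const Bytes = Blocks * BlockSize;
       if (S-&gt;Hash.Add((unsigned char const *)Data, Bytes) == false)
          return 0; // a short return makes curl abort the transfer
       if (S-&gt;File.Write(Data, Bytes) == false)
          return 0;
       return Bytes;
    }

(Hooked up via curl_easy_setopt with CURLOPT_WRITEFUNCTION and
CURLOPT_WRITEDATA.)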
</content>
</entry>
<entry>
<title>calculate only expected hashes in methods</title>
<updated>2015-04-18T23:13:09Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-03-30T18:47:13Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9224ce3d4d1ea0428a70e75134998e08aa45b1e6'/>
<id>urn:sha1:9224ce3d4d1ea0428a70e75134998e08aa45b1e6</id>
<content type='text'>
Methods get told which hashes are expected by the acquire system, which
means we can use this list to restrict what we calculate in the
methods, since any extra hash we calculate is wasted effort: we can't
compare it with anything anyway.

Adding support for a new hash algorithm is therefore 'free' now, and if
an algorithm is no longer provided in a repository for a file, we
automatically stop calculating it.

In practice this results in a speed-up in Debian as we don't have SHA512
here (so far), so we practically stop calculating it.
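
A hedged sketch of the restriction (the flag and type names follow
apt-pkg/hashes.h, but treat the exact spelling as illustrative):

    #include &lt;apt-pkg/hashes.h&gt;

    // Turn the list of hashes the acquire system expects into flags,
    // so only those digest contexts are ever instantiated and fed.
    static unsigned int FlagsFor(HashStringList const &amp;Expected)
    {
       unsigned int Flags = 0;
       for (HashString const &amp;H : Expected)
       {
          if (H.HashType() == "MD5Sum") Flags |= Hashes::MD5SUM;
          else if (H.HashType() == "SHA1") Flags |= Hashes::SHA1SUM;
          else if (H.HashType() == "SHA256") Flags |= Hashes::SHA256SUM;
          else if (H.HashType() == "SHA512") Flags |= Hashes::SHA512SUM;
       }
       return Flags;
    }

    // Hashes Hash(FlagsFor(ExpectedHashes));
    // unrequested algorithms now cost nothing at all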
</content>
</entry>
<entry>
<title>handle servers closing encoded connections correctly</title>
<updated>2015-04-18T23:13:09Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-03-30T17:52:32Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=117038bac90261351518870b3f48136f134d4bfc'/>
<id>urn:sha1:117038bac90261351518870b3f48136f134d4bfc</id>
<content type='text'>
Servers which advertise that they close the connection get the 'Closes'
encoding flag, but this conflicts with servers which respond with a
transfer-encoding (e.g. chunked) as both are saved in the same flag.

We have a better flag for the keep-alive (or not) of the connection
anyway, so we check this instead of the encoding.

In practice this is not much of a problem, as the real servers we talk
to are HTTP/1.1 servers (with keep-alive), and there isn't much point
in doing chunked encoding if you are going to close the connection
anyway, but our simple testserver stumbles over this if pressed, and
it's a bit cleaner, too.
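
A hedged sketch of the decision (member names are illustrative,
loosely modelled on methods/server.h):

    struct ServerState
    {
       enum { Closes, Stream, Chunked } Encoding;
       bool HaveContentLength;
       bool Persistent; // was keep-alive negotiated?

       // 'Closes' used to double as "server will close" and "no
       // transfer-encoding", so a chunked reply on a closing
       // connection confused the reader. Decide EOF-delimited
       // bodies from the keep-alive state instead:
       bool IsEofDelimited() const
       {
          if (Encoding == Chunked)       // chunks carry their own lengths
             return false;
          if (HaveContentLength == true) // an explicit length wins
             return false;
          return Persistent == false;    // body ends when the server closes
       }
    };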

Git-Dch: Ignore
</content>
</entry>
<entry>
<title>derive more of https from http method</title>
<updated>2015-03-16T17:00:50Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2015-03-09T00:54:46Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=905fba60a046646a26a56b4c5d4a5dc7d5906f0d'/>
<id>urn:sha1:905fba60a046646a26a56b4c5d4a5dc7d5906f0d</id>
<content type='text'>
Bug #778375 uncovered that https wasn't properly integrated into the
class family tree of http as it was supposed to be, leading to a NULL
pointer dereference. Fixing this 'properly' was deemed too much diff
for practically no gain that late in the release, so commit
0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while
this commit here fixes the cause and adds a test.
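
Roughly, the intended family tree (a simplified sketch, not the
exact declarations):

    // shared request/response logic lives in one base class; http
    // and https only implement their transport underneath it
    class ServerMethod : public pkgAcqMethod { /* headers, 416s, ... */ };
    class HttpMethod : public ServerMethod { /* plain-socket I/O */ };
    class HttpsMethod : public ServerMethod { /* libcurl-backed I/O */ };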
</content>
</entry>
<entry>
<title>dispose http(s) 416 error page as non-content</title>
<updated>2014-12-22T13:23:39Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2014-11-29T16:59:52Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=92e8c1ff287ab829de825e00cdf94744e699ff97'/>
<id>urn:sha1:92e8c1ff287ab829de825e00cdf94744e699ff97</id>
<content type='text'>
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect it, leaving the page on the
socket to be parsed as the response for the next request (http) or as
file content (https), which isn't what we want at all… The symptom is
a "Bad header line", as html usually doesn't parse that well into an
http header.

This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range because the server doesn't
support it (or the file is just newer; think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…

To properly test this, our webserver gains the ability to reply with
Transfer-Encoding: chunked, as most real webservers use it to send
dynamically generated error pages.

(The tests and their binary helpers had to be slightly modified to
apply, but the patch to fix the issue itself is unchanged.)
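
A hedged sketch of the client-side handling (every name below is
illustrative, not the real identifiers):

    bool DealWith416(ServerState &amp;Server)
    {
       // the HTML error page is not the content we asked for: read it
       // off the socket and discard it, or it poisons the next response
       if (Server.HaveContent == true)
          DiscardBody(Server); // drains Content-Length/chunked bytes
       if (Server.PartialSize &gt;= Server.ExpectedSize)
          return MarkFileComplete(Server);  // we already have it all
       return RestartWithoutRange(Server);  // partial data was unusable
    }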

Closes: 768797
</content>
</entry>
<entry>
<title>dispose http(s) 416 error page as non-content</title>
<updated>2014-12-09T00:13:48Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2014-11-29T16:59:52Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ed793a19ec00b83254029509bc516e3ba911c75a'/>
<id>urn:sha1:ed793a19ec00b83254029509bc516e3ba911c75a</id>
<content type='text'>
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect it, leaving the page on the
socket to be parsed as the response for the next request (http) or as
file content (https), which isn't what we want at all… The symptom is
a "Bad header line", as html usually doesn't parse that well into an
http header.

This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range because the server doesn't
support it (or the file is just newer; think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…

To properly test this, our webserver gains the ability to reply with
Transfer-Encoding: chunked, as most real webservers use it to send
dynamically generated error pages.

Closes: 768797
</content>
</entry>
<entry>
<title>Fix backward compatibility of the new pkgAcquireMethod::DropPrivsOrDie()</title>
<updated>2014-10-13T09:29:47Z</updated>
<author>
<name>Michael Vogt</name>
<email>mvo@ubuntu.com</email>
</author>
<published>2014-10-13T08:57:30Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9983999d294887046abf386adc31190700d89b61'/>
<id>urn:sha1:9983999d294887046abf386adc31190700d89b61</id>
<content type='text'>
Do not drop privileges in the methods when using an older version of
libapt that does not support the chown magic in partial/ yet. To do
this, DropPrivileges() will now ignore an empty Apt::Sandbox::User.
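
A hedged sketch of the guard (simplified; the real DropPrivileges()
in apt-pkg/contrib/fileutl.cc does more, e.g. supplementary groups):

    #include &lt;apt-pkg/configuration.h&gt;
    #include &lt;pwd.h&gt;
    #include &lt;unistd.h&gt;

    bool DropPrivileges()
    {
       std::string const SandboxUser = _config-&gt;Find("APT::Sandbox::User");
       if (SandboxUser.empty() == true)
          return true; // old libapt: partial/ was never chowned, stay put
       struct passwd const * const pw = getpwnam(SandboxUser.c_str());
       if (pw == NULL)
          return false;
       // group first - after setuid we would lack the right to setgid
       if (setgid(pw-&gt;pw_gid) != 0 || setuid(pw-&gt;pw_uid) != 0)
          return false;
       return true;
    }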

Clean up all hardcoded _apt along the way.
</content>
</entry>
</feed>
