<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/methods/basehttp.cc, branch 1.6_alpha6</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=1.6_alpha6</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=1.6_alpha6'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2017-12-13T22:56:29Z</updated>
<entry>
<title>report transient errors as transient errors</title>
<updated>2017-12-13T22:56:29Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-10-25T22:57:26Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=47c0bdc310c8cd62374ca6e6bb456dd183bdfc07'/>
<id>urn:sha1:47c0bdc310c8cd62374ca6e6bb456dd183bdfc07</id>
<content type='text'>
The Fail method for acquire methods has a boolean parameter indicating
the transient nature of a reported error. The problem with this is that
Fail is called very late, at a point where it is no longer easy to tell
whether an error is indeed transient, so some calls set the flag and
some didn't, and the acquire system would later mostly ignore the
transient flag anyway, guessing from the FailReason instead.

By introducing a tri-state enum we can pass the information about fatal
or transient errors through the call stack and generate the correct
failures.
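
Such a tri-state could look like the following sketch (the names are
illustrative, not necessarily the identifiers used in apt):

```cpp
#include <cassert>

// Hypothetical tri-state replacing the old boolean: a result can be a
// success, a transient error (worth retrying) or a fatal one.
enum class ResultState { SUCCESSFUL, TRANSIENT_ERROR, FATAL_ERROR };

// When results bubble up the call stack, the worst one wins:
// fatal beats transient, transient beats success.
static ResultState Combine(ResultState const a, ResultState const b)
{
   if (a == ResultState::FATAL_ERROR || b == ResultState::FATAL_ERROR)
      return ResultState::FATAL_ERROR;
   if (a == ResultState::TRANSIENT_ERROR || b == ResultState::TRANSIENT_ERROR)
      return ResultState::TRANSIENT_ERROR;
   return ResultState::SUCCESSFUL;
}
```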
</content>
</entry>
<entry>
<title>mark some 500 HTTP codes as transient acquire errors</title>
<updated>2017-12-13T22:56:29Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-10-25T19:40:56Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=2f6aed72f656494d668918aa8ce4052d7c81e993'/>
<id>urn:sha1:2f6aed72f656494d668918aa8ce4052d7c81e993</id>
<content type='text'>
If retries are enabled only transient errors are retried, which covers
very few errors. At least for some HTTP codes it could be beneficial to
retry them though, so adding them seems like a good idea, if only to be
more consistent in what we report.
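
A sketch of such a classification (the exact set of codes is an
assumption for illustration, not necessarily the set apt chose):

```cpp
#include <cassert>

// Classify an HTTP status code as a transient error worth retrying.
// Codes that typically indicate a temporary server-side condition.
static bool IsTransientHttpError(int const code)
{
   switch (code) {
   case 408: // Request Timeout
   case 429: // Too Many Requests
   case 500: // Internal Server Error
   case 502: // Bad Gateway
   case 503: // Service Unavailable
   case 504: // Gateway Timeout
      return true;
   default:
      return false;
   }
}
```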
</content>
</entry>
<entry>
<title>avoid some useless casts reported by -Wuseless-cast</title>
<updated>2017-12-13T22:53:41Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-12-13T20:39:16Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1adcf56bec7d2127d83aa423916639740fe8e586'/>
<id>urn:sha1:1adcf56bec7d2127d83aa423916639740fe8e586</id>
<content type='text'>
The casts are useless, but the reports show some places where we can
actually improve the code by replacing them with better alternatives,
like converting whatever int type we have into a string directly
instead of casting it to a specific type which might be too small in
the future.
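
For example (FileSizeToString is a made-up name, not a function in apt):

```cpp
#include <cassert>
#include <string>

// Instead of casting to one specific integer type before formatting
// (which silently truncates if the real type is ever wider), let the
// std::to_string overload set pick the matching width.
static std::string FileSizeToString(long long const size)
{
   // Previously something like: std::to_string(static_cast<unsigned long>(size))
   return std::to_string(size);
}
```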

Reported-By: gcc -Wuseless-cast
</content>
</entry>
<entry>
<title>methods/basehttp.cc: Remove proxy autodetect debugging code</title>
<updated>2017-10-22T18:27:23Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2017-10-22T18:26:55Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9130b5f9304b7f58273a826ff9acf04e10c6f98e'/>
<id>urn:sha1:9130b5f9304b7f58273a826ff9acf04e10c6f98e</id>
<content type='text'>
This was a leftover from the autodetect move.

Gbp-Dch: ignore
</content>
</entry>
<entry>
<title>Run Proxy-Auto-Detect script from main process</title>
<updated>2017-10-22T16:52:16Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2017-10-21T13:44:43Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1a76517470ebc2dd3f96e39ebe6f3706d6dd78da'/>
<id>urn:sha1:1a76517470ebc2dd3f96e39ebe6f3706d6dd78da</id>
<content type='text'>
This avoids running the Proxy-Auto-Detect script inside the
untrusted (well, less trusted for now) sandbox. This will allow
us to restrict the http method from fork()ing or exec()ing via
seccomp.
</content>
</entry>
<entry>
<title>allow the auth.conf to be root:root owned</title>
<updated>2017-07-26T17:09:04Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-07-07T20:21:44Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=881ec045b6660e2fe0c6953720260e380ceeeb99'/>
<id>urn:sha1:881ec045b6660e2fe0c6953720260e380ceeeb99</id>
<content type='text'>
Opening the file before we drop privileges in the methods allows us to
avoid chowning it in the acquire main process, which can apply to the
wrong file (imagine Binary-scoped settings) and surprises users as
their permission setup is overridden.

There are no security benefits: as the file is already open, an evil
method can read its contents just as before, but it isn't worse than
before either and we avoid permission problems in this setup.
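
The pattern can be sketched like this (the function name and the uid
handling are illustrative only, not apt's actual code):

```cpp
#include <cassert>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

// Open the (potentially root:root 0600) auth.conf while we still have
// privileges and drop them only afterwards: the already-open file
// descriptor stays usable for the now-unprivileged method.
static int OpenThenDropPrivileges(char const * const path, uid_t const unprivileged)
{
   int const fd = open(path, O_RDONLY); // works while we are still root
   if (fd == -1)
      return -1;
   if (geteuid() == 0)          // only root can actually change uid
      (void)setuid(unprivileged);
   return fd;                   // remains valid after the drop
}
```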
</content>
</entry>
<entry>
<title>lookup login info for proxies in auth.conf</title>
<updated>2017-07-26T17:09:04Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-07-07T19:59:01Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=6291fa81da6ed4c32d0dde33fa559cd155faff11'/>
<id>urn:sha1:6291fa81da6ed4c32d0dde33fa559cd155faff11</id>
<content type='text'>
Since recently we look into the auth.conf file for login information
on HTTP CONNECT, so we should really look up all proxies in the file,
as the argument is the same as for sources entries and it is easier to
document (especially as the manpage already mentions it as supported).
</content>
</entry>
<entry>
<title>reimplement and document auth.conf</title>
<updated>2017-07-26T17:09:04Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-07-07T14:24:21Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ea408c560ed85bb4ef7cf8f72f8463653501332c'/>
<id>urn:sha1:ea408c560ed85bb4ef7cf8f72f8463653501332c</id>
<content type='text'>
We have supported a netrc-like auth.conf file since 0.7.25 (closing
518473), but it was never documented in apt that it even exists, and
netrc seems to have fallen out of usage, as a manpage for it no longer
exists, making the feature even more arcane.

On top of that the code was a bit of a mess (as it is written in C
style), and as a result the matching of machine tokens to URIs was
also a bit strange, checking for less specific matches (= without
path) first. We now do a single pass over the stanzas.

In practice early adopters of the undocumented implementation will not
really notice the differences, and the 'new' behaviour is simpler to
document and more usual for an apt user.
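
The single-pass matching can be sketched roughly like this (a toy
one-stanza-per-line format for illustration; the real file allows more
tokens and line breaks):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// A machine token without a path matches any URI on that host, while
// a token with a path only matches URIs at or below that path.
static bool MachineMatches(std::string const &token, std::string const &uri)
{
   if (token == uri)
      return true;
   if (uri.compare(0, token.size(), token) != 0)
      return false;
   return token.find('/') != std::string::npos || uri[token.size()] == '/';
}

// Single pass over stanzas of the toy form
// "machine M login L password P" (one per line): first match wins.
static std::string LookupAuth(std::string const &conf, std::string const &uri)
{
   std::istringstream in(conf);
   std::string kw, machine, login, password;
   while (in >> kw >> machine >> kw >> login >> kw >> password)
      if (MachineMatches(machine, uri))
         return login + ":" + password;
   return "";
}
```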

Closes: #811181
</content>
</entry>
<entry>
<title>fail early in http if server answer is too small as well</title>
<updated>2017-07-26T17:07:56Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-07-26T16:35:42Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=f2f8e89f08cdf01c83a0b8ab053c65329d85ca90'/>
<id>urn:sha1:f2f8e89f08cdf01c83a0b8ab053c65329d85ca90</id>
<content type='text'>
Failing on too much data is good, but we can do better: with hashsums
we know exactly how large a file should be, so if we get a file with a
size we do not expect we can drop it directly, regardless of whether
it is larger or smaller than expected. That catches most cases which
would otherwise end up as hashsum errors later, a lot sooner.
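
The check itself is then a simple comparison; sketched (with a made-up
helper name) as:

```cpp
#include <cassert>

// With the expected size known from the hashsums, return nullptr if
// the received size matches and a failure reason otherwise, so the
// transfer can be aborted early instead of failing the hashsum check.
static char const * CheckExpectedSize(unsigned long long const received,
                                      unsigned long long const expected)
{
   if (received > expected)
      return "response contains more data than expected";
   if (received < expected)
      return "response contains less data than expected";
   return nullptr;
}
```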
</content>
</entry>
<entry>
<title>don't try to parse all fields starting with HTTP as status-line</title>
<updated>2017-07-26T17:07:56Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2017-07-24T07:45:51Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=1c5f13d489688e5fbbcdd3d0d2dd766769639939'/>
<id>urn:sha1:1c5f13d489688e5fbbcdd3d0d2dd766769639939</id>
<content type='text'>
It is highly unlikely to encounter header fields whose names start
with HTTP in practice, but we should really be a bit more restrictive
here.
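
A stricter check might look like this sketch (not necessarily the
exact rule the parser now applies): only a line beginning with "HTTP/"
followed by a version digit is treated as a status-line.

```cpp
#include <cassert>
#include <cctype>
#include <string>

// A header field such as "HTTPS-Upgrade: 1" starts with the letters
// HTTP but must still be parsed as a field, not as a status-line.
static bool IsStatusLine(std::string const &line)
{
   if (line.compare(0, 5, "HTTP/") != 0)
      return false;
   return line.size() > 5 && isdigit(static_cast<unsigned char>(line[5])) != 0;
}
```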
</content>
</entry>
</feed>
