<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/methods, branch 1.3_rc2</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=1.3_rc2</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=1.3_rc2'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2016-08-17T19:53:05Z</updated>
<entry>
<title>methods: read config in most to least specific order</title>
<updated>2016-08-17T19:53:05Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-17T19:53:05Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=d1bdb73a96d01896ec8e213a0f14abc38d19a929'/>
<id>urn:sha1:d1bdb73a96d01896ec8e213a0f14abc38d19a929</id>
<content type='text'>
The implementation of the generic config fallback performed the lookup
in the wrong order, so that the least specific option wasn't the last
value picked but in fact the first one… doh!

So in the bug reporter's case the order was http -&gt; https -&gt;
http::&lt;hostname&gt; -&gt; https::&lt;hostname&gt;, while it should have been the
reverse, as it was before.
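
A minimal sketch of the intended most-to-least-specific lookup, using
a plain map as a stand-in for apt's configuration tree – the helper
name and key layout are illustrative, not apt's actual API:

   #include &lt;map&gt;
   #include &lt;string&gt;
   #include &lt;vector&gt;

   std::map&lt;std::string, std::string&gt; Config; // stand-in for the config tree

   // look an option up from most to least specific: host-scoped keys
   // for the access and each of its fallbacks first, generic keys after
   std::string Find(std::vector&lt;std::string&gt; const &amp;accesses, // {"https", "http"}
                    std::string const &amp;host, std::string const &amp;option)
   {
      for (auto const &amp;a : accesses)
      {
         auto const k = Config.find("Acquire::" + a + "::" + host + "::" + option);
         if (k != Config.end())
            return k-&gt;second;
      }
      for (auto const &amp;a : accesses)
      {
         auto const k = Config.find("Acquire::" + a + "::" + option);
         if (k != Config.end())
            return k-&gt;second;
      }
      return "";
   }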

Regression-In: 30060442025824c491f58887ca7369f3c572fa57
Closes: 834642
</content>
</entry>
<entry>
<title>don't try pipelining if server closes connections</title>
<updated>2016-08-16T17:20:28Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-12T09:05:58Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9714d522056e5256f5a2de587d88eba7cb3291c2'/>
<id>urn:sha1:9714d522056e5256f5a2de587d88eba7cb3291c2</id>
<content type='text'>
If a server closes a connection after sending us a file, that tends to
mean it is the type of server which always closes the connection – it
is therefore relatively pointless to try pipelining with it, even if
that isn't a problem by itself: apt just restarts the pipeline each
time after being served one file and having the connection closed.

The problem starts if one or more proxies sit between the server and
apt and they disagree about how the connection should be handled, as
in the bug reporter's case: the responses apt gets contain both
Keep-Alive and Proxy-Connection headers (both of which apt ignores),
indicating that a proxy is trying to keep the connection open, while
the response also contains "Connection: close", indicating the
opposite, which apt understands and respects as it is required to do.

We avoid stepping into this abyss by not performing pipelining anymore
after we got a response with the indication to close the connection,
provided the response was otherwise a success – error messages are
sent by some servers via this method as their pages tend to be created
dynamically and hence their size isn't known to them a priori.
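
A hedged sketch of the resulting decision, with hypothetical member
names standing in for apt's actual server-state tracking:

   struct ServerState
   {
      bool Pipeline = true;         // persistent: may we pipeline requests?
      bool ConnectionClose = false; // per response: "Connection: close" seen?
   };

   void CheckPipeline(ServerState &amp;Server, unsigned int const StatusCode)
   {
      // a server closing the connection after a successful transfer
      // will likely do so every time, so pipelining would just restart
      // constantly; errors are excluded as dynamically generated error
      // pages commonly close the connection without implying anything
      if (Server.ConnectionClose &amp;&amp; StatusCode &gt;= 200 &amp;&amp; StatusCode &lt; 300)
         Server.Pipeline = false;
   }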

Closes: #832113
</content>
</entry>
<entry>
<title>don't send Range requests if we know they aren't accepted</title>
<updated>2016-08-16T16:49:37Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-11T16:24:35Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=d94b1d80d8326334d17f6a43061368e783b8e0aa'/>
<id>urn:sha1:d94b1d80d8326334d17f6a43061368e783b8e0aa</id>
<content type='text'>
If the server told us in a previous response, via an Accept-Ranges
header not listing bytes, that it doesn't support ranges in bytes, we
no longer try to formulate requests using Ranges.
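
A hedged sketch of the check with hypothetical names; apt's actual
header handling differs:

   #include &lt;string&gt;

   bool RangesAllowed = true; // remembered across requests to this server

   // "Accept-Ranges: none" – or any list not mentioning bytes – tells
   // us not to bother with Range headers from now on
   void ParseAcceptRanges(std::string const &amp;Value)
   {
      if (Value.find("bytes") == std::string::npos)
         RangesAllowed = false;
   }

   // later, when building a request to resume a partial file:
   //   if (RangesAllowed &amp;&amp; PartialSize &gt; 0)
   //      Req += "Range: bytes=" + std::to_string(PartialSize) + "-\r\n";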
</content>
</entry>
<entry>
<title>reorganize server-states resetting in http/https</title>
<updated>2016-08-16T16:49:37Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-11T14:59:13Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ebdb6f1810a20ac240b5b2192dc2e6532ff149d2'/>
<id>urn:sha1:ebdb6f1810a20ac240b5b2192dc2e6532ff149d2</id>
<content type='text'>
We keep various bits of information about the server around, some only
affecting the currently handled file (like sizes) while others should
be persistent (like pipeline detection). http used to reset all
file-related bits manually, which is a bit silly if we already have a
Reset() method – which resets everything though – so we extend it with
a parameter for reuse and call it from https too (which previously
reset by just creating a new state struct – it makes no use of the
persistent state-keeping yet as it supports no pipelining).
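
Sketched shape of the change – member and parameter names are
illustrative:

   struct ServerState
   {
      // file-scoped bits, valid only for the currently handled file
      unsigned long long Size = 0;
      unsigned long long StartPos = 0;
      // persistent bits, valid for the whole connection
      bool Pipeline = true;

      // one Reset() for both use cases instead of resetting the
      // file-scoped members by hand all over the place
      void Reset(bool const Everything = true)
      {
         Size = 0;
         StartPos = 0;
         if (Everything)
            Pipeline = true;
      }
   };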

Gbp-Dch: Ignore
</content>
</entry>
<entry>
<title>http(s): allow empty values for header fields</title>
<updated>2016-08-13T06:49:35Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-12T20:13:09Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=148c049150cc39f2e40894c1684dc2aefea1117e'/>
<id>urn:sha1:148c049150cc39f2e40894c1684dc2aefea1117e</id>
<content type='text'>
It seems completely pointless from a server POV to send empty header
fields, so most servers don't do it (simply proven by this limitation
existing since day one) – but it is technically allowed by the RFC, as
the surrounding whitespace is optional, and GitHub seems to like
sending "X-Geo-Block-List:\r\n" since recently (bug reports in other
http clients indicate July), at least sometimes, as the reporter
claims to have seen it on https only even though it can happen with
both.
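
A minimal sketch of a parser accepting an empty value – a hypothetical
helper, not apt's actual parsing code:

   #include &lt;string&gt;

   // split "Name: value\r\n" into tag and (possibly empty) value; the
   // whitespace around the value is optional per the RFC, so a line
   // like "X-Geo-Block-List:\r\n" is a valid field with an empty value
   bool ParseHeaderLine(std::string const &amp;Line,
                        std::string &amp;Tag, std::string &amp;Value)
   {
      auto const colon = Line.find(':');
      if (colon == std::string::npos)
         return false;
      Tag = Line.substr(0, colon);
      auto const start = Line.find_first_not_of(" \t", colon + 1);
      auto const end = Line.find_last_not_of(" \t\r\n");
      // an all-whitespace remainder is an empty, but valid, value
      Value = (start == std::string::npos || end &lt; start)
                 ? "" : Line.substr(start, end - start + 1);
      return true;
   }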

Closes: 834048
</content>
</entry>
<entry>
<title>http: auto-configure for local Tor proxy if called as 'tor'</title>
<updated>2016-08-10T23:34:39Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-06T20:54:31Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=0568d325ad8660a9966d552634aa17c90ed22516'/>
<id>urn:sha1:0568d325ad8660a9966d552634aa17c90ed22516</id>
<content type='text'>
With apt's http transport supporting socks5h proxies and all the work
in terms of configuring methods based on the name they are called
with, it becomes surprisingly easy to implement Tor support matching
(and perhaps even slightly exceeding) what is currently available in
apt-transport-tor.

How this will turn out to be handled packaging-wise we will see in
https://lists.debian.org/deity/2016/08/msg00012.html but until that is
resolved we can add the needed support without actively enabling it
for now, so that it can be tested better.
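
Expressed as the roughly equivalent manual configuration – assuming
Tor's default SOCKS port 9050; whatever exact defaults the method
picks (such as stream-isolation credentials) are not shown here:

   // implied when the http method binary is called as 'tor':
   Acquire::tor::Proxy "socks5h://127.0.0.1:9050";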
</content>
</entry>
<entry>
<title>block direct connections to .onion domains (RFC7686)</title>
<updated>2016-08-10T23:34:39Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-06T11:53:05Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=8665dceb5cf2a197ae270b08066f05c8a2870223'/>
<id>urn:sha1:8665dceb5cf2a197ae270b08066f05c8a2870223</id>
<content type='text'>
Doing a direct connect to a .onion address (if you don't happen to use
it as a local domain, which you shouldn't) is bound to fail and leaks
to a DNS server both the information that you use Tor and which hidden
service you wanted to connect to. Worse, if the DNS is poisoned and
the name actually resolves, it tricks a user into believing the setup
works correctly…

This also blocks the usage of wrappers like torsocks with apt, but
with native support available and advertised in the error message this
shouldn't really be an issue.
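
A hedged sketch of the guard – the helper name is illustrative:

   #include &lt;string&gt;

   // refuse to resolve .onion hosts directly: the lookup can only fail
   // or lie, and it leaks the intended hidden service to a DNS server
   bool DirectConnectAllowed(std::string const &amp;Host)
   {
      std::string const suffix = ".onion";
      if (Host.size() &gt;= suffix.size() &amp;&amp;
          Host.compare(Host.size() - suffix.size(), suffix.size(), suffix) == 0)
         return false; // error out, advertising native socks5h support instead
      return true;
   }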

Inspired-by: https://bugzilla.mozilla.org/show_bug.cgi?id=1228457
</content>
</entry>
<entry>
<title>implement socks5h proxy support for http method</title>
<updated>2016-08-10T21:19:44Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-04T06:45:38Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=61db48241f2d46697a291bfedaf398a1ca9a70e3'/>
<id>urn:sha1:61db48241f2d46697a291bfedaf398a1ca9a70e3</id>
<content type='text'>
SOCKS support is a requested feature insofar as the internet actually
believes Acquire::socks::Proxy exists. It doesn't, and this commit
isn't adding it as that isn't how our configuration works, but it
allows Acquire::http::Proxy="socks5h://…". The HTTPS method was
already changed to support socks proxies (all versions) via curl. This
commit implements only SOCKS5 (RFC 1928) with no auth or pass&amp;user
auth (RFC 1929), but not GSSAPI, which the RFC requires. The 'h' in
the protocol name further indicates that DNS resolution is delegated
to the socks proxy rather than performed locally.

The implementation works and was tested with Tor as socks proxy for
which implementing socks5h only can actually be considered a feature.
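
For orientation, the opening exchange RFC 1928 prescribes, sketched as
raw bytes – a simplified illustration, not apt's code:

   #include &lt;cstdint&gt;
   #include &lt;string&gt;
   #include &lt;vector&gt;

   // greeting: version 5, offering two methods: no auth (0x00)
   // and user/pass auth (0x02, RFC 1929)
   std::vector&lt;uint8_t&gt; const Greeting = { 0x05, 0x02, 0x00, 0x02 };

   // connect request for a hostname – the point of socks5h: the name
   // is handed to the proxy (ATYP 0x03) and resolved there, not locally
   std::vector&lt;uint8_t&gt; ConnectRequest(std::string const &amp;Host, uint16_t const Port)
   {
      std::vector&lt;uint8_t&gt; req = { 0x05, 0x01, 0x00, 0x03 }; // VER, CONNECT, RSV, ATYP
      req.push_back(static_cast&lt;uint8_t&gt;(Host.size()));
      req.insert(req.end(), Host.begin(), Host.end());
      req.push_back(static_cast&lt;uint8_t&gt;(Port &gt;&gt; 8));   // port in network order
      req.push_back(static_cast&lt;uint8_t&gt;(Port &amp; 0xFF));
      return req;
   }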

Closes: 744934
</content>
</entry>
<entry>
<title>implement generic config fallback for methods</title>
<updated>2016-08-10T21:19:44Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-07-31T16:05:56Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=30060442025824c491f58887ca7369f3c572fa57'/>
<id>urn:sha1:30060442025824c491f58887ca7369f3c572fa57</id>
<content type='text'>
The https method has for a long while implemented a hardcoded fallback
to the same options in http, which, while it works, is rather
inflexible if we want to allow methods to use another name to change
their behavior slightly, like apt-transport-tor does to https – most
of the diff being s#https#tor#g, which then fails to do the
full-circle fallthrough tor -&gt; https -&gt; http for https sources. With
this config infrastructure that can now be implemented.
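
As an illustration, the chain this enables for a method called 'tor' –
the option and values are made up for the example:

   // consulted in this order until a value is found:
   Acquire::tor::Timeout "10";    // the name the method is called with
   Acquire::https::Timeout "20";  // first fallback
   Acquire::http::Timeout "30";   // final fallback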
</content>
</entry>
<entry>
<title>use the same redirection handling for http and https</title>
<updated>2016-08-10T21:19:44Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2016-08-02T12:49:58Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=4bba5a88d0f6afde4414b586b64c48a4851d5324'/>
<id>urn:sha1:4bba5a88d0f6afde4414b586b64c48a4851d5324</id>
<content type='text'>
cURL, which backs our https implementation, can handle redirects on
its own, but by dealing with them ourselves we gain finer control over
which redirections will be performed (we don't like https → http) and
by whom, so that redirections to other hosts correctly spawn a new
https method dealing with them instead of letting the current one do
it.
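
A hedged sketch of the downgrade check such shared handling can
enforce – names are illustrative:

   #include &lt;string&gt;

   // refuse redirects which would downgrade transport security
   bool AcceptableRedirect(std::string const &amp;FromURI, std::string const &amp;ToURI)
   {
      bool const FromHTTPS = FromURI.compare(0, 8, "https://") == 0;
      bool const ToHTTPS = ToURI.compare(0, 8, "https://") == 0;
      if (FromHTTPS == true &amp;&amp; ToHTTPS == false)
         return false; // we don't like https → http
      return true;
   }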
</content>
</entry>
</feed>
