<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/methods, branch 2.1.9</title>
<subtitle>Debian's command-line package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=2.1.9</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=2.1.9'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2020-08-10T09:39:30Z</updated>
<entry>
<title>Do not retry on failure to fetch</title>
<updated>2020-08-10T09:39:30Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-10T09:39:30Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=fa375493c5a4ed9c10d4e5257ac82c6e687862d3'/>
<id>urn:sha1:fa375493c5a4ed9c10d4e5257ac82c6e687862d3</id>
<content type='text'>
While we fixed the infinite retrying earlier, we still have
problems if we retry in the middle of a transfer: we might
end up resuming downloads that are already done and reading
more than we should (removing the IsOpen() check so that
it always retries makes test-ubuntu-bug-1098738-apt-get-source-md5sum
fail with wrong file sizes).

I think the retrying was added to fix up pipelining mess-ups,
but we have better solutions now, so let's get rid of it
until we have implemented this properly.
</content>
</entry>
<entry>
<title>basehttp: Correctly handle non-transient failure from RunData()</title>
<updated>2020-08-05T12:15:49Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-05T12:14:19Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=4b439208203cd584e158fd240a3a4a72d1248099'/>
<id>urn:sha1:4b439208203cd584e158fd240a3a4a72d1248099</id>
<content type='text'>
When we failed after a retry, we only communicated the failure
as transient, but this seems wrong, especially given that the
code now always triggers a retry when Die() is called, as Die()
closes the server fd.

Instead, remove the error handling in that code path, and reuse
the existing fatal-ish error code handling path.
</content>
</entry>
<entry>
<title>http: Fix infinite loop on read errors</title>
<updated>2020-08-05T09:08:16Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-05T09:04:45Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ded246bb61b9b0f4ca658be45c1691844e1dc122'/>
<id>urn:sha1:ded246bb61b9b0f4ca658be45c1691844e1dc122</id>
<content type='text'>
If there was a transient error and the server fd was closed, the
code would retry infinitely - it never reached FailCounter &gt;= 2
because it fell through to the end of the loop, which resets
FailCounter = 0.

Add a continue just like the DNS rotation code has, so that the
retry actually fails after 2 attempts.

Also rework the error logic to forward the actual error message.
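
The loop shape can be sketched like this (an illustrative
Python model, not apt's actual C++ code; run_loop and
attempt_results are made-up names):

```python
def run_loop(attempt_results, max_fails=2):
    """Model of the fixed retry loop: each entry in attempt_results
    says whether one transfer attempt succeeded."""
    fail_counter = 0
    attempts = 0
    for ok in attempt_results:
        attempts += 1
        if not ok:
            fail_counter += 1
            if fail_counter >= max_fails:
                return ("failed", attempts)
            # The fix: continue here, skipping the reset below.
            # Without it, fail_counter is cleared on every pass and a
            # closed server fd never reaches the failure limit.
            continue
        fail_counter = 0
        return ("done", attempts)
    return ("gave up", attempts)
```

With the continue in place, two consecutive failures abort the
transfer instead of looping forever.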
</content>
</entry>
<entry>
<title>Merge branch 'pu/http-fixes-2' into 'master'</title>
<updated>2020-08-04T10:34:38Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2020-08-04T10:34:38Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=7d222636954ec95382149e31b314e9828ba05a2e'/>
<id>urn:sha1:7d222636954ec95382149e31b314e9828ba05a2e</id>
<content type='text'>
Pu/http fixes 2

See merge request apt-team/apt!125</content>
</entry>
<entry>
<title>Merge branch 'pu/less-slaves' into 'master'</title>
<updated>2020-08-04T10:12:30Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>jak@debian.org</email>
</author>
<published>2020-08-04T10:12:30Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=11530bab64efd4b4fc46de7833533cea9c69f521'/>
<id>urn:sha1:11530bab64efd4b4fc46de7833533cea9c69f521</id>
<content type='text'>
Remove master/slave terminology

See merge request apt-team/apt!124</content>
</entry>
<entry>
<title>gpgv: Rename master to primary</title>
<updated>2020-08-04T10:12:11Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-07-14T14:19:08Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9cb5a81168307e15f209173ad9286835bff2df65'/>
<id>urn:sha1:9cb5a81168307e15f209173ad9286835bff2df65</id>
<content type='text'>
</content>
</entry>
<entry>
<title>http: Always write to the file if there's something to write</title>
<updated>2020-08-04T09:46:39Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-04T09:37:45Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=27d36318b98f3a070fb24557ce691718ef4eec34'/>
<id>urn:sha1:27d36318b98f3a070fb24557ce691718ef4eec34</id>
<content type='text'>
We only add the file to the select() call if we have data to
write to it prior to the select() call. This is problematic:

Assuming we enter Go() with no data to write to the file,
but we read some from the server as well as an EOF, we end
up not writing it to the file because we did not add the file
to the select() set.

We can't always add the file to the select(), because it's
basically always ready and we don't want to wake up if we
don't have anything to read or write.

So as a solution, let's just always write data to the file
if there's data to write to it. If some is left over, or if
some was already present when we started Go(), it will still
be added to the select() call and unblock it.
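
One iteration of that flow can be sketched like so (an
illustrative Python model, not apt's C++ code; all names are
made up):

```python
def go_iteration(server_chunk, out_buffer, file_capacity):
    """Append whatever the server delivered this iteration, then
    always try to flush the buffer to the file - even when the file
    fd was not in this iteration's select() set."""
    out_buffer += server_chunk
    if out_buffer:
        written = min(len(out_buffer), file_capacity)
        out_buffer = out_buffer[written:]
    return out_buffer
```

Whatever this returns unflushed is what re-arms the file fd in
the next select() call.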

Closes: #959518
</content>
</entry>
<entry>
<title>http: Redesign reading of pending data</title>
<updated>2020-07-24T14:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-06-29T12:03:21Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=08f05aa8beb58fa32485e2087eb21a9f3cb267bb'/>
<id>urn:sha1:08f05aa8beb58fa32485e2087eb21a9f3cb267bb</id>
<content type='text'>
Instead of reading the data early, disable the timeout for the
select() call and read the data later. Also, change Read() to
be called only once to drain the buffer in such instances.

We could optimize this to call read() multiple times if there
is also pending stuff on the socket, but that is slightly more
complex and should not provide any benefits.
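
The select() handling can be sketched like this (an
illustrative Python model; poll_server and its parameters are
made up, not apt's code):

```python
import select
import socket

def poll_server(sock, pending, timeout=5.0):
    """When the input buffer already holds pending data, call
    select() with a zero timeout so it cannot block; the buffered
    data is then drained afterwards in a single read instead of
    being read eagerly up front."""
    effective_timeout = 0.0 if pending else timeout
    readable, _, _ = select.select([sock], [], [], effective_timeout)
    return readable
```

A socket with data ready is still reported readable even with
the zero timeout, while an idle socket returns immediately.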
</content>
</entry>
<entry>
<title>http: On select timeout, error out directly, do not call Die()</title>
<updated>2020-07-24T14:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-06-29T10:31:55Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=c2cb8abbf5d8a49b25071ffffca93a083fe725fc'/>
<id>urn:sha1:c2cb8abbf5d8a49b25071ffffca93a083fe725fc</id>
<content type='text'>
The error handling in Die() that's supposed to add useful error
messages is not super useful here.
</content>
</entry>
<entry>
<title>http: Finish copying data from server to file before sending stuff to server</title>
<updated>2020-07-24T14:30:43Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-06-29T10:23:02Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9742032dcdc0e72c117ae0c589fbb59452d6d33c'/>
<id>urn:sha1:9742032dcdc0e72c117ae0c589fbb59452d6d33c</id>
<content type='text'>
This avoids a case where we read data, then write to the server
and only then realize the connection was closed. It is somewhat
slower, though.
</content>
</entry>
</feed>
