<feed xmlns='http://www.w3.org/2005/Atom'>
<title>apt/methods, branch feature/rred</title>
<subtitle>Debian's commandline package manager</subtitle>
<id>https://git.kalnischkies.de/apt/atom?h=feature%2Frred</id>
<link rel='self' href='https://git.kalnischkies.de/apt/atom?h=feature%2Frred'/>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/'/>
<updated>2020-11-07T21:52:20Z</updated>
<entry>
<title>Support compressed output from rred similar to apt-helper cat-file</title>
<updated>2020-11-07T21:52:20Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-11-07T21:52:20Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=e5bb443cf58cec23503ad0deeeb06a080053da8a'/>
<id>urn:sha1:e5bb443cf58cec23503ad0deeeb06a080053da8a</id>
<content type='text'>
</content>
</entry>
<entry>
<title>Support reading compressed patches in rred direct call modes</title>
<updated>2020-11-07T20:48:21Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-11-07T20:39:00Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=418f9272606857e312f485778a1ef1b263236463'/>
<id>urn:sha1:418f9272606857e312f485778a1ef1b263236463</id>
<content type='text'>
The acquire system mode has done this for a long time already, and as
it is easy to implement and handy for manual testing as well, we can
support it in the other modes, too.
</content>
</entry>
<entry>
<title>Prepare rred binary for external usage</title>
<updated>2020-11-07T20:48:21Z</updated>
<author>
<name>David Kalnischkies</name>
<email>david@kalnischkies.de</email>
</author>
<published>2020-11-07T20:23:57Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=9e1398b164f55238990907f63dfdef60588d9b24'/>
<id>urn:sha1:9e1398b164f55238990907f63dfdef60588d9b24</id>
<content type='text'>
Merging patches is a bit of non-trivial code we have for client-side
work, but as we also support server-side merging we can export this
functionality so that server software can reuse it.

Note that this just cleans up and makes rred behave a bit more like all
our other binaries by supporting setting configuration at runtime and
supporting --help and --version. If you can make do without this, the
now-advertised functionality is already provided in earlier versions.
</content>
</entry>
<entry>
<title>Rewrite HttpServerState::Die()</title>
<updated>2020-08-11T11:42:41Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-11T11:09:14Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=8b35e2a3dd7b863639a8909fa2361ed4fd217bc3'/>
<id>urn:sha1:8b35e2a3dd7b863639a8909fa2361ed4fd217bc3</id>
<content type='text'>
The old code was fairly confusing, and contradictory. Notably, the
second `if` also only applied to the Data state, whereas we already
terminated the Data state earlier. This was bad.

The else fallback applied in three cases:

(1) We reached our limit
(2) We are Persistent
(3) We are headers

Now, it always failed as a transient error if it had
nothing left in the buffer. BUT: Nothing left in the buffer
is the correct thing to happen if we were fetching content.

Checking all combinations for the flags, we can compare the results
of Die() between 2.1.7 - the last "known-acceptable-ish" version
and this version:
                                2.1.7           this
Data !Persist !Space !Limit     OK (A)           OK
Data !Persist !Space Limit      OK (A)           OK
Data !Persist Space !Limit      OK (C)           OK
Data !Persist Space Limit       OK               OK

Data Persist !Space !Limit      ERR              ERR          *
Data Persist !Space Limit       OK (B)           OK
Data Persist Space !Limit       ERR              ERR
Data Persist Space Limit        OK               OK

=&gt; Data connections are OK if they have not reached their limit,
   or are persistent (in which case they'll probably be chunked)

Header !Persist !Space !Limit   ERR              ERR
Header !Persist !Space Limit    ERR              ERR
Header !Persist Space !Limit    OK               OK
Header !Persist Space Limit     OK               OK
Header Persist !Space !Limit    ERR              ERR
Header Persist !Space Limit     ERR              ERR
Header Persist Space !Limit     OK               OK
Header Persist Space Limit      OK               OK

=&gt; Common scheme here is that header connections are fine if they have
   read something into the input buffer (Space). The rest does not matter.

(A) Non-persistent connections with !space always enter the else clause, hence success
(B) No Space means we enter the if/else; we go with else because IsLimit(), and we succeed because we don't have space
(C) Having space we do enter the while (WriteSpace()) loop, but we never reach IsLimit(),
    hence we fall through. Given that our connection is not persistent, we fall through to the
    else case, and there we win because we have data left to write.
</content>
</entry>
<entry>
<title>http: Fully flush local file both before/after server read</title>
<updated>2020-08-11T11:09:04Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-11T08:55:09Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=73780d7f664a4ea1da55527d726b4c9c7753f1fb'/>
<id>urn:sha1:73780d7f664a4ea1da55527d726b4c9c7753f1fb</id>
<content type='text'>
We do not want to end up in a code path while reading content
from the server where we have local data left to write, which
can happen if a previous read included both headers and content.

Restructure Flush() to accept a new argument to allow incomplete
flushes (which do not match our limit), so that it can flush as
far as possible, and modify Go() to use that before and after
reading from the server.
</content>
</entry>
<entry>
<title>http: Do not use non-blocking local I/O</title>
<updated>2020-08-11T11:09:04Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-11T09:42:15Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=13ab2317451931f055855f1aeaec6c8b28b14ce2'/>
<id>urn:sha1:13ab2317451931f055855f1aeaec6c8b28b14ce2</id>
<content type='text'>
Using non-blocking I/O for local files causes more issues than it
solves, really.
</content>
</entry>
<entry>
<title>http: Restore successful exits from Die()</title>
<updated>2020-08-11T11:09:04Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-11T09:40:14Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=af7ab7c0002ef2cdfb1a4c0a468c5dbbda3d5dd0'/>
<id>urn:sha1:af7ab7c0002ef2cdfb1a4c0a468c5dbbda3d5dd0</id>
<content type='text'>
We have successfully finished reading data if our buffer is empty,
so we don't need to do any further checks.
</content>
</entry>
<entry>
<title>Do not retry on failure to fetch</title>
<updated>2020-08-10T09:39:30Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-10T09:39:30Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=fa375493c5a4ed9c10d4e5257ac82c6e687862d3'/>
<id>urn:sha1:fa375493c5a4ed9c10d4e5257ac82c6e687862d3</id>
<content type='text'>
While we fixed the infinite retrying earlier, we still have
problems if we retry in the middle of a transfer: we might
end up resuming downloads that are already done and read
more than we should (removing the IsOpen() check so that
it always retries makes test-ubuntu-bug-1098738-apt-get-source-md5sum
fail with wrong file sizes).

I think the retrying was added to fix up pipelining mess-ups,
but we have better solutions now, so let's get rid of it
until we have implemented this properly.
</content>
</entry>
<entry>
<title>basehttp: Correctly handle non-transient failure from RunData()</title>
<updated>2020-08-05T12:15:49Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-05T12:14:19Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=4b439208203cd584e158fd240a3a4a72d1248099'/>
<id>urn:sha1:4b439208203cd584e158fd240a3a4a72d1248099</id>
<content type='text'>
When we failed after a retry, we only communicated failure as
transient, but this seems wrong, especially given that the code
now always triggers a retry when Die() is called, as Die() closes
the server fd.

Instead, remove the error handling in that code path, and reuse
the existing fatal-ish error code handling path.
</content>
</entry>
<entry>
<title>http: Fix infinite loop on read errors</title>
<updated>2020-08-05T09:08:16Z</updated>
<author>
<name>Julian Andres Klode</name>
<email>julian.klode@canonical.com</email>
</author>
<published>2020-08-05T09:04:45Z</published>
<link rel='alternate' type='text/html' href='https://git.kalnischkies.de/apt/commit/?id=ded246bb61b9b0f4ca658be45c1691844e1dc122'/>
<id>urn:sha1:ded246bb61b9b0f4ca658be45c1691844e1dc122</id>
<content type='text'>
If there was a transient error and the server fd was closed, the
code would retry infinitely - it never reached FailCounter &gt;= 2
because it fell through to the end of the loop, which sets
FailCounter = 0.

Add a continue just like the DNS rotation code has, so that the
retry actually fails after 2 attempts.

Also rework the error logic to forward the actual error message.
</content>
</entry>
</feed>
