
@jamey Not sure if this should work for my URLs: reader.minilop.net/read/https: or reader.minilop.net/read/https: both give me a 500 result. Quite possibly something doesn’t work for my feeds? It needs better error reporting so people can use it for testing their RFC 5005 implementations. 😀

@kensanata I saw that in my error logs and I started composing a message to you about it. 😁

1) It absolutely needs better error reporting, but to be useful for testing, it also needs to not cache quite so aggressively. 😅

2) It interpreted the query string as parameters for my code, not as part of your URL. If you URL-encode the '?' as '%3F' I think it should work (see the sketch after point 3). But due to the aforementioned aggressive caching, you might want to clone it and run it locally first, so you can delete .scrapy/httpcache/ between runs...

3) I need to add a trivial front page with a form field for the feed URL, which would take care of percent-encoding and all that. That's basically next on my to-do list for this project 😅
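
As a stopgap for point 2, percent-encoding the feed URL before appending it is a one-liner. A minimal Python sketch, with a made-up feed URL standing in for the real one:

    from urllib.parse import quote

    # Hypothetical feed URL with a query string:
    feed = "https://example.com/wiki?action=rss"
    # safe=":/" keeps the scheme and slashes readable but turns '?' into
    # %3F and '&' into %26, so the query string survives as path data.
    print("https://reader.minilop.net/read/" + quote(feed, safe=":/"))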

@kensanata I'm working on some of the basic usability improvements for my feed reader demo, and just got around to looking at your actual feed contents. I don't have a plan for handling RFC 5005 section 3, which is what you're using; I only know what to do with sections 2 and 4. So until/unless I become more enlightened, your feed isn't going to work with my demo. 😞

@jamey Nooo! What's your reasoning? I mean, I could special-case the addition of the from=1 parameter and add <fh:complete/>, but unless this is an archive, it doesn't seem to make much sense. As a blog author I don't want my readers to download the entire blog every time. That's how I understand the utility of section 2. And section 4 is tricky, too. I can always add blog posts in the past, and sometimes I do (like after a trip, perhaps). I'd love to hear your thoughts regarding blogs and wikis…
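
For reference, that special case would boil down to something like the following hedged sketch; mark_complete and the from=1 plumbing around it are invented here, and only the namespace and element come from RFC 5005 section 2:

    import xml.etree.ElementTree as ET

    FH = "http://purl.org/syndication/history/1.0"  # RFC 5005 feed-history namespace

    def mark_complete(feed_xml):
        # Insert <fh:complete/> (RFC 5005 section 2) at the top of an Atom
        # feed, telling consumers that no entries exist beyond this document.
        root = ET.fromstring(feed_xml)
        root.insert(0, ET.Element(f"{{{FH}}}complete"))
        return ET.tostring(root, encoding="unicode")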

@kensanata I was trying to be clear that you aren't wrong to use section 3; I just don't know what to do with it. I have some idea how to build UI if I have the complete archive, but when I have to go back to the origin server every time someone crosses a pagination boundary, that needs a different approach.

But section 4 /does/ support adding posts out of order, or editing or deleting old posts. It's just that you have to set up your archive URLs such that if a page changes, then its URL and all newer archive URLs also change. I know several different ways to do that, with various efficiency tradeoffs, but I've had trouble writing them down.
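
To make that concrete, here is one possible scheme, offered as an illustrative assumption rather than one of the specific approaches alluded to above: derive each archive page's URL from a hash of its entries chained to the previous page's URL, so an edit anywhere ripples through every newer URL, exactly as section 4 requires.

    import hashlib

    def archive_url(entries, prev_url):
        # Hash the previous page's URL plus this page's entries. Editing an
        # old entry changes this page's hash, which changes the next page's
        # input, and so on forward through the whole chain of archive URLs.
        h = hashlib.sha256(prev_url.encode())
        for entry in entries:
            h.update(entry.encode())
        return f"/archive/{h.hexdigest()[:16]}.atom"

The cost is that one edit invalidates every newer page, which is one of the efficiency trade-offs mentioned above.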

@jamey Hm. And what if you just page through the entire archive for sec 3 just as you would for sec 4? You'd end up with the complete archive in either case, no? The only technical difference I see is that sec 3 doesn't promise stability of the pages whereas sec 4 does. If you're suggesting that I could just change the archive structure, then that shows how fluid the boundary between sec 3 and 4 is. :) I think fetching the full set via sec 3 and caching it for a while is fine.
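
The crawl itself does look the same for both sections; only the link relation differs. A rough Python sketch, assuming feedparser (walk_back is a made-up helper):

    import feedparser

    def walk_back(url, rel):
        # rel="previous" pages through a section 3 paged feed, while
        # rel="prev-archive" walks a section 4 archived feed; either way
        # every page gets visited and every entry comes out.
        while url:
            page = feedparser.parse(url)
            yield from page.entries
            url = next((l.href for l in page.feed.get("links", [])
                        if l.get("rel") == rel), None)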

@kensanata That would usually work, sure! I'm not keen on it though, for two reasons:

1) If you insert or remove a post while I'm walking the links, I may see an inconsistent view of the feed, but I can't reliably detect that problem. With §4 I can always tell if the feed changed while I was fetching it.

2) I want feed reading software to allow jumping to any page in the archive at any time, without requiring that it keep the entire archive cached at all times. With §4 I can record which URL I saw that entry ID in, and forget about most everything else until I need it. I wrote a little about that in github.com/jameysharp/reader-p

Either issue may not be important to other people or in other use cases; I just don't know how to deal with them and still get what I want. 😓
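
A sketch of the bookkeeping in point 2, again assuming feedparser (index_archives is made up): record only which archive page each entry ID was seen on. Section 4's stability rule is what keeps those recorded URLs valid, and a changed URL is exactly the signal that point 1 relies on.

    import feedparser

    def index_archives(subscription_url):
        # Map entry ID -> the archive page URL it appeared on. Section 4
        # guarantees a page's URL changes whenever its contents do, so a
        # stale mapping is detectable, and everything else can be forgotten
        # until a reader actually asks to jump to that page.
        seen_in = {}
        url = subscription_url
        while url:
            page = feedparser.parse(url)
            for entry in page.entries:
                seen_in[entry.id] = url
            url = next((l.href for l in page.feed.get("links", [])
                        if l.get("rel") == "prev-archive"), None)
        return seen_in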

@kensanata Oh, also:

The only technical difference I see is that sec 3 doesn't promise stability of the pages whereas sec 4 does.

You're correct, but that seemingly tiny difference enables a huge range of implementation options for feed consumers, options that aren't feasible for feeds using §3. But again, those options aren't necessarily important for everyone, and the requirements of §4 make it unusable for publishers in some circumstances, so I'm not saying §3 is "bad" or anything 😄

@jamey Sure! All I can say after rereading the README is that, according to du -sh, campaignwiki.org/osr/ takes up 176M of disk space!

@jamey But then again, I don't actually know what Planet Venus implements. It's old code, and it actually only cares about the last four feed entries, so I'm suffering because Blogspot and WordPress are stuffing everything they have into their feeds, apparently.

@kensanata Haha, that's either nothing or completely unreasonable, depending on what one's doing with it... 😄

My primary source on this (I should actually add this link to the README, come to think of it) is this comment: github.com/samuelclay/NewsBlur

@jamey Yeah, that's a good point. I always love to see some real numbers.

@kensanata hey look, something resembling helpful UI, even including vaguely human-readable error messages: reader.minilop.net/

try pasting your URL in that form. it still won't work but at least it'll tell you why not 😁
