Lobsters fetches web pages for a few reasons:
* to prefill the title field on new links as a convenience
* to cache story text for the search engine
* to check for rel=canonical links
* to authenticate GitHub/Twitter/Keybase accounts
* to send webmentions back to blogs
The exception came from deep in the common page-fetching code, in `set_cookie`:

```ruby
if val.to_s == ""
  @cookies[host][name] ? @cookies[host][name].delete : nil
@cookies[host][name] = val
```
Sites delete cookies by "setting" them to the empty string, so the logic is right, but there's an obvious confusion about how to delete from a Ruby hash: `Hash#delete` takes the key, so it should be `@cookies[host].delete(name)`. As written, it calls `delete` on the cookie's *value*, a `String`, with no arguments, which raises an `ArgumentError`. This indeed never worked.
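The difference between the two calls is easy to demonstrate in isolation (the host and cookie names here are hypothetical stand-ins, not the site's real data):

```ruby
# A nested cookie jar shaped like the one in set_cookie: host => { name => value }.
cookies = { "example.com" => { "session" => "abc123" } }

# Buggy pattern: this resolves to the String value "abc123" and calls
# String#delete with no arguments, which raises ArgumentError.
begin
  cookies["example.com"]["session"].delete
rescue ArgumentError => e
  puts "buggy call raises #{e.class}"
end

# Correct pattern: Hash#delete takes the key and removes the pair.
cookies["example.com"].delete("session")
puts cookies["example.com"].inspect
```

`String#delete` does exist, which is why the bug isn't a `NoMethodError` caught at first run; it just has a different signature (it strips characters from a string) and demands an argument.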
The social network of the future: No ads, no corporate surveillance, ethical design, and decentralization! Own your data with Mastodon!