Are you interested in how to bring secure, private, peer-to-peer distributable content to the fediverse that can survive nodes going down? I've finished writing the documentation for the Golem demo which explains how to do just that: gitlab.com/spritely/golem/blob

It also includes a running, workable demo which you can try yourself. Please do and let me know your thoughts!

Releasing on a Sunday night when almost nobody's gonna notice this, I'm a PR Professional (TM)

Side note, I've gotten a lot of friendly and positive responses to the Golem demo, and I see people excited about the ideas and what it could mean. I've also gotten a reasonable number of "that was fairly easy to follow" messages which is really good to hear. It makes me feel like this work is worth doing and that I'm off to a good start. There's more of these coming!

I added a new "Encryption has a shelf life" section to the Caveats section of Golem's writeup. It's an important point I hadn't called out previously! gitlab.com/spritely/golem/blob

> Encryption has a shelf life. In general, secure ciphers from about 15 years ago aren’t secure today, so it’s possible that chunks that are currently only readable by intended recipients can eventually be read by anyone who gets their hands on them. [...]

This is also a concern with any of the "cryptography will save us!" stuff (including E2E)... yes, but with limits!

@cwebber If content is deleted in our current system, subsequent fetches to the same URL will yield tombstones and/or HTTP 410. When the content is distributed via a P2P network, how do you ensure that deleted content cannot be accessed again?

@Gargron @cwebber that depends on the p2p implementation. Magnet / DHT jazz works a bit like WebDAV in this regard so a 404 would be provided (not the best). I don't know too much about Dat / IPFS but I think it's also a 404

@gargron Good question! You can't ensure that *anything* can be deleted in a decentralized (or arguably, even centralized) system... not even on the present fediverse. If someone wants to keep around a message, they always can. Heck, someone could start mining the public feed for Delete activities and start publishing them publicly if they want to punish deleting.

But we can interpret the 410 as a *request* to delete, to not show as available. Subsequent delete updates can do that.
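A minimal sketch of that "Delete as a request" behavior, assuming a hypothetical local cache and handler names (nothing here is Golem's or Mastodon's actual API):

```python
# Sketch: treat a Delete activity or an HTTP 410 as a *request* to stop
# showing content, not as a guarantee that every copy is gone.
# All names here are illustrative.

local_cache = {}  # object id (URL) -> cached object


def handle_activity(activity):
    if activity.get("type") == "Delete":
        obj = activity["object"]
        object_id = obj if isinstance(obj, str) else obj["id"]
        # Honor the request: stop serving/displaying our cached copy.
        # We can't force other caches (or screenshots) out of existence;
        # we can only cooperate.
        local_cache.pop(object_id, None)


def refetch(object_id, http_get):
    """Re-fetching is another chance to notice a deletion (410 / tombstone)."""
    response = http_get(object_id)
    if response.status_code == 410:
        local_cache.pop(object_id, None)
        return None
    return response
```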

@gargron I've added a section explaining this to the Caveats section. I hope it helps!

@cwebber The difference is that in the current system the domain has the authority to tell the content is gone, and anyone who fetches it has to ask the domain. In a P2P setting where anyone can serve the content, it's harder to ensure something is deleted, because you need everyone who caches it to be notified and comply

@gargron That's true. Though most of the time when a post hits my timeline and then is later removed, I don't click through to see if the original post was deleted, my site removes it from the timeline because it received a Delete activity. So in that sense, it's the very same problem. (The chunks are meaningless in this system except to direct recipients.)

But it does mean I can't link to it anymore (unless someone explicitly went out of their way to mirror the post) and that much is true.

@gargron In other words, if your server sent me a post, but then you deleted it, and there was a partition and I didn't see the delete request, I would still see it on my timeline.

@cwebber It's especially relevant for announces. If you decide to announce my deleted post, your followers' servers would check my server first and find a 410, so even though you missed out on the delete, further propagation is prevented

@gargron Well... wait, I think that's probably a race condition.

- You make a post
- I announce it

And now it's one of two things racing:
- a) The servers get the announce
- b) You delete your post

Which of those happens first determines whether or not the announce is removed, right? If it's (b) then (a), yes. If it's (a) then (b) (which I think might usually be the case), then no, right?

@gargron In other words: if they process the announce, check whether your post is live (and it is), and cache it, and *then* you delete, then if the deletion was never federated to them they would have missed it anyway?
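A toy sketch of that ordering question, with made-up names just to illustrate (this isn't how any real server is implemented):

```python
# Toy model of the announce/delete race, from one follower server's view.
# The follower checks the origin exactly once, when it processes the Announce;
# after that it only drops the post if a Delete actually reaches it.

def follower_keeps_post(deleted_before_announce_processed, delete_federated):
    cached = not deleted_before_announce_processed  # the 410 check happens here
    if delete_federated:
        cached = False                              # cooperative removal
    return cached

# (b) then (a): delete first, announce processed after -> 410 found, not cached
print(follower_keeps_post(True, False))    # False

# (a) then (b): announce processed first, delete never federated -> still cached
print(follower_keeps_post(False, False))   # True
```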

@gargron At any rate, I think Freenet has a very nice solution to this that's actually quite reliable, for when cooperatively you want to show these kinds of updates (or deletions)! Figuring out how to incorporate it into the current fediverse will be the hard part. Back into research mode!

@cwebber this is the way twitter streams worked: a stream item might be a tombstone for a previous item... clients were supposed to honor tombstones unconditionally "or else twitter the corporation will be angry at you"

@cwebber This is awesome! Your explanation was clear, and I feel like I have a pretty good understanding of how it works. I didn't try playing with the demo, though.

My only meaningful question is: how much overhead does this add? It seems like you have to make a number of requests (3?) where you used to make fewer (1?), and presumably that effect could become more pronounced depending on chunk size. Do you think it's significant? I understand this is a trade-off for other benefits.

@carlozancanaro It will add overhead, especially if retrieving chunks is done over HTTP. Though, note that I clarified that the chunks can be retrieved via a variety of store mechanisms... the protocol for retrieving the URNs does not really matter. So, a more direct and less wasteful p2p protocol that does not use HTTP could be used for gathering and sharing chunks from/with peers.
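Here's roughly what I mean by the retrieval protocol not mattering, as a hypothetical sketch (the ChunkStore interface and class names are made up for illustration, not Golem's actual API):

```python
# Hypothetical sketch of a pluggable chunk store: the reader only cares that
# *some* store can hand back the bytes for a URN, not which protocol delivered
# them. None of these names come from Golem itself.

from abc import ABC, abstractmethod
from typing import Optional


class ChunkStore(ABC):
    @abstractmethod
    def get(self, urn: str) -> Optional[bytes]:
        """Return the chunk's bytes, or None if this store doesn't have it."""


class HttpStore(ChunkStore):
    def __init__(self, base_url, http_get):
        self.base_url, self.http_get = base_url, http_get

    def get(self, urn):
        resp = self.http_get(f"{self.base_url}/{urn}")
        return resp.content if resp.status_code == 200 else None


class LocalCacheStore(ChunkStore):
    def __init__(self):
        self.chunks = {}

    def get(self, urn):
        return self.chunks.get(urn)


def fetch_chunk(urn, stores):
    """Ask each store in turn; a p2p swarm store would just be another entry."""
    for store in stores:
        data = store.get(urn)
        if data is not None:
            return data
    raise KeyError(f"no store could provide {urn}")
```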

@carlozancanaro It's a great question though! And I'd also consider it this way: it might always be more overhead for the client, but it reduces overhead from a single server being responsible for being able to reply to all requests.

@cwebber I'm just thinking in terms of user experience. In Mastodon terms: how long does it take from clicking a toot to seeing the whole thread? That seems like it would result in a lot of requests, each of which takes time.

If it's a bounded number per item (which I guess it might be, depending on your chunking) then it's just a matter of whether it's "fast enough" for human interaction.

@cwebber Just one little nitpick: Golems are not just from "fantasy literature and folklore" but Jewish folklore specifically, all other uses descend from that.

@kirby I changed that paragraph. Could you read it and see what you think?

@cwebber Sounds good to me, but I'm just a goy myself 😅

@cwebber indeed: in the end it's not a technical problem (it takes x years to crack the cipher, with machines or... waiting) but a political one, where humans collectively have to ensure a structure of trust in which they can still think and communicate...
democracy won't be solved by a technical trick, but yep, technology can be a tool to help and to think about it :D

@cwebber Good stuff!

Re: encryption "shelf life": would the URI scheme support multiple encryption?

Barring weaknesses in the actual ciphers (and the various other ways to undermine encryption), it's unlikely that data encrypted with modern ciphers at sufficient key sizes will ever be decryptable without the key (Bremermann's limit; the optimal brute-force post-quantum attack against symmetric ciphers is Grover's algorithm, which is mitigated by doubling the key size).

So one option to mitigate the compromise of a cipher due to some sort of cryptanalytic attack is to use multiple ciphers, each with different keys.

Of course, if Alice is communicating an ephemeral symmetric key to Bob using an asymmetrically encrypted channel, the robustness of the symmetric algorithms won't matter much if an attacker who can monitor network traffic between Alice and Bob may be able to decrypt that key exchange in the future. But that exchange could take place over a more trusted connection that is not available to the public, unlike the e.g. IPFS-stored encrypted messages themselves (though it may still be available to e.g. the NSA/GCHQ/etc). So there is still value in hardening the symmetrically encrypted message as much as Alice and Bob desire based on their threat model.

@mikegerwitz A good set of comments to which I don't honestly have a great reply. My crypto-math-fu is pretty weak here, but the observation of things weakening is partly based on warnings from more cryptographically astute people I know, and also on the fact that so many cipher recommendations of yesteryear *have* weakened. But it's hard to tell if I'm over- or under-cautioning :)

That said, if you wanted to compose ciphers, you could set the es= parameter to something that knows to do that.
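Purely as an illustration of the cipher-composition idea (not Golem's actual es= handling), a cascade of two AEAD ciphers with independent keys, using the Python cryptography library, might look like:

```python
# Illustrative cascade encryption: two independent AEAD ciphers with independent
# keys, so a break of one cipher alone doesn't expose the plaintext.
# This is NOT Golem's actual es= handling, just a sketch of the general idea.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305


def cascade_encrypt(plaintext: bytes):
    k1, k2 = AESGCM.generate_key(bit_length=256), ChaCha20Poly1305.generate_key()
    n1, n2 = os.urandom(12), os.urandom(12)
    inner = AESGCM(k1).encrypt(n1, plaintext, None)        # layer 1
    outer = ChaCha20Poly1305(k2).encrypt(n2, inner, None)  # layer 2
    return outer, (k1, n1, k2, n2)


def cascade_decrypt(ciphertext: bytes, keys):
    k1, n1, k2, n2 = keys
    inner = ChaCha20Poly1305(k2).decrypt(n2, ciphertext, None)
    return AESGCM(k1).decrypt(n1, inner, None)


ct, keys = cascade_encrypt(b"hello, fediverse")
assert cascade_decrypt(ct, keys) == b"hello, fediverse"
```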

@cwebber To clarify, my second paragraph applies only to symmetric encryption.

You're absolutely not under-cautioning; I don't believe in such a thing in crypto. :) I was inquiring to see if multiple encryption was supported out of caution.

Certain ciphers have been weakened (or broken entirely), absolutely, which is what makes multiple encryption attractive. I didn't mean to suggest otherwise.

Thanks for your reply. I'm hoping to have the time to look into Spritely more deeply after LP2019.

@cwebber I can highly recommend MC Frontalot's "Secrets from the future" for a near-perfect and entertaining musical performance of this argument. :)

@cwebber @raucao Gotcha. Certainly apropos. Was almost jealous of you getting to hear a lot of good music for the first time.
