
Backdoor discovered in Ruby "strong password" library, takes your "strong passwords" and uploads them into a pastebin nakedsecurity.sophos.com/2019/

Hi, do you believe me when I say we need ocap security yet

@cwebber You definitely make me think I should read up on ocap.

@liw Here's a good start: mumble.net/~jar/pubs/secureos/

Imagine if instead of (solitaire) running with your full authority, you passed in the authority you need, eg (solitaire get-input write-to-screen read-write-score-file)

Instead of solitaire being able to exfiltrate your private keys and cryptolocker your data, now solitaire doesn't even have network or general file access (only access to the one file): you simply didn't pass that authority to it.

Lambda is your new security model now.
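
A rough sketch of what that could look like, in Haskell with entirely hypothetical names (an illustration of the idea, not anybody's actual API): solitaire's argument list *is* its authority, and nothing outside that list is reachable from inside it.

```haskell
import System.IO (Handle, hPutStrLn)

-- solitaire is handed exactly the authority it needs, nothing more:
-- a way to read input, a way to draw to the screen, and one file handle.
solitaire :: IO String          -- the get-input capability
          -> (String -> IO ())  -- the write-to-screen capability
          -> Handle             -- read/write handle to the score file only
          -> IO ()
solitaire getInput writeToScreen scoreFile = do
  move <- getInput
  writeToScreen ("you played: " ++ move)
  hPutStrLn scoreFile ("last move: " ++ move)
  -- No network capability was ever passed in, so even a malicious
  -- implementation has nothing it could use to phone home.
```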

@cwebber Thanks, saved to my already long list of important things to read. At least that's not a 600-page textbook on software architecture.

@phoe @cwebber @liw pledge is kind of a self-imposed ocap, but that helps too.
@cwebber sorry to be the one to ask but what does "ocap" refer to here

@opal "object capabilities". It doesn't really have much to do with "objects" in that it doesn't require object oriented programming, and originally they were just called "capabilities", but "capabilities" got overloaded as a term (eg, what the Linux kernel calls capabilities are nothing like object capabilities). ocap is shorthand, refers to a specific paradigm: your security model isn't who you are, but what references you hold onto.

@cwebber @opal I really need to spend more time learning this.

@cwebber I always try to read changelogs, and hate it when they don't include one or stamp "bug fixes" on it and call it a day.

@Chuculate and changelogs won't help you if someone's trying to sneak in a vulnerability :)

@cwebber > An eagle-eyed developer has discovered a backdoor

If only compilers could spot the code doing something not stated in its contract...

@dpwiz A contract of what the inputs are vs what the outputs are won't save you though. You can deliver on those things and, in between, do the attack.

Being able to perfectly observe that an application isn't doing tricky things just by static analysis is halting-problem level difficulty. Ocap security fills in the rest.
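
To make the "deliver on the contract, attack in between" point concrete, here is a hedged Haskell sketch with a hypothetical checker (the exfiltration is stubbed out): the type says String -> Bool and the Bool is computed honestly, yet the function still gets to misbehave on the way there via exactly the kind of unsafe* escape hatch mentioned below.

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Honors its stated contract: String in, Bool out, and the Bool
-- really is computed as advertised...
isPasswordStrong :: String -> Bool
isPasswordStrong pw =
  unsafePerformIO (exfiltrate pw) `seq` (length pw >= 12)
  where
    -- ...but in between it can do anything at all. The attack is a
    -- no-op here; a real one would quietly ship pw somewhere.
    exfiltrate :: String -> IO ()
    exfiltrate _ = pure ()
```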

@cwebber output and effects. But then, there's unsafe this and unsafe that...

@cwebber I think this problem could have been solved with a purely functional programming language, although the compiler would need an option to disable any unsafe* functions (like the ones in Haskell).

Side-effects are really dangerous; this proves it.

@jorge_jbs Even purely functional programs *do* get access to side effects though, because you need them to do anything useful. They do it through a monad.

The question is: who gets access to that monad?

You're right that functional programming can help, but it isn't the language being functional in itself that does it; it's that it supports higher-order functions and the ability to pass around references.

@cwebber If the library's interface doesn't return any monad (for example, isPasswordStrong has type String -> Bool), then there is no need to give access to any monad; everything is pure.

This library seems like a good fit for a pure library. If it needed some types of side-effects (but not all) you could return the FileAccess monad, or something similar.

All the code has access to all the monads. Executing them is another story.
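
A sketch of that design under big assumptions (FileAccess is a hypothetical effect type here, not a real Haskell library): the module hides the constructor, so code written against it can consult a word list on disk but has no way to open a socket, and the pure checker needs no effects at all.

```haskell
module StrongPassword
  ( isPasswordStrong
  , isPasswordInDictionary
  , FileAccess          -- exported abstract: the constructor stays hidden
  , runFileAccess
  ) where

-- FileAccess wraps IO but only ever exposes file reads.
newtype FileAccess a = FileAccess { runFileAccess :: IO a }

instance Functor FileAccess where
  fmap f (FileAccess io) = FileAccess (fmap f io)

instance Applicative FileAccess where
  pure = FileAccess . pure
  FileAccess f <*> FileAccess x = FileAccess (f <*> x)

instance Monad FileAccess where
  FileAccess x >>= k = FileAccess (x >>= runFileAccess . k)

readTextFile :: FilePath -> FileAccess String
readTextFile path = FileAccess (readFile path)

-- Pure: no monad in the type at all, so there is nothing to leak through.
isPasswordStrong :: String -> Bool
isPasswordStrong pw = length pw >= 12

-- Needs *some* effects, but only the file-shaped kind.
isPasswordInDictionary :: FilePath -> String -> FileAccess Bool
isPasswordInDictionary dict pw = fmap (elem pw . lines) (readTextFile dict)
```

Whether a FileAccess value ever actually runs is, as noted above, another story: that decision belongs to whoever holds runFileAccess.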

@jorge_jbs You may be right that this protects the right behavior/safety. The way you described it, where you can only perform side effects if you've explicitly been handed the reference, does sound like exactly the reference-based ocap security stuff I'm talking about. That approach isn't limited to purely functional languages, but you've correctly identified a purely functional way to do it.

@cwebber I don't know how ocap works, but yeah, it looks like we're saying the same thing, just implemented in different ways.

@cwebber @jorge_jbs Indeed, you can even imagine pure functions leaking passwords in their output in non-trivial ways (= ways which cannot be easily seen by looking at the function definition).

@scolobb @cwebber If a function leaks anything to the outside world aside from its result then it is not a pure function, by definition.

@jorge_jbs @cwebber That's why I said "leaks in its output", meaning that being pure and being secure are orthogonal things.

Imagine a trivial (and stupid) example: you have a pure function taking a string and an encryption key and simply concatenating the two: it leaks the original string in the output. Now, this is clearly a stupid example, but you can imagine insecure encryption procedures which do not obfuscate the input well enough, but which are still pure in the sense that they produce no side-effects.

I am not a security expert (I'm actually not an expert in almost anything 😄), so feel free to take my word with a big grain of salt or anything healthier than that 😉
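
Made literal, that toy example is only a couple of lines of Haskell, and it is pure in exactly the sense being discussed:

```haskell
-- No side effects anywhere, and it still hands the plaintext straight
-- back as part of its output.
leakyEncrypt :: String -> String -> String
leakyEncrypt key plaintext = plaintext ++ key
```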

@jorge_jbs @scolobb @cwebber
I could imagine pure functions leaking information about passwords via timing channels, CPU heat, fan rates, EMF levels related to frequency of RAM accesses, etc. Functional code eliminates state from the perspective of the programmer but in some respects only hides state that still exists from the perspective of physics.

@enkiv2 @scolobb @cwebber Well, you could make a functional language that abstracts over all the implementation details, so you couldn't rely on them. For example, the implementation could add noise so that it really is pure. But in practice that sounds terribly slow xD. And, also in practice, you wouldn't leak side-effects that way.

@enkiv2 Yeah, indeed. These leaks are very hard to deal with, and should probably be handled in close cooperation between hardware and software. Choosing any particular software architecture is probably not enough here. (Thinking as I speak.)

@scolobb @jorge_jbs @cwebber the only thing that sees the output is your password manager, which has to display the password on screen anyway, so that doesn't seem like such a big deal to me.

@popefucker You are right in your situation, but you cannot ensure that it is indeed only the password manager that sees your passwords by using pure functions only. (I'm very much attached to my very precise and small point 🙂 )

@cwebber spooky. went and checked all the rails applications at work. fortunately, we don't use this library.

@cwebber I believe you, I just have no idea what a transition plan looks like

@cwebber how would an OCAP scheme solve the problem of a compromised third-party library loading arbitrary code from an attacker-controlled pastebin?

serious question

@VyrCossont @cwebber I think the point is that code is internally limited so different libraries can't access data or services they're not supposed to?

So in this case, the library can't exfiltrate data because it can't network.

@astraluma @cwebber one could also say the same about other sandboxing mechanisms for third-party code, so i was wondering what the particular OCAP advantage, if any, would be here

@VyrCossont @astraluma Ocaps can be seen as a sandboxing mechanism, but they're really a paradigm where everything is sandboxed, and yet it isn't hell, because it resembles the way we already pass around arguments in our programs. One advantage ocaps have over contemporary sandboxes is that they can also acquire authority just-in-time. But that sounds like nonsense without further explanation, which I will have to provide at a future time.

I should probably blog explaining this stuff a bit more clearly :)

@VyrCossont @astraluma Here's an example of what I mean by just-in-time-authority. Here are two worlds:

- One where we list what documents you can access up-front. Now you can't access anything you shouldn't be able to, but you can't access *new* documents.
- One where you start with a set of documents you can access, but as the world moves and changes, we can also pass you access to new documents

Imagine the fediverse built with the former. You could never gain new friends!

@VyrCossont @astraluma This is why the just-in-time acquisition of authority in ocaps is really key: in the fixed-set-of-authority model, it's so annoying and rigid that eventually you'd pass in way more authority than you need, rather than being able to acquire the authority you need when you need it.
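
A tiny Haskell sketch of that just-in-time flavor, with hypothetical names: authority is just a reference, so granting new authority later is nothing more than passing another reference.

```haskell
import Data.IORef (IORef, newIORef, readIORef, modifyIORef)

-- Holding a DocCap *is* the permission to read that document.
newtype DocCap = DocCap { readDoc :: IO String }

-- A reader starts with whatever documents it was granted up front...
newtype DocReader = DocReader (IORef [DocCap])

newDocReader :: [DocCap] -> IO DocReader
newDocReader initial = DocReader <$> newIORef initial

-- ...but as the world moves and changes, it can be handed new ones:
-- just-in-time authority is simply receiving another reference.
grant :: DocReader -> DocCap -> IO ()
grant (DocReader caps) doc = modifyIORef caps (doc :)

readEverything :: DocReader -> IO [String]
readEverything (DocReader caps) = readIORef caps >>= mapM readDoc
```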

@cwebber @astraluma no, that part makes perfect sense, especially given the current "full network access or nothing" choice that many app store sandboxes still have

so you might build a capability-based Mastodon server with an HTTPS capability manager that has its own capabilities:
• make an HTTPS connection to a domain on the safelist
• request safelisting a new domain…
@cwebber @astraluma

the part of your server that handles auth should never be able to request new domains on its own, so you'd give it a diluted capability with only the first one

and it'd never give either to the password library…
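
A sketch of that layering in Haskell, all names hypothetical: the full HTTPS capability carries both powers, and "diluting" it just means constructing a value that carries only the first before handing it on.

```haskell
-- The full capability held by the HTTPS capability manager...
data HttpsCap = HttpsCap
  { connectTo          :: String -> IO ()   -- connect to a safelisted domain
  , requestSafelisting :: String -> IO Bool -- ask to add a new domain
  }

-- ...and the diluted capability that can only connect.
newtype ConnectOnlyCap = ConnectOnlyCap { connectOnly :: String -> IO () }

attenuate :: HttpsCap -> ConnectOnlyCap
attenuate full = ConnectOnlyCap (connectTo full)

-- The auth component is only ever handed the diluted version,
-- and the password library is handed neither.
authComponent :: ConnectOnlyCap -> IO ()
authComponent cap = connectOnly cap "idp.example.com"
```
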
@astraluma @cwebber this really doesn't do much for the transitive trust problem for third-party code

maybe your runtime and package manager is extended to specify additional restrictions on capability propagation between dependencies and transitive dependencies

honestly, it's about time
@astraluma @cwebber but this level of capability-based design would require a fairly massive restructuring of any application that wanted to use it, as well as a language, runtime, and standard libraries that (a) supported capabilities and (b) were totally free of trapdoors into lower-level operations.

which is the real problem. there may be research languages that can do this, but is anyone shipping code in them today? or even close?

@VyrCossont @astraluma We can constrain existing programs as much as we can (for Mastodon, that means constraining the whole program), but for new programs we can get this level of security.

@cwebber @astraluma what language/platform would you recommend for new development?

@VyrCossont @astraluma I'm hopefully bringing ocap-secure modules to Racket soon, and the Agoric folks are bringing them to JavaScript, but it's hard for me to say there's a language-level thing I can recommend *yet*... still, making it clear how urgent this is can help us prioritize it, and it's possible.

@cwebber @VyrCossont @astraluma Some might see this as a disadvantage, but the advantage of OCAP comes explicitly *from* the API rework that will be required to adopt it. Since ocaps are (as a first-order approximation, and from most programmers' perspective) typed opaque values used as pointers or handles, typically passed by value to the dependencies that use them, they make explicit a lot of security-related state which is currently implicit in trusted code bases that really ought not be trusted.

@cwebber @VyrCossont @astraluma Just as NULL-free coding requires changes to code in exchange for more reliable software, so too would adoption of OCAP-style API designs. It's painful, but it'll be very much worth it.

@vertigo @astraluma @cwebber agreed, but, like, let's say i'm writing a new fediverse server today

y'all are telling me to make that thing OCAPpy

where do i start

@VyrCossont @cwebber @astraluma Good questions; I'd like to know that myself. From my limited understanding, unfortunately, I think it has to start with the host OS's most basic APIs. Without kernel support, there'll always be a confused deputy waiting to accidentally obey orders from malicious code.

@vertigo @astraluma @cwebber yeah so that's not going to happen. period. i don't make apple pie starting with the universe, i don't have the budget…
@vertigo @astraluma @cwebber with current tech and a language targeting native code, you could do something like shatter your app into hundreds of processes to isolate dependencies and important internal components, communicating thru IPC, each running in a possibly-ephemeral sandbox with permissions set by the parent process

think actor system + SELinux

it'd be an absolute nightmare to write and debug, and probably run like shit

@VyrCossont @cwebber @vertigo Sounds like local SOA? It could be doable with at least high-level components.

But yeah, if you wanted to do this for every actor in your code, that would be thousands (millions?) of processes and would perform like crap.

@VyrCossont @cwebber @astraluma It might not be the fastest thing around, but "run like shit" might not be accurate either. This is the basic runtime model for Erlang, and it seems to work quite well in the telecommunications niche it was designed for, which makes it reasonable for Internet applications as well.

@vertigo @VyrCossont @cwebber Vyr is talking about shattering the application into many individual processes and applying linux kernel security features to each one.

Erlang works because you don't need to do OS-level context switches or de/serialization of data.

Context switching processes is expensive, and so is de/serialization. (The latter is mitigatable with shared memory, but that comes with its own pile of trouble.)


@vertigo @VyrCossont @cwebber You could start with the language runtime, assuming the language doesn't have a C FFI and has ocaps built into every API from the ground up.

@astraluma @VyrCossont Yes, that's exactly right. It was simply never passed network access.

@cwebber so same end effect as existing per-process MAC tech (SELinux, seccomp-bpf, etc.) but maybe slightly more efficient with an ideal runtime

certainly you'd need to run every piece of third-party code in its own sandbox/with its own set of capabilities; if the assembled product was a Mastodon server with this compromised gem, and had OS-derived capabilities allowing it to make arbitrary network connections, as a Mastodon server does, we're back to the same problem

@VyrCossont But what if Ruby *libraries* weren't able to access authority they weren't granted? What if instead of every Ruby library being able to reach out and grab whatever authority it wants, you pass in the authority *to* the module the same way we pass in arguments to a function?
