Would you buy/use a computer that ran 3x slower than modern machines if it were more secure (less vulnerable to side-channel attacks)?

Interestingly, the numbers here are very different from those on birdsite. Many more people here seem willing to take a speed hit for security than over there. Not surprising, I suppose: selection bias.

@cwebber According to my official and unbiased mastodon poll Hillary should have won the election too :)

@cwebber There wasn't a "3x slower than modern machines would probably still be faster than my nine-year-old desktop machine" option. 😆

@cwebber

FWIW, I wouldn’t make the tradeoff (except maybe on a second computer) because I’m confident in my ability to avoid those kinds of attacks.

@cwebber

Rarely. I use NoScript and only enable scripts for sites I trust.

(Yes, I realize that’s not bulletproof because how much can I *really* trust a site but I expect actual wide-ranging in-the-wild side channel attacks to get on the public radar before they can reach me.)

(And that assumes such attacks are actually feasible and profitable.)

@cwebber Nope not as my main computer.

But I would buy one in addition to my other computers and use it specifically for any high security needs I might have.

@cwebber The notebook computer I still use regularly is over six years old and was a budget model. I think it is already 3x slower than a modern machine, so I would certainly appreciate a security upgrade with similar performance :blobcheeky:

@cwebber I assumed you meant 1/3 the speed. I already do - a Gluglug Libreboot X60 from the days before they became minifree.org/

@cwebber Yes, assuming the modern machine you're referring to is a 2nd-gen Ryzen 7.

In other words, as long as it is not slower than a Pentium N4200, it will be fine.

@cwebber and how much more secure is it?

acceptable: processor does things at 1/3 speed but no one can break in without actively using the machine under my (or my executor's) credentials

unacceptable: processor unpredictably slows waaayyy down periodically to average 1/3 speed, and the machine is perfectly silent about it

@cwebber It really depends on how much more security you get for your 3x speed sacrifice.

@aran Indeed. In this case I'm suggesting security against things such as Rowhammer, Spectre, etc. (btw, my understanding is that the slowdown would be more like 1.5x-2x, but I decided to give a significantly more conservative estimate)
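For anyone who hasn't followed the Spectre discussion closely, here is a minimal sketch of the v1 bounds-check-bypass pattern that such mitigations (or a slower-but-safer CPU) aim to neutralize; the array names follow the original Spectre paper and are not code from this thread:

```c
/* Minimal sketch of a Spectre v1 (bounds-check bypass) gadget.
 * Array names follow the Spectre paper; purely illustrative. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 512];
size_t  array1_size = 16;

uint8_t victim_function(size_t x) {
    if (x < array1_size) {
        /* The branch predictor can be trained so this body executes
         * speculatively even for an out-of-bounds x.  The secret byte
         * array1[x] then selects which cache line of array2 is loaded,
         * and the attacker recovers it later by timing accesses to
         * array2 (flush+reload). */
        return array2[array1[x] * 512];
    }
    return 0;
}
```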

@cwebber It depends how you define modern machine. If it were 3x slower than my current computer, no. It might technically be possible, but it'd be very unpleasant. If it were 3x slower than, say, a brand new, top of the line, ridiculously overpowered machine, then maybe.

@shadowfacts It's interesting that people feel this way, and I wonder how much is "anchoring"... I guess having run computers that were 300x slower than modern computers and getting a lot done on them makes me less bothered by it.

By "anchoring", I mean: if moore's law didn't die out and I was asking you this after computers were 3x faster than today, and asking if you were willing to drop from that to "today's speeds", would you object?

@cwebber If Moore's Law still held, it would be a much different question. A 3x decrease would only be a few years old, so despite being objectively much slower, I'd be much more used to it because it would be so recently in the past.

A great deal of the reluctance to change, I think, is not just that the hardware's gotten better, but the software that we use _expects_ that modern, fast hardware. My (and I suspect other people's) answer was based on the assumption of continuing to use the software we already do as normal. But, if something like your hypothetical actually happened, and lots of people started using computers that were several times slower, I think we'd see the software adapt to that.

If the software adapted to work about equally well on 3x slower hardware as current software does on current hardware, I'd be much more willing to take the slower hardware, knowing that I'd still be able to do everything I currently do and be similarly productive. But, current software designed for modern computers, running on a 3x slower device, would for a great many tasks be unbearable.

@shadowfacts

It sort of comes down to having programmes properly use multiple cores. Because then you can compensate for the individual core slowdown by putting more of them in the machine.

And more cores is already a trend in order to manage power and battery life.

@cwebber
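A back-of-the-envelope sketch of that trade-off, assuming cores that are 3x slower but 3x as numerous, with a made-up parallel fraction p; the punchline is that the serial part of a workload still pays the full penalty:

```c
/* Amdahl's-law estimate of the "more, slower cores" idea.
 * Assumptions: each core is 3x slower, but there are 3x as many;
 * p (the parallel fraction) is an invented parameter. */
#include <stdio.h>

int main(void) {
    const double core_slowdown = 3.0;  /* each core runs at 1/3 speed */
    const double core_ratio    = 3.0;  /* but we get 3x as many cores */

    for (double p = 0.50; p <= 1.001; p += 0.25) {
        /* Relative wall-clock time vs. the original machine: the serial
         * part slows by core_slowdown, the parallel part by
         * core_slowdown / core_ratio (i.e. not at all in this example). */
        double t = (1.0 - p) * core_slowdown
                 + p * core_slowdown / core_ratio;
        printf("parallel fraction %.2f -> %.2fx slower overall\n", p, t);
    }
    return 0;
}
```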

@cwebber but I would say that part of that is probably due to perceived speed being very, very different than just, like, raw processor speed. it's dependent on a lot of things like OS programming, cache size, workflow, amount of RAM, etc.

when you said "3x slower" my gut reaction was "you know the amount of times your macbook completely freezes up when trying to do some trivial task like alt-tab or open a new text file? what if those freezes took three times as long or happened three times as frequently"

@nightpool Sure. "Software is a gas, and expands to fill all available space."

The question is whether it can be compressed again.

@nightpool I'm also aware that many people in this generation have been using computers *primarily since* Moore's law leveled off, so unlike people from my generation, they've gotten used to an approximate baseline of speed.

@cwebber sure. I'm probably willing to give up "having 100 chrome tabs open" but not willing to give up "having a super high DPI laptop display". trade-offs are going to be different for different people

@nightpool If it's any consolation, I think that high DPI displays aren't a concern in themselves (but the amount of processing necessary to display to them might be, dunno).

@cwebber my understanding is that there's some complicated performance cost for the thing I'm doing, which is configuring a high DPI display with a non-integer scaling factor. I dunno though.

@cwebber
> Sure.  "Software is a gas, and expands to fill all available space." The question is whether it can be compressed again.

Only after freezing :D

@nightpool

@cwebber and of course that's complete nonsense, that's all based on stuff like, "I always forget to close programs I'm not using" and "retina displays with high scaling are hard for even top-line laptop graphics cards with current implementations"

@cwebber basically the only computationally expensive task I do is software development (okay, and guix).
If people realised that their phones are sometimes 1.5-2x slower than their desktops, more would answer positively, I believe.
If not for development, I would probably say "yes". Not sure.

@cwebber It depends how you measure speed, but in general yes.

My requirements for development are not very significant. I can work easily on a 4 core A53. I don't care about games or 3D graphics, other than that browsers often make use of it.

@cwebber sidebar: if we all got high we wouldn't notice it was any slower ;)

@cwebber "no i need all the speed" but also "i would actively choose to make my computer less secure if it became faster"

but only because my threat model is basically nonexistent. like, "security" is abstract, but "speed" is immediately tangible. there are far greater risks.

@cwebber things run poorly already, and I'm not a likely candidate for whatever computer shenanigans. I want a snappy computer!

@cwebber
I've been buying used business systems and installing Linux for personal use for a decade and a half, so use... sure. Buy? Hmm...

I'm good with the performance of an i5-6600 (Skylake, 2015) for a lot of demanding applications. I couldn't afford a current-generation i9 to get that, but make that a business requirement today and we'll see what's on the online auction sites in 3 years.

@cwebber
Yes.
I already run off a 10(?)-year-old laptop and manage OK. Anything new would be faster.

@cwebber
Beyond side channels, I'm even more worried about the ever-present issues of poor security design/architecture and of security-critical components being written in unsafe languages. It's absurd that there is always another buffer overflow waiting in the kernel, in the browser, etc. Who knows when someone will find a critical vulnerability in libjpeg and start manipulating images to take over the browser, then call a vulnerable syscall to install a rootkit.
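To make the "always another buffer overflow" point concrete, here is a minimal sketch of the bug class in C; the function name and buffer size are invented for illustration:

```c
/* Sketch of a stack buffer overflow: nothing in the language stops
 * the out-of-bounds write.  Name and sizes are invented. */
#include <string.h>

void parse_name_field(const char *untrusted_input) {
    char name[16];
    /* If untrusted_input holds 16 or more characters, strcpy writes
     * past the end of `name`, corrupting adjacent stack memory
     * (possibly the return address); this is the root of countless
     * remote-code-execution bugs in parsers and decoders. */
    strcpy(name, untrusted_input);
}
```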

@cwebber
I really want to run a microkernel (so poorly written driver code doesn't compromise the whole system) written in a safe language, with arbitrarily nestable security contexts (e.g. beyond users having different privileges, I want any program to be able to spawn processes, threads, etc. in more restricted contexts, which can in turn spawn even more restricted children, and so on). A rough sketch of what that could look like is below.

Also I want a modern Lisp machine...
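A purely hypothetical sketch of the "nestable security contexts" idea above, expressed as a C-style API; none of these calls exist anywhere, and every name is invented for illustration:

```c
/* Hypothetical capability-style API: any process can derive a
 * strictly more restricted context and spawn children inside it,
 * and those children can do the same, recursively. */
typedef struct security_ctx security_ctx_t;

/* Derive a child context whose rights are a subset of the parent's. */
security_ctx_t *ctx_derive(security_ctx_t *parent);

/* Drop a named right (e.g. "net", "fs-write") from a context. */
int ctx_drop_right(security_ctx_t *ctx, const char *right);

/* Spawn a process confined to the given context; that process may in
 * turn derive and use even more restricted contexts for its children. */
int spawn_in_ctx(const security_ctx_t *ctx,
                 const char *path, char *const argv[]);
```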

@cwebber I actually wonder what 3x slower means... CPU? Disk access? Network speed? Wall time? Would it "feel sluggish" or just do some tasks more slowly?

@cwebber
It would be unbearable. Depending on your threat model, you can always make new hardware secure enough.

@cwebber I would!

But I'm questioning whether such a machine would actually be slower. It would require us to recompile, and occasionally rewrite, all our software though.

@cwebber
What's the current list of "safer" computers? aarch64 A53's, Talos, Pentium 1?

@cwebber I am running a 2017 MacBook (12 inch) so I am pretty sure that it's already 3x slower than the average computer ...

@cwebber Yup, my notebook is 10 years old and runs Linux. I can work with it perfectly well. You don't need high-end stuff for office work or medium-sized programming jobs.

@cwebber Oh, I have to admit that I cheated by upgrading it with an SSD...

@cwebber
I'm not the target audience for this question clearly, because I regularly use 386 and 486 machines, but yes.

The thing that gives us the illusion that we need more power in our computers is that our applications are badly written.

With more well tuned applications, raw speed becomes inconsequential.

@cwebber
Much of computing is what you optimize for. While Moore's law was operating, that certainly wasn't CPU cycles 😂

The CPU requirement for the Oculus Rift SDK is a fifth-generation i5 with a PassMark score just over 7k. The generation of Intel CPUs that debuted 2 years ago started with a PassMark just over 22k and currently sells for under US$1000.

If running a secure OS is CPU constrained, the power is available and close to being economically viable

@lxoliva had some compelling words about this at LP2019:

https://media.libreplanet.org/u/libreplanet/m/who-s-afraid-of-spectre-and-meltdown/

I don't know if your comment related at all to Spectre, but his argument was: if all the software running on your system is free software, what is there to fear? And I agree.

The biggest trouble is that people often run non-free and untrusted code all of the time in their web browsers, and don't see it as a software freedom or security issue. It's important to recognize it for what it is---untrusted, unsigned, ephemeral software---if you're going to consider security tradeoffs when it comes to certain mitigations. I personally don't run JS at all, even if it's free, with very few exceptions, because it's unsigned.

@mikegerwitz @lxoliva I'm glad you ack'ed the "not signed" aspect. Regarding nonfree vs free software: an attacker can mark their javascript's metadata as LibreJS-compatible, then perform a read or write attack against the system. (Heck, it even *could* be genuinely free software; most likely the target isn't checking the licensing situation while they're under such an attack, but it's also trivial to lie about it.)

@mikegerwitz @lxoliva However, we shouldn't believe that just because something is free software it is trustworthy, or that we have the capacity to fully audit our software systems for security. The sad reality is that people run way too much code to be able to trust or audit their systems, and Ka-Ping Yee's thesis showed that if an attacker wants to add vulnerabilities to (even free) software, even the best programmers won't detect it: zesty.ca/pubs/yee-phd.pdf

@mikegerwitz @lxoliva At any rate, defense in depth. Free software helps, but we shouldn't be saying "well, we're not going to bother with these other (critical) layers because we're just focusing on this one layer."

Also as someone who wants to build a decentralized, free software powered distributed game where you can safely run other peoples' game code, heck yeah I want to be sure that it doesn't open my system to attacks.

@mikegerwitz @cwebber it's not just about being free software, you have to actually know that it does what you wish, which requires auditing by a trusted party. Truth is, we've long known about side-channel attacks that allow information leaks. Spectre and Meltdown aren't the first nor the last of these, and some are deemed unfixable, so if you wish to run untrusted code on your system, you'd better not have information you wish to keep private on it.

@cwebber @lxoliva Certainly we need to trust it as well. But if you're installing software on your system, there are generally other, more effective ways to compromise the user than resorting to side-channels.

But ensuring your software is signed and reproducible also helps to mitigate targeted attacks---if you're running the same software that everyone else is running, then the risk is very high for someone to do something malicious and tarnish their reputation.

Many users just `curl foo | sudo sh` the latest hot thing as they're instructed.

@mikegerwitz @lxoliva I don't think we're disagreeing there. I'm just arguing for a *multi-pronged approach*, and from there I don't understand where the objections are coming from. I have a hard time believing that if we had a community-oriented, libre-hardware-design RISC-V machine that was less vulnerable to these side-channel attacks, the bunch of us wouldn't advocate that people use that *and* free software.

@cwebber Oh, I would certainly advocate for libre hardware. What I was replying to was your original message:

> Would you buy/use a computer that ran 3x slower than modern machines if it were more secure (less vulnerable to side-channel attacks)?

I interpreted this as buying a modern (e.g. Intel) processor that has the Meltdown/Spectre microcode mitigations, which can cut performance of certain processes in half (something we have to deal with at work).

But RISC-V is another story. We actually gain something substantial there.

@mikegerwitz I wasn't talking about the microcode updates specifically, but I think they're also a good example of the question as I posed it, if you put the non-freeness aside. For context, what prompted this conversation was a chat on Friam (a meeting of some programming language nerds that happens once a week) where Meltdown/Spectre were discussed, and more fundamental CPU architectural changes were proposed (as well as changes to some of the ways we program, because it's Friam).
