Would you buy/use a computer that ran 3x slower than modern machines if it were more secure (less vulnerable to side-channel attacks)?
crossposted to https://twitter.com/dustyweb/status/1127222086144020480
@cwebber According to my official and unbiased mastodon poll Hillary should have won the election too :)
@cwebber There wasn't a "3x slower than modern machines would probably still be faster than my nine-year-old desktop machine." 😆
FWIW, I wouldn’t make the tradeoff (except maybe on a second computer) because I’m confident in my ability to avoid those kinds of attacks.
Rarely. I use NoScript and only enable it for sites I trust.
(Yes, I realize that’s not bulletproof because how much can I *really* trust a site but I expect actual wide-ranging in-the-wild side channel attacks to get on the public radar before they can reach me.)
(And that assumes such attacks are actually feasible and profitable.)
@cwebber nothx, I don't use side channels.
@cwebber Nope not as my main computer.
But I would buy one in addition to my other computers and use it specifically for any high security needs I might have.
@cwebber The notebook computer I still use regularly is over six years old and was a budget model. I think it is already 3x slower than a modern machine, so I would certainly appreciate a security upgrade with similar performance
@cwebber Yes, assuming the modern machine you referred to is a 2nd-gen Ryzen 7.
In other words, if it is not slower than a Pentium N4200, it will be fine.
@cwebber what does "3x slower" mean? 3x longer to do anything?
@cwebber and how much more secure is it?
acceptable: processor does things at 1/3 speed but no one can break in without actively using the machine under my (or my executor's) credentials
unacceptable: processor unpredictably slows waaayyy down periodically to do things for an average of 1/3 speed, machine is perfectly silent
@cwebber It really depends on how much more security you get for your 3x speed sacrifice.
@aran Indeed. In this case I'm suggesting security against things such as rowhammer, spectre, etc (btw my understanding is that the slowdown would be more like 1.5x-2x but I decided to give a significantly more conservative estimate)
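To make the Spectre part concrete, here's a minimal sketch of the kind of bounds-checked read Spectre v1 exploits, plus a branchless index-masking mitigation similar in spirit to the Linux kernel's `array_index_nospec`. The names and the 16-byte table are purely illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Spectre v1 targets code like this: the CPU may speculatively
 * execute the array read with an out-of-bounds idx before the
 * bounds check retires, leaving a cache footprint that a timing
 * side channel can later read back. */
uint8_t table[16];

uint8_t vulnerable_read(size_t idx, size_t len) {
    if (idx < len)
        return table[idx];   /* speculatively reachable with idx >= len */
    return 0;
}

/* Mitigation sketch: compute a mask that is all-ones when idx < len
 * and all-zeros otherwise, with no branch the CPU can mispredict.
 * (idx - len) wraps to a value with its top bit set exactly when
 * idx < len; the arithmetic right shift smears that bit. */
size_t index_mask(size_t idx, size_t len) {
    return (size_t)((intptr_t)(idx - len) >> (sizeof(size_t) * 8 - 1));
}

uint8_t hardened_read(size_t idx, size_t len) {
    if (idx < len)
        return table[idx & index_mask(idx, len)];
    return 0;
}
```

The point of the masking trick is that even a misspeculated execution can only ever touch an in-bounds index, so there is nothing secret for the cache footprint to reveal.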
@shadowfacts It's interesting that people feel this way, and I wonder how much is "anchoring"... I guess having run computers that were 300x slower than modern computers and getting a lot done on them makes me less bothered by it.
By "anchoring", I mean: if Moore's law hadn't died out and I was asking you this after computers were 3x faster than today, asking if you were willing to drop from that to "today's speeds", would you object?
@cwebber probably not
@cwebber but I would say that part of that is probably due to perceived speed being very very different than just, like, raw processor speed. it's dependent on a lot of things like OS programming, cache size, workflow, amount of RAM, etc.
when you said "3x slower" my gut reaction was "you know the amount of times your macbook completely freezes up when trying to do some trivial task like alt-tab or open a new text file? what if those freezes took three times as long or happened three times as frequently"
@nightpool Sure. "Software is a gas, and expands to fill all available space."
The question is whether it can be compressed again.
@nightpool I'm also aware that many people in this generation have been using computers *primarily since* Moore's law leveled off, so unlike people from my generation, they've gotten used to an approximate baseline of speed.
@cwebber sure. I'm probably willing to give up "having 100 chrome tabs open" but not willing to give up "having a super high DPI laptop display". trade-offs are going to be different for different people
@nightpool If it's any consolation, I think that high DPI displays aren't a concern in themselves (but the amount of processing necessary to display to them might be, dunno).
@cwebber my understanding is that there's some complicated performance cost for the thing I'm doing, which is configuring a high DPI display with a non-integer scaling factor. I dunno though.
@cwebber and of course that's completely nonsense, that's all based on stuff like, "I always forget to close programs I'm not using" and "retina displays with high scaling are hard for even top-line laptop graphics cards with current implementations"
@elomatreb It's not a theoretical branch of security? It's been demonstrated: general attacks that run in your browser, on any page you could load, and can read from your memory.
That doesn't seem "theoretical" to me...
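For anyone wondering how a browser page could possibly "read your memory": the core trick is a cache-timing channel. Here's a toy, fully deterministic model of the flush+reload idea — the cache is modeled as a plain boolean array instead of being probed with real latency measurements, and all names are made up for illustration:

```c
#include <stdbool.h>
#include <string.h>

#define LINES 256

/* Toy cache model: cached[i] says whether "line i" would be a hit.
 * A real attack infers this by timing actual memory accesses. */
bool cached[LINES];

void flush_all(void) { memset(cached, 0, sizeof cached); }

/* Victim: touches one cache line selected by a secret byte,
 * exactly like the probe-array access in a Spectre gadget. */
void victim_access(unsigned char secret) { cached[secret] = true; }

/* Attacker: never sees the secret directly — it only observes
 * which line is "fast" and reads the secret back from that. */
int recover_secret(void) {
    for (int i = 0; i < LINES; i++)
        if (cached[i])       /* stands in for a fast timed access */
            return i;
    return -1;
}
```

The real versions replace the boolean array with timed loads over a 256-entry probe array, but the information flow — secret in, cache state out — is the same.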
@elomatreb Sure, because that's a low hanging fruit. Also worth addressing, but a different topic.
@cwebber basically the only computationally expensive task I do is software development (okay and guix).
If people realised that their phones are sometimes 1.5-2x slower than their desktops, I believe more would answer positively.
If not for development, I would say "yes" probably. Not sure.
• today's "own" machine, with whatever performance and security it's got
• subscription to a "cloud computing" provider, including equipment for the access (lease)
For the same price, period of time, availability, etc., but with higher performance in the latter case.
I'd say the only place where performance is really needed in s/w development and maintenance is reproducible builds.
And the applications for content creators of course: CAD, graphic design, music creation, etc.
@cwebber sidebar: if we all got high we wouldn't notice it was any slower ;)
@cwebber I'd honestly like to see VLIW resurrected. Maybe, with Spectre and Meltdown and whatnot as motivations, compiler technology can finally catch up and give decent performance.
@cwebber "no i need all the speed" but also "i would actively choose to make my computer less secure if it became faster"
but only because my threat model is basically nonexistent. like, "security" is abstract, but "speed" is immediately tangible. there are far greater risks.
@cwebber things run poorly already, and I'm not a likely candidate for whatever computer shenanigans. I want a snappy computer!
I've been buying used business systems and installing Linux for personal use for a decade and a half, so use... sure. Buy? Hmm...
I'm good with the performance of an i5-6600 (Skylake, 2015) for a lot of demanding applications. I couldn't afford a current generation i9 to get that, but make that a business requirement today and we'll see what's on the online auction sites in 3 years
Isn't that basically what a Raspberry Pi does?
@cwebber I'm posting this on a secondhand ThinkPad T60 manufactured in 2007. It runs fine, even if it lacks the raw power of a high-end laptop manufactured in 2019.
I already run off a 10? year old laptop and manage OK. Anything new would be faster
Beyond side channels I'm even more worried about the ever present issues of poor security design/architecture and of security-critical components being written in unsafe languages. The fact that there is always another buffer overflow waiting in the kernel, in the browser, etc is nonsense. Who knows when someone will find a critical vulnerability in libjpeg and start manipulating images to take over the browser, then call a vulnerable syscall to install a rootkit.
I really want to run a microkernel (so poorly written driver code doesn't compromise the whole system) written in a safe language with arbitrarily nestable security contexts (eg. beyond users having different privileges, I want any program to be able to spawn processes, threads, etc in more restricted contexts, which can also spawn more restricted children, etc).
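The "arbitrarily nestable security contexts" idea can be sketched as capability sets that children may only shrink, never grow. Everything below is a hypothetical API, not any real OS interface:

```c
#include <stdint.h>

/* Hypothetical capability bits. */
enum {
    CAP_NET   = 1u << 0,
    CAP_FS    = 1u << 1,
    CAP_SPAWN = 1u << 2,
};

typedef struct { uint32_t caps; } context_t;

/* A child context holds the intersection of what the parent has
 * and what was requested: privileges monotonically shrink as you
 * nest, so a compromised child can never exceed its parent. */
context_t spawn_child(context_t parent, uint32_t requested) {
    context_t child = { parent.caps & requested };
    return child;
}

int has_cap(context_t ctx, uint32_t cap) {
    return (ctx.caps & cap) == cap;
}
```

Because the child's set is always an intersection, a process can hand a downloaded plugin a context with, say, filesystem access but no network, and that plugin can only restrict its own children further.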
Also I want a modern Lisp machine...
Yes and no. I would love a usable lisp OS (especially Racket) even without special hardware. But with hardware designed for it I'm sure it would be better. One of the major reasons Lisp Machines died is that Moore's Law moved so fast that by the time the longer design cycle of the specialized hardware finished, newer commodity chips were already faster. Now that Moore's law (and friends) are largely over, that could change.
Just the other day there was an article circulating about recent work on hardware-assisted GC. Combined with e.g. math instructions that automatically strip and check the type tag, I could see it ameliorating many of the performance concerns of using higher level languages.
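A software sketch of the tag-check-and-strip such instructions would fold into one step: fixnums carry a low tag bit, and every add first verifies both tags — what a Lisp machine did in hardware on each instruction. The encoding is illustrative only:

```c
#include <stdint.h>

/* Lowest bit tags the word: 0 = fixnum (payload shifted left 1),
 * 1 = pointer to a heap object. A hardware tagged-add instruction
 * would do the check + add + trap below in a single cycle. */
#define TAG_MASK   1
#define FIXNUM_TAG 0

typedef intptr_t lispval;

lispval make_fixnum(intptr_t n)  { return n << 1; }
intptr_t fixnum_value(lispval v) { return v >> 1; }
int is_fixnum(lispval v)         { return (v & TAG_MASK) == FIXNUM_TAG; }

/* Checked add: refuse non-fixnum operands (hardware would raise
 * a type trap here instead of returning a status flag). */
int fixnum_add(lispval a, lispval b, lispval *out) {
    if (!is_fixnum(a) || !is_fixnum(b))
        return 0;
    *out = a + b;   /* both tags are 0, so the sum stays tagged */
    return 1;
}
```

Note that because the fixnum tag is zero, the add needs no untag/retag at all — which is exactly why this check is cheap to put in silicon, and nearly free compared to the branch-per-operation cost it has in software.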