@ninjawedding I'd say I'm more excited about the possibilities of AI than worried about the dangers. I've been interested in AI since I was a teenager, and based on what I've read and written, and on my industrial robotics experience, I'm confident that within the foreseeable future most production processes could be fully automated, should we choose to do so.

The dangers of AI are not the ones which Singularitarians obsess over: the "paperclip fallacy", robot uprisings, and the like. The real danger is that the world's productive capacity will fall entirely under the command of a few robber barons, and that unequal access to the products of automation will become even more extreme than it is now. There's also, I think, a very real possibility of world economic dictatorship by one man who comes to control most of the world's wealth, regardless of national borders, in a sort of final showdown between oligarchs. This isn't a very scifi fear, because it has already been happening for quite some time.

How do we bring advanced automation within democratic control? This is really the big question of the 21st century. If you're a young adult today, this question will probably be a background theme throughout your lifetime.
Christopher Lemmer Webber @cwebber

@bob @ninjawedding "Will robots inherit the earth? Yes, but they will be our children. We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called Evolution. Our job is to see that all this work shall not end up in meaningless waste."

@cwebber @bob To elaborate a bit on my nuclear take (which I admit was clickbait-y and pithy):

I am more concerned than excited about AI because it's harder to understand and judge a program than it is to write a program that does something, and harder still to convince people to do things differently to avoid the negative impacts that understanding reveals.

So perhaps e.g. Amazon's Alexa and Amazon's logistics could debut in a positive future, providing assistance to those who need it; that assistance becomes richer as the Alexa platform becomes more capable and more developers build stuff on top of it. That's a neat possibility. But it's hard to step back and ask "how could 'drop in on [someone]' be abused?", and even harder to get people to adopt something like Mycroft. As such I see countermeasures as much more difficult to implement than those neat possibilities, and I am therefore a lot more concerned about the threat.
@bob @cwebber I guess a more assholeish take on the SO survey would go something like:

A lot of people can glue together Tensorflow, ImageMagick, and a big-ol' dataset, call it an image-recognition AI, and apply it to their image-recognition problem. Substantially fewer people have the motivation and capability to understand that AI's limitations; even fewer people have the courage to disable that AI once they find it is unfit for its purpose.

So perhaps the threat isn't AI, but more like faith in the machine plus the same apathy that plagues many other problems.
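(The missing step in the glue-it-together pattern above can be sketched in a few lines. This is only an illustrative sketch, not anyone's actual pipeline: `toy_classifier`, `holdout_accuracy`, the holdout data, and the `THRESHOLD` value are all hypothetical stand-ins, with random guessing in place of a real Tensorflow model so the sketch stays self-contained.)

```python
# Sketch of "glue together a model, call it an AI" -- plus the step people
# skip: measuring whether it is fit for purpose, and disabling it if not.
import random

random.seed(42)  # deterministic for the example


def toy_classifier(image):
    """Hypothetical stand-in for a glued-together model: guesses at random."""
    return random.choice(["cat", "dog"])


def holdout_accuracy(model, labeled_holdout):
    """Fraction of held-out (image, label) pairs the model gets right."""
    correct = sum(1 for image, label in labeled_holdout if model(image) == label)
    return correct / len(labeled_holdout)


# A hypothetical labeled holdout set the model never saw during training.
holdout = [(f"img{i}.png", "cat" if i % 2 == 0 else "dog") for i in range(100)]

accuracy = holdout_accuracy(toy_classifier, holdout)
THRESHOLD = 0.90  # arbitrary fitness bar for this deployment

# The step that takes "courage": switch the model off when it is unfit.
model_enabled = accuracy >= THRESHOLD
print(f"accuracy={accuracy:.2f}, enabled={model_enabled}")
```

A random guesser hovers around 50% on two classes, so this gate correctly refuses to deploy it; the point is that the gate has to exist at all, and someone has to act on its verdict.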

@ninjawedding @bob So first of all, I am also worried about AI, but especially the current direction with neural nets where the software is not accountable. But "AI" is broad, and there are other paths for AI that *are* accountable... some thinking on this:

@cwebber @bob You were at the FSF's 30th anniversary party? Huh, I was too -- I guess I missed you there :P

Still making my way through Sussman's propagator paper; had to take a break in section 6.