AI: Now I am getting scared

I have been hearing the warnings from Musk and others for a while now… but it is finally soaking into my feeble brain… check out https://aws.amazon.com/deeplens/

Facial recognition for $250 plus an AWS account… now that’s scary.

I’m not scared of anything that can be thwarted with a cheesecloth.

Don’t worry… to quote “Westworld”… “Nothing can go wrong… go wrong… go worng…”

It can do Hot Dog / Not Hot Dog!

Isn’t this the same company that made us laugh with https://arstechnica.com/gadgets/2018/05/amazon-confirms-that-echo-device-secretly-shared-users-private-audio/ ?

My grandmother told me about the maid who was afraid of the phone, right around the turn of the last century…

This hysteria could very well be about fear of technology and substance abuse, combined with too many bad Hollywood scenarios…

[quote=389775:@Michel Bujardet]My grandmother told me about the maid who was afraid of the phone, right around the turn of the last century…

This hysteria could very well be about fear of technology and substance abuse, combined with too many bad Hollywood scenarios…[/quote]
Haven’t you guys ever seen “The Terminator”?

Personally I think the worst will come from self-aware, mutating viruses, sponsored by governments to take down various parts of computer controlled infrastructure, or to manipulate an electoral system.

I’m not going to start worrying about AI until Google Translate can actually translate something properly.

I was somewhat scared by Siri when I first discovered it…
now I’m not anymore.

What concerns me is that everything seems to be ruled by the companies. The Uber car that ran over the lady crossing the road with her bike detected her but decided not to brake because it wasn’t sure what it had detected. (I feel urged to use a certain three-letter word at this point.)

As a child I dreamed of a high-tech future, of course, but all those inventions had Asimov’s laws hard-coded into their CPU (or positronic brain :wink: ). Without regulations built on ethics and human rights, technology poses a lot of risks. It does not have to be Skynet, although the conclusion that mankind is the biggest danger to planet Earth is not that far off. Netflix’s Black Mirror series is excellent at extrapolating our civilization into the future, and I am afraid some of the dystopias presented could become real very soon: https://www.theatlantic.com/technology/archive/2018/02/chinas-dangerous-dream-of-urban-control/553097/

Have you seen this? Still not perfect but way better. https://www.deepl.com/Translator. Here’s your sentence after translating it into German and back:
I won’t worry about AI until Google Translate can actually translate something correctly.

In some of the texts above, there is an error:

the planet does not need us “to be saved” *.

We need the planet to live. That is a huge difference.

  • When there are no more humans, the planet will go on living, whatever life that may be. :frowning:

[quote]
My grandmother told me about the maid who was afraid of the phone, right around the turn of the last century…

This hysteria could very well be about fear of technology and substance abuse, combined with too many bad Hollywood scenarios…[/quote]

Unfortunately, comparing the coming years with the past years does not work, due to our linear-thinking brains.
Or to put it a bit more kindly… we are all stupid ;-)

https://singularityhub.com/2016/04/05/how-to-think-exponentially-and-better-predict-the-future/
While the self-driving of cars will become safer, the same (exponential) progress applies to the worst-case scenarios; see Ulrich’s example about China (which is really scary).

The developers certainly have the same mess I have…

When I add Boolean tests, once the writing part is done, the testing comes: if the first and second tests do not work as intended (read: as I intended them), I quit, invert the Booleans (set True to False and False to True), and it works fine.

The developers above just have to do the same: “in case of doubt, activate the brakes”.
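That “in case of doubt, activate the brakes” rule is essentially a fail-safe default: the ambiguous case falls through to the safe action, not the convenient one. A minimal sketch in Python (the confidence values and the threshold are hypothetical illustrations, not Uber’s actual logic):

```python
def should_brake(detection_confidence: float, threshold: float = 0.8) -> bool:
    """Fail-safe braking decision based on an obstacle detector's confidence.

    Brake when we are confident an obstacle is present, AND ALSO when the
    reading is too ambiguous to rule one out. Only a confidently clear
    road lets the car keep driving.
    """
    if detection_confidence >= threshold:
        return True   # confident detection: brake
    if detection_confidence > 0.2:
        return True   # ambiguous reading: in case of doubt, brake anyway
    return False      # confidently clear: keep driving


# The unsafe variant is the same function with the middle branch removed:
# ambiguity then silently falls through to "keep driving".
print(should_brake(0.5))  # ambiguous case → True (brake)
```

The point is that the doubtful branch must be written explicitly; if it is merely omitted, the code quietly picks the unsafe default.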

Another answer would be to talk about Karma: but it is not in my Karma to enter into that discussion. :wink:

@Ulrich Bogun: what I read is that the emergency braking was simply switched off so as not to irritate the driver. I’ve been in a car with such collision braking and it’s rather startling when the car brakes.

In another car a colleague and I tried the automatic parking feature, and I almost got a heart attack because the car leaves much less distance to the neighboring car than a normal driver would.

My own car nowadays has traffic sign detection. Not even that works perfectly. How can we expect autonomous driving to work when something that should be far simpler isn’t working 100%?

Here’s a quote from the investigation report: https://twitter.com/binarybits/status/999661474900475904
Money quote: [quote]The system is not designed to alert the operator.[/quote]

That’s exactly what I mean. This evolution is much too fast, probably driven mostly by companies seeing future growth markets. We are so used to banana software (it ripens at the customer’s), and now Alpha and Beta versions of autonomous cars are carrying that culture over into physical life. I see big opportunities in autonomous driving, but maybe we should introduce it bit by bit, after a lot of testing (on closed tracks first)?

Banana? The one I just ate was very nice, I love bananas! :wink:

More serious:

100% Ok for me.

The fact that people rely on Google Translate (or any other technology for that matter) and that it often does not work well can lead to big problems. The Uber self-driving car accident is a good example.

I had a very interesting conversation with my son (he is beginning a master’s degree in engineering) on the topic of autonomous cars. Based on current studies, it would seem that fully autonomous cars are already safer than human drivers, but human drivers are safer than partially autonomous driving-assistance systems. For safety, we should go with the infamous Uber car and similar full implementations, or nothing at all. Accidents involving autonomous cars are spectacular and highly visible in the media, but the statistics would apparently tell a different story. (Yes, I know that 75.3456789% of statistics are meaningless, and that one can make statistics lie… Notice the use of the conditional.)

The in-between solutions that we all intuitively think are good first steps should apparently be avoided, because they are not as safe as human drivers alone. Again, based on current studies.

For now, I keep relying on human control.

The issue as I see it is that current autonomous driving systems are trying to cope with a system designed (evolved?) for human drivers. To be fully successful, the roads and vehicles should interact with each other and eliminate the human-driver option entirely. If the road knows its capacity limits and the vehicles communicate with each other, traffic jams and accidents can be eliminated much more easily. To accommodate the obstinate human who wants to be in control of the vehicle, like me, a limited number of alternative routes could be provided (and gradually phased out).