George Battye IRL computing

A long time ago (late '90s) I read an article in SciAm about the dangers of machines being governed entirely by software. The gist of it was that wetware (i.e. human) errors are something we've evolved to deal with, and although a faulty person may be scary, it's generally on a very limited scale and - in the absence, or equivalency, of weaponry - pretty solvable (notable exceptions would be faulty politicians with the ability to scale their errors... Hitler?). With software, especially when connected to hardware, the scale and might of the problem quickly become overwhelming. The article predicted that people's interaction with smarter software and hardware would lull them into the belief that they were invariably safe, and that this comfort would quickly lead to pervasive delegation of control to devices. And unlike simple software errors on our devices that have only a digital impact, control problems could cause severe economic and safety catastrophes.

The question remained: Should we accept these risks and push through the beta problems in order to achieve a much better standard of living?
Posted: 28 August 2014 at 09:04

Anton Musgrave Man vs. machines...where will it end?

Interesting that it's OK for humans to die in war but not for machines to be used. The armaments industry is also facing disruption...who is actually voicing opposition to machine soldiers?
Posted: 28 August 2014 at 10:07

Doug Vining Tools designed to be our friends

As the internet of things becomes more entrenched in our lives, we will be faced with ethical questions about the behaviour of robots and connected devices. If a driverless car causes an accident, who is to blame? If your home security fails to recognise you, will you have a claim against the vendor? Here's a great quote from this article in the Guardian:

"The spam sending fridge from earlier this year could easily be seen as being a ‘bad refrigerator’, not because of an inappropriate interior temperature but because of its coerced online activity. Add to the mix the potential for affective computing; the ability for a computer device to recognise your facial expression or predict your mood through your gestures or body pose and respond appropriately; increasingly sophisticated language recognition and speech production technologies, and this illusion of humanity will become even stronger."

"As smart devices start to work with us, and understand our social rules, we may increasingly see them as human like - a world filled with tools designed to be our friends."
Posted: 8 September 2014 at 14:45

Mark John Smart Home

While turning your own home into a smart one, there are some key points we need to consider. For example, a friend of mine who is moving to her new place at Marina Bay Residences was also planning to install devices to make her unit smarter, but she is now thinking twice after I shared this with her. I tried to study the floor plan, and I told her that it's up to her if she wants to do it.
Posted: 15 January 2015 at 09:00

Neil Jacobsohn Smart Home

I think it is simply a case of being thoughtful about why and what you want.
Posted: 15 January 2015 at 09:27

Doug Vining Internet of dumb things

In an echo of the scenario we published last year, here's the real-life tale of a burned-out light bulb that wouldn't stop asking for help - until the whole network froze up.
Posted: 4 March 2015 at 12:22
Comments by users of MindBullets are those of the authors and are not necessarily shared, endorsed and/or warranted by FutureWorld. All MindBullet content is Copyright FutureWorld International © 2019. All Rights Reserved.