That’s what our robots and AI/ML-driven software slaves are saying about us.
Two interesting and related articles:
Software detects hidden emotions in parents
University of Bristol News (U.K.)
October 5, 2023
Software developed by researchers at the U.K.’s University of Bristol and Manchester Metropolitan University can identify complex hidden emotions by mapping facial features and evaluating the intensities of multiple facial expressions. The researchers used data from participants recorded by headcams worn by their infants. The participants’ facial expressions in the videos were analyzed by automated facial coding software and by human coders; the researchers assessed how often the software detected faces in the videos and how often the software and the humans were in agreement. Machine learning was then used to predict human judgments of parent facial expressions based on the decisions made by the computer. The University of Bristol’s Romana Burgess said, “Deploying automated facial analysis in the parents’ home environment could change how we detect early signs of mood or mental health disorders, such as postnatal depression.”
Geoffrey Hinton on the promise, risks of advanced AI
October 8, 2023
U.K.-born computer scientist and 2018 ACM A.M. Turing Award recipient Geoffrey Hinton said advanced artificial intelligence (AI), for all its promise, could conceivably take over. In a “60 Minutes” interview, Hinton said AI systems are intelligent, capable of comprehension, and can make experiential decisions in the same sense that humans do, and that achieving self-awareness is only a matter of time, effectively making AI more intelligent than humans. Hinton and collaborators Yann LeCun and Yoshua Bengio created neural networks that learn by trial and error, strengthening connections that lead to correct outcomes. Hinton suggested modern AI systems can learn better than the human mind despite having fewer connections, even though their exact inner workings are unknown. He urges experiments to improve our understanding of how the technology works, as well as government regulation and a worldwide ban on military robots.
And what’s the conclusion? That if AI bots take over (it’s never been explained to me what exactly they will take over, and once they do, what they will do with it or to it, but I guess that’s another discussion), they will be able to do so because we are so totally predictable. Stimulus A produces action B: always, or at least 95% of the time. Therefore, with such rote behavior, AI bots don’t have to achieve what we call consciousness.
Good biological, self-aware, and conscious robots that we are, our actions are almost totally predictable. Once that’s learned by the scary and threatening AI formerly known as Skynet, we will have no free choice, and the AI will predict and then presumably control. That could be a good thing. The AI will see us looking at our phone while crossing the street instead of watching the traffic light, or see Bruce Willis speeding down the wrong side of the street in a hot chase, and alert us to the impending danger: Hey! You big bag of salt water, watch out!
Or, the AI may decide: one more eliminated. Jeez, this is almost too easy; they kill themselves for us.
So somewhere a random number generator has to be injected into the system. The app on the phone we’re constantly looking at while walking down the street says: Wait! Normally, you would turn right in 100 feet into the Starbucks and get a chilled, pumpkin-seed, no-fat, cuckoo-milk cappuccino frappe. Don’t! Get whole milk today; trust me. And you do, and Skynet says: Damn, foiled again. We’ll never take over at this rate.
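For what it’s worth, the “inject a random number generator” idea has a standard shape in software: an epsilon-style randomized choice, where some fraction of the time you deliberately deviate from the habitual action, capping how accurate any habit-based predictor can be. Here is a minimal, purely illustrative sketch (all names and the coffee orders are hypothetical, not from any real app):

```python
import random

def choose_order(habitual, alternatives, epsilon=0.2, rng=None):
    """With probability epsilon, swap the habitual choice for a random
    alternative; otherwise keep the habit. A predictor that always bets
    on the habit is now right only about (1 - epsilon) of the time."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.choice(alternatives)
    return habitual

# Simulate many days and measure how often a habit-based predictor wins.
rng = random.Random(42)  # fixed seed so the demo is reproducible
habit = "pumpkin-spice cappuccino frappe"
others = ["whole-milk latte", "black coffee", "green tea"]
orders = [choose_order(habit, others, epsilon=0.2, rng=rng)
          for _ in range(10_000)]
hit_rate = sum(o == habit for o in orders) / len(orders)
print(f"habit-based predictor accuracy: {hit_rate:.2%}")
```

With epsilon at 0.2, the habit-based predictor lands near 80% accuracy instead of 100%; raise epsilon and Skynet’s forecasts degrade further, at the cost of more surprise whole-milk days.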
If David Bowie were still alive, he’d write a song: Stop Making Predictability.