Nearly everyone has an opinion on “robots”. My nerdier friends get excited and talk about the latest advances; my sociologist and anthropologist friends shake their heads and bring up issues of inbuilt prejudice and morality; some of us STEAM types, who operate on the cusps of disciplines, see both the possibilities and the risks and are alternately curious, elated, and worried.
This week’s links — far more numerous than four! — are about artificial intelligence, robotics, consciousness and machine ethics.
If you have ever used Cortana, or engaged in pointless back-and-forth with Siri, you already know that speech recognition platforms are getting better at context recognition. Bigger developments are in the works.
A leading (unnamed) bank in the UK is testing Amelia to help its staff assess the mortgage-lending suitability of applicants. Amelia is an AI- and machine-learning-driven cognitive platform: a machine agent built to assist, or perhaps replace, human agents in some customer interactions. Colloquially, a “robo-advisor”.
… traditional automated response systems, which are pre-recorded menus, equipped with speech recognition software that guides customers through a range of options, are cumbersome.
Amelia, however, has contextual filters which allow it to understand loosely stated problems and recognise sentences that have the same meaning but are structured differently.
When faced with foreign queries, the system will call upon a more experienced human agent to help resolve the issue. It will then listen in to the human-to-human interaction and create new steps in its process ontology, which will enable Amelia to address the same type of issue with subsequent callers.
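The escalate-and-learn loop described above can be sketched roughly as follows. This is a hypothetical toy: the class, the keyword matching, and the "ontology" dictionary are illustrative stand-ins, not IPsoft's actual implementation.

```python
# Hypothetical sketch of an escalate-and-learn loop like the one
# described for Amelia. All names and logic here are illustrative.

class RoboAdvisor:
    def __init__(self):
        # The "process ontology": known issue types mapped to resolution steps.
        self.ontology = {
            "reset password": ["verify identity", "send reset link"],
        }

    def match(self, query):
        # Stands in for real contextual filtering: two differently
        # worded queries can map to the same issue type.
        for issue in self.ontology:
            if all(word in query.lower() for word in issue.split()):
                return issue
        return None

    def handle(self, query, human_agent):
        issue = self.match(query)
        if issue is not None:
            return self.ontology[issue]  # known issue: resolve directly
        # "Foreign" query: escalate to a human, observe the resolution,
        # then record the new steps so subsequent callers are handled directly.
        new_issue, steps = human_agent(query)
        self.ontology[new_issue] = steps
        return steps
```

After one escalation for, say, an address change, a later caller phrasing the same request differently would be matched against the newly learned issue type without human help.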
Kurzweil predicts that humans will become hybrids in the 2030s. That means our brains will be able to connect directly to the cloud, where there will be thousands of computers, and those computers will augment our existing intelligence. He said the brain will connect via nanobots — tiny robots made from DNA strands.
“Our thinking then will be a hybrid of biological and non-biological thinking,” he said.
The bigger and more complex the cloud, the more advanced our thinking. By the time we get to the late 2030s or the early 2040s, Kurzweil believes our thinking will be predominantly non-biological.
Many of Kurzweil’s predictions have been on the mark, so this one is worth watching. Meanwhile, a new startup is working on transferring people’s consciousness into artificial bodies built around the brains of deceased humans. Considering that the question “what is consciousness?” remains unresolved, this will be fascinating to watch, not least because of the many questions of medical ethics raised by the way they word their value proposition now.
“We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioral patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human. Using cloning technology, we will restore the brain as it matures.”
Is our fascination with machines and making them “human-like” new?
Far from it. This beautiful essay looks at medieval technology and human fascination with things invisible but powerful. It is hard to imagine now how exciting it must have been back then to see a galvanometer needle move.
In the 19th century, scientists and artists offered a vision of the natural world that was alive with hidden powers and sympathies. Machines such as the galvanometer – to measure electricity – placed scientists in communication with invisible forces. Perhaps the very spark of life was electrical.
Even today, we find traces of belief in the preternatural, though it is found more often in conjunction with natural, rather than artificial, phenomena: the idea that one can balance an egg on end more easily at the vernal equinox, for example, or a belief in ley lines and other Earth mysteries. Yet our ongoing fascination with machines that escape our control or bridge the human-machine divide, played out countless times in books and on screen, suggests that a touch of that old medieval wonder still adheres to the mechanical realm.
Finally, machine ethics. The Human-Robot Interaction Lab at Tufts University is tackling an important problem in robotics: “How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself?”
The researchers have come up with at least one strategy for intelligently rejecting human orders.
The strategy works similarly to the process human brains carry out when we’re given spoken orders. It’s all about a long list of trust and ethics questions that we think through when asked to do something. The questions start with “do I know how to do that?” and move through other questions like “do I have to do that based on my job?” before ending with “does it violate any sort of normal principle if I do that?” This last question is the key, of course, since it’s “normal” to not hurt people or damage things.
The Tufts team has simplified this sort of inner human monologue into a set of logical arguments that a robot’s software can understand, and the results seem reassuring. For example, the team’s experimental android said “no” when instructed to walk forward through a wall it could easily smash, because the person telling it to try this potentially dangerous trick wasn’t trusted.
The video in the link — please click through for the one-minute clip — shows a robot that is programmed for intelligent rejection of an order that puts it at risk or comes from an unauthorised person.
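The ordered questions above can be sketched as a simple guard run before any command is executed. This is a hypothetical toy, assuming a robot described by a few capability and trust sets; the condition names follow the questions in the text, not the Tufts code, which reasons over natural language rather than dictionary lookups.

```python
# Hypothetical sketch of the ordered trust/ethics checks described above.
# The robot is modelled as a few sets; every name here is illustrative.

def reason_to_reject(robot, speaker, action):
    """Run the checks in order; return the first reason to reject
    the order, or None if it passes every check."""
    checks = [
        # "Do I know how to do that?"
        (action in robot["skills"], "I do not know how to do that"),
        # "Do I have to do that based on my job?"
        (action in robot["duties"], "that is not part of my job"),
        # Is the person giving the order authorised?
        (speaker in robot["trusted"], "you are not authorised to ask that"),
        # "Does it violate any normal principle if I do that?"
        (action not in robot["harmful"], "doing that would cause harm"),
    ]
    for passed, reason in checks:
        if not passed:
            return reason
    return None
```

With a configuration in which walking forward would damage the robot or a wall, a trusted operator’s harmless order passes every check, while the same order from a stranger fails at the authority check, much like the rejection shown in the clip.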
To the untrained eye, unconcerned with either the technology or the ethical implications, this sufficiently advanced technology looks like Arthur C Clarke’s ‘magic’. To many of us, however, it raises interesting and important questions about future developments. An algorithm embodies the biases and prejudices of the humans who design it, including unconscious bias, which does not go away with “training”.
The year 1968 is not so far back in time, but back then 2001 felt far enough in the future. It was fiction then; as we close 2015, we are getting closer to making it a reality.
“Open the pod bay doors, HAL.”
“I’m sorry, Dave. I’m afraid I can’t do that.”