You might think that a future where animals are replaced by robots is dystopian. But from horses carrying or pulling heavy loads to dogs guarding livestock and residences, working animals have already been supplanted by machines, such as automobiles or IoT-enabled alarm systems.
Now, we’re on the cusp of further change, where another kind of machine performs even more complex tasks for humans, especially in the fields of healthcare, education, and manufacturing.
Today, robots are increasingly becoming indispensable. Think of robots as companions for the elderly, providing vital mental and physical support, or as co-workers on an assembly line.
Yet, as it stands, there are no guiding principles to inform interactions between humans and robots. In other words, we need to think about how humans and robots should communicate with each other to maximize efficiency, minimize disruption, and perhaps even forge bonds.
Science fiction writer Isaac Asimov was first out of the blocks to try to frame this relationship when, in 1942, he created a set of rules in this sphere.
They were: don't harm human beings or allow them to come to harm; obey human orders (unless that violates the first rule); and protect your own existence (unless that violates the first or second rule).
The need for horse sense
With all the talk about the imminent demise of humanity due to the impact of AI, Asimov's rules might appear to be sensible prescriptions, even if they are a little vague.
Unfortunately, these rules are also overly simplistic, and don’t really inform the complex ways in which we will need to design our machine workers.
Studying the behavior of dogs, and considering how they relate to their masters, offers one option, as canines seem to enjoy human companionship as well as leadership.
The problem, however, is that dogs are predators, and are not the greatest role models when designing benign machines. They also accompany most of their expressive behavior with sounds, such as barks, growls, and whines.
It's these kinds of issues that confronted Eakta Jain, associate professor of computer science at the University of Florida, as she set about trying to conceptualize a framework for human-robot interactions.
As a member of the university's Transportation Institute, Jain had first-hand experience observing how autonomous vehicles keep tabs on other vehicles, maintain an appropriate distance from them, and even monitor the driver.
But while trying to settle on an appropriate benchmark around which to fashion design principles for human-robot interaction, she had a brainwave.
“Horses have been interacting with humans for over 10,000 years, transporting goods and people, in ranch work and in fighting wars. When I started thinking about this analogy, I realised that there are so many parallels to what robots will be doing,” says Jain in a conversation with ZDNET.
In fact, horses are unusual beings. They have an uncanny ability to sense their environment and are very responsive to their handler. They are also extremely intelligent and can be trained to function as part of a team.
Crucially, horses have been used in activities all over the world, across different cultures and geographies. “I also realised that there’s something about them that is like a universal human-agent interface,” says Jain.
She realized that the path to understanding horse behavior better lay in spending time at the Equine Center, conveniently situated on the University of Florida's campus.
A few months into her observational research, however, Jain realized that the only way to gain true insight into a horse's behavior was not just to observe, but to learn how to ride.
Over the course of six months, Jain learned to ride horses through weekly lessons. Along with a fellow researcher, a doctoral student in her department, she was able to home in on a few key observations about horses that could inform robot design.
The first and most important takeaway was that horses communicate primarily through non-verbal means, swiveling their ears to track the sounds of movement and potential threats. Horses also move their ears to indicate whether or not they are listening to you. After all, making noise could be a death sentence for a prey animal.
Other non-verbal cues include tension accumulating in the muzzle, a raised neck and, as a threat escalates, foot stamping.
Call of duty
So, how can we apply this knowledge in a robot-driven future that is designed to serve humanity?
One scenario the researchers came up with where non-verbal cues could play an important role involves individuals with hearing impairments.
A therapy robot could have prominent ears that point to the human user when it is listening to them, and toward the door when it hears a knock.
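As a rough illustration of the idea, here is a minimal Python sketch of how such attention-signaling ears might be driven by sound events. The class and event names are hypothetical, not drawn from Jain's research.

```python
# A minimal sketch (not from Jain's work) of how a therapy robot might map
# auditory events to ear orientation as a non-verbal "I'm listening" cue.
# All names (EarController, SoundEvent, Target) are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto


class Target(Enum):
    USER = auto()   # the person the robot is attending to
    DOOR = auto()   # source of an external sound, e.g. a knock
    IDLE = auto()   # no salient sound; ears in a neutral position


@dataclass
class SoundEvent:
    source: Target      # where the sound came from
    loudness: float     # 0.0-1.0, relative loudness


class EarController:
    """Points the robot's ears at whatever currently deserves attention."""

    def __init__(self, loudness_threshold: float = 0.3):
        self.loudness_threshold = loudness_threshold
        self.current_target = Target.IDLE

    def on_sound(self, event: SoundEvent) -> Target:
        # Ignore quiet background noise; otherwise orient toward the source.
        if event.loudness >= self.loudness_threshold:
            self.current_target = event.source
        return self.current_target

    def on_user_speaking(self) -> Target:
        # Speech from the user always wins: ears swivel back to them,
        # signaling "I'm listening" without any audio output.
        self.current_target = Target.USER
        return self.current_target


if __name__ == "__main__":
    ears = EarController()
    print(ears.on_user_speaking())                      # Target.USER
    print(ears.on_sound(SoundEvent(Target.DOOR, 0.8)))  # Target.DOOR (a knock)
    print(ears.on_user_speaking())                      # back to Target.USER
```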
Another area that Jain and her colleague found to be unique to horses was the notion of respect.
“We don’t typically think about respect in the context of human-robot interactions,” said Jain. “What ways can a robot show you that it respects you? Can we design behaviors similar to what the horse uses? Will that make the human more willing to work with the robot?”
Young horses are trained early on to move away or step backwards when a human approaches them. Similarly, a horse moving at the pace of the trainer indicates that the horse holds the human in high regard.
Hospitals, where robots would need to unobtrusively assist nurses and doctors by following along and stopping and starting in tandem with their human counterparts, are also ideal spaces to illustrate this design principle in a real-world scenario.
If you remember the bickering between R2-D2 and C-3PO in Star Wars, for example, you might have some inkling of the standoff that could occur between different kinds of delivery robots, such as a drone and a sidewalk-delivery robot refusing to give way to each other.
Using similar cues from horse behavior, such as size or type determining the dominance hierarchy and, therefore, respect, delivery robots could be designed to give way to one another depending on where they stand in an established pecking order, and to determine which one gets priority to deliver its item first.
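One way to picture this is a small Python sketch of a pecking-order rule for right of way. The robot types, ranks, and deadline tie-breaker below are illustrative assumptions, not an existing delivery-robot standard.

```python
# A minimal sketch of a horse-style dominance hierarchy resolving right-of-way
# between delivery robots. Types, ranks, and the tie-breaker are hypothetical.

from dataclasses import dataclass

# Lower rank number = higher standing in the hierarchy (yields less often).
PECKING_ORDER = {
    "ambulance_drone": 0,
    "delivery_drone": 1,
    "sidewalk_robot": 2,
    "warehouse_cart": 3,
}


@dataclass
class Robot:
    robot_id: str
    kind: str
    deadline_minutes: float  # time left to complete its delivery

    @property
    def rank(self) -> int:
        return PECKING_ORDER.get(self.kind, len(PECKING_ORDER))


def who_yields(a: Robot, b: Robot) -> Robot:
    """Return the robot that should give way at a shared chokepoint."""
    if a.rank != b.rank:
        # The lower-standing robot steps aside, like a young horse backing up.
        return a if a.rank > b.rank else b
    # Same standing: the one with more slack on its deadline yields.
    return a if a.deadline_minutes >= b.deadline_minutes else b


if __name__ == "__main__":
    drone = Robot("D-17", "delivery_drone", deadline_minutes=12)
    cart = Robot("S-04", "sidewalk_robot", deadline_minutes=5)
    print(f"{who_yields(drone, cart).robot_id} gives way")  # S-04 gives way
```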
Another key observation that emerged from Jain’s research was that different horses have different skills, as they’re essentially bred for specific traits. A ranch horse, the paper points out, is bred for quickness, a race horse for speed, and a dressage horse for its sensitivity.
Similarly, robots and humans have different abilities and, when working together, different speeds. So any future relationship between a human and a robot requires both parties to get used to each other, learning as they work together in the early stages.
For instance, a pack-carrying robot that needs to maintain a set distance from a human has to be taught to walk at varying speeds, so that it can acclimatize to that specific human's walking pattern, including speed and gait. In other words, the robot fits its patterns to the human it works with.
This kind of early learning through repetitive tasks is especially crucial in settings such as a factory, where a human worker has a particular posture and gait. The education process helps avoid frustrating slip-ups, or the development of a counterproductive sense of disdain for the robot partner.
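To make that concrete, here is a minimal Python sketch of how a hypothetical pack-carrying robot might blend a learned estimate of its human's pace with a set following distance. The controller gains and averaging scheme are illustrative choices, not details from the research.

```python
# A minimal sketch, assuming a hypothetical pack-carrying robot, of adapting
# to one specific human's pace while holding a set following distance.


class PackFollower:
    def __init__(self, target_gap_m: float = 1.5, gain: float = 0.8,
                 learn_rate: float = 0.05):
        self.target_gap_m = target_gap_m   # distance to keep behind the human
        self.gain = gain                   # how aggressively to close gap errors
        self.learn_rate = learn_rate       # how quickly to adapt to this human
        self.learned_pace_mps = 1.2        # starting guess at walking speed

    def update(self, gap_m: float, human_speed_mps: float) -> float:
        """Return the robot's commanded speed for this control step."""
        # Slowly blend the observed speed into a per-human pace estimate,
        # so the robot "acclimatizes" over repeated walks together.
        self.learned_pace_mps += self.learn_rate * (
            human_speed_mps - self.learned_pace_mps
        )
        # Track the learned pace, then correct for being too far or too close.
        gap_error = gap_m - self.target_gap_m
        command = self.learned_pace_mps + self.gain * gap_error
        return max(0.0, command)  # never command a negative speed


if __name__ == "__main__":
    follower = PackFollower()
    # Simulate a human who walks a touch faster than the initial guess.
    for step in range(5):
        speed = follower.update(gap_m=2.0, human_speed_mps=1.4)
        print(f"step {step}: command {speed:.2f} m/s, "
              f"learned pace {follower.learned_pace_mps:.2f} m/s")
```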
As we approach a new dawn for robots, the manner in which we communicate with our machine counterparts, the methods with which we train them, and the ways in which we regard them could become a crucial marker for future human success.
Just take a look at how we relate to fellow humans to understand the opportunities and perils that face us over the longer term.