Cloud Computing

The robots that keep their brains in the web

Robots have many useful attributes but intelligence is not yet one of them. Placed in a strange environment or given a new task to perform, a robot simply does not know what to do. Cloud robotics aims to overcome that big failing by enabling robots to get their smarts from the internet.

As robots come out of their safety cages and into the human world, they will need to learn a greater variety of tasks and cope with unfamiliar environments. That poses big challenges for robot developers.

“We have the necessary hardware but what’s really missing is the intelligence,” says Gajamohan Mohanarajah, CEO of Rapyuta Robotics, a Japanese startup that has developed a cloud-based collaborative framework for robots.

The philosophy underpinning cloud robotics is to let robots overcome their current limitations by crowdsourcing the resources they lack from the cloud. It is particularly relevant to mobile robots and drones, which face tight limits on power consumption, weight and space.

“If we want to build robots that are lightweight and low-cost, we cannot have a lot of computational power onboard,” says Mohanarajah.

Offloading number-crunching to the cloud saves on hardware costs and power consumption, and reduces processing delay, particularly for algorithms that can be parallelised. Take, for example, grasp planning – a skill which every baby acquires before toddlerhood. A robot, however, doesn’t know how to grasp an unfamiliar object. So it has to laboriously evaluate many possible hand configurations before deciding on the best “grasp candidate”.

Researchers at the University of California have shown this task can be dramatically speeded up through cloud-based parallel processing and by using a sampling-based algorithm that reduces the number of grasp candidates by 90%. In one set-up, using 500 processor nodes, they obtained a 515-fold speedup.
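The combination the researchers describe – prune the candidate set by sampling, then score the survivors in parallel – can be sketched in a few lines. This is a toy illustration, not the Berkeley system: the grasp representation, the scoring function and the thread pool (standing in for a cloud cluster) are all assumptions.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def grasp_quality(candidate):
    """Hypothetical score for one hand configuration. In a real system this
    would be an expensive physics-based evaluation; here it is a toy metric
    that prefers centred, upright grasps."""
    x, y, angle = candidate
    return -(x * x + y * y) - abs(angle)

def best_grasp(candidates, sample_rate=0.1, workers=8):
    # Sampling step: keep roughly 10% of candidates (the 90% reduction
    # mentioned above), then evaluate the sample in parallel.
    sample = random.sample(candidates, max(1, int(len(candidates) * sample_rate)))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(grasp_quality, sample))
    return max(zip(scores, sample))[1]

# 1,000 random (x, y, wrist-angle) hand configurations for one object.
candidates = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-3, 3))
              for _ in range(1000)]
grasp = best_grasp(candidates)
```

Because each candidate is scored independently, the work parallelises almost perfectly – which is why adding cloud nodes yields near-linear speedups.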

Stereo vision – another skill that we humans take for granted – is also a highly resource-intensive task and one that’s essential if robots are to navigate our world. Google’s self-driving car makes this look easy, but the trick here is that it is preloaded with a 3D map of its intended route. So, the software knows what to expect as the car drives around. It also has been taught how dynamic objects like pedestrians behave.

“The Google car has a very sophisticated set of algorithms but it’s a relatively simple example of a mobile robot because it is moving in a predictable and largely stable environment,” says Javier Civera, associate professor of robotics at the University of Zaragoza in Spain.

Creating mobile robots that can navigate complex environments and interact with unfamiliar objects is a much tougher problem. “Before a robot can manipulate an object, it has to recognise the object,” says Dan Kara, director of robotics research at analyst firm ABI Research.

Is that large fuzzy shape on the kitchen floor a sleeping dog or a pile of washing? If a domestic robot gets it wrong, the family pet could end up in the washing machine.

Performing tasks like image recognition in real time requires substantial computing power. Google’s car offloads most of the heavy lifting to the cloud and tomorrow’s smarter robots will need to do the same, researchers say.

“Robots have real-time constraints that have to be met, so not everything can be offloaded to the cloud. But there are some tasks that will need to be done by cloud resources,” explains Civera.
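The partitioning Civera describes boils down to comparing each task’s deadline with the cloud round trip. The sketch below is a hypothetical decision rule, not any vendor’s scheduler; the latency figure and function names are assumptions for illustration.

```python
CLOUD_ROUND_TRIP_MS = 150  # assumed network latency to and from the cloud

def run_task(task, deadline_ms, local_fn, cloud_fn):
    """Offload a task only when the cloud round trip fits its deadline."""
    if deadline_ms <= CLOUD_ROUND_TRIP_MS:
        return local_fn(task)   # hard real-time: must stay on-board
    return cloud_fn(task)       # enough slack: use cloud resources

# e.g. obstacle avoidance (20 ms deadline) stays on-board, while object
# recognition (2,000 ms deadline) can go to the cloud.
where_avoidance = run_task("avoid_obstacle", 20,
                           lambda t: "local", lambda t: "cloud")
where_recognition = run_task("recognise_object", 2000,
                             lambda t: "local", lambda t: "cloud")
```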

Just as robots can tap into the cloud for computational power, so too for data storage. This will enable robots to have less onboard memory, so saving power and costs. Uploading data into the cloud has other advantages. Aggregate data from many robots can be analysed using big data tools or monitored by management systems.

Amazon uses a shared data approach to keep its 15,000 mobile robots from colliding as they move goods around its shipping centres. A central server coordinates their movements using the real-time position data the robots send over the network.
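A central coordinator of this kind can be reduced to a very small idea: robots stream their positions, and a move is granted only if the target cell is free. The class below is a toy sketch of that pattern, not Amazon’s actual system; all names are hypothetical.

```python
class FleetCoordinator:
    """Toy central server: robots report their grid positions over the
    network and must be granted a free cell before moving into it."""

    def __init__(self):
        self.positions = {}  # robot id -> current (x, y) grid cell

    def report(self, robot_id, cell):
        # Real-time position update sent by a robot.
        self.positions[robot_id] = cell

    def request_move(self, robot_id, target):
        # Grant the move only if no other robot occupies the target cell.
        if any(c == target for r, c in self.positions.items() if r != robot_id):
            return False
        self.positions[robot_id] = target
        return True

server = FleetCoordinator()
server.report("r1", (0, 0))
server.report("r2", (0, 1))
blocked = server.request_move("r1", (0, 1))  # denied: r2 occupies that cell
granted = server.request_move("r1", (1, 0))  # granted: the cell is free
```

Because every robot shares one source of truth, collisions are prevented without any robot needing to sense its neighbours directly.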

The most interesting development in cloud robotics is undoubtedly machine learning. Training robots today is very time-consuming and tedious, so tomorrow’s robots will increasingly need to teach themselves how to handle new tasks. That means learning from past experience and from knowledge shared on the web.

In the future, robots might be able to tap into web-based “cognition as a service” platforms like IBM Watson. “The web will be used as a source of cognition for robots and for deep learning,” says Kara.

Deep learning is a new branch of machine learning focused on helping robots acquire the kind of complex knowledge needed to interact in a human world. Google researchers have demonstrated how cloud-based deep learning systems built from neural networks can automatically detect and classify objects and human actions.

In one experiment, a neural network made up of 16,000 processors taught itself to recognise cats after a week of watching YouTube videos. Astonishingly, it had no prior knowledge of what a cat looked like. Once a domestic robot has learnt to recognise the family cat -- and distinguish it from the family dog -- it could learn about cats’ behaviour by watching more YouTube videos, or it could simply ask other robots what they know about cats.

RoboEarth, an EU-funded research project, was set up to encourage this knowledge sharing, creating in effect, a world wide web for robots. Today, the code and algorithms used by robots are highly hardware dependent and so difficult to reuse. RoboEarth creates a framework to abstract machine-specific knowledge and turn it into generic “recipes” that every robot can understand.
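A hardware-neutral “recipe” can be thought of as a list of generic steps that each robot maps onto its own skill implementations. The structure below is a speculative sketch of that idea, not RoboEarth’s actual schema; the field names and skill functions are invented for illustration.

```python
# A hypothetical hardware-neutral recipe: what to do, not how to actuate it.
recipe = {
    "task": "open_cupboard",
    "steps": [
        {"action": "grasp", "target": "door_handle"},
        {"action": "pull", "force_profile": "spring_loaded"},
    ],
}

def execute(recipe, skills):
    """Run a generic recipe by dispatching each step to this robot's own
    skill implementations."""
    for step in recipe["steps"]:
        skills[step["action"]](step)

# One robot's (toy) skill set: here each skill just logs what it would do.
log = []
skills = {
    "grasp": lambda step: log.append(f"gripper -> {step['target']}"),
    "pull": lambda step: log.append(f"pull with {step['force_profile']} profile"),
}
execute(recipe, skills)
```

A robot with a different gripper would register different implementations under the same action names, which is what lets one robot’s knowledge transfer to another.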

Take for example the seemingly simple task of opening an Ikea Expedit cupboard, a demonstration task in the RoboEarth database. For a robot, the learning process is complicated by the spring-loaded door hinges, which require the pulling force to be varied.

“The first few times, the robot tried to pull the door off,” says Gajamohan Mohanarajah, who participated in the RoboEarth project. Once the robot had mastered the task, the knowledge was uploaded to RoboEarth and downloaded to a different type of robot. The second robot, despite having more rudimentary sensing, successfully opened the door using just the knowledge it had acquired from the first.

When the RoboEarth project finished last year, Mohanarajah and his team set up Rapyuta Robotics to find real-world applications for the research. One promising area is low-cost robotic security guards that work collaboratively to patrol premises in Japan.

But clearly Mohanarajah has missed the “killer application” for his research: robots that know how to assemble Ikea furniture. Now that's a task even humans struggle to master.


Geoff Nairn

Geoff Nairn is a freelance writer who writes about technology, business and finance for a wide range of business and specialist media. He is currently writing a book on big data, robotics and AI in tomorrow's deeply automated economy.
