Tristan Greene* says a robot that can ‘solve’ a Rubik’s Cube one-handed has the AI community at war.
OpenAI, a non-profit co-founded by Elon Musk, recently unveiled its newest trick: a robot hand that can solve a Rubik’s Cube.
Whether this is a feat of science or mere prestidigitation is a matter of some debate in the artificial intelligence (AI) community right now.
In case you missed it, OpenAI posted an article on its blog last week titled ‘Solving Rubik’s Cube With a Robot Hand’.
Based on this title, you’d be forgiven if you thought the research discussed in said article was about solving Rubik’s Cube with a robot hand.
It is not.
Don’t get me wrong, OpenAI created a software and machine learning pipeline by which a robot hand can physically manipulate a Rubik’s Cube from an ‘unsolved’ state to a solved one.
But the truly impressive bit here is that a robot hand can hold an object and move it around (to accomplish a goal) without dropping it.
The robot, which is just a hand, doesn’t actually figure out how to solve the puzzle.
An old-school non-AI algorithm does the maths using sensor data, then transmits each step to the hand in succession.
The hand just follows directions.
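To make that division of labour concrete, here’s a minimal sketch, in Python, of how such a pipeline might be wired together: a classical solver, such as Kociemba’s two-phase algorithm (available as the open-source ‘kociemba’ package), plans the move sequence, and the manipulation policy only executes it. The ‘hand’ object and its methods are hypothetical stand-ins for the vision and hand-control stack.

```python
# A minimal sketch of the planner/actuator split described above.
# The puzzle itself is solved by a classical, non-learned algorithm;
# the learned policy only has to execute each face rotation.
import kociemba  # open-source two-phase Rubik's Cube solver (pip install kociemba)

def solve_cube(hand):
    # Read the cube's state from sensors as a 54-character facelet
    # string in URFDLB order (hypothetical helper on the hand object).
    state = hand.read_cube_state()

    # The 'old-school' algorithm does the maths: it returns a move
    # sequence such as "R2 U' F ..." with no machine learning involved.
    plan = kociemba.solve(state).split()

    # The hand just follows directions, one face rotation at a time
    # (hypothetical hand-control call).
    for move in plan:
        hand.execute_rotation(move)
```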
It’s one thing to create a purpose-built machine designed to perform a specific function in a perfect environment.
It’s quite another to do it with one hand, under unpredictable, adverse conditions.
OpenAI had to develop a training method that constantly challenges the AI to figure out new ways of solving a problem.
As soon as the AI came up with a method that worked well, and threatened to become complacent, the researchers would change things up.
This kept it ready for things it’d never encountered in more than 10,000 hours of simulation training.
In the real world, the researchers physically messed with it by pushing and shoving it while it tried to work.
These perturbations trained an AI that can handle all the physics (gravity, friction, etc.) involved in keeping the hand on task.
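OpenAI calls this approach ‘automatic domain randomisation’: every simulated training episode draws its physics at random, and the randomisation ranges grow as the policy gets better. Here’s a minimal sketch of the idea in Python; ‘make_env’, ‘train_episode’ and ‘policy’ are hypothetical stand-ins for a simulator and a reinforcement-learning update, and the numbers are purely illustrative.

```python
import random

# A minimal sketch of domain randomisation: every simulated episode
# gets freshly randomised physics, and the ranges widen as the policy
# improves, so it can never become complacent. 'make_env',
# 'train_episode' and 'policy' are hypothetical stand-ins.

# Each physical parameter starts with a narrow range around its
# nominal value (numbers here are purely illustrative).
ranges = {
    'friction': [0.9, 1.1],     # multiplier on nominal friction
    'gravity': [9.6, 10.0],     # m/s^2
    'cube_size': [0.95, 1.05],  # multiplier on nominal edge length
}

SUCCESS_THRESHOLD = 0.5  # widen ranges once the policy copes this well
WIDEN_FACTOR = 0.02      # how much each bound stretches per expansion

for episode in range(100_000):
    # Sample a fresh set of physics parameters for this episode.
    physics = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    env = make_env(physics)                    # hypothetical simulator
    success_rate = train_episode(policy, env)  # hypothetical RL update

    # Once the policy handles the current conditions well, stretch the
    # ranges, confronting it with situations it has never seen before.
    if success_rate > SUCCESS_THRESHOLD:
        for bounds in ranges.values():
            bounds[0] *= 1 - WIDEN_FACTOR
            bounds[1] *= 1 + WIDEN_FACTOR
```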
And it isn’t as robust at performing this task as you might think.
Consider this bit at the end of OpenAI’s blog post: “Our method currently solves the Rubik’s Cube 20 per cent of the time when applying a maximally difficult scramble that requires 26 face rotations. For simpler scrambles that require 15 rotations to undo, the success rate is 60 per cent.”
Remember, when it ‘fails’, that doesn’t mean it can’t figure out the puzzle.
It means that it’s either dropped it or fumbled its attempts to spin the cube’s sides until time ran out.
But it’s impressive nonetheless.
The Rubik’s Cube puzzle is just a placeholder for ‘whatever problem you need a robot hand to solve’.
It could just as easily be tasked with juggling tomatoes without squishing them or playing a piano while people throw beer bottles at it, and it would essentially be the same kind of accomplishment.
Unfortunately, the language OpenAI used to describe this incredible cutting-edge AI research makes it look like it has ‘solved’ a Rubik’s Cube using deep learning neural networks.
Which would not only be, as far as we know, the first time anyone’s done such a thing, but also probably pointless.
The point is: a robot hand that can do random stuff is a much, much more impressive accomplishment than using an old algorithm to align the colours on a Rubik’s Cube.
In machine learning, ‘general’ or ‘broad’ tasks are typically much more difficult to pull off than ‘specific’ or ‘narrow’ tasks.
It’s easier to build a Rubik’s Cube solver than a hand that doesn’t drop stuff.
Gary Marcus, CEO of Robust AI and author of Rebooting AI: Building Artificial Intelligence We Can Trust, took immediate exception to OpenAI’s blog post.
He called it misleading and intimated that OpenAI was, once again, causing the media to print overzealous, hyperbolic headlines and stories.
Ilya Sutskever, Chief Scientist at OpenAI, took umbrage at Marcus’s assertion and seems to think there’s an ulterior motive: “Surprised and saddened by all the bad faith criticism of our robotic manipulation result.”
Marcus outright dismissed the criticism of his criticism: “It’s not bad faith, it’s facts.”
“You may feel I made these points because I have a book out, but fact is I have been puncturing hype and championing nativism and hybrid models for 30 years.”
And everyone else seems torn between calling Marcus and those who agree with him pedantic, and calling OpenAI intentionally misleading.
Carnegie Mellon’s Zachary Lipton called the research ‘interesting’ and the PR behind it ‘weapons-grade’.
And perhaps he has a point. After all, The Washington Post published an article titled ‘This Robotic Hand Learned to Solve a Rubik’s Cube on its Own — Just Like A Human’.
That’s a headline that feels like it’s straddling the border between poor interpretation and outright poppycock.
In OpenAI’s defence, it doesn’t control the media.
But, to OpenAI’s detriment, it kinda does control the media.
It’s a non-profit co-founded by Elon Musk (who is no longer involved), and it just received a billion dollars from Microsoft in a much-ballyhooed ‘partnership’ to ‘develop human-level AI’ that looks a lot like a marketing deal for Azure.
It’s no stranger to press coverage.
Furthermore, OpenAI recently stirred up controversy after choosing to withhold the models for an open-source AI text-generator over concerns it wouldn’t be ethical to release them to the public (to be fair, it’s pretty scary).
OpenAI knows exactly what kind of controversy it’s courting when it presents these stories to the media.
Critics claim it doesn’t do quite enough to head these stories off or disabuse the general public of hyperbolic notions.
And that’s a shame.
The real story, the amazing robot hand that can manipulate physical objects in a non-optimised environment, is a great one.
Unfortunately, it’s been largely swallowed up in the noise.
* Tristan Greene is a reporter for The Next Web. He tweets at @mrgreene1977.
This article first appeared at thenextweb.com.