Robots that can solve a Rubik’s cube have been around for some time, but researchers have now developed an algorithm that teaches itself to solve the puzzle without human input. The system, called DeepCube, generates its own reward signal.
When deep learning is applied to chess or Go, for example, a system can learn to win knowing only the rules of the game, guided by a reward system: at every move the algorithm evaluates the move’s effect on the game, and from those ratings the machine learns to play well.
With a Rubik’s cube, however, it is difficult for an algorithm to determine whether a random move brings it closer to the solution or not, writes Technology Review. Researchers at the University of California, Irvine therefore use a deep-learning technique called autodidactic iteration: the algorithm starts from an already solved cube and works backwards to other configurations, building up a picture of which moves lead towards the solution and which do not.
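The core idea can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration, not the researchers’ implementation: it replaces the cube with a toy puzzle (a ring of states, with "solved" at state 0) and shows the two ingredients of the approach as described above: generating training states by scrambling backwards from the solved configuration, and deriving a value target for each state by looking one move ahead, which is how the system creates its own reward signal.

```python
import random

# Toy stand-in for the cube: states 0..N-1 arranged in a ring,
# "solved" is state 0, and the two legal moves step left or right.
# A real implementation would instead model the cube's face turns.
N = 24
MOVES = [lambda s: (s + 1) % N, lambda s: (s - 1) % N]

def generate_training_states(num_scrambles, max_depth, rng=random):
    """Work backwards from the solved state: apply up to max_depth
    random moves and record each visited state together with its
    scramble depth (shallow states are closest to the goal)."""
    samples = []
    for _ in range(num_scrambles):
        state = 0  # start from the solved configuration
        for depth in range(1, max_depth + 1):
            state = rng.choice(MOVES)(state)
            samples.append((state, depth))
    return samples

def one_step_targets(states, value):
    """Self-generated training target for each state: look one move
    ahead, score 1 for reaching the solved state and -1 otherwise,
    and add the current value estimate of the resulting state."""
    targets = {}
    for s in states:
        targets[s] = max(
            (1 if m(s) == 0 else -1) + value.get(m(s), 0)
            for m in MOVES
        )
    return targets

samples = generate_training_states(num_scrambles=100, max_depth=5)
targets = one_step_targets([s for s, _ in samples], value={})
```

In the full method a neural network plays the role of the `value` table here, and the targets computed this way are used to train it; states scrambled only a few moves from the solution give the network reliable targets first, which then propagate outwards to harder configurations.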
After training, the system solves one hundred percent of arbitrarily scrambled cubes, in an average of thirty moves. According to the researchers, their approach could also help solve other combinatorial puzzles, such as Sokoban and the game Montezuma’s Revenge, as well as problems like prime factorization. In addition, they hope to apply the technique to more complex problems, such as predicting the tertiary structure of proteins.
The scientists describe their research in the paper ‘Solving the Rubik’s Cube Without Human Knowledge’.