Google's artificial-intelligence translator, the Google Neural Machine Translation (GNMT) system, has developed a language of its own, according to a paper recently published on arXiv by a team of researchers.
In September 2016, the tech giant announced that GNMT, which uses deep learning to translate between languages, was live. The team then ran an experiment: they first trained GNMT to translate between English and Korean and between English and Japanese, and then tasked it with translating Korean into Japanese directly, without going through English. The system produced reasonably accurate translations between the two languages, a significant achievement for deep neural networks.
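The multilingual setup the paper describes can be sketched in miniature: a single shared model is told which language to produce by an artificial token prepended to the input, so at inference time it can be asked for a direction (such as Korean to Japanese) that never appeared in its paired training data. The token format and the toy training list below are illustrative assumptions, not Google's actual interface.

```python
# Minimal sketch of the multilingual-NMT input convention: a target-language
# token is prepended to the source sentence, and one shared model handles
# every direction. Token format ("<2xx>") is an illustrative assumption.

def prepare_input(source_sentence, target_lang):
    """Prepend an artificial target-language token to the source text."""
    return f"<2{target_lang}> {source_sentence}"

# Paired training data covers only English<->Korean and English<->Japanese:
trained_directions = {("en", "ko"), ("ko", "en"), ("en", "ja"), ("ja", "en")}

# Yet the same input convention lets us request Korean -> Japanese directly,
# a "zero-shot" direction the model was never explicitly trained on:
zero_shot_request = prepare_input("안녕하세요", "ja")
print(zero_shot_request)                      # <2ja> 안녕하세요
print(("ko", "ja") in trained_directions)     # False
```

The point of the convention is that nothing about the request distinguishes a trained direction from an untrained one; whether the model answers it well depends entirely on what its shared internal representation has learned.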
Since GNMT could evidently capture connections between words and concepts, the question arose whether the system had formed its own deeper representations of those concepts: in effect, a language of its own that allows it to relate sentences and phrases across two different languages. Based on how the system stores its representations, and how concepts and words relate to one another in its memory space, the researchers believe it has.
According to the paper, this language exists at a deeper level, where GNMT encodes sentences or words with similar meanings in similar internal representations across the three languages. The inner workings of such complex networks remain unclear, but the system's independent creation of such an "interlingua" is a milestone in itself.
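The "interlingua" claim can be made concrete with a toy geometric check: if sentences with the same meaning in different languages land near each other in the model's representation space, while sentences with different meanings land far apart, then the space is organized by meaning rather than by language. The three-dimensional vectors below are fabricated for illustration; real encoder representations are high-dimensional.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hand-made toy "sentence embeddings" standing in for the encoder's internal
# representations (purely illustrative values, not from any real model):
emb = {
    ("en", "hello"):      [0.90, 0.10, 0.00],
    ("ja", "こんにちは"):  [0.88, 0.12, 0.05],   # same meaning, different language
    ("en", "goodbye"):    [0.00, 0.20, 0.95],   # different meaning, same language
}

same_meaning = cosine(emb[("en", "hello")], emb[("ja", "こんにちは")])
diff_meaning = cosine(emb[("en", "hello")], emb[("en", "goodbye")])

# An interlingua-like space clusters by meaning, not by language:
print(same_meaning > diff_meaning)  # True
```

This is the kind of evidence the researchers point to: geometry in the shared representation space reflecting meaning across languages, rather than separate regions per language.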