Researchers are using drones, AI, and digital recorders to create a “zoological version of Google Translate.”
By Neel Dhanesha | Oct 30, 2022, 7:00am EDT
The world around us is vibrating with sounds we cannot hear. Bats chitter and babble in ultrasound; elephants rumble infrasonic secrets to each other; coral reefs are aquatic clubs, hopping with the cracks and hisses and clicks of marine life.
For centuries, we didn’t even know those sounds existed. But as technology has advanced, so has our capacity to listen. Today, tools like drones, digital recorders, and artificial intelligence are helping us listen to the sounds of nature in unprecedented ways, transforming the world of scientific research and raising a tantalizing prospect: Someday soon, computers might allow us to talk to animals.
In some ways, that has already begun.
“Digital technologies, so often associated with our alienation from nature, are offering us an opportunity to listen to nonhumans in powerful ways, reviving our connection to the natural world,” writes Karen Bakker in her new book, The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants.
Automated listening posts have been set up in ecosystems around the planet, from rainforests to the depths of the ocean, and miniaturization has allowed scientists to stick microphones onto animals as small as honeybees.
“Combined, these digital devices function like a planetary-scale hearing aid: enabling humans to observe and study nature’s sounds beyond the limits of our sensory capabilities,” Bakker writes.
All those devices create a ton of data, which would be impossible to go through manually. So researchers in the fields of bioacoustics (which studies sounds made by living organisms) and ecoacoustics (which studies the sounds made by entire ecosystems) are turning to artificial intelligence to sift through the piles of recordings, finding patterns that might help us understand what animals are saying to each other. There are now databases of whale songs and honeybee dances, among others, that Bakker writes could one day turn into “a zoological version of Google Translate.”
But it’s important to remember that we aren’t necessarily discovering these sounds for the first time. As Bakker points out in her book, Indigenous communities around the world have long been aware that animals have their own forms of communication, while the Western scientific establishment has historically dismissed the idea of animal communication outright. Many of the researchers Bakker highlights in her book faced intense pushback from the scientific community when they suggested that whales, elephants, turtles, bats, and even plants make sounds and might even have languages of their own. They spent nearly as much time pushing back against the pushback as they did conducting research.
While that seems to be changing with our increased understanding of animals, Bakker cautions that the ability to communicate with animals stands to be either a blessing or a curse, and we must think carefully about how we will use our technological advancements to interact with the natural world. We can use our understanding of our world’s sonic richness to gain a sense of kinship with nature and even potentially heal some of the damage we have wrought, but we also run the risk of using our newfound powers to assert our domination over animals and plants.
Last June IEEE Spectrum published an article by Jonathan Schaeffer, Martin Müller & Akihiro Kishimoto, “AIs Have Mastered Chess. Will Go Be Next?” The subtitle reads, “Randomness could trump expertise in this ancient game of strategy.” The article begins: “Jonathan Schaeffer, a computer science professor at the University of Alberta, in Canada, had been creating game-playing artificial intelligence programs for 15 years when Martin Müller and Akihiro Kishimoto came to the university in 1999 as a professor and graduate student, respectively. Kishimoto has since left for IBM Research–Ireland, but the work goes on—and Schaeffer now finds it plausible that a computer will beat Go’s grand masters soon. ‘Ten years ago, I thought that wouldn’t happen in my lifetime,’ he says.” (http://spectrum.ieee.org/robotics/artificial-intelligence/ais-have-mastered-chess-will-go-be-next)
Jonathan Schaeffer is the man behind Chinook, the computer program that solved checkers. You can find the paper, “Checkers Is Solved,” and learn about the proof here: (http://webdocs.cs.ualberta.ca/~chinook/)
He has also revised his book, first published in 1997, One Jump Ahead: Computer Perfection at Checkers, which I read years ago. Jonathan Schaeffer is like E. F. Hutton in that when he talks about a computer game program, you listen.
For years I have followed news of computer Go programs. Before sitting down to punch & poke I searched for the latest news, coming up empty. This is good news for humans, because Go is the last board-game bastion holding out against machine power. It is also the world’s oldest, and most complicated, board game. It “originated in ancient China more than 2,500 years ago. It was considered one of the four essential arts of a cultured Chinese scholar in antiquity. Its earliest written reference dates back to the Confucian Analects.” (https://en.wikipedia.org/wiki/Go_%28game%29)
Schaeffer and his group have developed Fuego, an open-source Go-playing program created at the University of Alberta. From the article: “For decades, researchers have taught computers to play games in order to test their cognitive abilities against those of humans. In 1997, when an IBM computer called Deep Blue beat Garry Kasparov, the reigning world champion, at chess, many people assumed that computer scientists would eventually develop artificial intelligences that could triumph at any game. Go, however, with its dizzying array of possible moves, continued to stymie the best efforts of AI researchers.”
In 2009 Fuego “…defeated a world-class human Go player in a no-handicap game for the first time in history. Although that game was played on a small board, not the board used in official tournaments, Fuego’s win was seen as a major milestone.”
They write, “Remarkably, the Fuego program didn’t triumph because it had a better grasp of Go strategy. And although it considered millions of possible moves during each turn, it didn’t come close to performing an exhaustive search of all the possible game paths. Instead, Fuego was a know-nothing machine that based its decisions on random choices and statistics.”
I like the part about it being a “know-nothing machine.” I have often wondered if humans, like Jonathan Schaeffer, who are devoting their lives to the development of “thinking” machines, will be reviled by future generations of humans as is the case in the Terminator movies. It could be that in the future humans will say, “Hitler was nothing compared to the evil SCHAEFFER!” If I were supreme world controller a command would be issued ending the attempts to crack Go, leaving my subjects one beautiful game not consigned to the dustbin of history, as has been the fate of checkers. I fear it is only a matter of time before chess meets the same fate. GM Parimarjan Negi was asked in the “Just Checking” Q&A of the best chess magazine in the history of the universe, New In Chess 2014/6, “What will be the nationality of the 2050 World Champion?” He answered the question by posing one of his own, “Will we still have a world championship?” Good question. I would have to live to one hundred to see that question answered. Only former President of the GCA, and Georgia Senior Champion, Scott Parker will live that long, possibly still be pushing wood in 2050, if wood is still being pushed…
The article continues, “The recipe for building a superhuman chess program is now well established. You start by listing all possible moves, the responses to the moves, and the responses to the responses, generating a branching tree that grows as big as computational resources allow. To evaluate the game positions at the end of the branches, the program needs some chess knowledge, such as the value of each piece and the utility of its location on the board. Then you refine the algorithm, say by “pruning” away branches that obviously involve bad play on either side, so that the program can search the remaining branches more deeply. Set the program to run as fast as possible on one or more computers and voilà, you have a grand master chess player. This recipe has proven successful not only for chess but also for such games as checkers and Othello. It is one of the great success stories of AI research.”
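The “recipe” quoted above can be sketched in a few lines of code. Below is a minimal illustration of minimax search with alpha-beta “pruning,” run on a tiny hand-built game tree; the tree, leaf values, and function names are my own assumptions for illustration, not anything from the article or from a real chess engine.

```python
# Minimax with alpha-beta pruning on a toy game tree.
# Leaves are static evaluations at the search horizon; interior
# nodes are lists of child positions.

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the result (i.e., obviously bad play by one side)."""
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent would avoid this line
                break                       # prune the remaining branches
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Root is a maximizing node; each child is the opponent's (minimizing) turn.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```

A real engine replaces the toy leaves with an evaluation function and the lists with legal-move generation, but the pruning logic is the same.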
“Go is another matter entirely,” they write, “The game has changed little since it was invented in China thousands of years ago, and millions around the world still enjoy playing it.”
But for how long?
“Game play sounds simple in theory: Two players take turns placing stones on the board to occupy territories and surround the opponent’s stones, earning points for their successes. Yet the scope of Go makes it extremely difficult—perhaps impossible—for a program to master the game with the traditional search-and-evaluate approach.”
This is because, “For starters, the complexity of the search algorithm depends in large part on the branching factor—the number of possible moves at every turn. For chess, that factor is roughly 40, and a typical chess game lasts for about 50 moves. In Go, the branching factor can be more than 250, and a game goes on for about 350 moves. The proliferation of options in Go quickly becomes too much for a standard search algorithm.”
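Some back-of-the-envelope arithmetic shows why those branching factors matter. A game tree has roughly (branching factor) raised to the (game length) positions, so the numbers quoted above can be compared in powers of ten:

```python
# Rough game-tree sizes from the article's figures:
# chess ~ 40^50, Go ~ 250^350. Work in log10 to keep the numbers sane.
import math

chess_tree = 50 * math.log10(40)     # exponent of 10 for ~40^50
go_tree = 350 * math.log10(250)      # exponent of 10 for ~250^350

print(f"chess game tree ~ 10^{chess_tree:.0f}")  # ~ 10^80
print(f"go game tree    ~ 10^{go_tree:.0f}")     # ~ 10^839
```

The Go tree isn’t just bigger; it is bigger by more than 750 orders of magnitude, which is why brute-force search alone never had a chance.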
Hooray! That is the good news, and there is more…”There’s also a bigger problem: While it’s fairly easy to define the value of positions in chess, it’s enormously difficult to do so on a Go board. In chess-playing programs, a relatively simple evaluation function adds up the material value of pieces (a queen, for example, has a higher value than a pawn) and computes the value of their locations on the board based on their potential to attack or be attacked. Compared with that of chess pieces, the value of individual Go stones is much lower. Therefore the evaluation of a Go position is based on all the stones’ locations, and on judgments about which of them will eventually be captured and which will stay safe during the shifting course of a long game. To make this assessment, human players rely on both a deep tactical understanding of the game and a clear-eyed appraisal of the overall board situation. Go masters consider the strength of various groups of stones and look at the potential to create, expand, or conquer territories across the board.”
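The “relatively simple evaluation function” the authors describe for chess really can be startlingly small. Here is a bare-bones material counter using the conventional piece values; the position encoding and function name are my own assumptions for illustration (a real engine also scores piece placement, as the article notes):

```python
# Minimal material-only chess evaluation: sum piece values per side.
# Conventional values: pawn 1, knight 3, bishop 3, rook 5, queen 9.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_eval(white_pieces, black_pieces):
    """Positive scores favor White, negative favor Black."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# White has queen + pawn (10) vs. Black's rook + bishop (8): +2 for White.
print(material_eval("QP", "RB"))  # -> 2
```

Nothing comparably simple exists for Go, which is the authors’ whole point: a stone’s worth depends on everything around it.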
This sounds good so far, but then they continue, “Rather than try to teach a Go-playing program how to perform this complex assessment, we’ve found that the best solution is to skip the evaluation process entirely.”
Oh no, Mr. Bill!
“Over the past decade, several research groups have pioneered a new search paradigm for games, and the technique actually has a chance at cracking Go. Surprisingly, it’s based on sequences of random moves. In its simplest form, this approach, called Monte Carlo tree search (MCTS), eschews all knowledge of the desirability of game positions. A program that uses MCTS need only know the rules of the game.”
I do not know about you, but I am hoping, “What happens in Monte Carlo stays in Monte Carlo.” Do you get the feeling we are about to be Three Card Monte Carloed?
“From the current configuration of stones on the board, the program simulates a random sequence of legal moves (playing moves for both opponents) until the end of the game is reached, resulting in a win or loss. It automatically does this over and over. The magic comes from the use of statistics. The evaluation of a position can be defined as the frequency with which random move sequences originating in that position lead to a win. For instance, the program might determine that when move A is played, random sequences of moves result in a win 73 percent of the time, while move B leads to a win only 54 percent of the time. It’s a shockingly simple metric.”
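That “shockingly simple metric” is easy to demonstrate on a game far smaller than Go. The sketch below applies the pure random-playout idea to Nim (21 stones, take 1 to 3 per turn, taking the last stone wins); the choice of game and the playout count are my own assumptions, and only the evaluate-by-random-playouts idea comes from the article.

```python
# Pure Monte Carlo evaluation: score a position by the fraction of
# purely random games won from it, exactly as the article describes.
import random

def random_playout(stones, my_turn):
    """Play random legal moves to the end; return True if 'we' win
    (whoever takes the last stone wins)."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn          # the player who just moved wins
        my_turn = not my_turn
    return not my_turn              # terminal position: previous mover won

def monte_carlo_value(stones_after_move, playouts=5000):
    """Win frequency over random playouts, opponent to move next."""
    wins = sum(random_playout(stones_after_move, my_turn=False)
               for _ in range(playouts))
    return wins / playouts

# From 21 stones, compare taking 1, 2, or 3 stones by win frequency.
for take in (1, 2, 3):
    print(f"take {take}: win rate {monte_carlo_value(21 - take):.2f}")
```

No Nim knowledge is coded anywhere; the statistics alone steer the program toward stronger moves, which is the “know-nothing machine” idea in miniature.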
“Shockingly simple,” my jackass. There is much more to the article, including this, “The best policies for expanding the tree also rely on a decision-making shortcut called rapid action value estimation (RAVE). The RAVE component tells the program to collect another set of statistics during each simulation.”
As in “Raving lunatic.” The article provides a list of what current computer programs have done to games, and how they rate in “…two-player games without chance or hidden information…”
TIC-TAC-TOE (Game positions, 10 to the 4th power) = Toast
OWARE (Game positions, 10 to the 11th power) = Fried
CHECKERS (Game positions, 10 to the 20th power) = Cooked
OTHELLO (Game positions, 10 to the 28th power) = Superhuman
CHESS (Game positions, 10 to the 45th power) = Superhuman
XIANGQI (CHINESE CHESS) (Game positions, 10 to the 48th power) = Best Professional
SHOGI (JAPANESE CHESS) (Game positions, 10 to the 70th power) = Strong Professional
GO (Game positions, 10 to the 172nd power) = Strong Amateur
They end the article by writing, “But there may come a day soon when an AI will be able to conquer any game we set it to, without a bit of knowledge to its name. If that day comes, we will raise a wry cheer for the triumph of ignorance.”
I would much prefer to raise a stein and drown my sorrows to that…