River Rail Colby Issue

Algorithms in the Wild

Oceanographer Nick Record imagines algorithms as living, thinking organisms in the noosphere.

Adventures in the Noosphere

The scene opens to an expansive panorama of a river flowing into a coastal delta. As we fly beyond the river and out over the ocean, a classical score matches the grandeur of what’s before us. We zoom in for a closer look and see a shoal of fish large enough to be visible from the air, some leaping gracefully from the water. A cloud of seabirds teems above. Cue the narration by Sir David Attenborough: “Our coastal ocean. Some of the most productive waters on the planet. From this vantage point, we can glimpse hundreds of species. But there is one type of creature living among them that we cannot see...the elusive algorithm.”

Yes, the algorithm. Perhaps they won’t be the focus of a Planet Earth BBC documentary, but algorithms are all around us, living in many of the nooks and crannies created by nature. If you haven’t heard of algorithms, you’re probably in good company. If you have heard of them, perhaps it was in the context of something that Google or Facebook is doing, or perhaps from some millennial techie.

The working definition of an algorithm is simply a set of ordered rules, typically mathematical instructions carried out by a computer. The idea goes back to the ninth-century Persian mathematician al-Khwarizmi, who gave us algebra. But in the last hundred years or so, as the flow of information has grown into an all-encompassing global network, algorithms have taken on lives of their own, proliferating over this network, competing, surviving, and adapting like exotic alien creatures. As they have done so, they have become ever more intertwined with the natural world around us—the rivers, lakes, and oceans, our planetary life-support system. As the global crises of climate change and biodiversity loss intensify, this eclectic and mysterious ecosystem of algorithms will play a crucial role in how and whether humanity perseveres.
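To make that working definition concrete, consider one of the oldest algorithms of all, Euclid's method for finding the greatest common divisor of two numbers. The short Python sketch below is my own illustration, not something from al-Khwarizmi:

```python
# Euclid's algorithm: a finite, ordered set of rules
# that turns two inputs into one output.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b  # replace the pair with (b, remainder)
    return a

print(gcd(48, 18))  # prints 6
```

Every algorithm, however elaborate, shares this basic anatomy: inputs, a fixed sequence of rules, and an output.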

To see these interconnections involving the algorithm, it helps to take a step back and look at our planet from afar. About a century ago, the Russian-Ukrainian scientist Vladimir Vernadsky popularized the term “biosphere,” describing the thin layer of life that surrounds Earth. This thin sphere is the space in which Homo sapiens moves around, eats, dreams, contemplates, masticates, and defecates. The idea of the biosphere—a complex interconnected system that can act globally as a geologic force, shaping the planet’s rivers, hills, and valleys—revolutionized our view of life itself. Today, we recognize the biosphere as a stabilizing influence on Earth, maintaining a balance that supports conditions favorable to human life: the presence of clean drinking water, or of an atmosphere whose proportion of elements is suitable for human breathing.

A lesser-known term popularized by Vernadsky is “noosphere”—from the Greek noús, “mind”—referring to the interconnected sphere of human thought. Vernadsky and some of his contemporaries argued that the noosphere, this cloud of human thought encircling the globe, can also fundamentally reshape the conditions on Earth. Just as the biosphere had emerged as a force to shape the geosphere, so would the noosphere emerge to shape the biosphere. In Vernadsky’s time the noosphere was exclusively an abstraction and gained little traction in mainstream scientific thought. But nowadays, however esoteric the idea might seem, it doesn’t take much imagination to regard the massive, near-instantaneous flow of globally networked information as a very concrete manifestation of the noosphere, exerting a tangible influence on the face of Earth. This manifestation of the noosphere, however, differs markedly from what Vernadsky and his fellow thinkers had described.

Imagine the noosphere as something akin to the biosphere: a living, evolving ecosystem, always dynamic and rapidly spreading itself out over the entire planet. If Sir David Attenborough could guide us on a narrated adventure through the noosphere surrounding our world, the creatures he would show us would be algorithms. What would they look like, aside from lines of computer code? Some algorithms take stock of the number of fish in the sea, then compute and decide for humans how many we should catch. Some algorithms track fishing boats and catches in real time in order to zero in on illegal fishing. Both of these examples directly determine how many fish are in the planet’s seas, thereby shaping the biosphere. There are thousands of analogous examples—dealing with weather prediction, commodities pricing, crop irrigation—influencing and shaping every part of the biosphere, affecting everything from the prevalence of metals in drinking water to the amount of plastic accumulating at the bottom of the Mariana Trench. Algorithms, like a new kind of organism, now travel among the other life forms in our global ecosystem. They have their own taxonomy. Some are as rudimentary as the simplest organisms, carrying out basic tasks such as monitoring and measuring. But the most interesting algorithms are the most complex: learning algorithms, also known as “artificial intelligence” or AI. Usually designed to take the place of human decision-making, these sorts of intelligent algorithms will shape our future world. As Attenborough takes us on our tour, how can we help but ask, “Where did these algorithms come from, and in what shape will they leave our world?”

Intelligent Algorithms

Until recently, it had been generally agreed that algorithms themselves couldn’t be intelligent. They were seen essentially as sets of steps used in the solving of problems, just as al-Khwarizmi had illustrated. Ada Lovelace, the 19th-century mathematician who coded the first computer algorithm, wrote in her 1843 notes on the engine, “The Analytical Engine [i.e., the computer] has no pretensions to originate anything. It can do whatever we know how to order it to perform.” She did cast a wide and visionary net for what she thought we could order an algorithm to perform, including composing complex original music. But the conceptual leap to algorithms that could learn and potentially think for themselves didn’t come until the middle of the 20th century from a code-cracking mathematician named Alan Turing.

Turing’s life was short and tragic, ending in an apparent suicide after he was legally punished for his homosexuality, but his contributions to our world are hard to overstate. He played a pivotal role in the Allied victory through his work deciphering key intercepted Nazi messages. After the war, he was a pioneer in computer science. Importantly, he pushed back against the widespread notion that machines couldn’t be intelligent, writing and speaking frequently on the idea of intelligent machines. At a time when computers could perform only the simplest calculations, he laid out plans for algorithms that could learn and adapt their own structures based on the input they absorbed. He likened the design of such algorithms to what happens in the developing brain of a child, who does not arrive in the world already intelligent but rather possesses the blueprints to grow, develop, and change in the light of new information. That model essentially describes how learning algorithms now work.
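Turing's child-machine idea can be sketched with a toy learning algorithm. The example below is entirely illustrative (it is not Turing's own design): a perceptron that arrives with blueprints—a structure and an update rule—but no knowledge, and adjusts its internal weights in light of each example it sees:

```python
# A toy learning algorithm: it starts knowing nothing and
# changes its own parameters in response to labeled examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # how wrong was the guess?
            w[0] += lr * err * x[0]      # nudge the weights toward
            w[1] += lr * err * x[1]      # the correct answer
            b += lr * err
    return w, b

# Learn the logical OR function from four labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # prints [0, 1, 1, 1]
```

Nothing in the code states what OR is; the rule emerges from exposure to examples, which is the essence of the conceptual leap Turing described.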

Eventually Turing’s ideas won the day. In fact, the canonical test for intelligence in algorithms is known as the Turing Test, in which the successfully “intelligent” algorithm must pass as human by convincing a judge that it is one. In the annual competitions in which judges text with humans and with algorithms, algorithms have made steady progress in fooling the judges and passing the Turing Test. If you’ve ever been text-chatting with an online assistant only to realize your interlocutor is a bot, then hats off to that algorithm! It has passed a version of the test. Learning algorithms are now all around us, deciding how grocery-store shelves are stocked, which news headlines or search results are shown to people, which résumés are deemed competitive, even who among eligible prisoners will be let out on parole and who will remain in jail. More and more, the use of intelligent algorithms is replacing human decision-making.

It’s striking, however, that the gold standard for intelligent algorithms to be identified as such—passing the Turing Test—is their ability to deceive humans. We don’t ask whether algorithms can be just, insightful, or empathetic. We ask whether they can fool us. In many ways, the trick has been working. As algorithms improve, they become more seamlessly enmeshed in the workings of the world. People are deceived into thinking that algorithms are making good decisions. Often the algorithm cannot even be questioned. In the case of parole decisions, for example, usually the software that implements the algorithm is proprietary, and its human users aren’t allowed to look at the code and thus understand what factors the decision has been based on. In the worst instances, intelligent algorithms take on and magnify some of the ugliest biases and prejudices in our society. This is all to say that while algorithms can perform impressive tasks (forecasting, making farming more efficient, translating one language into another), the growing number of algorithms in the world around us are based on a narrow idea of intelligence. And this narrowness is a concern, because these algorithms continue to fulfill their primary role: replacing human decision-making.

So let’s take this back to the biosphere. We have an interconnected sphere of millions (or even billions or trillions) of algorithms making decisions that shape our natural world. Their decision-making processes are grounded in a very constricted view of intelligence. It is now a cause of some concern that the noosphere has taken on a life of its own. One group has written a Biosphere Code Manifesto that outlines seven principles that should guide the use of algorithms in the environment. They range from the pragmatic (“algorithms should serve humanity”) to the lofty (“algorithms should be inspiring, playful, and beautiful”). For my own part, I happened across a case where humans and algorithms came together in an accidentally collaborative way—a way that might suggest a path forward. The story starts with endangered whales.

Save the Whales

Let’s return to our flyover above the coastal ocean. As we zoom in to this particular corner of the biosphere, the sea surface is broken by a sudden spray of mist. A whale. As Attenborough describes the scene, we notice that the whale’s back is scarred, and clumps of rope trail behind its fluke.

In summer 2017, nearly 20 North Atlantic right whales died as a result of ship strikes and fishing gear entanglements. Such losses were a big deal for a species with only around 100 remaining breeding females—a species edging closer to extinction. The National Marine Fisheries Service convened a team of about 60 people from different backgrounds to address the problem, among them fishermen, activists, managers, and scientists (including me) in what was basically an attempt to get as many points of view in the room as possible and to build toward consensus. Recently we were tasked with figuring out how to close, rearrange, move, or adjust lobster-fishing efforts in order to reduce the entanglement risk to whales by 60–80 percent, a percentage range set by the feds. Collectively, fishing effort looks like a complex, 17-dimensional geometry, changing in space and time, with depths and seasons, with different gear and rope configurations, according to lobster and human behavior, and so on. We had one week, and the tool we had to use to make our decision was a complex algorithm that could input different fishing strategies; absorb whale data, fishing data, certain types of environmental data, and gear configurations; and output the risk reduction to whales. This cutting-edge algorithm was the product of expert coders, but there was one problem: when we sat down to meet at the beginning of the week, it wasn’t quite finished.
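The real model was far more elaborate (and not public), but the general shape of such a tool can be sketched. Everything below—the area counts, densities, and the assumption that entanglement risk scales with whale density times rope lines—is invented for illustration, not drawn from the actual decision-support algorithm:

```python
# Hypothetical sketch of a risk-reduction calculator.
# Assumption: risk in each area scales with whale density x rope lines.
def risk(whales, lines):
    return sum(w * l for w, l in zip(whales, lines))

def risk_reduction(whales, lines, strategy):
    # strategy[i] = fraction of rope lines removed from area i
    new_lines = [l * (1 - s) for l, s in zip(lines, strategy)]
    return 1 - risk(whales, new_lines) / risk(whales, lines)

whales = [0.8, 0.1, 0.5]      # relative whale density per area (made up)
lines = [100, 400, 200]       # rope lines fished per area (made up)
strategy = [0.9, 0.0, 0.5]    # cut lines where whales concentrate
print(round(risk_reduction(whales, lines, strategy), 2))  # prints 0.55
```

Even this toy version makes the week's task visible: the group's job was to search for a strategy whose output landed in the mandated 60–80 percent range while remaining livable for the people hauling the lines.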

The extended government shutdown of December 2018–January 2019 had delayed the completion of the algorithm code. By the time the meeting took place, the code was barely ready and none of our group had seen it. There was a fair bit of apprehension. The Maine Department of Marine Resources sent a letter to the Fisheries Service asking that the algorithm be removed from the decision-making process until it had been fully peer-reviewed. Yet we needed to come to a consensus within the week. The algorithm, however provisional, stayed.

The process that followed was unexpected. Because the algorithm was still rough around the edges, the coding team was there to help with problems on the fly. That meant that anyone could ask the algorithm anything they wanted. Bit by bit, the team began to dissect the algorithm and tinker with its innards. People asked questions, informed by personal experience and judgment, that the algorithm would never ask itself and in many cases couldn’t answer. What happens if my grandson has to haul lines with too many traps and it becomes unsafe for him? What happens if past whale migrations aren’t the same as what these mammals will do in the future? What if some of the assumptions about gear types are off? The pulled-apart algorithm, its strengths and flaws laid bare, became the central node in a dialogue about the consequences of different management choices—a discussion informed not just by intelligence but by our instincts of fairness, foresight, caution, and wisdom. The algorithm became a tool that played one role in the decision rather than being the decision-maker itself. In the end, the group came to near unanimous agreement on how to reach our goal.

As of this writing, some people remain committed to reversing this decision, because we used an algorithm that was unreviewed, untested, and rough around the edges. Skepticism toward algorithms is salutary for many of the reasons I’ve mentioned. But if we’re going to live in a world where algorithms are integrated into decision-making, we need to jump on every chance we get to open them up, see how they tick, and take their advice within the context of our other knowledge. If I were to add a principle to the Biosphere Code Manifesto, it would be something like: Algorithms should be dissected and torn apart periodically by a room full of people with diverse points of view. It’s possible that the noosphere of algorithms encircling our planet can be a force for good, helping us to reverse environmental damage, mitigate climate change, and use our resources more efficiently. But that happy outcome is far from guaranteed. “Algorithms,” Attenborough narrates, “are curious creatures indeed. But if we pay attention and use our wisdom, we might see them integrate harmoniously with the natural world.”


Nick Record

Dr. Nick Record is a senior research scientist at Bigelow Laboratory for Ocean Sciences in East Boothbay, Maine. He is an oceanographer and computational ecologist. Disclosure: the author shares a distant ancestor with modern-day whales.


The Brooklyn Rail
