What the Information Age Left Us: Conclusion and Opening Toward the Deep Learning Revolution
From the CSIRAC in Sydney to the WEIZAC in Rehovot.
From the ruins of Berlin to Bletchley Park laboratories.
From M-Pesa in Kenya to TSMC in Taiwan.
From Dartmouth to ImageNet.
Six continents. Sixty-five years. What does this crossing teach us?
Four threads run through this period.
The leap across the abyss.
Africa jumped to mobile payments.
India leaped toward software services.
Taiwan invented an industrial model no one had imagined. Latecomers can become pioneers — if they invent their own path.
The cycles of hope and disenchantment.
AI experienced summers and winters.
Unfulfilled promises triggered funding crises.
But researchers who persisted during the winters prepared the following summers.
Yann LeCun, Geoffrey Hinton, Yoshua Bengio — the "godfathers of AI" — worked in the shadows when no one else believed.
Continued invisibilization. Betty Holberton and the ENIAC programmers. Rose Dieng-Kuntz and Timnit Gebru. The Argentine ComIC pioneers. Women, minorities, and contributors from the Global South remain underrepresented in the official history — and in the teams that build AI.
The convergence of parallel paths.
Nakashima and Shannon.
India and Japan.
Taiwan and Korea.
Asia now manufactures the chips that run global artificial intelligence. Paths traced for half a century lead to the same horizon.
This period leaves us a question: who inherits the digital revolution?
In 2006, Geoffrey Hinton relaunched neural networks.
In 2009, Fei-Fei Li published ImageNet.
In 2012, AlexNet proved that deep learning worked.
The summer that opened would be the longest in history.
But this summer inherits everything that came before — leaps and falls, frugal innovations and extinguished forges, biases that perpetuate themselves, and questions that remain open.
The journey continues.
The Antipodes of Innovation: How Geographic Isolation Became an Advantage
In November 1949, in Sydney, a machine of two thousand vacuum tubes executed its first calculation. The CSIRAC joined an exclusive club: stored-program computers. There were only four others in the world — all in Great Britain or the United States.
Australia had built the fifth.
Trevor Pearcey worked "largely independently of European and American efforts." Isolation became an advantage: without access to others' solutions, he had to invent everything.
In February 1948, before the machine even worked, he wrote a prophetic sentence: "It is not inconceivable that an automatic encyclopaedic service operated through the telephone system will one day exist."
The Internet. In 1948.
Graeme Clark had grown up with a deaf father.
In 1978, he implanted the first multichannel cochlear device. Rod Saunders heard. Today, more than one million people wear a cochlear implant.
WiFi? The CSIRO team developed a wireless transmission technique that became an essential component of modern networks. When fourteen tech giants tried to invalidate their patent, the CSIRO won — and collected four hundred fifty million dollars.
Google Maps? Born in Sydney. Where 2 Technologies, founded by two Australians and two Danes in an apartment in Hunters Hill. Google acquired them in 2004.
Atlassian? Ten thousand dollars of credit card debt in 2002. Australia's first tech unicorn.
Oceania, at the antipodes of power centers, invented bridges to the entire world.
Gardens of the Desert: How Necessity Made Innovation Bloom in the Middle East
No one expects flowers in the desert. Yet that is where they grow fastest — when the rain finally comes.
In 1954, a six-year-old country undertook to build a computer. Israel had just emerged from its war of independence. The borders were hostile. The economy fragile. The advisory committee included Albert Einstein — skeptical — and John von Neumann — enthusiastic. Some candidates had lost their diplomas in the Holocaust. In 1955, the WEIZAC executed its first calculation.
Lotfi Zadeh was born in Baku, grew up in Tehran, emigrated to the United States. In 1965, he invented fuzzy logic — that way of representing vague concepts that humans handle intuitively. Americans were skeptical. The Japanese seized upon it. Today it is in your air conditioners, washing machines, cars.
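To make the idea concrete, here is a minimal sketch of a fuzzy membership function in Python. It illustrates Zadeh's concept rather than any real appliance; the temperature breakpoints of 20 and 35 degrees Celsius are invented for the example.

```python
# A minimal sketch of Zadeh's idea: instead of "hot" being true or false,
# it holds to a degree between 0 and 1. The breakpoints (20 and 35 degrees
# Celsius) are illustrative assumptions, not values from any real device.

def membership_hot(temp_c: float) -> float:
    """Degree to which a temperature counts as 'hot' (0.0 to 1.0)."""
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 35.0:
        return 1.0
    return (temp_c - 20.0) / 15.0  # linear ramp between the two breakpoints

# A thermostat-style rule: fan speed simply follows the degree of "hotness".
for t in (18.0, 27.5, 40.0):
    print(f"{t:5.1f} C -> hot to degree {membership_hot(t):.2f}")
```

Where a classical thermostat answers yes or no, the fuzzy version answers "hot to degree 0.5" and lets control rules blend smoothly, which is why the approach found a home in appliances.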
Unit 8200 — the Israeli equivalent of the NSA — became, without intending to, the world's greatest startup school. Gil Shwed conceived the first commercial firewall there. Check Point, Palo Alto Networks, CyberArk — cybersecurity giants, all founded by its veterans.
ICQ — "I Seek You" — was born in a Tel Aviv apartment in 1996. Four young Israelis invented instant messaging. AOL bought it for four hundred million dollars.
Waze and Mobileye revolutionized navigation and autonomous driving. The "Startup Nation" exported eleven billion dollars in cybersecurity in 2021.
The desert has bloomed. Necessity became invention.
Digital Reconstruction: How Europe Invented the Computer, Scuttled Its Future, and Rebuilt Itself
Europe invented the computer twice. The first time in secret. The second time in oblivion.
In 1941, Konrad Zuse completed the Z3 in Berlin — the world's first programmable computer. The Nazi regime saw no use in it. A bombing raid destroyed it.
In 1944, Tommy Flowers delivered Colossus to Bletchley Park — the first electronic computer, two years before ENIAC. He was ordered to burn the plans.
Then came the Lighthill Report.
In 1973, a British mathematician with no AI experience published a devastating assessment: "total failure to achieve its grandiose objectives." The government cut funding. Europe had just triggered the first "artificial intelligence winter."
But Europe rebuilt itself.
In Marseille, Alain Colmerauer invented Prolog — the language that would inspire the Japanese Fifth Generation project. At CERN, Tim Berners-Lee created the World Wide Web. In Finland, Linus Torvalds wrote Linux — the system that runs most of the world's servers.
In France, Yann LeCun laid the foundations for convolutional neural networks — the technology behind image recognition.
You will discover Donald Michie, a Bletchley Park veteran who built MENACE — a matchbox machine that learned tic-tac-toe through reinforcement. Edsger Dijkstra, who devised the shortest-path algorithm that bears his name. DeepMind, founded in London, whose AlphaGo would beat the world Go champion.
Europe invented, forgot, scuttled — and started over. Its resilience is part of its genius.
Parallel Paths: How Asia Discovered, Invented, and Dominated the Foundations of AI
In 1937, Claude Shannon defended his legendary thesis at MIT. He demonstrated that Boolean algebra could describe electrical circuits. The same year, in Tokyo, Akira Nakashima published the same discovery. Shannon cited him. Then one became a legend.
The other was forgotten.
Two men. Two continents. The same idea. The history of parallel paths.
In 1930, in Calcutta, Prasanta Chandra Mahalanobis invented a statistical measure still used every day in machine learning.
In 1960, India inaugurated TIFRAC, its first locally designed computer.
In 1982, Japan launched the Fifth Generation Computer Project — a dream of revolutionary computing that became "the lost generation."
Then came the leap.
India had COBOL programmers. The West no longer did. The "Y2K bug" became the launchpad for the Indian computer industry. TCS, Infosys, Wipro. Bangalore — the "Silicon Valley of India" — with thirty-eight percent of the country's IT exports.
Morris Chang was "put out to pasture" at Texas Instruments at fifty-two. He left for Taiwan. He invented the "pure-play foundry" model — a company that manufactures chips without designing them. TSMC now enables NVIDIA, AMD, and Apple to exist without owning factories.
You will discover Fei-Fei Li, born in China, creator of ImageNet — the database that launched the deep learning revolution.
Kai-Fu Lee, who developed speech recognition, led Google China, and became one of the most influential AI investors.
The parallel paths are converging. Asia manufactures the chips that run global artificial intelligence.
The Forge and the Forgetting: The Summers, Winters, and Invisible Women of American Artificial Intelligence
In 1956, twenty-one researchers gathered at Dartmouth College for an eight-week summer conference. They had an ambitious goal: create a "machine capable of simulating every aspect of human intelligence." They thought they could do it in one generation. They were wrong — by a great deal.
American artificial intelligence experienced summers and winters. The Dartmouth summer, then the first winter when funding collapsed in the 1970s. The expert systems summer, then their collapse when conventional machines caught up.
Finally, the deep learning summer — the one still ongoing.
But the American history of AI is also a history of forgetting.
In February 1946, the army presented ENIAC to the press. In the background of the photos, six women manipulated cables — Betty Holberton, Kay McNulty, and their colleagues. They were not introduced. It took fifty years for their names to become known.
You will also discover Mexico, which received its first computer in 1958 and created the first computer science master's in Latin America. Argentina and its ComIC pioneers — Clarisa Cortes, Cristina Zoltan, Liana Lew, Noemi Garcia. Brazil, which manufactured sixty-seven percent of its computers locally in 1982.
And Chile. Salvador Allende. Fernando Flores who wrote to Stafford Beer. The Cybersyn project — "a sort of socialist Internet, decades ahead of its time," according to The Guardian. The futuristic operations room, destroyed by the September 11, 1973 coup.
America forged artificial intelligence. It also forged forgetting.
The Digital Palaver Tree: How Africa Invented Financial Inclusion and Algorithmic Bias
In every African village, there is a tree beneath which people gather to talk, listen, and decide together. The palaver tree. A patient democracy where decisions are binding only when all parties agree. No majority vote crushing the minority. An inclusive consensus.
Ubuntu: "I am because we are." This philosophy guided Nelson Mandela and Desmond Tutu. And it contains, without knowing it, the principles of distributed systems and digital consensus protocols.
On March 6, 2007, a Kenyan company launched M-Pesa — "M" for mobile, "Pesa" for money in Swahili. Sending and receiving money with a basic mobile phone. In 2006, less than nineteen percent of Kenyans had access to a bank account. M-Pesa brought this figure to eighty percent. Before the West invented Apple Pay, Africa was already paying by mobile.
Then came Ushahidi — "testimony" in Swahili. During the 2007 electoral violence, four technologists created a citizen mapping platform in three days. One hundred thousand deployments in one hundred sixty countries since.
You will discover Rose Dieng-Kuntz, the first African woman admitted to the École Polytechnique, a pioneer of the semantic web.
Timnit Gebru, who revealed that facial recognition systems had error rates of up to thirty-five percent for dark-skinned women — versus less than one percent for white men. Mark Shuttleworth, who named Ubuntu Linux after the philosophy that had inspired him.
Africa leaped across the technological abyss. It invented mobile financial inclusion before the rest of the world. And it posed the first questions about artificial intelligence biases.
The palaver tree has become digital.
The Information Age: From the Ashes of World War II to Leaps Across the Abyss
In February 1948, in a Sydney laboratory, an engineer named Trevor Pearcey wrote a sentence that still resonates: "It is not inconceivable that an automatic encyclopaedic service operated through the existing telephone system will one day exist."
The Internet. Predicted from Australia. Forty years before the World Wide Web.
This period — from 1945 to 2010 — is when dreams became machines. Shannon's circuits took form in microprocessors. Turing's universal machine became the personal computer.
Boolean logic became the Internet. And the Dartmouth dream — simulating human intelligence — passed through summers of euphoria and winters of disillusionment before being reborn, transformed.
But this era was also one of leaps across the abyss.
Africa, disconnected from the global telephone network, jumped directly to mobile payments. M-Pesa preceded Apple Pay.
India, having missed the hardware turn, leaped toward software services — the Y2K bug became its launchpad.
Taiwan invented the "pure-play foundry" and became the world's silicon shield.
Israel, a six-year-old nation surrounded by enemies, built one of the world's first computers and became the "Startup Nation."
You will traverse six continents.
Sixty-five years of history. From the secret laboratories of Bletchley Park to the Tel Aviv apartment where ICQ was born. From the ruins of World War II to the servers of Google Maps, born in Sydney.
Everywhere, the same question: who inherits the digital revolution — and who is excluded from it?
The AI winter is over.
The summer that opens will be the longest in history. But to understand where we are going, we must first understand where we came from.
Welcome to A Brief History of AI, season 5.
What the Age of Revolutions Bequeathed to Us: Conclusion and Opening Toward the Information Age
From the African aquifers to the burned codices of Yucatan. From Ramanujan's notebooks to the secret laboratories of Bletchley Park. From Aboriginal stars to the dried springs of Baghdad. Six continents. One hundred and fifty-six years. What does this crossing teach us?
Four threads run through this period. Epistemicide as policy: everywhere, knowledge was destroyed to justify domination. The exile of geniuses: Ramanujan, Al-Sabbah, Rutherford — all had to leave their homelands to flourish. The invisibilization of contributors: Nakashima, Seki, the women of Bletchley Park — erased because they did not fit the expected image. Parallel discoveries: the same truths emerge in places that know nothing of each other.
This period bequeathed us binary, logic, the universal machine — and their blind spots. From Leibniz to Turing, the path is direct. But other paths could have been taken.
The artificial intelligence we build today bears the imprint of this double history. It speaks the languages that were written, not those that were sung. Its corpora contain Cook's journals, not Tupaia's navigation songs.
The inferno is extinguished. The ashes are still warm. What we build on these ashes depends on us.
The next period — the information age — will inherit these silences. It will also inherit the possibility of repairing them.
The journey continues.
Forgotten Stars: How Oceania Developed Humanity's First Astronomy — and Was Erased
There is an emu that crosses the southern sky. You cannot see it by looking at the bright stars, but by observing the darkness between them.
The Aboriginal peoples of Australia had developed what researchers call humanity's first astronomy. Sixty-five thousand years of observing the sky. Constellations in the dark spaces between stars. The Gawarrgay — the great emu — predicts the breeding seasons of the earthly bird.
Polynesian navigators memorized two hundred and twenty stars to cross the Pacific without instruments. Their body-counting systems and kinship mathematics were algorithms before the word existed.
Then came colonization. The legal fiction of terra nullius denied sixty-five thousand years of human presence. Between 1788 and 1900, the Aboriginal population collapsed by ninety percent.
The Stolen Generations — children torn from their families between 1910 and 1970 — interrupted knowledge transfer.
On the same soil, Ernest Rutherford was born in New Zealand, discovered the atomic nucleus, and received the Nobel Prize.
Alexander Aitken, a New Zealand calculating prodigy, could multiply thirteen-digit numbers in his head.
Two traditions on the same territory. And no bridge between them. Rutherford was ennobled with a coat of arms bearing a Maori warrior — an aesthetic symbol, not an epistemic source.
Oceania reminds us that coexistence is not dialogue, that stars can be extinguished in a single generation.
Dried Springs: How the Middle East Bequeathed the Words and Lost the Institutions
Every time a computer executes an operation, it performs an algorithm. The word comes from al-Khwarizmi — a ninth-century Persian mathematician. "Algebra" comes from al-Jabr. "Arabic numerals" still carry the memory of a transmission.
Words survive. Institutions die.
The House of Wisdom in Baghdad was destroyed in 1258. But in the nineteenth century, the Nahda — the Arab Renaissance — tried to make the springs flow again. Rifa'a al-Tahtawi translated two thousand European works into Arabic. Muhammad Abduh reformed al-Azhar. The Bulaq Press disseminated scientific knowledge.
Then colonialism, the Sykes-Picot agreement, and the fragmentation of the Arab world interrupted the momentum.
Hassan Kamel Al-Sabbah was born in Lebanon in 1895. A genius of electrical engineering, he filed more than seventy patents — for General Electric, in the United States, to which he had been forced to emigrate. He designed solar turbines, photoelectric cells, and power transmission systems. He died at thirty-nine in a car accident. In Lebanon, a statue was erected. The patents remained American.
In Egypt, Muhammad Ali had built engineering and medical schools, sent students to Europe. The country had the world's fifth-largest cotton industry. Then debt, the Suez Canal, and British occupation ended the modernizing momentum.
The Middle East gave the world the fundamental concepts of calculation. And was prevented from continuing what it had begun.
The words remain. The springs await their chance to flow again.
The Forge and the Inferno: How Europe Invented Artificial Intelligence on the Ashes of the Libraries It Burned
A forge is not only a place of creation. It is also a place of fire.
In 1679, Leibniz conceived the binary system.
In 1843, Ada Lovelace wrote the first computer program for a machine that did not exist.
In 1854, Boole formalized the algebra of logic.
In 1936, Turing invented the universal machine.
In 1944, Tommy Flowers completed Colossus — the first electronic computer.
Europe forged all the conceptual tools of artificial intelligence.
But the inferno accompanied the forge.
The women of Bletchley Park made up seventy-five percent of the staff. Joan Clarke worked alongside Turing on decrypting Enigma. Mavis Batey cracked the Abwehr code at nineteen. Their names were erased for decades.
In Berlin, in 1941, Konrad Zuse built the Z3 alone — the world's first programmable computer. The Nazi regime was not interested. A bombing raid destroyed it. When history was written, Zuse was barely mentioned.
Colossus preceded ENIAC by two years. But the Colossus machines were destroyed after the war, their plans burned.
Tommy Flowers received orders to erase everything. The history of computing ignored this first for thirty years.
Refugees fleeing Nazism — Einstein, Fermi, Gödel — enriched America with what Europe was losing. Elsewhere, European colonialism destroyed the knowledge systems it did not recognize.
Europe forged the tools of AI. It also forged them on the ashes of the libraries it burned.
Parallel Paths: How Asia Discovered the Same Truths as the West — and Was Forgotten
Great discoveries never occur just once. They emerge simultaneously, in places that know nothing of each other.
In Japan's closed Edo period, Seki Takakazu discovered infinitesimal calculus independently of Newton. He presented the concept of the determinant ten years before Leibniz. His disciple Takebe Katahiro obtained the expansion of the arc sine fifteen years before Euler. They were nicknamed "Japanese Newtons" — as if Newton were the reference and they the imitation, when in fact they were walking on parallel paths toward the same summits.
In 1868, the Meiji Restoration opened Japan. Reformers looked at wasan — two and a half centuries of mathematical tradition — and saw only a backward system. Within decades, this treasure was swept away in favor of Western mathematics.
Akira Nakashima formulated switching circuit theory between 1934 and 1936. Shannon published the same discovery in 1938, cited him — and became a legend. Nakashima remained unknown.
In colonial India, Srinivasa Ramanujan, largely self-taught, proved more than three thousand theorems that Western mathematicians took decades to understand. Prasanta Chandra Mahalanobis invented the distance that bears his name in 1930 — still used every day in machine learning.
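For readers who want to see what that distance computes, here is a minimal sketch in Python with NumPy; the two-dimensional data is randomly generated purely for illustration.

```python
# A minimal sketch of the Mahalanobis distance:
#   D(x) = sqrt((x - mu)^T  Sigma^{-1}  (x - mu))
# where mu is the mean and Sigma the covariance of a data cloud.
# The data below is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.5], [0.0, 1.0]])  # correlated cloud

mu = data.mean(axis=0)                                # sample mean
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))   # inverse covariance matrix

def mahalanobis(x: np.ndarray) -> float:
    """Distance of x from the cloud, accounting for spread and correlation."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

print(mahalanobis(np.array([0.0, 0.0])))   # near the center: small distance
print(mahalanobis(np.array([6.0, -3.0])))  # far from the cloud: large distance
```

Unlike plain Euclidean distance, it rescales each direction by how the data actually varies, which is why it still turns up in outlier detection and classification.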
Asia teaches us that intelligence has never had only one form.
That paths to truth are multiple. That we have lost other routes, other ways of arriving at the same results.
Parallel paths still exist. We need only look for them.
Lost Memories: How the Americas Invented and Forgot the Foundations of Artificial Intelligence
A knot can be a memory. A memory can be burned.
On July 12, 1562, a Franciscan monk named Diego de Landa ordered twenty-seven Maya codices thrown into the fire — centuries of astronomical observations reduced to ashes. Of the entire Maya civilization, four books survived. Four books to bear witness to an entire library.
The Maya had independently invented zero, developed positional notation, and created three interlocking calendars allowing eclipse predictions accurate to within minutes. This was a system of calculation, prediction, and world-modeling. De Landa threw it into the fire.
In the Andes, the quipu — knotted cords — stored staggering quantities of data. Position of the knot, type of knot, color of the cord: a portable computer before its time. The quipucamayocs who mastered it were human processors.
In Canada, one hundred and fifty thousand Indigenous children were torn from their families between 1883 and 1996. The goal: to "kill the Indian in the child." The Truth and Reconciliation Commission called this system cultural genocide.
Then came 1946. The American army presented ENIAC to the press. In the background of the photos, six women were manipulating cables — Betty Holberton, Kay McNulty, and their colleagues. They were not introduced. No one asked their names. It took fifty years for them to be recognized.
The Americas are a continent of lost memories — and sometimes recovered ones. The Maya zero, the Andean quipus, the ENIAC pioneers.
The cords are still there. The knots are waiting to be made.
The Invisible Threads: How Africa Wove the Foundations of Computing That Colonialism Tried to Erase
There exist transmissions that official history refuses to see. Threads of knowledge running beneath the surface of dominant narratives, invisible but never broken.
In 1884, fourteen European powers partitioned Africa in Berlin as one might cut up a fabric — with no regard for the patterns already woven into it. Epistemicide accompanied colonization: systematic destruction of a people's knowledge to impose a foreign system of understanding.
And yet, not all threads were severed.
The Ifa system of the Yoruba rests on two hundred and fifty-six figures, each composed of eight positions taking two states — open or closed. Two states. Eight positions. That is exactly the structure of a computer byte. The associated literary corpus contains more than four hundred and thirty thousand verses, classified according to these categories. A database, a memory addressing system — centuries before the computer.
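For the curious, here is a minimal sketch of that structural analogy in Python. It only enumerates configurations; it is not a model of Ifa divination, and the open/closed labels simply follow the description above.

```python
# A minimal sketch of the structural analogy: eight positions with two states
# each yield 256 distinct figures, the same count as the values of one byte.
# This enumerates configurations only; it does not model Ifa divination.

POSITIONS = 8
STATES = ("open", "closed")

figures = [format(n, "08b") for n in range(2 ** POSITIONS)]  # "00000000" .. "11111111"
print(len(figures))                 # 256: one for every possible byte value
print(figures[0], figures[255])     # the first and last figures

# Reading one figure position by position, as a sequence of two-state marks:
example = figures[173]
print([STATES[int(bit)] for bit in example])
```

The point is not that the diviners were writing software, but that the combinatorial structure is identical: eight binary positions address 256 possibilities.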
The ethnomathematician Ron Eglash traced a striking lineage: from Ifa to Arabic geomancy, from geomancy to European alchemists, from alchemists to Leibniz himself. The mathematician who formalized binary corresponded with missionaries who described these African traditions to him.
You will discover African fractals in architecture, textiles, and hairstyles. The Timbuktu manuscripts hidden in attics during colonial occupation. Benjamin Banneker predicting eclipses and building a clock entirely from wood. Shelby Davidson inventing an automatic postal rate calculation device in 1911.
Binary was not born in a European laboratory. It may have traveled from the forests of Nigeria to the salons of Hanover.
The invisible threads were never broken. They await recognition.
The Age of the Forge and the Inferno: How the Age of Revolutions Created and Destroyed the Foundations of Artificial Intelligence
In 1937, a student at MIT defended what some consider the most important master's thesis of the twentieth century. Claude Shannon demonstrated that Boolean algebra could describe how electrical circuits function. That same year, in Tokyo, an engineer named Akira Nakashima published exactly the same discovery. Shannon cited him in his paper. Then one became a legend. The other was forgotten.
This double fate encapsulates the period we are now traversing — from the revolutions of 1789 to the total wars of 1945. An era when humanity forged the conceptual tools of artificial intelligence: Leibniz's binary system, Boole's algebra, Ada Lovelace's first program, Turing's universal machine, Colossus, and ENIAC.
But the forge was also an inferno. Wherever Europe extended its dominion, knowledge systems were destroyed. The Maya codices, Japanese mathematical traditions, Aboriginal astronomy, the manuscripts of Timbuktu: so many ways of thinking about calculation that were swept away or forced into silence.
You will meet Benjamin Banneker, a self-taught African American astronomer who predicted eclipses and challenged Jefferson's racial theories. Srinivasa Ramanujan, the Indian genius who proved three thousand theorems with almost no formal education. Seki Takakazu, the "Japanese Newton" who discovered infinitesimal calculus in the isolation of the Edo era. Betty Holberton and the ENIAC programmers, left unnamed in the photographs for fifty years.
Six continents. One hundred and fifty-six years of history. And everywhere the same paradox: foundations rising while others collapse.
Welcome to A Brief History of AI, season 4.
What the Early Modern Period Bequeathed Us: Conclusion and Opening Toward the Contemporary Era
From African aquifers to the colonial archives of Mexico. From the imperial courts of Beijing to the watchmaking workshops of La Chaux-de-Fonds. From the ruins of Istanbul's observatory to Tupaia's canoes. Six continents. Three centuries of history. What does this journey teach us?
In this final episode, we weave the threads connecting all the stories we have told.
Four convergences run through these three centuries. Direct encounter—for the first time, civilizations separated for millennia found themselves face to face. Mirrors of discovery—Seki and Leibniz, Jyeshtadeva and Newton, the I Ching and binary, minds separated by oceans arriving at the same truths.
Windows that open and close—the destroyed observatory, the forbidden printing press, the expelled Jesuits. Partial documentation—Cook's journals, but not Tupaia's songs.
And six singularities.
Africa revealed the aquifers of knowledge—the underground transmissions that nourished European thought.
The Americas preserved forgotten logics—the tolerance for ambiguity of the tlamatinimeh.
Asia demonstrated the universality of mathematical structures—bridges and mirrors.
Europe formulated the program of artificial intelligence—the beast-machine, the calculus ratiocinator.
The Middle East showed what happens when windows close—governance matters more than talent.
Oceania embodied the missed encounter—the map misunderstood for two hundred fifty years.
Four lessons emerge.
Governance determines trajectory.
Documentation creates history.
Translation is always incomplete.
Universality does not erase diversity.
The Early Modern Period bequeathed us the intellectual program of artificial intelligence—and its blind spots.
From Leibniz to Turing, the path is direct. But other paths could have been taken. Other paths can still be taken.
The artificial intelligence we build will reflect the intelligences with which we nourish it.
The Early Modern Period showed us this—through its successes as through its failures.
The journey continues.
The Map and the Song: How Early Modern Oceania Reveals What We Do Not Know How to See
There exist maps that cannot be read.
In this episode, we discover the story of a missed encounter—between two forms of intelligence that could not translate each other.
In July 1769, a Polynesian priest named Tupaia boarded the Endeavour, Captain James Cook's ship. He was no ordinary navigator. Trained at the marae of Taputapu-atea—the most important center of sacred knowledge in Eastern Polynesia—he carried in his memory a map of one hundred thirty islands scattered across seven thousand kilometers of ocean.
You will discover how Tupaia attempted something extraordinary: inventing a cartographic system that would bridge his way of thinking about the world and that of Europeans. Polynesians did not measure distance like Europeans. For them, distance was measured in navigation time, not miles. Space was not an abstract grid—it was a lived, bodily experience.
On the map he drew for Cook, Tupaia placed a central marker labeled "avatea"—the sun at noon. This system allowed him to translate his knowledge of maritime routes into the logic of the European compass. It was an invention—a hybrid born from the encounter between two intelligences.
This map survived. For two hundred fifty years, researchers judged it confused, inaccurate, primitive. It was not until 2018 that two German academics, after six years of research, finally understood its logic. Tupaia had made no errors. He had simply written in a language no one bothered to learn.
You will also learn that Papua New Guinea and Oceania harbor nearly nine hundred distinct counting systems. That Aboriginal kinship systems can be modeled using group theory. That Polynesian navigators calculated their position by feeling the rhythm of waves beneath their canoe's hull.
Tupaia died in Batavia in 1770, taking with him knowledge no one had taken the time to collect. Oceania reminds us that our data corpora contain Cook's journals, but not the navigation songs. This bias is not technical. It is historical.
Windows That Close: How the Early Modern Middle East Warns Us of the Dangers of Institutional Choices
A window can open onto the world. It can also close—sometimes for centuries.
In this episode, we discover what happens when a civilization that had been at the forefront of scientific thought decides to turn its back on its own inventions.
In 1577, Ottoman astronomer Taqi al-Din completed an observatory in Istanbul comparable to that of Tycho Brahe in Denmark. He was no ordinary man. He had invented an observation clock with three dials—hours, minutes, seconds—of revolutionary precision. He was the first to use decimal notation rather than the sexagesimal fractions inherited from the Babylonians. And he had designed a rudimentary steam turbine—two centuries before the Industrial Revolution.
Then a comet appeared. Taqi al-Din predicted it heralded glorious conquests. A plague struck the empire instead. The religious leader—the şeyhülislam—issued a decree: countries that possessed observatories invited catastrophe. In 1580, three years after its completion, the Istanbul observatory was demolished.
You will discover the history of Ottoman printing.
In 1493, Jewish refugees from Spain established a Hebrew press in Istanbul. But for Muslims, printing in Arabic characters remained forbidden for two hundred fifty years—until Ibrahim Muteferrika in 1729. While printing transformed Europe, the Ottoman world remained apart from this information revolution.
And yet, innovation continued elsewhere. Persian astrolabes of the Safavid era were judged "better and more precise" than their European equivalents. Mughal celestial globes, cast using lost-wax without welding, still astonish experts. Talent had not disappeared. What was lacking was an environment to protect it.
The Early Modern Middle East bequeaths us a warning: governance matters more than individual talent. Institutional choices determine civilizational trajectories.
Windows closed at the wrong moment can have consequences lasting centuries.
Clocks of the Soul: How Early Modern Europe Formulated the Program of Artificial Intelligence
A clock can measure time. But can it think?
In this episode, we discover how Early Modern Europe dared a vertiginous question—and built the conceptual framework that would one day make artificial intelligence thinkable.
In 1637, René Descartes published the Discourse on Method. He declared animals to be pure machines—automata of flesh incapable of true thought. And he proposed two criteria to distinguish man from automaton: language and universal reason. These criteria strangely resemble the Turing test and the dream of artificial general intelligence.
You will meet Thomas Hobbes, who declared in 1651 that "reason is nothing but reckoning."
Gottfried Wilhelm Leibniz, who dreamed of a characteristica universalis—a formal language capable of expressing all thought—and a calculus ratiocinator—a reasoning machine that would resolve disputes through calculation. "Calculemus!"—Let us calculate!—such was his program.
You will discover Blaise Pascal, who, at nineteen, built the Pascaline—the first commercially viable calculating machine. Jacques de Vaucanson, whose mechanical flute player modulated its breath like a living musician.
Pierre Jaquet-Droz, whose Writer could be programmed to write any text of up to forty characters—the first programmable computer in history, according to some historians.
And Wolfgang von Kempelen's Mechanical Turk—that chess-playing automaton that defeated Napoleon and Benjamin Franklin. It was a hoax: a hidden human manipulated the machine. But the question it posed was sincere: can one distinguish real intelligence from simulated intelligence?
Early Modern Europe did not merely build machines. It built the conceptual framework of artificial intelligence—with its assumptions, its ambitions, and its blind spots. Descartes' dualism, Hobbes' mechanism, Leibniz's rationalism: these ideas still structure how we think about AI.
The clocks of the soul have become silicon computers. The question remains.