When the usual daily sitting meditation period is suddenly extended, without any warning, there is the tendency to think that something is “longer” than usual. From the standpoint of before-thinking mind, “don’t know,” nothing could be further from the truth…
Moment has no border, no edge or boundary. So, it has no length.
(Insta-bite excerpt from the longer teaching-video, “Moment = Infinite Time”.)
Attaining Zen is not difficult — unless we want some complicated explanation, some understanding for our heads.
The beautiful thing about posting teaching videos on Instagram is that you are forced into a 60-second limit. It disciplines you into the mindset you need for haiku: extremely compressed form, so that every single word counts, every single gesture or pause.
[Excerpt from a Q&A in May 2019 in Haugesund, Norway — the “Bible Belt of Norway”.]
Dae Soen Sa Nim used to say to people, “Every day, you clean your body, clean your teeth, clean your clothing. But nobody cleaning this brain! Every day, we always using using using using our mind, it becomes very dirty. So, you must clean your mind necessary — use don’t-know soap. Then you can see clear, hear clear, smell clear, taste clear, perceive clear. Everything is clear. But using don’t-know soap very important.”
Frontline documentaries have always been among the most respected journalistic work on American public TV, indeed on any American TV. They are non-political and in-depth, and they tear the veil covering some of the most important issues of the day.
If you want to understand the current trend-lines in artificial intelligence, this is something you should definitely invest some time in. It is a chilling presentation on the politics of AI, the new tech war, the future of “work,” AI and corporate surveillance, and the threat to democracy posed by the abuse of AI by so-called “bad actors” (i.e., human beings). Must-see stuff. It is not technical, and it gives a really clear sense of some of the basic concerns about AI, in ways that any layman could understand.
I came away from this feeling definitely more worried, especially with regard to how China is using AI to build a dystopian total-surveillance state — already, right now — that would make George Orwell or Philip K. Dick freak out.
A Facebook Messenger exchange I had with someone I had never met, back in 2017.
Reading more books just to learn the Dharma only delays further the day when you sit and practice, look inside. If someone has gotten enough from your videos or recorded teachings on the Internet to call you “teacher,” they already know as much as knowledge can ever possibly show them about Dharma. Reading more books, at this stage, would be like someone sitting in a French restaurant reading Michelin’s Guide to French Cuisine. No amount of reading will satisfy the appetite OR fill the stomach. Just eat!
In my whole life, I have not read even 10 Buddhist books cover-to-cover — and maybe 5 of those were books I had to write or translate. Perhaps the greatest blessing of having the unbelievable karma to meet the Teacher I met is that it absolutely eliminated any need for further reading or edification in Buddhism, its theory or history. I really don’t know much about Buddhism, as a subject.
Instead, I was able to pour everything into hard-ass practice, in sub-freezing Buddha halls on howling mountainsides and in city temples with the bustle of coming-and-going all around. There was just no other way. He made the road so simple and clear. The only thing left to do was to “just do it.”
One elderly Korean man cut his hair at 60 and became a monk under my Teacher. He respected Dae Soen Sa Nim’s clear Dharma, but it was also this man’s dream to obtain a PhD in Buddhist studies. (In traditional Korean Confucian culture, “the scholar” was the noblest profession, the most lauded status.) He sat one or two of the 90-day Kyol Che’s, and then told Dae Soen Sa Nim that he was going to enter the PhD program at Dongguk University, the main Buddhist university in Korea.
Dae Soen Sa Nim shouted at him, “PhD? How can that help your life? You are already old man. Nothing guarantees your life even one more day! How will even 100 PhD’s help you in the moment you die?”
But the 60-plus baby-monk was adamant. There may have been some family issue involved, some feeling of needing to achieve something in his family line. He was dumb enough to say this to Dae Soen Sa Nim.
“But your parents long ago dead! Your brothers and sisters all dead!” But the aged novice wouldn’t listen — he felt it was his “destiny” in this life to end his life with a PhD in Buddhist studies. He thought that he could not train as hard as the younger monks and nuns, so maybe also having a PhD would enable him to at least qualify for running a temple somewhere as abbot. Anyway, he claimed that he would finish the program as soon as possible and he would do only meditation retreats after that. But he at least needed to get a PhD, to fulfill some obligation to his (long-dead) ancestors.
Dae Soen Sa Nim was really yelling at him by now. He said, “If you enter this PhD program, you cannot practice Zen — it will make too much thinking for your head. So, if you follow this way, then do not come to me for three years.” This would have been an especially severe hardship, because the university-monks’ dormitory was located right at the front gate of Hwa Gye Sah Temple, where we all lived and practiced together with Dae Soen Sa Nim! This sentence would have meant that the monk would basically be living in the temple precincts, but unable to come to see his own Teacher. But Dae Soen Sa Nim was resolute about certain fundamental principles, and one of them was that he was against any sort of book-learning which substituted for keeping the Great Question and looking right into it through strong, clear practice. And doing the long retreats was a central path for that.
Sometimes, when the community of monks gathered on special occasions to bow together to Dae Soen Sa Nim (say, on the Chinese New Year, or Korean Thanksgiving), the old baby-monk would sheepishly hide in the far back of the group, so desperate was he to offer greetings. And we would all smile at him wanly, and give him some “Cheer-up”s. And whenever Dae Soen Sa Nim caught a glimpse of him in the back, he would bark out something like, “Oh, scholar-monk! Did your PhD make you Buddha yet?”
In the end, the monk did get his PhD. (And I believe Dae Soen Sa Nim relented on the three-year “ban” on meeting him: As with everything with Dae Soen Sa Nim, it was meant to teach and inspire, and never to punish or demean.) Eventually, he also was invited to be a teaching-monk at a large Korean temple in the US. He died a few years later, deeply involved in temple politics — and without ever having sat one of his promised Kyol Che retreats. But not before he had used all of his scholarly knowledge to develop a grand schematic theory for how to spread Buddhism in the West, which nobody ever heard about or uses.
This is the lineage I descended from. It is truly impossible to describe how profoundly grateful I feel, every single day, to have made the merit (somehow!) for this connection. And this is some of the spirit I try to convey to others, because I know the benefit of not examining and analyzing the poison arrow shot into our arms, but rather putting everything into ripping it out, at once.
Ioannis of Thessaloniki took the Five Precepts last March at ZCR with Zen Master Dae Bong. Now, he’s back at ZCR and practicing hard. It’s so great having him on the cushion, looking into don’t-know in silence together.
When he’s not here, Ioannis works in film production back in Greece. A film he recently worked on, “Back to the Top” (dir. Stratis Chatzielenoudas), is an award-winning true story about Leonidas (a 33-year-old punk-rock paraplegic) and his friends, who plan to climb to the highest peak of Mount Olympus. For whom is it going to be more difficult? A real story of total try-mind.
Try-mind. Like Ioannis. Don’t we all have our own Olympus to climb? And are we not all handicapped by some things, disabled by aspects of our karma (“mind habits”)? This is the meaning of the teaching, “Sudden enlightenment, gradual cultivation.” Insight is possible, in even a single retreat, and much more so in decades of practice. But, Children of Entropy that we are, we exist under constant challenge by the work of applying and reapplying the practice in the flow of mind-habits, some of the strongest of which do not always trend in helpful directions. For monks as well as for beginners. This is the patience of “try, try, try, for 10,000 years nonstop — get enlightenment, and save all beings from suffering.”
Ioannis is such a great example, among the people closest to me, of this quality of pure “try-mind.” His anarchist friends back in his former squat in Thessaloniki think he’s so strange and exotic because of the full-dive immersion he has made into meditation over the last three years. But they also see the changes in him, and they are quietly respectful. But not too much. The “Buddhism” part seems a tad reactionary to some of them. It has the stink of tradition, precedent, authority, rules. They don’t know that the most radical rebellion is the one we wage against automatic imprisonment chained to our unexamined karma. ¡Viva la revolución!
We all know what a seeming insurmountability it is to grapple with the forces and effects of our own karma, or “mind habits,” as Dae Soen Sa Nim so tersely defined it. Even after years and years of practice. And even he clearly could not perfectly master it, in his own life, despite a great enlightenment and a really really really strong center. I know that I cannot yet master it!
Now, how about super-intelligent machines having karma wired into their very DNA? It is happening, because we make them. We are making AI, and we are wiring our karma right into it. This image that AI will somehow be just some benign force that drives our cars perfectly and detects cancer cells and kills them earlier is pure fiction.
Imago dei. In the image of god.
This article in today’s New York Times points to two things I have been saying in public talks about computers, AI, and the human mind. In short, AI definitely has “karma,” and therefore, we should definitely fear it.
This article naturally made me worry. It worries me because I have already been worrying about this issue of AI a great deal, intuitively, and the latest developments in AI clearly confirm that the suspicion is merited. Several years of a latent, unspoken gloom were given flesh and bones by Sam Harris’s great TED talk on AI (“Can We Build AI Without Losing Control Over It?” — link below). Rather than newly informing me, Sam’s talk just confirmed — with facts and Sam’s extraordinary clarity — things already darkly suspected. His later conversation with Nick Bostrom, “Will We Destroy the Future?”, only deepened that confirmation. I am no specialist, and actually I don’t read much, but this AI is something I am definitely worried about. I am worried about it precisely as a student of meditation, a student of mind and its discontents.
And God said: ‘Let us make man in our image/b’tsalmeinu, after our likeness/kid’muteinu (Genesis 1:26–28)
In talks both public and private, I often use examples of digital performance/experience to describe aspects of the meditation experience. The operations of computers are a perfect way for explaining simply the operations of our minds because human minds have birthed computers, and we have given them the negative capacities of our minds just as much as we have designed the amazing functions that they possess. So, when people come with all sorts of questions about meditation practice or human psychology, it is so simple to answer their questions using the terminology and function of our digital ecosystem. In the past, teachers used examples from “nature” to explain mind and human nature; these days, humans are removed from nature, and connect far more readily to the examples drawn from their ubiquitous computer existence.
And God said: ‘Let us make man in our image/b’tsalmeinu, after our likeness/kid’muteinu
I often describe how complicated thinking comes from “keeping too many apps open” in our minds (and that meditation “shuts them down”); that Zen meditation is itself a very powerful “mind-hacking tool” that cuts through the normal barriers preventing a sense of spiritual/mental wholeness; that we constantly “download data” through the senses all day that, when not well integrated through a consistent meditation practice, makes us tired or run-down, scattered, edgy, stressed, or makes us give suffering to others in an uncontrollable way; that we are constantly downloading “malware”- and “computer-virus”-like thoughts through constant close-contact with other human beings in the modern world, and therefore must practice constantly in order not to be controlled by it all; that when we are attached to our thinking, we cannot “download” our true (innate) wisdom from the “cloud” of don’t-know. I talk about how, just as our computers perform much more slowly when there are too many programs open simultaneously, or when we are viewing a movie on the computer and try to do other computer functions at the same time, there is a slowing-down, even a crash, and how this is connected to the very ways we use our minds, overburdened with sensory stimulation and now “continuous partial attention.” I often speak about how “karma” can be understood as something that we have “downloaded” from family or society and its many experiences.
I often talk about the fact that it is no mistake that it was a Zen meditation practitioner (Steve Jobs) who had the first intuition of the intuitive integration of human-machine functionality called the smartphone. Dae Soen Sa Nim also sometimes used to point to a student’s head and say, “Your computer too complicated.”
It’s not a novel insight: Our minds function like computers because our minds designed computers. Our minds designed computers to function like them, to retain and process abstract bits of information as nearly as possible to the linear ways that we retain and process and act on them.
So, naturally, one of the great accessible speakers on the topic is Sam Harris, a neuroscientist and serious, seasoned meditator. Never mind that I consider him — on a vast range of human insight — to be a modern Buddha who is, in every respect, a shining Jeremiah of our Age. His TED talk on the subject in 2016 (nearly 5,000,000 views to date) presents the tersest explanation of the profound existential conundrum in which we now find ourselves. Everyone to whom I have introduced this emphatically prophetic talk has come away with eyes opened so wide they can only shiver at the horizons looming mercilessly forward into view:
The Future of Life Institute summarizes Sam’s thrust pretty well:
In the talk, [Sam Harris] clarifies that it’s not likely armies of malicious robots will wreak havoc on civilization like many movies and caricatures portray. He likens this machine-human relationship to the way humans treat ants. “We don’t hate [ants],” he explains, “but whenever their presence seriously conflicts with one of our goals … we annihilate them without a qualm. The concern is that we will one day build machines that, whether they are conscious or not, could treat us with similar disregard.”
Harris explains that one only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:
1. Intelligence is a product of information processing in physical systems.
2. We will continue to improve our intelligent machines.
3. We do not stand on the peak of intelligence, or anywhere near it.
Humans have already created systems with narrow intelligence that exceeds human intelligence (such as computers). And since mere matter can give rise to general intelligence (as in the human brain), there is nothing, in principle, preventing advanced general intelligence in machines, which are also made of matter.
But Harris says the third assumption is “the crucial insight” that “makes our situation so precarious.” If machines surpass human intelligence and can improve themselves, they will be more capable than even the smartest humans—in unimaginable ways.
Even if a machine is no smarter than a team of researchers at MIT, “electronic circuits function about a million times faster than biochemical ones,” Harris explains. “So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.”
Harris wonders, “How could we even understand, much less constrain, a mind making this sort of progress?”
Harris also worries that the power of superintelligent AI will be abused, furthering wealth inequality and increasing the risk of war. “This is a winner-take-all scenario,” he explains. Given the speed that these machines can process information, “[for one country] to be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.”
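The figures Harris quotes above are simple arithmetic on the claimed million-fold speed advantage of electronic circuits over biochemical ones. As a back-of-the-envelope check (assuming the talk’s round numbers, which are mine to arrange, not Harris’s code):

```python
# Back-of-the-envelope check of the speed-advantage arithmetic quoted
# above. Assumes the talk's round figure of a ~1,000,000x speedup of
# electronic circuits over biochemical ones.
SPEEDUP = 1_000_000
WEEKS_PER_YEAR = 52

# One week of machine runtime at human-level ability:
human_weeks = 1 * SPEEDUP                      # 1,000,000 human-weeks
human_years = human_weeks / WEEKS_PER_YEAR
print(f"One week of runtime ≈ {human_years:,.0f} years of human-level work")
# ≈ 19,231 years, which Harris rounds to "20,000 years"

# A six-month head start, amplified by the same factor:
lead_years = 0.5 * SPEEDUP                     # 500,000 years
print(f"A six-month lead ≈ {lead_years:,.0f} years ahead")
```

The point of the exercise is only that the vertigo-inducing numbers are not rhetoric; they fall directly out of the single speedup assumption.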
So, the development of AI by human beings who, themselves, are nearly always under the operation and control of karmic software that they neither perceive nor recognize, nor exert the least bit of control over, gives very little confidence in the prospects of AI systems. You might like the way your Alexa or WhatsApp (I do!) or Google Translate (I DO!) functions, but that does not equip you, even microscopically, for the immense — insuperable, as I see it — challenges and impossibilities of runaway technology. In fact, it humanizes and makes fatally convenient a tool which is far, far more capable of running away with our very humanity than we are even remotely able to comprehend.
As these tools and technologies become ever-more essential for how we carry out our lives — from dictation to law enforcement — we erase the lines of clear perception and enter fully and irreversibly into the realms of our darkest impulses.
From the New York Times article today:
Researchers have long warned of bias in A.I. that learns from large amounts of data, including the facial recognition systems that are used by police departments and other government agencies as well as popular internet services from tech giants like Google and Facebook. In 2015, for example, the Google Photos app was caught labeling African-Americans as “gorillas.” The services Dr. Munro scrutinized also showed bias against women and people of color.
BERT and similar systems are far more complex — too complex for anyone to predict what they will ultimately do.
“Even the people building these systems don’t understand how they are behaving,” said Emily Bender, a professor at the University of Washington who specializes in computational linguistics.
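The mechanism of “inherited bias” the article describes can be sketched with a toy example (mine, not the article’s, and nothing like BERT in scale): a trivial “model” that learns only word co-occurrence counts from its training text will faithfully reproduce whatever skew that text contains.

```python
# Toy illustration (hypothetical data; not BERT): a trivial "model"
# that learns word associations purely from co-occurrence counts.
# If the training corpus is skewed, the learned associations are
# skewed -- the model inherits its makers' mind-habits wholesale.
from collections import Counter
from itertools import combinations

# Deliberately skewed toy corpus, invented for illustration:
corpus = [
    "doctor he hospital", "doctor he clinic", "doctor he surgery",
    "nurse she hospital", "nurse she clinic",
]

# "Training": count which words appear together in a sentence.
cooccur = Counter()
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        cooccur[(a, b)] += 1

# The model now "believes" doctor->he and nurse->she, because that
# is all its data ever showed it:
print(cooccur[("doctor", "he")], cooccur[("doctor", "she")])   # 3 0
print(cooccur[("nurse", "she")], cooccur[("nurse", "he")])     # 2 0
```

Nothing in the counting procedure is biased; the karma lives entirely in the data the humans chose to feed it. Scale that up a few billion parameters and you have the situation Professor Bender describes.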
So, as I said above, one of the nascent, insidious dangers of AI is that it has become just so normal to live alongside it with the same lack of apprehension as someone who has brought a cute little lion cub into their home as a novelty pet that might yet adapt to our domestic ways. I have serenely accessed (and benefitted from) Google’s search AI several times just in the speed-writing of these quick reflections (it’s hard not to feel a deep urgency about these things). AI lives in our pockets already. As a result, according to Sam, in the talk above, most people “seem unable to marshal an appropriate emotional response to the dangers that lie ahead.” And that worries me a lot more. This is really just another existential fuel for the burning fire of the Great Question. If anything fires up don’t-know, it’s this subject.
Reading the New York Times article this morning in the waiting room of the doctor’s office just inflames the possibilities further: if AI begins to “inherit” the biases — i.e., thinking-habits, “mind habits” (Dae Soen Sa Nim), meaning “karma” — of its flawed creators (who, themselves, “run” on reams of unexamined mind-habits), this is no longer the benign relationship of search queries such as “How do I share an Instagram Story to my WordPress blog?” It’s hard enough managing our own in-built karmic tendencies, and then the countless mind-habit biases of the myriad humans in our ever-widening constellation of human relations, stretching from each one of us out to family, friends, partners (and their families), work colleagues, into sangha members, local community, national compatriots, fellow world citizens, right up into the dank reptilian recesses of a Donald Trump or Vladimir Putin, finger-on-the-button and tearing-up-cooperative-agreements-and-weapons-treaties.
The Lord with his own two hands created mankind; and in a facsimile of his own face. (2 Enoch 44:1–3)
We have not been so good at managing the karma-glitches in our software. As individuals, as a species, we have not come remotely close to being the true Lords of our own cause-and-effect behaviors. Our mind-habits dictate so much of the destruction of our health, our relationships, our societies, and now even our entire biosphere. What if super-intelligent machines possess these blind mind-habits, too? Well, AI already possesses these karmic “glitches,” inherited from its masters.
I cannot recommend with greater urgency Sam’s great conversation with Nick Bostrom (“one of the most provocative philosophers I can think of,” Sam says), Director of the Future of Humanity Institute at Oxford University — a division of Oxford focused on the study of existential risk, the risks to human existence itself. It is definitely must-listening.
They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we’re living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. But it’s especially their discussion of AI that is most relevant. Here is the link, and you should set aside some time to hear their whole, absorbing talk in its mouth-gaping entirety:
So, yes, I am worried about AI. My worry comes not so much from a fear of tech as from our shared experience of how little effort we put into examining and digesting karma, our mind-habits. This is my basic sense of things. It’s not a complete picture, and I claim no expertise or leading knowledge. I might even be pathetically uninformed, since I really cannot read books these days and prefer nearly always to spend the remaining hours available husbanding energy and enthusiasm for sitting meditation.
But, this NYT article really hit my Buddhist training right straight in the solar plexus today: AI is being developed, at breakneck speed, by a species which is notoriously blind to its own karmic tendencies, with the first country-tribe of these karmic-disasters to “reach the goal” instantly becoming the world-dominating force which would (potentially) reduce every other seeker — competitor and friend — into utter, irrevocable subservience. The incentives for “winning” are scary-immense; the outcomes for “losing” unforeseeable, and impossible to contemplate.
And as if this NYT article were not enough: just very recently, the esteemed American documentary program Frontline presented this fascinating program called “In the Age of AI.” It describes the race between China and the USA to gain mastery in the field of AI. “Whoever arrives at AI first controls the future,” many people say. I recommend watching this program for the latest insights into where this “karma” is all going — from the surveillance capitalism of social media to the surveillance state of AI, which China is already implementing: “social credits”; AI face-recognition software that identifies even jaywalkers, shames them with their picture posted instantly on a vast screen at the intersection, and fines them before they ever reach the other side of the street; and a face-recognition-only, cashless economy which tracks one’s every purchase, and which learns one’s behavior to build predictive models of one’s future behavior, choices, even thoughts and feelings.
So, we still don’t see our karma, even as tribes, as societies, as nations, and as a species. I would not want even my best and clearest human friend armed with tools so infinitely absolute and unchallengeable as what AI clearly and demonstrably offers.
In the end, I am not a social analyst or commentator. These are just the ramblings of a meditator who has been left too long with boundless internet access in his hands, and a lifelong foreboding about the human condition and my own unwitting place in all of it, and how I got here, and what should be done about these unfortunate conditions so received. I will never pretend to be making some new or particularly piercing insight here, and on many points will probably be proven to be wrong.
Also, I am not just a Luddite, some anti-tech Buddhist monk so attached to “nature” and fearful of the benefits of these silicon-and-metal demigods. I grew up among computers — big big really heavy computers — long long before 99% of the population ever conceived of their existence or role in their lives. In fact, it was the sale and maintenance of advanced Wang Laboratories computer systems which funded my beloved after-school peanut-butter-and-jelly sandwiches with a cold glass of milk, Catholic school beating-sessions from nuns, alienated summers on Cape Cod seeking shade from a blinding sun and vague social stratifications, and Yale education among English aristocracy, Jews, gays, New York socialites, Hollywood movie stars, children-of-prominent-intellectuals — all the things a northern Jersey boy never gets to know in his danker childhood, yet grows to view with fascinations which shape and enrich him. My Father was a skillful salesman (and executive) with Wang — a company which was once considered a significant rival to IBM, before Wang decided (early 80s or so) that this new PC-thang would never catch on, that regular folks would never make computers their own and buy them in such quantities for personal use in the home as to turn a profit. (And yet the explosion occurred, and Wang is now no more, and a company never dreamed of in those years is now the richest corporation in the world, and transmits these words to you while you sit on the toilet or on a subway.)
I do not rebel against AI because of any weak-flower rebellion against “the new,” and too-hard holding of the ancient, the traditional, the temple, the mystic, the “Zen.” Through my good Dad’s earnest paycheck, I am also a child of the computer.
But this alone is not the dilemma we are facing. It is not enough merely to question things from the point of view of the subject-object: AI is bad. AI is dangerous. AI threatens us.
AI does not threaten us, any more than this pen before me, this sound outside the window, or this cup of coffee on the desk threaten me/us. It is our own lack of insight into our own minds that is the greatest threat, and AI is merely its own best (worst!) catalyst.
Really, there might be no better words directing us to the heart of the most essential matter on this than Dae Soen Sa Nim’s. In his seminal book, Only Don’t Know, we have a collection of exquisite letter-exchanges between the Zen master and his Western students, now quaintly preserved for us in the pre-Internet amber of an intimate one-on-one letter coming to a master, and a one-on-one answer arriving to the student, albeit after several weeks of delay.
It is important to note that this exchange happens in the early 1980s: Steve Jobs’s dream is still, as yet, some years away. My own father’s computers have not entered the experience of regular people’s lives, much less the experience of an itinerant Zen master from Korea traveling through the strange, weird lands of post-Watergate America.
In this simple yet priceless exchange, an American student asks about the compatibility of science with meditation practice. Dae Soen Sa Nim, as usual, scans the issue like a UPC scanner at the supermarket checkout and issues an insight so completely updated to the question of this moment that it is, almost digitally, one might say, calibrated to any question raised by AI or digital reality or anything else either dreamed of or not yet imagined by this babbling essay.
Dae Soen Sa Nim’s main point is this, and his answer has clear force for how we should view any of these reports about the “advance” or problematics of AI or anything: WHO is the driver?
We have consciousness, and this consciousness is like a computer. A computer does not work by itself; somebody controls the computer. Our consciousness also does not make itself work; “something” controls our consciousness. Then our consciousness makes science. So this something controls consciousness, and consequently science. This “something” is not science, not consciousness, but has consciousness and science. So I say to you, if you attain this “something,” you understand consciousness and understand science. The name for that is Zen.
So, AI does have karma. And we should definitely, clearly be absolutely concerned about that. The reason is because “karma” means mind-habits, and unexamined mind-habits nearly always produce suffering for others.
As usual, Dae Soen Sa Nim’s words about the present conundrum are clear, piercing, prescient, even prophetic.
This is it. Before developing better Alexas and better self-driving cars — all based on AI, for our convenience — we should really make effort to develop insight into the driver of these cars, the listeners of these automated sound-systems.
Yeah, I’m worried. I really don’t believe in human beings’ ability to safely regulate and perceive this stuff. The Chinese government is already employing its vast pools of personal-data, through algorithms and self-learning machines, to round up, imprison, and oppress a whole ethnic group, the Uighurs. In the Frontline documentary, we are told by leading authorities of the ways in which China is already exporting these manifest technologies, and their supporting infrastructure, to African countries.
If you think that Sam Harris might not yet qualify as a true prophet of where we are going, I leave you with the mesmerizing soul-insights of Kraftwerk — prophets of an order way way beyond anything “predicted” by moldy biblical texts.
Interpol and Deutsche Bank
FBI and Scotland Yard
Interpol and Deutsche Bank
FBI and Scotland Yard
Business, Numbers, Money, People
Business, Numbers, Money, People

Computer World
Computer World

Interpol and Deutsche Bank
FBI and Scotland Yard
Interpol and Deutsche Bank
FBI and Scotland Yard
Business, Numbers, Money, People
Business, Numbers, Money, People
Computer World
Computer World

Interpol and Deutsche Bank
FBI and Scotland Yard
Interpol and Deutsche Bank
FBI and Scotland Yard
Crime, Travel, Communication, Entertainment
Crime, Travel, Communication, Entertainment
Computer World
Computer World
Kraftwerk already saw where this was heading: “…Business, Numbers, Money, People… Crime, Travel, Communication, Entertainment.”
AI has karma. We should fear in it what we fear in our own false understanding of our self. With this subject, as with everything, really, the way is just “only don’t know.” Instead of better, better, better tech ubiquity: who “controls” our mind-computer?
Entering the Temple of Apollo, in sacred Delphi, the ancient words call us to a truer path than chasing all this tech “knowing”: γνῶθι σεαυτόν. “Know thyself.” The road that leads to insight into not-knowing.