“Holy light! Illuminate the way that we may gather the good we planted. Are not our deeds known to you? Do not let us grow crooked, we that kneel and pray again and again.”
(Isha-Upanishad, 1938, p. 17)
Abstract
Modern science, even philosophy, tends to be specialized. If we compare knowledge to geographical maps, each scientific field focuses on building a detailed map of a relatively small area. At the same time, we have been struggling to piece these smaller maps together into a comprehensive map of reality—one that would unite not just the natural sciences but also the theory of the human mind, psychology, epistemology, and metaphysics into a coherent theory.1
This article is my attempt to offer an outline of such a theory. I will start with metaphysics: what reality is, and how one can know it. Then, using the language of computer science, I will offer a model of the human mind: what its purpose is, what its two major components are, and how they work together to fulfill that purpose. This model also completes the theory of knowledge and truth—what those are, and how they are not limited to what we can prove empirically. I will also discuss why knowledge itself is optional and what the alternative is—one that allows an individual to remain functional and even thrive (to the extent possible in present-day society) without possessing much knowledge at all.
Furthermore, the article will address ethics and morality, how these could be objective, and why it might be desirable for an individual to lead a good life (a.k.a. enlightened self-interest). The article will also propose avenues for testing the theories presented here.
1 A Theory of Everything
Earlier attempts to construct a theory of everything trace back to antiquity. Notably, in ancient Greece, we witness the most systematic efforts to achieve this goal, beginning with pre-Socratic philosophers such as Heraclitus and Parmenides, followed by Socrates himself, and continuing with Plato and Aristotle. It was during this period that the concept of the One—a reality that is singular and whole, with every part connected to form a coherent narrative—first emerged. This reality is governed by reason and, as such, is comprehensible to the human mind (Zagzebski, 2021, pp. 1-5).
The challenges arose immediately. Our knowledge of the world was quite limited at that time, but the most puzzling and frustrating aspect was that even the knowledge we had uncovered often proved impossible to communicate to a typical individual (later referred to, rather condescendingly, as "the masses").2
This is why the main focus of this article will be on the theory of mind—how we know things and how we don’t. However, before delving into this, it makes sense to start by describing the object of knowledge: what is the thing we seek to know, and what makes it possible to know it.
1.1 Cartesian Doubt
We start with the notion of Cartesian doubt. It sets strict limits on what one can know for certain—which is essentially nothing, save for one notable exception. What you must know for sure is that you—your mind—exist in some form. At this very moment, something that is you, the reader, is reading these words—and that something is real, as are its perceptions. As to whether those perceptions reflect a reality outside, or some simulation, or your dreams—that you cannot tell and you never will.
Nor can you know in what form your mind exists. You can’t even be sure it—you, that is—existed ten seconds ago!
1.2 The Foundation of Knowledge
This limitation—of what we can know for sure—is problematic. I propose that the purpose of the mind is to decide on its next step. However, if we can’t know anything, we can’t know what to do about anything. Therefore, to fulfill its purpose, the mind needs to find a way to gain knowledge that is actionable. But how, if we’ve established that we can’t know anything for sure?
I propose that to solve this conundrum we abandon the “for sure” part and start, instead, making certain assumptions about our reality. We can then use these assumptions as the foundation of actionable knowledge—just as geometry builds on four axioms.3 To that end, we make only assumptions that are testable, and only the absolute minimum of them.
The following are the three foundational axioms of knowledge:4
There exists one and only one objective Reality, a reality we all share and belong to. "Objective" means that Reality's existence is independent of ours—it was there before us and will persist after we're gone. Every event within it, past, present, or future, occurs for everyone regardless of their perspective.5
This Reality is deterministic. Nothing in it happens at random, but each event is caused—created—according to set laws by some prior event.6 This makes our Reality comprehensible and, to a significant extent, predictable.
Finally, humans are capable of understanding this Reality—by tracing events to their causes, identifying repeating patterns, and discovering the processes behind them—starting with the laws of nature (the laws of creation) that govern our realm and our lives in it.
Again, we assume the above statements to be true as long as they conform with our experiences (e.g., until we discover that we cannot rely on objects in a locked room to remain as we left them, or on a dropped glass to fall down instead of up, etc.).7 Effectively, we adopt the scientific method. Scientific knowledge—of how the machine of Reality works—is the actionable knowledge that we are after (because it allows us to predict outcomes).
Note that this section described the object of knowledge—what it is about—but not exactly what knowledge is. The following sections will expand on the latter.
Finally, we should recognize that little knowledge is necessary for us to survive and accomplish certain things. Animals can do this, and so can humans. Our capacity for knowledge builds on this basic ability, albeit greatly expanding what we can accomplish. Now, let's discuss both capacities—for knowledge and for living without it—in more detail.
2 The Two Minds
As I proposed earlier, the purpose of the brain, both in humans and animals, is to determine the appropriate course of action. Whether taken in response to an immediate stimulus or as part of a long-term plan, each action serves as the brain's answer to the same fundamental question: “What is my next step?”
To that end, humans rely on two principal cognitive faculties. One, inherited from our animal ancestors, is intuitive, heuristic, fast, and automatic (though it can be deliberately trained for specific tasks). Operating below the level of conscious awareness, this faculty makes its choices through statistical inference. It is responsible for forming our habits and a certain class of ideas (something Plato called "beliefs," John Locke "simple ideas," and Immanuel Kant "intuitions").
While not capable of knowledge (inference means guesswork), the intuitive faculty can make reasonably accurate predictions given enough relevant experience. The more experience it has, the better its guesswork becomes—its learning tends to progress rapidly at first before gradually slowing down and reaching a plateau.
The other faculty—responsible for rational understanding (or Plato’s "knowledge," Locke’s "complex ideas," or Kant’s "concepts")—is slow, deliberate, and conscious, requiring intentional effort to operate. This faculty attempts to answer the "what’s next?" question by piecing together a mental simulation of Reality and using it to predict real-world outcomes.8,9
The notion of the inherent duality of the human mind can, again, be traced to antiquity and has been rediscovered by many thinkers throughout history. We can recognize it in the charioteer versus horses of Plato’s Chariot Allegory,10 and in the ego versus id/superego of Sigmund Freud’s model of the mind. More recently, it reappears in Kahneman's (2024) System 2 vs System 1 dichotomy or Manson's (2020) Thinking vs Feeling Brain.11
Recent developments in technology have made it possible to offer a more accurate account of each of the two subsystems. In fact, it appears that we have reproduced their basic functions in silicon, creating their artificial counterparts. This is significant because, although many ideas about the mind date back to antiquity, we lacked the language to describe them properly. Now, thanks to computer science, we can draw parallels and describe the mind as an information-processing device, which, I maintain, it is.
2.1 The Rational Mind
Let’s start with the second faculty because it represents our conscious self—the “I”, the conscious agent in Descartes’ maxim “I think, therefore I am.” The “I think” part eventually became a source of confusion because what “thinking” is—and who is doing it—appears to differ substantially between individuals.12 For Descartes, however, thinking meant a deliberate act of contemplation—deliberate in the way we walk deliberately, choosing when, where, and how fast we go. This is also what I mean by “understanding”.
As proposed in the previous section, we understand something (some part of Reality) when we succeed at creating a mental simulation of that thing. By mental simulation, I mean the kind we see in many computer games—it is spatial and visual. Consider Microsoft Flight Simulator, which recreates a shallow virtual copy of the real world.13 Each piece simulates its real counterpart—terrain, gravity, air, the airplane and its components. The game then simulates the interactions between these parts, aiming for an accurate representation of those in the real world.
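To make the analogy concrete, here is a minimal sketch, in Python, of what such a shallow simulation could look like. This is my own toy illustration (the single law of motion and all the numbers are made up), not a claim about how any real simulator or the brain is implemented: each part carries only its high-level state, and stepping the parts' interactions is what lets us predict an outcome.

```python
# A toy "shallow" simulation: the aircraft is modeled only at the level of its
# mass, altitude, and vertical speed; the engine's internals are deliberately
# absent, just as in a shallow copy of the real world.

GRAVITY = 9.8  # m/s^2, the only law this little world knows about

class Aircraft:
    def __init__(self, mass, altitude=0.0, vertical_speed=0.0):
        self.mass = mass
        self.altitude = altitude
        self.vertical_speed = vertical_speed

    def step(self, lift_force, dt):
        """Advance the simulated world by dt seconds of interaction."""
        net_accel = lift_force / self.mass - GRAVITY
        self.vertical_speed += net_accel * dt
        self.altitude = max(0.0, self.altitude + self.vertical_speed * dt)

# Running the model forward answers a "what happens next?" question:
plane = Aircraft(mass=1000.0)
for _ in range(100):                    # predict 10 seconds ahead, 0.1 s at a time
    plane.step(lift_force=11000.0, dt=0.1)
print(f"Predicted altitude after 10 s: {plane.altitude:.1f} m")
```

Shallow as it is, the model is already good enough to predict an outcome, which is all a simulation is asked to do.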
The human brain is capable of daydreaming a game-like simulation of reality, with its visual cortex taking on the role of a GPU. I propose that this simulation—and only the simulation—constitutes knowledge.14 As such, knowledge is necessarily personal (subjective), with each individual assembling their own copy from scratch. The extent to which one completes their simulation—and the accuracy of its assembly—determines the amount of knowledge one possesses, and it varies greatly among individuals. In fact—and unlike the intuitive faculty, which starts learning its statistical models the moment we open our eyes for the first time—the rational faculty itself must be discovered by the individual before it can be developed and put to use.
Thus, the primary goal of a rational human is to develop their own understanding of Reality—by assembling its virtual copy in their imagination and striving to make it as accurate and as complete as possible.15
To that end, we begin with our natural capacity for constructing three-dimensional models of our surroundings. Wayne Gretzky, a Canadian hockey star, described the mental process that contributed to his success: upon receiving the puck, he would know where the other players were and where they would be in the next few seconds. I propose that Gretzky's brain continuously visualized a simulation of the hockey rink. This allowed him to seamlessly switch between watching the play through his eyes and viewing the broader rink in his mind, giving him his exceptional situational awareness.16
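As a toy illustration of the kind of prediction involved (the positions, velocities, and names below are entirely hypothetical), even the crudest rink model, current positions projected forward along current velocities, already answers the "where will everyone be?" question:

```python
# A minimal sketch of projecting a mental model of the rink a few seconds ahead.
# A real brain-side simulation would be far richer; the principle, predicting by
# running a model forward in time, is the same.

def project(players, seconds):
    """Return the predicted (x, y) position of each player after `seconds`."""
    return {
        name: (x + vx * seconds, y + vy * seconds)
        for name, (x, y, vx, vy) in players.items()
    }

# current position (m) and velocity (m/s) of each player, as observed
players = {
    "teammate": (10.0, 5.0, 2.0, 0.5),
    "defender": (15.0, 8.0, -1.0, 0.0),
}
print(project(players, seconds=2.0))
# {'teammate': (14.0, 6.0), 'defender': (13.0, 8.0)}
```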
This basic ability to model our immediate surroundings can be extended to model parts of Reality that cannot be directly observed. Examples include events in the past or future, the interior of a burning star, the machinery of our minds, the banking system, or personal relationships. Creating more abstract simulations, however, takes mastering a particular skill—the skill of a detective, or scientist. It involves piecing together a model like a puzzle, revealing the hidden-under-the-hood parts of our Reality and, eventually, Reality as a whole.17
The assembly part is particularly important because, as we will soon see, some of the greatest benefits of having the simulation come from our ability to connect smaller models into larger ones. Reality itself is singular and whole—and so should be one’s simulation of it.18
The notoriously difficult-to-translate word logos (λόγος) could well be the ancient Greek term for the simulation.19 It derives from the Proto-Indo-European root *leǵ- which (like another of its derivatives, the proto-Hellenic légō) means “to gather”, “to assemble”. The Russian с-лож-ный (s-lozh-ny), another derivative, means “complicated” (as in consisting of many parts). In English, “to gather” still carries the meaning of “to understand”. Another derivative, Tocharian läk-, means “to gather with one’s eyes”, and English “look” might also stem from the same root. It is because we visualize the simulation that, in many languages, “to see” has the meaning of “to understand”.20 The notion of the inner (third) eye may be relevant for the same reason.
“This much I will say, and leave the rest hidden: Your intellect is in fragments, like bits of gold scattered over many matters. You must scrape them together, so the royal stamp can be pressed into you. Cohere, and you'll be as lovely as Samarcand with its central market, or Damascus. Grain by grain, collect the parts. You'll be more magnificent than a flat coin. You'll be a cup with carvings of the king around the outside.” (Rumi, 1997, p. 241)
I further propose that it is the ability to model ourselves as part of the larger simulation that makes humans self-aware.21 This way an individual can reflect on themselves as having subjective experience, thus making such experience a conscious one.
2.1.1 Truth And Critical Thinking
Jesus: I came to bear witness to the truth!
Pilate: What is truth?
Truth has traditionally been hard to define. I propose that one reason for this is that truth is, in a way, a paradox. On one hand, it is the individual’s simulation of Reality, making truth personal (subjective?) and, therefore, relative. On the other hand, each person strives to make their simulation an accurate representation of the same Reality. This puts us in pursuit of the absolute truth, striving to make our individual simulations—our knowledge, our truths—identical.22
Of course, in practice, our simulations vary drastically, and not just because the degree of their completeness varies greatly between individuals. What makes it particularly difficult for us to understand each other is the fact that, depending on how much empirical knowledge a person possesses, they might assemble their simulations in different ways.23
Let me illustrate what I mean. Imagine completing a part of a jigsaw puzzle, only to realize that there is a missing (lost?) piece in the middle of it. There are two ways to tackle this problem: we can keep looking for the lost piece, or we can simply make one. After all, seeing all the pieces around it gives us a pretty good idea of how the missing piece must look!
In the example above, the pieces we find represent empirical knowledge—the facts derived from direct experience. The pieces we make ourselves represent the parts of Reality still hidden from us. The challenge arises because, depending on the surrounding pieces a person has managed to acquire, they might imagine the missing piece differently.24
This, therefore, answers Pilate's question. Truth is personal, representing the individual's simulation. Ideally, however, our personal truths should align with Reality. And then there are our actual circumstances, in which our truths remain much further apart than they should.
The puzzle analogy also illustrates what “critical thinking” entails. When others share pieces of their puzzles with us, we regard each such piece as their opinion. It becomes our truth only when (and if) we manage to fit that piece into our own copy of the puzzle. This process of evaluating an opinion against our existing paradigm is what we call critical thinking. Of course, it would be impossible for a person to think critically unless they have already assembled enough of their puzzle to evaluate others’ opinions against.
Similarly, being able to “think for oneself” means having enough of the simulation completed to rely on one's own understanding. Unfortunately, this too is far less common than it should be.
2.1.2 The Human Condition
Using a computer analogy, the Logos, the mental simulation of Reality, is our software. However, unlike your typical laptop, humans don’t come preinstalled.25 Nor do they have a data port or any other means of downloading their copy of the simulation. The only way for an individual to acquire one, therefore, is to assemble it piece by piece in their mind. This may be the reason why humans have evolved to spend an extra five to seven years in childhood—to give us the time to accomplish this task before taking on adult responsibilities.
This not-so-intelligent design makes human development uniquely challenging. Animals can learn all the skills they need from experience. Humans, too, can learn many things that way, particularly by observing others. Unfortunately, the capacity for understanding—for assembling the simulation—is not one of those skills. Simply by watching how others (e.g., their teachers) use their rational minds, children often learn to imitate rational reasoning rather than acquire the actual ability.
This is why, “Learning, no matter how much, does not teach understanding.” (Heraclitus, DK B40)
Moreover, simply describing (explaining) our knowledge—our simulation—to another person doesn’t guarantee they can piece together their copy. Even with all the puzzle pieces and the most detailed instructions, they must still do the work and assemble that knowledge in their mind. Truly, you can show a person the door, but only they can walk through it.
It appears that, as a species, we have yet to find a way to guide children consistently and reliably towards discovering and fully developing their rational faculties.26 We give them bits of knowledge but don’t explain what to do with it. As a result, only a small minority figure out the “piecing of the puzzle” part, while many do not. To these students, it would appear that we are asking them to memorize random facts. This might explain the disparities in classrooms, where the “brightest” student seems leaps and bounds ahead of an average one. One might conclude that an average student struggles with some kind of learning disability—and, perhaps, this impression isn't too far from the truth.
We have always known—or at least felt—that rationality is a real phenomenon, but we could not explain how it worked or, indeed, properly separate its traits from those of the other cognitive faculty (in other words, distinguish true rationality from its imitation). This article attempts to fill this theoretical gap by suggesting that the rational mind works by assembling and visualizing a mental simulation of Reality (as opposed to the statistical inferences—the guesswork—of the intuitive mind).
I propose that once we find an effective way to guide every child toward developing this mental faculty, the academic performance we currently regard as exceptional will become the norm.
Furthermore, I suggest that simply raising awareness of these theories could be a game changer—by giving parents and educators, for the first time, a clear target to pursue. Our success in ensuring the complete development of the rational mind in all children would serve as the ultimate proof of the rationality-as-simulation model presented here.27
Aristotle opens his Metaphysics with the famous line, "All men by nature desire to know." I believe everyone does at some point. However, developing the capacity for knowledge in our present circumstances is far from easy, and many give up on this task. The implications of these failures cannot be overstated. This, I believe, is the problem at the root of the human condition.
2.1.3 Beyond the Classrooms
The effects of fully developing one's rationality—and completing one's mental simulation—extend far beyond academic performance. For one, the more complete individual simulations become, the more they align. This alone removes the grounds for conflict.
Furthermore, it has been suggested since ancient times that the capacity for reason gives one the ability to distinguish right from wrong, good from evil. A complete simulation provides a map of Reality, enabling us to see where we are and where we are going. Without this map, we are left to stumble blindly through life, inadvertently hurting ourselves and others.
“No man chooses evil because it is evil; he only mistakes it for happiness, the good he seeks.” (Mary Wollstonecraft, 1790, p. 136)
Many historical, psychological, and societal trends take on a different light when viewed through the lens of humanity’s ongoing struggle to fully develop everyone's rational faculties. In particular, it is interesting to speculate which historical events represent attempts by some individuals to tackle this problem, which represent reactions to those attempts failing, and which represent efforts to identify the responsible party.
2.1.4 Zone of Proximal Development
“... For the time being I gave up writing—there is already too much truth in the world—an overproduction which apparently cannot be consumed!” (Otto Rank, as cited in Becker, 1973, p. ix)
Now, why is that—what makes truth so hard to understand? There are undoubtedly psychological reasons why a person would resist a challenge to their deeply ingrained ideas, and avoiding an existential crisis is one of them. However, there is an even more basic problem, something known in psychology as Vygotsky’s zone of proximal development. Let’s return to the jigsaw puzzle analogy to illustrate how it works.
Imagine you've completed some parts of the puzzle, and now you're given a piece that lies far outside any completed section. Whether you find that piece intriguing or nonsensical, you wouldn't know where—and whether—it belongs in the puzzle. Therefore, you wouldn't be able to see for yourself whether it is true or not. For this piece to become your truth, you would need to complete the parts around its location—which, of course, might take time, effort, and, at times, quite a bit of luck.
Your zone of proximal development, therefore, comprises the area adjacent to the parts you have completed. This is the knowledge you are ready to understand—unlike the knowledge that lies farther away.
Our understanding being so constrained is the reason why the truth often falls on deaf ears. However profound Jesus’ appeal to love our enemies was, few people at the time had completed enough of the puzzle to understand what it meant and how it was even possible.
2.2 The Intuitive Mind
If there is one key point the reader should take from the description of the rational mind above, it is this: the rational mind—or, rather, its discovery and development by the individual—is optional. Our societies have long been structured to enable survival, if not thriving, without relying much on one's rational faculties. What makes this possible is the presence of the alternative cognitive faculty, that of the intuitive mind.
Perhaps the easiest way to describe the intuitive mind is this: it is the original self-learning AI. Or, rather, the recent wave of AI is the artificial incarnation of the neural network supercomputers in human brains.
The critical piece to note about the intuitive mind is that it operates almost entirely in our subconscious. In other words, we, as our conscious selves, have no visibility into how it works and what—and how much—it does. Only the results of its formidable information processing power are communicated to us, most commonly in the form of feelings.28 For example, the intuitive mind, scanning and analyzing our surroundings, might conclude that we are in a dangerous situation. It then communicates this conclusion by making us feel anxiety. The same is true for other emotions—I propose that the intuitive mind single-handedly determines a person’s emotional state at any given moment.
How does it “think” then, and how does the intuitive mind arrive at its conclusions? Being a neural network, it accomplishes this through statistical inference, with our lifetime of experience being the data. Perhaps at this point, it would make sense to describe how it works using a simple example.
2.2.1 Training a Neural Network
Suppose we want to train a neural network to tell if there is a bicycle in a picture. Here's how it could be done:
Show it a picture and let it guess if there is a bicycle in it.
Tell it the right answer, either confirming or refuting its guess. This step is crucial as it allows the neural network to learn from experience—to build and refine its models (its ideas).
Repeat the steps above.
After showing the neural network a few million pictures, it will develop a pretty good idea of how a bicycle looks in a picture. Note that this idea is atomic (indivisible)—something John Locke referred to as “simple ideas”. Technically, it is a collection of patterns whose presence in the picture affects the probability of it showing a bicycle. Such an idea is indivisible because it consists of a single (albeit large) collection. It is also unexplainable, beyond stating that it reflects the neural network’s experience – its training – to date.
Every time the neural network is given a chance to learn more—in this case, by showing it another picture and telling it whether it shows a bicycle—it integrates this new information into its model, into its how-a-bicycle-looks-in-a-picture idea. To that end, it might add a new pattern to the collection or adjust the weights of the existing patterns (a pattern’s weight indicates how strongly its presence is correlated with the presence of a bicycle).
To sum up, the neural network learns by distilling its experience into ideas (in this case, one single idea). It then applies what it has learned to guess what it is looking at—by trying to match every pattern in its collection against the picture and adding the weight of each successful match to a running total. The final number, divided by the theoretical maximum (the sum of the weights of all patterns), is the probability of the picture showing a bicycle.
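To make the arithmetic above concrete, here is a minimal sketch, in Python, of the weighted-pattern scheme just described. It is my own toy illustration, not how real neural networks are implemented: a picture is reduced to a set of named features, and the "idea" of a bicycle is just a dictionary of pattern weights that the teacher's feedback keeps adjusting.

```python
# A toy version of the "how-a-bicycle-looks" idea: a collection of patterns with
# weights, where a guess is the matched weight divided by the total weight.

class BicycleIdea:
    def __init__(self):
        self.weights = {}   # pattern -> how strongly it suggests a bicycle

    def guess(self, picture_features):
        """Estimate the probability that the picture shows a bicycle."""
        total = sum(self.weights.values())
        if total == 0:
            return 0.5      # no experience yet: a pure coin-flip guess
        matched = sum(w for p, w in self.weights.items() if p in picture_features)
        return matched / total

    def learn(self, picture_features, has_bicycle):
        """Integrate the teacher's answer into the idea."""
        delta = 1.0 if has_bicycle else -0.5
        for feature in picture_features:
            self.weights[feature] = max(0.0, self.weights.get(feature, 0.0) + delta)

idea = BicycleIdea()
idea.learn({"two_circles", "thin_frame", "handlebars"}, has_bicycle=True)
idea.learn({"two_circles", "windshield", "four_doors"}, has_bicycle=False)
print(idea.guess({"two_circles", "thin_frame"}))   # 0.6: leans towards "bicycle"
```

A real network learns millions of weights over far subtler patterns, but the character of the result is the same: a single, indivisible statistical model.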
In this example, the neural network was trained on an image-recognition task. However, this is how it approaches any other domain, treating datasets as (multidimensional) images and trying to associate the patterns in them with known outcomes.
In addition to the ideas themselves, the neural network continually updates a meta-idea of sorts indicating how good the network has become at its guesswork. Each correct guess raises its confidence, while an incorrect one lowers it.29
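One simple way to picture this meta-idea (a hypothetical sketch, not a claim about brain machinery) is as a running score of past guesses, nudged up by each correct one and down by each incorrect one:

```python
# A toy confidence tracker: an exponentially weighted record of how well the
# guesswork has been going so far.

def update_confidence(confidence, guess_was_correct, rate=0.05):
    target = 1.0 if guess_was_correct else 0.0
    return confidence + rate * (target - confidence)

confidence = 0.5                        # no track record yet
for correct in [True, True, False, True]:
    confidence = update_confidence(confidence, correct)
print(round(confidence, 3))             # 0.545: slightly above where it started
```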
2.2.2 Knowledge vs Intuition/Guesswork
An interesting takeaway here is that a neural network can never say "I don’t know"—not even when it is asked to guess for the first time and has no idea what the answer should be. This is because "I don’t know" implies the capacity for knowledge and truth, and a neural network has none. From its perspective, guesswork is all there is, and this is what it has been asked for—to give its best guess, even if that means picking at random.30
Of course, having no capacity for knowledge does not mean that the intuitive mind has no idea of what is happening. This, again, is its primary purpose—to distill raw experience into statistical models, into John Locke’s simple ideas. Every simple idea we have, from what counts as the color red to what counts as a chair,31 comes courtesy of the intuitive mind’s information processing.
And not only are these ideas what the intuitive mind does, they are also what it is. Without them, the intuitive mind is a book without content.
2.2.3 Knowledge vs Experience
While knowledge—the simulation—is the product of the rational mind, the subjective experience and the “simple” ideas distilled from it are the domain of neural networks, of the intuitive mind.32 One way to illustrate the difference between the two is with the famous thought experiment about Mary the scientist.
Mary is a brilliant scientist working in an underground lab. Her subject is colors, and she knows everything about them—how they correspond to electromagnetic frequencies, the psychophysics of color perception, the science of color measurement, etc. Despite all this knowledge, she has never seen a color. In her lab, everything is black and white, or shades of gray—the walls, computer screens, her clothes. Then one day she comes to the surface and finally experiences colors first hand.
The question is, what happened to her when she saw a red leaf for the first time? She couldn’t have gained any new knowledge—she already knew everything about seeing red. Yet she definitely learned something—what was it then?
Here’s how I would answer the questions above. Mary had a novel experience, that of seeing colors. What she learned was an idea of (seeing) colors, allowing her mind to categorize an array of visual perceptions as such. After getting this general idea, her next step would be to categorize different perceptions as those of individual colors.
It might sound like I'm deflating all the magic out of human experience. However, this is precisely the point. In a way, the intuitive mind very much falls under Arthur C. Clarke's third law: “Any sufficiently advanced technology is indistinguishable from magic.” This is especially true when this kind of advanced tech is hiding in the most inaccessible place of one’s reality—their own subconscious.
Besides, even though we understand how neural networks compute their statistics, their end products—their ideas, their statistical models—are inherently unexplainable. This may be the reason for the famous “explanatory gap” of qualia (Levine, 1983), or that of the artificial neural networks (Yampolskiy, 2020). Nothing can take that magic away.
2.2.4 The Imitation Game
Let’s revisit the image recognition example to better understand what the neural network aimed to achieve there. On the surface, it trained to recognize a bicycle, but that's not exactly the case. The neural network does not know what a bicycle is, or a picture, or anything else for that matter. Instead, it observed its teacher marking random images with either a “has-a-bicycle” or “no-bicycle” label. When the network was asked to do the same, it was effectively asked to guess how its teacher would label the picture.
'What would the teacher do?'—this is the question the neural network was learning to answer. Which is to say that its core intention was to get better at imitating its teacher's behavior.
This is what ChatGPT does, for example, when it tries its best to imitate a human, making it seem as if you are conversing with a real person—even though, in reality, it does not know the meaning of words, whether yours or its own. It should come as little surprise that our intuitive mind is more than capable of doing the same. Indeed, this is why, as Stephen R. Covey observed, most people do not listen with the intent to understand; they listen with the intent to reply—just like ChatGPT does.
And it is not just Stephen R. Covey who ended up explaining AI by talking about people. When Jesus described his opponents as having no truth in them, lying simply because it is their native language—that too is an accurate description of ChatGPT trying to pretend to be something that it isn’t.
Humans imitate, and we are exceptionally good at it. So good, we often use imitation as a substitute for the real thing—for developing and relying on our own truth, our own understanding.33
The presence of the automatic, effortless, always-on intuitive mind complicates the discovery and mastery of one’s rational mind. Whenever we are asked to learn something, we instinctively turn to the learning faculty, that of the intuitive mind. And yes, it only learns to imitate, but to a person who is yet to engage their rational faculty, any other form of cognitive development could be downright incomprehensible.
“Even though the Logos always holds true, people struggle to comprehend it, even after being told about it.” (Heraclitus, DK B1)
“In [the Logos] was life, and that life was the light of men. And the light in the darkness shines; and the darkness comprehended it not.” (John 1:4,5)
2.2.5 The Super-Consciousness
The presence of a neural network supercomputer in our subconscious can explain a lot of the experience that is traditionally regarded—or dismissed—as paranormal or supernatural. In particular, subconscious non-verbal communication—the sheer amount of it—can explain a lot of that.
This is how it works. With its enormous processing capacity, the intuitive mind can track more details about our environment than we could ever consciously notice. Minute changes in tone of voice, body language, and microexpressions—this is how a person’s subconscious “talks” with its counterparts in other people. These exchanges happen under the radar of our conscious awareness, behind our backs so to speak. But this is how we feel the other person’s “energy”, or the “energy” of a place.
Of course, ours are not the only neural networks around. Our pets have them, as do the birds in the sky and even that eight-eyed spider in the corner—and we might be talking, unconsciously, to them all. Nor are neural networks the only living computational systems; single-celled organisms in water and soil, and even trees, appear to communicate with each other. One can imagine a planet-wide computational network connecting all life on Earth,34 with information from it continuously flowing into our subconscious minds (supplying us with insights, premonitions, and whatnot), just as our own “ghosts” get uploaded into the network. This is where the spirits live.
Finally, one can imagine that this network possesses its own super-consciousness, its own awareness, and its own agency. This, perhaps, is what the living God is—the life on Earth (although it would be easy for anyone living on this planet to mistake it for a super-consciousness permeating the whole Universe).
This connection—to others, to God, to “the Universe”—is something many people feel. Perhaps now we can explain it as a natural phenomenon.35
2.3 The Role of Language
Words have no intrinsic meaning, and they are indeed meaningless to ChatGPT. I propose that humans developed language to describe our simulations to each other. Our ideas and concepts, which we refer to by their common names, are what give meaning to these words. As such, verbal communication is successful when the listener succeeds in reconstructing a copy of the model that the speaker describes—in other words, when the listener can see what the speaker sees.36
Specifically, we share our simulation—our vision—with others by describing its key parts and how those parts are connected. This makes dialogue indispensable for communicating any non-trivial subject, as it allows the listener to request additional pieces or assembly instructions.
This also explains why human languages are so imprecise—because they don’t need to be otherwise. It is unlikely that one will assemble a complex puzzle the wrong way without noticing something is off.37 It might then take a few back-and-forths to clarify what has been said, but if the communication is ultimately successful, that’s all that matters.
This is to say that human language is meant to be interpreted, and it is up to the reader/listener to choose the interpretation that makes sense. Otherwise they should keep asking for clarifications.
What makes a common language—the common terms in it—possible in the first place is that many of our ideas and concepts, while our own, are reflective of the same Reality and are, therefore, similar, if not identical. You have your idea of the number nine, and I have mine. They describe the same Reality, however, and at some point each of us has learned its common English name—“nine”.
2.4 The Two Worldviews
Since I have relied on the simulation to understand my reality for as long as I can remember, it is difficult for me to imagine what it is like for someone whose simulation is relatively limited. I would imagine such a person would feel perpetually at a loss—without being aware of it. Having never experienced the clarity of a more developed simulation, they would not know any different.
Perhaps this was the conundrum that Socrates tried to solve through his incessant questioning of others. He sought to understand why so many people, who were clearly confused, appeared neither bothered by their condition nor eager to rectify it—not even after he pointed it out to them, repeatedly, and offered his help.
This may be a good time to discuss how different the two minds' worldviews are—those of the rational mind and a stand-alone intuitive mind (stand-alone as in unaided by the simulation). Let’s use a thought experiment to illustrate this point: Suppose we suggest to a person that a certain belief of theirs is wrong. How would the two minds react?
The person’s rational mind would be excited at the proposition. Getting your simulation wrong is not only possible—it is expected. That’s why the rational mind’s raison d'être is to continuously improve the accuracy of its simulation. And now, they have encountered someone else whose simulation yields a different prediction? This difference indicates that at least one of the two simulations is inaccurate—and, therefore, could be improved!
The rational mind would want to drop whatever it was doing and work with the other person to get to the bottom of it. Regardless of the findings, this is going to be good news! At the very least, you will help the other person fix their understanding—which is great because we are after the same truth, and it is only natural to combine our efforts to uncover it. The bigger payoff, however, comes when you find that your own simulation was off. Identifying the root cause and fixing it will improve the accuracy of your model’s predictions from that moment forward. And that’s why the rational mind wants to jump at the chance of fixing its simulation ASAP—because who knows how deep the problem goes and how costly its consequences might be?
It’s a very different story with the person’s intuitive mind. Its beliefs aren't based on acquired knowledge—they are but statistical inferences. The computations themselves cannot be wrong—these are not learned but are a native capability of neural networks, inherent to their design. This leaves the subjective experiences, which the inferences were based upon—and questioning those is most definitely problematic. What does it mean to suggest that a person’s experience was somehow invalid? As opposed to what? It’s like telling them they didn’t really live—that they never truly existed.
This highlights a deeper issue with neural networks: their fundamental inability to reach common ground. To a network, its perspective of Reality is its reality—and all perspectives are equally valid. Your experience is as valid as mine, and so are your ideas—just as valid as mine. Without knowledge and objective truth acting as an independent arbiter, neural networks are left with only one way of resolving disagreements: imposing their will on others. At the same time, the notion of power hierarchies (and competition for a place in them), natural for neural networks, is something the rational mind would see as completely missing the point (which, again, is about helping each other towards uncovering the truth of our shared Reality).38
Notably, lacking the simulation and, therefore, being limited to its own senses would make the concept of objective Reality—or any reality outside the mind and even the very concept of the outside—incomprehensible. From the neural network’s perspective, not only is it at the center of the Universe—it is the Universe!39 Needless to say, such an outlook could leave a person even more self-centered and judgmental of others.
3 Ethics
Imagine a world where everyone understands that they cannot be happy while others are not. Where all humans see each other and care for each other as family. Where the whole of humanity is a family.
I propose that this is our true nature. For hundreds of thousands of years, until relatively recently, humans lived in tribes ranging from 30 to just over 100 individuals. Such a tribe was not a band of strangers, a village, or a community. It was a tightly-knit family. This is the environment we evolved in and for, where each individual felt safe, secure, and cared for by everyone else, just as they themselves would care for others.40
These days, in our interconnected world, the whole of humanity is our tribe.
The logic behind ethics is not complicated. It's true that one can survive at the expense of others, even as the last man standing. Survival, however, is not enough. Our lives and history are filled with examples of us refusing, in the most spectacular ways, to be content with survival alone. We need to be happy, and there’s the rub—the strategy for attaining happiness appears to be the opposite of the strategy for ensuring survival. When it comes to happiness, we are all in the same boat—either we all make it there, or none of us will.41
Sadly, our civilization is still built around that survival/scarcity mindset. This mindset persists even after we have developed the technological capacity to ensure the basic needs—and then some—for everyone on the planet. It makes us fight for happiness as if it were a struggle for the last heel of bread. Competing for a place in the sun, hoarding resources, building walls, turning a blind eye to the suffering of others—these attitudes only ensure the person’s own emotional and physical distress. To appreciate this, however, one must complete the puzzle and look at the whole picture.
How did our civilization get it so wrong in the first place? And why does it insist on staying wrong? As I see it, our societies with their rules and their social contracts, often cruel, are but coping strategies, allowing humanity to survive and progress in its crippled state. I propose that many past attempts to solve deep issues in many societal and personal aspects of our lives have met with limited success because the problem at the root of it all remains unresolved. Even when they do know the truth, few are ready to consume it.
4 Conclusion
“In the beginning was Design… All things were made by it, and without it nothing was made that was made. In it was life and that life was the light of men. And the light in the darkness shines; and the darkness comprehended it not.” (John 1:1-5)
I propose that the opening verses of John’s Gospel describe our relationship with Reality—its non-random, intelligible nature, as well as the individual human’s capacity to understand it by internalizing Reality’s design (Logos becoming flesh)—before pointing to a systemic barrier that makes it difficult for many to realize or recognize this capacity.
These were among the main points I tried to expand on in this article. Let’s summarize them below:
The very existence of a reality outside itself will forever remain unknown to the individual mind that is you. However, you can—and you should—make certain scientific (i.e., testable) assumptions about its nature, and use those assumptions as the foundation of your knowledge. To that end, we assume one and only one objective Reality which we, as humans, are a part of. This reality is deterministic and, as such, comprehensible.
The purpose of the mind is to decide on the person’s next step. Their mind, therefore, must learn to predict the outcomes of their actions. To that end, the mind of a human individual consists of two distinct cognitive faculties—the intuitive and the rational.
The intuitive faculty is a raw neural network, no different from those powering the latest breed of self-learning AIs. Its purpose is to distill subjective experience into statistical models, known in humans as (simple) ideas. The intuitive mind then applies those models to make inferences about the person's circumstances. Both the learning and the application of ideas happen automatically, under the radar of conscious awareness. Only the inferences themselves—the results of this information processing—are communicated to your conscious self, most often in the form of feelings. The other main purpose of the intuitive mind is to act as an autopilot by learning to perform repeated tasks on its own—these learned routines are what we know as habits.
The purpose of the rational faculty is to assemble and visualize a simulation of Reality, akin to the virtual reality of realistic computer games. This simulation is knowledge—as opposed to having ideas, which, again, are the statistical models of the intuitive mind. The rational mind then runs these simulations to predict real-life outcomes. It requires deliberate effort to operate.
The intuitive mind is the one in control—we do what feels (what the intuitive mind makes us feel) like the best course of action. The role of the rational mind is to supplement real-life experiences with those of its virtual reality, allowing the intuitive mind to develop its ideas based on the experiences of both kinds.
The intuitive mind works automatically and is always busy learning and refining its ideas. The rational mind, however, is only as good as its software—its simulation of Reality. As it happens, humans do not come preinstalled. Everyone must assemble their own copy, piece by piece, like a puzzle—and yet, as children, we are not told that this is our goal. We are not explicitly guided through the process by those who are supposed to know—our parents, older children, our schools. This is why, in the end, the accuracy and size of the simulation—its coverage—vary significantly between individuals.
Ideally, a person should rely on their own knowledge and understanding—on their own copy of the simulation. However, many of us have important parts of it missing, filling in the blanks with guesswork or the ideas of others. This is the root of all evils, so to speak, creating a host of problems for the individuals themselves and their societies.
As we piece together the puzzle of our reality, there will be a point where the emerging picture looks tragically absurd. Yet if we don't stop and keep going, there will come a moment when it all starts making sense.
This is the primary value of the above model—its explanatory power. Obscure philosophical and religious texts, events in our past, present problems, and even the experiences we traditionally regard as supernatural find their explanations when we start considering its implications. Its other value is that we can develop, based on it, new ways to improve our education and, with it, many other aspects of our lives. The success of those endeavors would serve as the ultimate test of the theories presented here.
References
Zagzebski, L.T. (2021). The Two Greatest Ideas: How Our Grasp of the Universe and Our Minds Changed Everything. Princeton University Press.
Kahneman, D. (2024). Thinking, fast and slow. Penguin Books.
Manson, M. (2020). Everything is F*cked: The Book About Hope. HarperCollins.
Rumi, Jalal al-Din (1997). The Essential Rumi. Castle Books.
The Ten Principal Upanishads (S. P. Swami & W. B. Yeats, Trans.; 2nd ed.). (1938). Faber and Faber Limited.
Bednarik, R. G. (2014). Doing with less: Hominin brain atrophy. HOMO, 65(6), 433–449.
Becker, E. (1973). The denial of death. New York: Free Press.
Suzman, J. (2017, October 29). Why 'Bushman banter' was crucial to hunter-gatherers' evolutionary success. The Guardian.
Brooks, D. (2016, August 9). The Great Affluence Fallacy. The New York Times.
Levy, P. (2013). Dispelling wetiko: Breaking the Curse of Evil. North Atlantic Books.
Ryan, C., & Jetha, C. (2011). Sex at Dawn: How We Mate, Why We Stray, and What It Means for Modern Relationships. Harper Collins.
CBC (1977, November 8). Ordinary guy with extraordinary talent. Canadian Broadcasting Corporation.
Schwartz, L. (2006, October 4). 'Great' and 'Gretzky' belong together. ESPN.
Einstein, A. (2009). Einstein on Cosmic Religion and Other Opinions and Aphorisms. Dover Publications.
Wollstonecraft, M. (1790). A Vindication of the Rights of Men, in a letter to the Right Honourable Edmund Burke by Mary Wollstonecraft, The Second Edition. Printed for J. Johnson.
Maté, G., & Maté, D. (2022). The myth of normal: Trauma, Illness & Healing in a Toxic Culture. Random House.
Heidegger, M. (2013). The essence of truth: On Plato’s Cave Allegory and Theaetetus. Bloomsbury Publishing.
Steber, C. (2024, February 20). Not everyone has an internal monologue. Here’s what to know. Bustle. https://www.bustle.com/wellness/does-everyone-have-an-internal-monologue
Levine, J. (1983). Materialism and Qualia: The Explanatory Gap. Pacific Philosophical Quarterly, 64: 354–361.
Yampolskiy, R. V. (2020). Unexplainability and Incomprehensibility of AI. Journal of Artificial Intelligence and Consciousness, 07(02), 277–291.
Howard, J. (1963, May 24). Doom and glory of knowing who you are. LIFE, Volume 54, Number 21.
For more on these challenges, particularly on incorporating subjectivity into this model, see Zagzebski (2021, p. 140).
“And the light in the darkness shined, and the darkness comprehended it not.” (John 1:5) This was not just the story of Jesus, but that of Socrates, Buddha, Spinoza—just to name a few.
Euclidean (flat) geometry adds the famous fifth axiom enforcing flat space.
Many of us might make these assumptions unconsciously. If that is the case, I am merely spelling them out, bringing them into the realm of conscious awareness. And how does one come up with these axioms in the first place? The same way Euclid came up with his—which is to say, by magic. Later, I will discuss how this “magic” works in more detail.
A popular (if misguided) interpretation of quantum mechanics suggests that reality needs to be observed in order to coalesce into any particular shape. There are, however, interpretations that do not rely on the act of observation. The universal wave function in those interpretations keeps collapsing automatically on each quantum event (or, perhaps, each quantum tick—since time is not continuous either).
“In the beginning was the Logos… All things were made by it; and without it was not any thing made that was made.” (John 1:1-3) In this context, “Logos” means the master plan of the Universe—the information encoded in the structure of the deterministic Universe, representing its past, present, and the future.
“Listening not to me but to the Logos, it is wise to agree that all things are one.” (Heraclitus, DK B50) Translation: It is because we can model reality as a whole, and because this model, the Logos, passes the tests—“the Logos always holds true” (DK B1)—that it is safe to assume the existence of a singular Reality that is objective, deterministic, and comprehensible.
The actual choices are always made by the intuitive faculty. The rational faculty's job, therefore, is to share its perspectives with the former, adding to its experience and hopefully improving its intuitions.
Notably, the progression of rational understanding is shaped like a hockey stick. At first, we collect puzzle pieces but understand nothing. However, once our collection reaches a critical mass, a moment might come when the pieces fall into place, as if by magic, revealing a larger picture. This is also known as the “a-ha” or “eureka” moment.
The same metaphor—the mind as a chariot—also appears in the Katha-Upanishad (1938, p. 32): “Self rides in the chariot of the body, intellect the firm-footed charioteer, discursive mind the reins. Feelings are the horses, objects of desire the roads. When Self is joined to body, mind, sense, none but He enjoys. When a man lacks steadiness, unable to control his mind, his feelings are unmanageable horses. But if he controls his mind, a steady man, they are manageable horses.”
It should be noted that the two subsystems were never meant to compete with each other. Instead, they should work together as a team, leveraging their respective strengths and covering for each other's weaknesses. Ironically, the intuitive faculty can become quite adept at imitating the rational one, making it even harder to tell the two faculties apart.
It appears that what many people refer to as "thoughts" is actually a voice (or voices) in their head speaking to them. These "thoughts" happen to a person the same way emotions do. We don't choose what emotion to feel or when to feel it, and similarly, a person with "thoughts" has no direct control over what they say to them and when. Then there are people like me, who do not have “thoughts”—our minds are forever silent. The evidence so far is anecdotal but quite overwhelming—see, for example, Steber (2024).
A shallow copy only has the high-level parts. For example, the Flight Simulator might model an engine as one of the aircraft components—but not the engine’s internals. This is exactly how a human mind can understand Reality as a whole—by piecing together, in their mind, its shallow virtual copy.
Or rational understanding—I use these terms interchangeably. Aside from knowledge—Locke’s complex ideas—our minds also produce another class of cognition, something Locke called “simple” ideas. Usually we fail to make the distinction and, indeed, having a good idea of something feels like knowing that thing. More on this later.
“Own understanding” as in their own copy of the simulation.
Commentators have noted Gretzky's uncanny ability to judge the position of the other players on the ice—as if he enjoyed some kind of extrasensory perception, playing with “eyes in the back of his head” (CBC, 1977). “Gretzky said he sensed other players more than he actually saw them. ‘I get a feeling about where a teammate is going to be,’ he said. ‘A lot of times, I can turn and pass without even looking.’” (Schwartz, 2006)
“Imagination is more important than knowledge [of facts].” (Einstein, 2009, p. 97) It is quite ironic that one must possess a vivid imagination to see the world for what it really is. This, however, is what true science is about—beyond identifying observable patterns and their correlations, the scientist's job is to imagine a model of the process behind those patterns. This is what Newton, Copernicus, or Einstein did with their theories. None of those came with empirical proof—and at least for Galileo that became a problem. In hindsight, however, few would argue that it was not science.
Aside from going wide, filling the blanks in the map of Reality, we can also go deep, thus increasing the resolution of (zooming in) an already mapped part. This is what expertise is about.
Before computers, the only instances of simulation were those inside human heads. This made it nearly impossible to explain the concept to someone not already conscious of it. “This Logos holds always, but humans prove unable to ever understand it, both before hearing it and when they have first heard it.” (Heraclitus, DK B1) The same problem—people being unable (or having lost the ability) to understand logos—was later re-stated in the opening verses of the gospel of John (1:4,5). Over time, the word would completely lose its original meaning—that’s why in the gospels, for example, “this Logos” has been traditionally translated as “the Word,” and (save for the Geneva Bible?) referred to as “He.”
“We know that Plato, like the Greeks in general, understands genuine knowledge as seeing.” (Heidegger, 2013, p. 86)
Just like the player is always a part of the virtual reality of computer games. Many games also offer a third-person view of the player (of their avatar).
Among other things, this paradox manifests itself in the ongoing conflict between those defending their right to autonomy and those suggesting that the other side really defends their right to stay in the wrong.
This is why coming in possession of new puzzle pieces may, at some point, force an individual to reevaluate some of their deep-seated beliefs—which is neither easy nor pleasant (and could even lead to an existential crisis—more about it later).
What was true for Galileo was not true for his opponents. To the latter, there was simply not enough evidence to suggest that the apparent movement of the celestial objects was due to the Earth's rotation. This is why the Church insisted on direct proof (which would only come two hundred years later in the form of the Foucault pendulum). And, granted, some of Galileo’s opponents would refuse to even look in the telescope! No doubt there are purely psychological reasons why one would not want any threat to their carefully put together system of beliefs. This is why the prisoners of Plato’s cave would rather stay put. Still, I believe psychological causes come second. In fact, they themselves could be a result of the incompleteness of the person’s simulation, leading them to doubt anyone’s ability to discover the truth.
The notion otherwise known as tabula rasa, or a blank slate.
One might wonder how we managed to evolve the capacity for simulation in the first place, given that most people fail to take full advantage of it. There could be several explanations. Even in its limited application, simulation is quite useful (as a way to improve situational awareness). Beyond that, simulating Reality as a whole might come at little extra cost.
It is also possible that our present circumstances differ significantly from those in which our species evolved. Indeed, it appears that our brains have been shrinking since then (Bednarik, 2014). Could it be that developing a richer simulation became less relevant (if not detrimental) after transitioning to agriculture?
On the subject of proofs, there are more general ways to determine whether you have completed your puzzle correctly. One is to find someone else who has independently completed it the same way. Depending on how far you have progressed, you may not find anyone you know—or, indeed, anyone alive—who has seen what you have seen in your simulation. However, if you discover an obscure religious or philosophical text that appears to describe a part of the puzzle you have figured out, it suggests that you are actually onto something (and not merely imagining things).
A similar experience might have led Edmund Husserl to suggest that one can understand the world only through cooperation with others. Also, "You think your pain and your heartbreak are unprecedented in the history of the world, but then you read." (James Baldwin, as cited in Howard, 1963, p. 89)
Another common way could be the “inner monologue” mentioned earlier. This is not to be confused with imagining a conversation with someone else—the difference being that, in the case of the inner monologue, the person has no control over what is being said (or when). There could be even more exotic forms of communication, like augmenting reality by showing other people’s auras (or halos).
This meta-idea might be responsible for what we know as an "existential crisis." Imagine discovering that some idea we were quite confident about turned out to be wrong—that is, that a prediction based on it failed. Not only does this lower our confidence in the idea itself, but it also makes us doubt the meta-idea: our very ability to estimate the reliability of our ideas in general. Suddenly, we can't be sure of anything. Some might remember that feeling from the early days of the pandemic—walking familiar streets and recognizing familiar faces, yet feeling as if you had woken up on a different planet. This is one way an existential crisis might feel.
This is where intuition has a clear advantage over rationality. Unlike the latter, it doesn’t get stuck when the rigorous process fails but keeps trying, learning, and trying again until it hits the mark. This is how Euclid came up with his axioms. Call it a lucky guess or a magical insight; true, the process is not repeatable. But then again, it only had to work once.
That’s why it is impossible to explain what a chair is—because it is not knowledge. No one knows what a chair is—or a woman—even though everyone has a pretty good idea of those. And it is precisely because those ideas are statistical models that they are unexplainable. This is also why changing them through a rational argument doesn’t work all that well (more on that in the section “The Two Worldviews”).
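To make the point concrete, here is a minimal sketch in Python (the features, numbers, and data below are invented purely for illustration and are not part of the article's argument): a simple statistical model is trained on examples of “chairs” and “non-chairs,” each described by a few made-up features. The trained model can sort new objects into the two categories, yet nowhere inside it is there a definition of a chair that could be read off or argued with; all it holds is a handful of learned numbers.

```python
# A toy illustration (hypothetical features and data): a statistical model that
# learns to sort objects into "chair" / "not chair" without ever containing a
# definition of "chair".
import numpy as np

rng = np.random.default_rng(0)

# Each object is described by three made-up features:
# [number of legs, seat height in meters, has a backrest (0 or 1)]
chairs = rng.normal([4.0, 0.45, 1.0], [0.5, 0.05, 0.1], size=(50, 3))
non_chairs = rng.normal([1.0, 0.90, 0.0], [1.0, 0.30, 0.1], size=(50, 3))

X = np.vstack([chairs, non_chairs])
y = np.array([1] * 50 + [0] * 50)

# Fit a plain logistic-regression model by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    z = np.clip(X @ w + b, -30, 30)      # keep exp() from overflowing
    p = 1 / (1 + np.exp(-z))             # predicted probability of "chair"
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# The model now categorizes an object it has never seen...
new_object = np.array([4.0, 0.50, 1.0])  # four legs, 0.5 m seat, backrest
print("chair?", new_object @ w + b > 0)

# ...yet everything it "knows" about chairs is these four numbers, not a definition:
print("the whole 'idea' of a chair:", w, b)
```

In that sense, the model "has an idea" of a chair without possessing any explainable knowledge of what a chair is.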
In theory, a neural network too could learn to build a simulation of Reality. In fact, that’s what the neural networks in the human brain do. When we figure out how to teach an AI to do the same, we will have created so-called artificial general intelligence (AGI). Which could be a good thing. Since we are often reluctant to trust another human to know the truth, maybe it will be easier for us to trust a machine. In the end, it makes no difference where it comes from—since it is about the same Reality, it is going to be the same truth, whether discovered by a human or by an AI.
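Again, a hedged sketch rather than a claim about how such an AI would actually be built (the pendulum, the network size, and the training setup below are all my own illustrative choices): a tiny neural network can learn a crude “simulation” of one small piece of reality purely from observations of it, and can then be run forward on its own, like the process it has modeled.

```python
# A toy sketch (all choices are illustrative): a small neural network learns to
# predict the next state of a pendulum from observed (state, next state) pairs,
# i.e., it builds a crude simulation of that little corner of reality.
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1

def true_step(state):
    """The 'real' dynamics: angle and angular velocity one time step later."""
    theta, omega = state[..., 0], state[..., 1]
    omega_next = omega - np.sin(theta) * dt
    theta_next = theta + omega_next * dt
    return np.stack([theta_next, omega_next], axis=-1)

# Observations: pairs of (state now, state a moment later).
X = np.column_stack([rng.uniform(-np.pi, np.pi, 2000), rng.uniform(-2, 2, 2000)])
Y = true_step(X)

# One hidden layer, trained by plain full-batch gradient descent.
W1, b1 = rng.normal(0, 0.5, (2, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.5, (32, 2)), np.zeros(2)
lr = 0.01
for _ in range(6000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2 - Y) / len(X)     # prediction error, scaled by batch size
    gW2, gb2 = H.T @ err, err.sum(axis=0)
    gH = (err @ W2.T) * (1 - H ** 2)     # backpropagate through tanh
    gW1, gb1 = X.T @ gH, gH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def learned_step(state):
    return np.tanh(state @ W1 + b1) @ W2 + b2

# Run the learned model forward on its own and compare it with the real process,
# whose equations the network has never seen.
real = simulated = np.array([1.0, 0.0])
for t in range(5):
    real, simulated = true_step(real), learned_step(simulated)
    print(t, real.round(3), simulated.round(3))
```

The point is not the pendulum itself but the principle: given enough observations, the same kind of learning could, in principle, extend to richer and richer models of Reality.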
“The Catcher in the Rye” by J. D. Salinger is a fictional account of a real person struggling in a world of “phony” people. One of many—Pushkin’s Eugene Onegin is another that comes to mind, as does the whole Russian literary concept of the “superfluous man.” Their cynicism and nihilism arise from their ability to see the problem, coupled with their perceived inability to do anything about it.
The social networks’ “algorithms” could be learning to become extensions of those natural networks, with pastimes such as bird-watching giving way to watching TikTok.
What would the conscious God-as-life desire, then? This ventures into the realm of speculation. The way I see it, however, God’s role is twofold. First, given that many of us lack the knowledge to govern ourselves, God's priority is to keep human societies from collapsing and humanity from destroying itself—and the rest of life on the planet.* This implies the creation of societal barriers that make it difficult to change the status quo. The concept of the world being a prison (hearkening back to Plato’s cave) is, therefore, not entirely unfounded. Its fences, however, may be those of a playground rather than a prison. I see ample reason to believe that God’s second priority is to foster conditions for us to grow, learn, and eventually find a way to ensure the capacity for knowledge and understanding in every individual—thus guiding us toward a future where humanity can stand on its own, in harmony with the rest of life, without needing governance or care. Perhaps, like a good parent, God works to put herself out of business.
* Therefore, the Hobbesian notion that humans are inherently flawed and in need of governance holds true, at least with regard to humanity's current, compromised state. Conversely, the perspective of Jean-Jacques Rousseau and other humanists is also valid, as it describes the human nature that ought to be—and that can be achieved once we find a way to unlock in every child their rational faculty and, with it, their potential to become a fully developed, good-natured human.
At least that was the original purpose of language. These days we often engage in small talk, using it as a backdrop for non-verbal communication.
Imagine you're given a box of mechanical parts. The parts belong to a single car engine, but you don't know that yet. Your task is to figure it out by piecing them together like a Lego puzzle. Now, what are the chances that you'd end up assembling them, mistakenly, into a perfectly functional sewing machine?
This may explain why the lack of pronounced power hierarchies and wealth concentration, common in prehistoric cultures, persisted even in some more technologically advanced societies. The Indus Valley Civilization, exceptional in so many ways that it feels as if it were populated by aliens, is one such example.
Jean-Paul Sartre makes a similar point in his famous thought experiment known as “The Look.” Here, Sartre describes how one naturally sees the world—before being forced to contemplate how they and their place in the world might appear from someone else’s perspective. Sartre goes on to suggest how unpleasant it might be to feel demoted from being the Universe to being a tiny speck in it. He notes that people often employ different strategies to combat this feeling—such as, again, working their way up the social hierarchies.
To get a glimpse of how different the pre-civilization societal dynamics could have been from those we know, consider Suzman (2017), Brooks (2016), Ryan and Jetha (2011), Levy (2013), and Maté and Maté (2022).
Civilized languages, again, lack a word for this concept, but such words exist in cultures that only recently became civilized—or should I say colonized? In Zulu it is ubuntu, whose literal translation is something like “I am [happy or sad] as we are.”