
A Theory of Everything

  • Writer: ashrefsalemgmn
  • Jun 25, 2024
  • 11 min read


Of the many themes to which modern cinema owes much of its buzz, one stands out the most: artificial intelligence.


If we were to distill modern filmmaking to its most basic and recurring elements, we’d find only a couple of ideas that are the ingredients of every script in production. But when we look closer, we see that even those ideas branch out from one common root. The unifying element here may be obvious at first glance if we ignore the plot and simply restrict our view to the spectacular side of things—that is, the artifices and mechanics used in arranging the story. I suppose this is to be expected. In fact, modern cinema is so laden with futuristic material, the production of which is by no means cheap, that such material has become a device—an ex machina of sorts—by which to balance out bad scripts and mediocre, run-of-the-mill stories.


It’s quite ironic that it’s that side—the side that’s deemed irreplaceably human: the literary, the cathartic, the tragic, the soulful, the things that have the writer staring at a blank page for half the day trying to express them in a sensible format, the things one imagines need a "Good Will Hunting" moment to truly appreciate—that is being produced mechanically and is at risk of being taken over by artificial intelligence. But this is all beside the point. Within the subject of A.I. as it relates to modern cinema, something else is hidden. A.I. is just a channel in the overall stream that is ‘futurism,’ which is gushing, as we see, out of every possible crevice of modern society. It’s one part of an opus of things, and by no means the largest.



The largest, I believe, is less obvious, being less spectacular, though integral to every spectacle involving A.I. This more integral part is currently at work in the labs of MIT, Caltech, Harvard, Princeton, and other esteemed physics institutions. It goes by the name ‘The Theory of Everything.’ This is what the notion of an Artificial General Intelligence hinges on, and why a so-called AGI system remains unachieved. So dominant is this prospect, so heavily does it weigh on our imagination, that the very notion of futurism has become a synonym for a high-tech, space-faring utopia. Any attempt to answer the lingering question of ‘when will AGI arrive’ without a detour back to the theory of everything is, I believe, unrealistic.


Imagine a truly unifying framework—one, mind you, that extends even beyond the realm of physics itself, that understands the underlying principles running things, and that can unify all fields of human knowledge. Such a framework could bring into reality everything from David Hume’s dream of a ‘thorough’ science of human nature—or, as reimagined today, ‘a Unified Theory of Consciousness’—to Einstein’s vision of a unified field theory, Richard Feynman’s vision of quantum computing supremacy, or, what seems easiest but may actually be the most difficult, a sustainable global economy, one that would catalyze further innovations as well as sustain humanity through the ebbs and flows of economic change that these same innovations will have brought. But what would a theory of everything be? Will it really be a physics theory, or is it expected to emerge from physics simply because physics is the area in which we’ve made the most advancement? A theory of everything is defined as:


A hypothetical, comprehensive framework in physics that aims to unify all fundamental forces and particles in the universe, reconciling general relativity (which describes gravity and the structure of space-time) with quantum mechanics (which governs the behavior of particles at the smallest scales), thereby explaining all physical phenomena under a single, coherent set of principles.

Clearly, a theory of everything has to do with our understanding of the physical world. But this is arguable, and ironic, I may add, because the way the research departments are set up today doesn’t really reflect that such a theory is possible. How could it be in an environment where the physicist doesn’t talk to the philosopher, or the poet, the sociologist, the psychologist—an environment where the division of the sciences into hard and soft runs this deep? If a ToE is a comprehensive theory, does it not mean that it’s to be attained comprehensively—that is, by transcending those boundaries? I believe a true ToE would be something more inclusive than a physics theory, an epistemological model applicable across all departments of human knowledge. Let’s put it in a way that may sound ridiculous: a ToE can only be obtained by a ToE. Kant has an interesting remark here:


“The conditions of the possibility of experience in general are likewise conditions of the possibility of the objects of experience.”

Kant, I. (1929). Critique of Pure Reason (N. Kemp Smith, Trans.). London: Macmillan. (Original work published 1781).



But you don’t need to have read the "Critique of Pure Reason" to come to this fact; it’s a simple thought, and failure to realize it may actually explain the compromise that exists in physics today. To say that a ToE can only be obtained by a ToE means that we are able to look into any department of knowledge, whatever the time and era it was produced in, and glean from it models which we can apply to problems encountered in other disciplines of knowledge—a tool by which to synthesize different thought-worlds. This approach both presupposes a ToE, in the way it’s able to transliterate frameworks and models from one area into another, and serves as a standard by which to accurately judge the limits of our knowledge.


Immeasurable are the benefits of such a method. A stalemate in one research field may very well have been solved in another area, and solved in a very simple way, but as of today, neither can we recognize that the problems are the same, nor do we have a method by which to apply its solution. A ToE will allow us to recognize the commonness of the problem, as well as transliterate the solution into the language of the stagnant field. This is what we expect from AGI; it’s a system that will do just that—in fact, it will do it automatically, the same way your computer runs automatic troubleshooting and updates. The Turing test, which was one of the things that spawned the computer revolution, long anticipated that artificial intelligence would be something indistinguishable from what we like to think of as human intelligence. And what distinguishes that?


The Turing test holds that true AI is reached when we are no longer able to distinguish between a human being and a machine. But the problem is we can hardly define human intelligence, let alone set up the appropriate criteria by which we may judge something to be ‘human.’ We seem only able to intuit it; that is, we have a feeling as to what human intelligence is or means, but we can’t really scale or parameterize it. Yet we do respond most zestfully when a machine does something curious—that is, something seemingly outside its programming but in line with the goals that were originally programmed into it—in short, when it innovates, or, to use the common idiom, when it goes outside the box.


It might seem as though I’m complicating the matter, which is true in some sense, but I’d argue that I’m actually simplifying it, all things considered. Does human intelligence need definition? Is it not obvious? We reply that it’s a mistake to think of simplicity and complexity as canceling each other out; rather, they’re complementary epistemic principles that are especially relevant to this subject. That it’s simple only means that we’re thinking of human intelligence in the widest possible sense; to the layman, indeed, no proof is required. But it is certainly not obvious to the cognitive scientist or computer scientist who’s trying to extract and develop cognitive models; here, mathematical precision is required in dealing with what the layman sees as obvious.


Notice how I said that an A.I. can go outside the box, not think outside the box. Avoiding the word ‘thinking’ was intentional, because what is called thinking? What explains my reservation about ascribing a term like ‘thinking’ to anything A.I. is, admittedly, the Heideggerian sense in which I generally conceive of ‘thinking.’ In the philosophy of Heidegger, ‘thinking’ is understood in relation to being. Much like how he distinguishes between two modes of being, the present-at-hand and the ready-to-hand, he extends the same duality to ‘thinking,’ distinguishing calculative thinking from genuine or contemplative thinking. And while it seems that it’s the former, i.e., calculative thinking, that the Turing test is set up for, I believe it’s ultimately the latter that the test demands.





Heidegger, M. (1968). What Is Called Thinking? (J. Glenn Gray, Trans.). Harper & Row. (Original work published 1954).


I say that because every system we have is already based on the calculative model. What we’re expecting from A.I. is a contemplative, Socratic, dialectical, maieutic form of intelligence, where the ‘truths’ that underlie things are assumed to be already nested in the things themselves, and where such truths can be gleaned by dialogue, albeit unceasing and laborious dialogue. What truly matters here also happens to be what separates the organ from the module: the basic assumptions that universally foreground human life, those of truth and its ontological standing. This is how machine intelligence would be brought closer to human intelligence: not only in having assumptions of universal truths, and in thinking of them as prospects, but also in feeling entitled to them.


We have these things because we’re a part of the world we inhabit. Like everything else, we arose from the earth, and our place in it—in the universe—is uniquely ours. By extension, every member of any kind whatsoever has its uniquely allotted place, and its existence is truly exclusive. Everything here has its place in the cosmic order, and species give proof of this by naturally seeking their place in it. This seeking is the seeking of entitlement, however instinctual it may seem, and however ‘combative’ or ‘parasitic’ some of its modes may appear. No part of nature would be functional without this. More than that, it’s precisely this conception of nature that mirrors back the kind of intelligence we call human: seeing things otherwise than we think they should be, we feel the urge to fix and change things, to make decisions on behalf of the very creatures who thrive on the same conditions we judge to be brutish.


The point is, we’re a product of this world we inhabit. The individual is a microcosm of that vast macrocosm. The Quran has a really interesting word describing God’s act of creation, ‘futoor,’ which differs both semantically and philosophically from the conventional term ‘creation.’ Here, more than being ‘created,’ one is created from the world and introduced into it. It means that he or she is a special and inimitable feature of the world, a fiber of the total fabric weaving the universe. As for the implications of this for human and artificial intelligence, we find it difficult to view the matter outside its now cosmic setting: as far as intelligence is concerned, we are to factor in much more than we currently do. This is what makes the Turing test appropriate in so many ways.


A dialogue between yourself and an interlocutor is so commonplace, so understated a phenomenon, yet it’s all we have. Like all things, it usually looks simple and given until you meet an expert and discover the world of difference that lies beneath it all. As for the art of dialogue, which to most of us is but an exchange where you speak and are spoken to, we see in Plato how strenuous and difficult it is, how much goes into it. More than a discussion between two people with much to say, it’s a communion between two beings about whom much is to be said, and who themselves are to render this service to one another. Interlocution here is understood as a kind of ‘induction’: we induce in the person the want to articulate, to confide—that is, to be as cathartic as they can be. As Hegel put it:


“Self-consciousness recognizes itself in another self-consciousness.”

Hegel, G. W. F. (1977). Phenomenology of Spirit (A. V. Miller, Trans.). Oxford University Press. (Original work published 1807).


Who can help you do this but another person, i.e., someone of the same species as you? Though the world is teeming with millions of species, it’s only ever another person with whom you can go through such an experience. Nature shows how, although communication is possible between different species, it’s strongest and most complex among life forms of the same kind—the sort of complexity that’s required to bring about the fruition of the species.


This tells us that what’s special about a species is less the doing of an environment thought to be the sum of all accidents, and more something contained within the species itself, to be brought out of it somehow; and this extraction can only be achieved dialectically—that is, through this universal habit of dialogue. Thus so much rides on dialogue, and a communion between two people is, let’s say, cosmic because it’s infinite: because the two are inexhaustible, because they represent their kind with all its past and future, all its possibilities, history, diversity, challenges, fruits, and failures. Thus the uncertainty that shrouds the future of A.I. is justified, and justified by the very fact that it arose in the first place.


A Turing test is considered passed when one cannot reliably distinguish the A.I. from a human respondent, and it’s being argued, for good reasons, whether indistinguishability has actually been attained, seeing that computer systems have vastly changed since Turing’s day. What’s really being argued, I believe, is whether Turing’s criteria still apply, seeing that the area covered by artificial intelligence today is larger than it’s ever been—images, video, sound, you name it—not to mention the immense mass of data, publications, and other information that A.I. systems today have access to. Surely the Turing test must change its criteria, right?


Not exactly. In fact, given that the test is text-based, and that it’s a dialogue in which an assessor must still judge whether their interlocutor is human or a machine, the criterion of indistinguishability will one day prove decidedly inadequate.


We say decidedly because, though some have suggested that the test has already been passed, the question is still very much a subject of debate. This tells us that there’s something wrong with the criterion—that perhaps the definition of human intelligence, which undergirds the criterion of indistinguishability, differs from specialist to specialist and is overall quite vague. No parameters have been established regarding the limits and potentials of human cognition; that is, we’ve not discovered a cognitive architecture that could account for how exactly the mind processes diverse information streams, the detailed mechanisms underlying consciousness and self-awareness, the nature of memory formation and retrieval, and the precise neural underpinnings of emotions and decision-making.


With that much lying in obscurity, we’re all but driven to conclude, as many already have, that the Turing test is a general epistemic framework whose basic tenets are on firm ground, and as such valid for all time, but whose context of application is set to change and adapt to continued progress in the field of A.I., simply because there’s still much to be discovered about human cognition. Much remains to be understood about humanity. Indeed, I believe A.I. is to be instrumental to that understanding—essential, that is, whether to the task of mapping the human mind, synthesizing the various departments of human knowledge, unifying the different theories of physics, or finding the perfect economic model. None of these is possible without A.I., given the sheer mass of information there is, the size of the population, and the scale of industry.


This is where a theory of everything would be relevant, as it’s what’s going to help catapult artificial intelligence to artificial general intelligence. At the risk of a bold statement: an AGI is nothing but an A.I. system running a theory-of-everything-based neural network architecture. It’s here that the parts of the human mind we said lay in obscurity will be illuminated, because if there’s one aspect that truly sets humans apart from the animal kingdom, it’s the ‘form’ of our intelligence. This is readily seen in the world: our intelligence has this strange ability to tap into any field and glean from it what it wants. It can abstract and extract things and place them where they don’t originally belong.


Passing the Turing test, I believe, would be a question of how far an A.I. system could carry this out. And perhaps the real test would be the extent to which an A.I. system could teach us about ourselves—that perhaps there are things which we can contemplate but not quite calculate, and things we can calculate but can’t quite contemplate. And that’s what A.I. will help us do. Perhaps the solution to every scientific problem is out there, but acquiring it requires us to look where we usually don’t and won’t think to look. And an undogmatic and unassuming A.I. would be far more unhinged than we could ever be.
