Agnostic-ish: My Search for Faith in a Scientific World

Christian Keil
Pronounced Kyle
124 min read · Apr 9, 2019


Prologue:

IN THE BEGINNING, I honestly had no idea what I was doing.

I was 22 years old, a recent graduate of the University of Michigan, where I had earned degrees in economics and psychology. I was kicking off my professional career as a management consultant, a demanding job that required weekly travel across the country — and as a non-business undergrad, I was working hard to become fluent in the ways of the corporate world.

In other words, I was a young adult with a liberal arts education and a lack of free time. Why, then, did I decide to write a book about science and religion? Who was I to talk about either subject? And how would I learn how to write a book when the longest work I had completed previously was an economics term paper on decision theory? The questions of “why write” and “who was I” will be answered in detail in the sections to follow, but the short answer to both is that I never really made an explicit choice to write; this all just sort of happened.

I was born into a Christian family but fell out of faith as I fell in love with the scientific method. I quit going to church when I stopped going to piano lessons, and while the world didn’t blush at the latter decision, I felt incredible (and, over time, increasing) social and cultural pressure to identify with either atheism or Christianity. So, as I aged into young adulthood, a new chapter of my story began.

I don’t want to give away any spoilers now, but the story you are about to read is equal parts coming-of-age and intellectual enlightenment. At the outset of my journey, I didn’t fully understand the gravity of the questions with which I was about to tussle, and I surely didn’t expect to end up writing a book. But, as I worked through the frontiers of research on psychology, biology, physics, and faith in pursuit of the objective truth, I realized that there are many others — perhaps people like you — who have been (or will be) on a similar path. So I put pen to paper, wrote about a subject that probably should have required parental supervision, and ended up with this book.

Note for Medium: I first self-published this piece as a hardcover, crowdfunded book in April 2016. I’m re-“publishing” here so I can share it more easily and fix a few of the typos that have been nagging me for years. (And, secretly, because I want to see how long Medium thinks this will take to read.) Some formatting will inevitably be broken in the transfer from book to Medium, but I hope you’ll bear with me — and enjoy Agnostic-ish!

In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move.

― Douglas Adams

Part One: Light

And Shadow, and Doubt

EINSTEIN WAS WRONG when he said that “the most incomprehensible thing about the universe is that it is comprehensible.” It’s crazy that our infinite universe — from its black holes to its spiral galaxies — can make sense to our finite brains. But isn’t the incomprehensible part of that the brains, not the black holes? There are more connections between neurons in your brain than there are galaxies in the observable universe. That is truly incomprehensible: why should we have inherited such powerful minds in the first place?

Logic says that we shouldn’t have. What’s the practical use of understanding the stars? We look to the night sky and spend millions of dollars to launch massive telescopes into orbit just because we are curious. We even have gone so far as to stick the stars to our children’s ceilings. Kids don’t need to know Cassiopeia from the Big Dipper, but we teach them the difference all the same. Why? Shouldn’t we pick battles with problems our own size? Like learning how to do taxes, for example. It will never change my life to know that watching a supernova explode would be one hundred billion times brighter than a hydrogen bomb exploding on top of my retina. It might change my life, conversely, to learn how to take off my TurboTax training wheels. It probably isn’t strictly logical to care about understanding the universe — but if you’re selling stars for taxes, I’m buying.

I’m a pathological wonderer — one who always wanted to know “why,” not just “what” — and I always have been. Born to Dave and Jenny in the land of tumbleweed, tornadoes, and college basketball (Kansas), I started asking questions as soon as I could speak. At four, on a cross-country road trip to visit our future home, I awoke with a start in the middle of the night to confirm with my parents “that my teacher in Minnesota is gonna speak English.” When I finally met Mrs. Reed a year later, I wondered aloud why “Europe” sounds like it does, when it looks like “yuh-rope.”

In short, I was that kid: a little nerdling, a ball of undying precocious energy that got in trouble, even in school, for wondering a little too much.

Maybe my most wonderful question of all came when I was five, and I got to listen in on the “big-kid sermon” at church after a particularly non-disruptive week in kindergarten. I tapped my mom on the leg. “Mom,” I whispered, “Why is God so magic?”

She didn’t know, she said. He just was.

“Oh,” I said. “Can I have a popsicle when I get home?”

This book, really, is about God’s magic and my questions. Twenty years later, I’ve had some time to explore science, religion, and the world in between; it will be your job to judge whether I’ve gotten much closer to figuring out how it all works. And, I suppose, if I deserve a popsicle.

My journey has taken me from non-overlapping magisteria to quantum physics, from the God of the Gaps hypothesis to evolutionary biology, and beyond. I started my exploration in earnest as a senior in college, but the real story begins long before then: in Minnesota, in 1998, with the nerdy little kid who just wanted to understand the stars.

Falling Slowly (Out of Faith)

As my family settled into our new home, the years, mostly memorable for their distinctly Minnesotan winters, passed. They brought me a brother, Evan; a sister, Kelly; and the newfound eccentricities that come with being a card-carrying teenager. I got braces; I wore exclusively Hollister; I played Halo 2 every day after school under my gamertag Phoenix43. (Named, of course, after my favorite scent of AXE deodorant.)

Even beneath it all, however, I was still fundamentally me.

I read everything I could get my hands on: Matt Christopher, Magic Tree House, Harry Potter. I read the latter religiously, usually as soon as I got home from the midnight release parties (and washed the lipstick-lightning-bolt scar off of my forehead). Academically, I was a serial monogamist, falling in love with new subjects at random. One year: pharaohs. The next: dromedary camels. My parents, who bore the brunt of my non-stop curiosity, fortified their defenses with the most powerful weapon in their arsenal: religion.

My parents believe in God. My dad is one of the happiest people I’ve ever met, and he draws his joy from his belief. (His refrain every morning of our family vacations: “It’s a Glorious Day!”) He once told me that the happiest people he knows are strong believers, which makes sense: believing you’ll experience infinite happiness upon your bodily death has to be uplifting. It is for my dad: my church youth group leader, and my model for how to live a joyful life. My mom also believes, but in a different way: she thinks that God’s power is more for now than later, and her Christian faith has helped her weather the storms of life. (Even if, recently, she’s loving the Zen of her yoga practice. Don’t tell on her.)

My parents are religious, as were their parents before them. That’s how it often happens, I suppose, and that’s how it was supposed to happen for me and my siblings. When my brother Evan, now a junior in college, was a teenager, he believed. He has since gone on a number of mission trips, and is the current President of the Campus Crusade for Christ (or CRU; he and his buddies call their rental home the “CRUplex”). Kelly, now 17, also believes. She’s still figuring everything out, but, like my mom, she finds strength and solace in her growing faith. I assume her faith will only strengthen as she matures into young adulthood; she and Evan both received the classic parent-child faith inheritance.

And then, me. Christian (n): follower of Christ.

By nurture, my siblings and I were identical. We grew up in the same neck of the ‘burbs, we had the same role models, we went to church on Sundays and small group on Wednesdays and said the same prayer before every family dinner. If anything, I should have had more access to trickle-down faith than my siblings, given the undivided attention that comes with being a first-born child. And yet, something about my nature made it so that believing in God was never easy for me.

It’s not that I never believed. I suppose that as a child I believed just as meaningfully as the next kid. I very distinctly remember “pledging my life” to God as a seven-year-old when John Jacobs and the Power Team, a group of bodybuilders (or, equivalently, superheroes), came to town and promised that if I believed in God, I, too, could rip phonebooks in half. Naturally, I was all in. After those earliest years, however, I underwent a transformation that was unique in my family: I transitioned out of faith, and out of believing in the God of my role models, siblings, parents, and their parents before them.

If I had to say what finally did it — what “broke” inside me and caused me to stop believing in God — it was the realization that what mattered wasn’t whether I enjoyed believing, but whether what I believed in was true. In other words, as faith became less about superpowers and more about facts, it understandably lost its luster.

And those religious facts — as I came to understand them from the Bible — seemed questionable, at best. There was the story of creation, in which a persuasive snake made Adam and Eve wear clothes. There was the story of Noah’s Ark, about which I had plenty of questions. How did they find all those species to begin with? And what about asexual animals — did they still take two on board? How early did the snails living in North America have to leave to make it to the Middle East before the flood? The moral stories of the Bible seemed equally dubious: when God asked Abraham to kill his son Isaac, he was like, “OK,” and when a woman looked back at a city burning in literal hellfire, God turned her into a pile of salt. If the truth of the Bible was what mattered, the stories that it held made religion a tough sell.

What I was buying, conversely, was the scientific method. I loved my science classes in middle school. The facts of science worked for me because they were directly observable — and simple. Scientists didn’t ask me to take on faith that the moon was made of rock instead of cheese — they went up there and let Neil Armstrong confirm as much in person. Scientists also never told me that the answers were too complicated or beyond my understanding, as the church often seemed to do. Science may not have seemed simple originally, but simplification, I learned, is what science does best. Bill Nye taught me this one:

Gravity, perhaps the most basic theory of them all, makes the unfathomable prediction that everything — from the stars in the sky to the soles of your shoes — interacts. Gravity is strong enough to hold your soles to the pavement, and amazingly, it’s far-reaching enough to make those same soles alter the motion of the North Star. The effect is minuscule, but exists nonetheless because gravity is infinitely extensible. That all sounds complicated, so it would stand to reason that explaining it scientifically should be equally complex. And yet, Isaac Newton figured out how to describe it all using just two letters, two numbers, and three lines:
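f = 1/d²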

That’s it. The f-orce of gravity equals one over the squared d-istance between you and the Little Dipper. Incredible complexity in a simple equation: the inverse-square law of gravitation. Science rules! (Bill — Bill — Bill — Bill!)

The scientific method took the incomprehensibly big universe and explained it to me: a skeptical little nerd who had just recently learned the truth about the big jolly guy who does his shopping in December. As I aged into high school and even into college, that idea — that science, not religion, might be my ticket to understanding the stars — was corroborated over and over again.

Science had answers where faith had mysteries, and facts where faith had stories. The difference, really, was that science could work for me without asking for anything in return. Supposing that I found a way to the moon — looking at you, Elon Musk — I could confirm the discoveries that Neil Armstrong made in 1969. If I found a way to the top of Mount Sinai, in contrast, I couldn’t independently happen upon the Ten Commandments (barring a gift shop at the summit). Why take answers on faith when science could help me discover the real truth?

As I grew up, that “real truth” of the world started to crystallize. As every young adult does, I had it all figured out; the story as I knew it went a little something like this.

In the beginning, God spoke the truth. He whispered into the void and brought about the heavens and the earth. God’s universe was full of wonders and majesties beyond explanation, and even though we didn’t understand them, we could count on them. The sun rose and set each day, rainbows followed storms, the tide went in and out. We were at the center of the universe, and His gifts — sunsets, rainbows, and crashing tides — were signs of His ever-presence and goodwill. God kept the wheels turning, so He was worthy of our faith.

But then, science happened.

In 1543, Copernicus shattered the “celestial orbs” by discovering that the Sun, not the Earth, is the center of our solar system. In 1859, Darwin published On the Origin of Species, and showed that life’s diversity can emerge from simplicity. In 1900, Freud published The Interpretation of Dreams, and made clear the opaque world inside our own brains. This scientific enlightenment, however, came with a considerable shadow — one that fell squarely over God’s truth of how things worked. Copernicus displaced humans from the center of the universe, Darwin displaced humanity from our place atop the Earth, and Freud displaced us from our own brains. Modern science paints humanity not as a prized creation of a loving God, but as one of nine million species living on a randomly-chosen planet among the infinitely many others in the universe.

God may have explained the world in the beginning, but science ultimately found better explanations. Led by science, humanity stepped out of the shadows and into the light.

As a teenager about to make that fateful transition into skepticism and out of faith, I felt that I was doing the same. I knew enough about religion to doubt, and enough about science to trust; the balance in the Force had shifted. I had a deal with my parents: make it to Confirmation (akin to youth-church graduation) and I could decide for myself if I wanted to remain a part of the church. I counted down the days — passing the time each Sunday by reading Revelation, by far the craziest, most inexplicable book of the Bible — and when Confirmation finally arrived, my decision was made: I stopped going to church faster than I stopped taking piano lessons.

For a long while thereafter, I didn’t look back.

Family Dinner

In those earliest years, I was a nerd — as you know — but at least part of me was still in denial. I didn’t just love books; I was an athlete! I played basketball, baseball, football, fútbol, golf. You name it, I tried it. I was even moderately decent at half of them. (Note to self: potential resume bullet: “moderately decent half of the time.”) Unfortunately, my illustrious sporting career came to an early demise when I got to high school and still stood roughly five feet tall — the closest I came to athletics during my freshman year was when my math teacher, the football coach, explained what a parabola was by mapping the trajectory I would follow if my teammates threw me over the line to block an extra point.

With sports out of the question, I knew that I needed a new way to channel my vast stores of competitive energy. After some aggressive prodding from my parents, I found myself at a tryout for the Lincoln-Douglas debate team. It was a curveball, but I immediately found out that, yes, ye with middle-school senses of humor, I was something of a master debater. Sports might not have worked out for me, but as it turned out, I was pretty good at arguing. Debate was addictive: intellectually stimulating, intensely competitive, and surprisingly physical (most competitive rounds featured debaters speaking at over 300 words per minute). I was hooked.

Nowhere was this more evident than during family dinners. We’d say a prayer, allow a grace period of unadulterated chowing down, and chat about whatever happened that day. Mostly, it would be the usual: what fish my brother caught on the lake, how my sister’s musical was going. But sometimes when I got bored, I’d turn our casual conversations towards confrontation. Exactly the way I wanted it.

During one table-side debate in my junior year of high school, I remember very clearly thinking that I was about to score a major victory. My dad was arguing against evolution, claiming that while micro-level changes within species were possible, they could never add up to a real macro-level change between species. I was arguing for evolution, of course, because I felt that I knew the science. What I really remember, though, was not my inevitable victory and subsequent crowning as the King of the Table, because that never happened. I realized about halfway through our argument that no matter what I said, my dad wasn’t going to admit defeat. He wouldn’t even budge. I told him that scientists had confirmed speciation with viruses in the lab — and he didn’t care. I said that scientists had observed evolution creating two species from one that became geographically separated by a natural disaster. Nothing. Example after example, argument after argument, and he didn’t waver.

My dad’s steadfastness was frustrating and mirrored what I had seen on a grander scale in public debates on YouTube: atheists winning arguments about science, and believers holding fast to their faith. The debates between my dad and me were always respectful, but the ones I watched online were anything but.

In the debate between religion and science, the New Atheists — a team of vocal non-believers led by Daniel Dennett, Sam Harris, Christopher Hitchens, and Richard Dawkins — were out for blood. I disliked their tone, but I sided with their argument. The New Atheists claimed that the religious faithful were blind — that they believed without evidence and even in the face of contradiction. Harris, a neuroscientist, wrote:

Tell a devout Christian that his wife is cheating on him, or that frozen yogurt can make a man invisible, and he is likely to require as much evidence as anyone else and to be persuaded only to the extent that you give it. Tell him that the book he keeps by his bed was written by an invisible deity who will punish him with fire for eternity if he fails to accept its every incredible claim about the universe, and he seems to require no evidence whatsoever.

While believing strongly, without evidence, is considered a mark of madness or stupidity in any other area of our lives, faith in God still holds immense prestige and power in our society.

That was the real problem that I had with my dad, and I told him as much. How could he deny scientific evidence? Who was he to doubt the experts?

Surprisingly, he turned that accusation right back at me. He said that he didn’t think the evidence existed — even though I knew that it did — and that if I cared to look, I would find plenty of scientists who believed in God.

I left that debate indignant. I hadn’t changed my dad’s mind — not even after I showed him some (admittedly sketchy) scientific papers (from Wikipedia) — which proved to me that his spiritual defenses were all but impregnable. My dad was an immovable object, and not even the unstoppable force of science could inch him closer to the truth.

Road Trips

As my high school years drew to a close, I had to leave debate even if debate never quite left me. I packed my bags, loaded them — and my family — into our minivan, and drove to Ann Arbor, Michigan: the best college town in America. I was a legacy at UM many times over — my mom and dad met there, and members of my extended family are scattered all over Michigan’s mitten — so I knew what was waiting for me in Ann Arbor.

A2 had the Big House, the Law Library, the Cube. It had fraternity life (I pledged Sigma Chi; my parents were the social chairs of their respective houses), the greatest tailgates in the Big Ten, and a football team that, despite being just moderately decent half of the time, had Denard Robinson. That I would rush, tailgate, and attend every remotely-drivable football game was all but a given as I came to Michigan as a bright-eyed, bushy-tailed freshman.

What was not given, however, ended up being the highlight of my college career. Although I had never been in choir and wasn’t a singer, when a friend from my dorm asked me to audition for her a cappella group, I decided to give it the good ol’ college try. I sang the only a cappella song I knew (“Sweet Child O’ Mine,” from that car scene in Step Brothers), and by some miracle, the group — the Compulsive Lyres — liked what they heard. As with debate, I had no idea what I was getting into — but after my first a cappella retreat, I knew that a cappella would end up being one of my favorite things in the world.

a ca·ppella re·treat (noun):

A twice-yearly weekend away at a rental home in northern Michigan. Common activities include: learning music, making sherberbalert, playing Drenga.

Retreat was unforgettable. We sang and fiesta’d, as advertised, but also had the in-between moments in which friendship is made. We relaxed while watching a Michigan basketball game, played catch in the side yard, built snowmen, watched Office Space. It felt like we were all present, which is a luxury nowadays. Being locked in a cabin together in the tundra of Northern Michigan (and far away from cell service, at times) could do that to you.

One semester, I rode up to retreat with Lee Gunderson, a bass and future attendee of Princeton’s Graduate Program in Plasma Physics, and Charlie Frank, a tenor and future Michigan med student. About an hour into the trip, our small talk got pretty big, and we turned to life’s largest questions — animal rights, capital punishment, Obama. Eventually, we landed on the main event, a debate topic to make the Liberal Arts Admissions Committee proud: free will.

Charlie argued that it existed, Lee disagreed, and I was just enjoying the ride. The argument started casually enough, but as the debate picked up steam, none of us were particularly willing to let it go.

“Where is my free will?” Lee yelled, exasperated. “It’s not in any of my individual atoms. It’s not in any one of my brain cells. Where is it?”

“It’s the whole thing — the combination,” Charlie countered. “Are you seriously trying to say that you don’t think you’re freely choosing to argue? You have free will, right now.”

Lee, visibly frustrated, paused — so I jumped in and paraphrased Friedrich Nietzsche, a German writer I knew from debate as the man with a philosophy as confusing as the pronunciation of his last name:

“Do we have free will right now, though? We can’t choose to be back in Ann Arbor. We can’t choose to play basketball in the car. Our choices now are constrained by the choices we have made in the past, which were themselves constrained by choices in the deeper past. Take that all the way back, with each moment depending on prior decisions and actions. When was that first, original choice? We didn’t choose to create ourselves, so isn’t everything else constrained by that creation? Doesn’t that mean we don’t have free will?”

(Aaaaaaaand like I said, Admissions Committee’s dream. I said that, in real life.)

“That actually makes sense,” Charlie said, after a moment. “But it’s literally the first thing you’ve said that has made any sense on this entire road trip.”

Ouch. Lee confirmed that he too thought I wasn’t making a ton of sense. I left them to finish their argument without me, electing to brood instead. Was I really so unpersuasive?

After we got to the rental cabin, all was, obviously, forgotten. Retreat waits for no man. We had a great trip — complete with an epic snowball fight, during which I accidentally pegged a soprano from our group in the face (sorry, Jessica) — and were thoroughly exhausted from the long days and longer nights by the time it was over. I drew the short straw to drive home on Sunday morning, and geared up for the long, lonely drive.

There are Two Immortal Truths of the Hungover Road Trip:

1. At some point, everyone else in your car will fall asleep.
2. When they do, you must have a plan in place for staying awake.

My plan was to let my mind wander. And I mean truly wander. I had time to think about everything — the fun of the past weekend, my excitement for the upcoming semester, the prospects for a Future Mrs. Christian Keil. My mind eventually wandered to less exciting memories, like the fateful debate on the car ride to retreat. Why should I have been embarrassed? Charlie’s rebuke was just a throwaway line. It shouldn’t have bothered me the way that it did.

In a (rare) moment of self-honesty, I realized why I had been so affected: Charlie was right. I wasn’t being very persuasive — or intelligible — at all. I could wax philosophic all day long, and I could surely sound like I had something to say. But I was faking it. I really had no idea why I believed what I believed, and Charlie and Lee both knew it. I thought back to similar conversations about faith and how I couldn’t add anything personally meaningful to them. I thought back to when I accepted a role as the Vice President of the Young Republicans club at my high school before I even knew what it meant to be a Republican. Although I was characteristically, if blindly, confident in my ability to make sense of topics like religion, politics, and science, I was woefully outgunned when it mattered — because I didn’t really know what I was talking about.

Was this soul-searching a rational response to an off-hand comment? Probably not. Definitely not. It changed my outlook, though, all the same. After the table-side debate with my dad, I was supremely confident that I was right about science, God, and the real truth of it all. After getting called out by my college friends and having some time to introspect, however, I started to doubt.

In retrospect, maybe I should have just woken up one of my sleepy passengers.

The Final Straw

The truth, as I understood it, was that science had replaced God. Gravity held the planets in orbit; natural selection brought about new species; science had assumed its role as Atlas and held the Earth in place — and so, God was unnecessary.

One step more fundamentally, I also thought that the tradeoff between science and God was unavoidable. Science was a worldview — a sufficient explanation of the world. Religion was another. By the nature of a worldview, then, you couldn’t believe in both. How could you believe in miracles and the laws of physics? Or the seven days and the Big Bang?

But did I really know these things, or were they just blind opinions? I hoped they were the former. But I had to find out. I was still milking a free Amazon Prime membership, so thankfully, edification was only $20 and two days away. I bought a few books online, and figured that they would be everything I needed. I’d read them, come to the inevitable conclusion that science and religion were at war and science was the winner, and move on with my life.

But, I figured incorrectly. Oh boy, did I figure incorrectly.

To my surprise, my science vs. religion narrative was — to put it bluntly — wrong. In the very first book I picked up, I was introduced to the “non-overlapping magisteria” hypothesis: the well-known, if pretentiously-named, contention that the bubbles of science and religion do not form a Venn diagram. Instead, the two domains (or “magisteria”) are entirely independent. In the words of Stephen Jay Gould, a Harvard paleontologist, “NOMA” gives science the age of rocks, and religion the rock of ages. Two worldviews, peacefully co-existing.

In my eyes, though, that co-existence was impossible; I rejected NOMA outright. I remember thinking that the idea was based on the same sort of scientific ignorance that I recognized in my dad during our table-side debate. I understood the intuitive appeal of giving morality to religion and facts to science, but the downstream implications were all sorts of unacceptable. Science wouldn’t knowingly distance itself from morality and beauty — what of the evolutionary study of moral behavior, or the science of aesthetics, or the universally acknowledged power of simplicity (an artistic quality) in scientific theory? And on the flip side, would religion really want to distance itself from the facts? What if scientists discovered DNA evidence that linked Jesus to the physical places in which the Bible said he ought to have been? Could believers really say “sorry, no, wrong magisterium”?

The above are decent arguments against NOMA. (Yay, debate!) But, no matter how many counterpoints I could dream up, there was one fact about NOMA that bothered me: it was popular. Both believers and scientists supported the hypothesis.

After my conversations with Charlie, Lee, and my dad, my straw-laden camel was struggling but his back remained unbroken. I still thought that anybody who understood science could never be a believer in God. That belief was strong, and while stewing in my liminal state between agnosticism and atheism I had no real reason to doubt. Until now. Enter: the final straw. Enter: the scientist-believer.

According to a study conducted by the Pew Research Center, more than half of all scientists believe in God or a “universal spirit or higher power.” More than half. What?! Could that possibly be true?

I hoped not. If it was, I was really in the wrong, so I thought of every conceivable way to reconcile my cognitive dissonance. Maybe those scientist-believers were just masters of compartmentalization: scientists by day, believers by night. Maybe they were believers only in the way Albert Einstein was: commonly cited as one, probably believing in some “divine” personification of mathematics, but always denying accusations that he believed in a personal God. That last point might have done it, if not for the pesky facts: other studies found that about forty percent of scientists believe in a personal God.

I even wondered if maybe this unseen majority of scientists was composed entirely of those dentists who don’t think that Colgate helps fight gingivitis — i.e., total hacks. Alas, that defense, too, was permeable. In fact, the more I searched, the more I realized that these scientist-believers actually had some total geniuses among their ranks.

German physicist Werner Heisenberg, the godfather of quantum physics, said,

I had the feeling that, through the surface of atomic phenomena, I was looking at a strangely beautiful interior, and felt almost giddy at the thought that I now had to probe this wealth of mathematical structure nature had so generously spread out before me… If nature leads us to mathematical forms of great simplicity and beauty… we cannot help thinking that they are “true”, that they reveal a genuine feature of nature.

Erwin Schrödinger, the Nobel Prize-winning quantum physicist perhaps best known for his cat, wrote:

I am very astonished that the scientific picture of the real world around me is very deficient. It gives a lot of factual information, puts all our experience in a magnificently consistent order, but it is ghastly silent about all and sundry that is really near to our heart, that really matters to us. It cannot tell us a word about red and blue, bitter and sweet, physical pain and physical delight; it knows nothing of beautiful and ugly, good or bad, God and eternity.

There were also John Lennox, a Fellow in Math and Philosophy of Science at Oxford, and Allan Sandage, winner of the Nobel Prize-equivalent in Astronomy, who believed. John Polkinghorne is a former Cambridge physicist who now serves as an Anglican priest. I even learned that Francis Collins — the leader of the first team to map the human genome and the current head of the National Institutes of Health — believes in God. He wrote:

…those of us who are interested in seeking harmony here have to make it clear that the current crowd of seemingly angry atheists, who are using science as part of their argument… do not necessarily represent the consensus of science; the assault on faith, which has been pretty shrill in the last couple of years, is coming from a fringe — a minority — and is not representative of what most scientists believe.

Back: broken. My belief in the power of science was the main reason that I didn’t believe in God. But if most scientists could find a way to reconcile the two, then why couldn’t I?

I did the math. Estimates put the number of scientists in the United States at approximately six million. Half means three million. Three million means approximately thirty Big Houses full of experts who were better educated than I was — all of whom disagreed with my assessment of science and religion. Was I really trying to say that those three million scientists — and my family, friends, and others whom I loved and respected — were wrong?
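(For reference: the Big House holds a bit more than 100,000 people, so 3,000,000 ÷ 100,000 ≈ 30.)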

The truthful answer is that I just didn’t know. I didn’t want to believe that I had been so wrong, but the evidence against my truth was mounting. I realized that unless I wanted to either admit defeat or ignore the new evidence that I had found, I would have to do something about it. So, I did. And here we are.

Implausible Deniability

As I look back over the past four years, I get nostalgic. Nostalgic for sports, debate, frat life, and a cappella, of course, but also for the way that I used to see the world. As a 21-year-old about to graduate from college, I had discovered that I was largely alone in my belief, or lack thereof. I wasn’t a believer, because I thought science was the answer. But I wasn’t an atheist, either: not only did I reject the label (because of my upbringing, atheist sounded to me like “heathen” or “satanist” might sound to you), but I also didn’t know if I was totally confident in my disbelief.

So, what was I? The default answer then seemed to be “an agnostic,” but that label felt hopelessly wimpy. I wasn’t one to accept anything by default, and agnosticism, to me, meant thinking that I could never fully understand the truth. And that definitely wasn’t who I was: I could handle the truth.

I didn’t fit into any existing category. Hence, my new one: agnostic-ish. (You guessed it!) If you’re undecided, but also curious, optimistic, and committed to figuring out what you really believe, you too are agnostic-ish. Welcome to the club; I’m not sure how many of us there are, but rest assured that at least we are in this together. We are committed, my fellow -ishers, because we know that we have to know the truth — and we believe that we can find it.

I still hoped to find a scientific truth (granted, one that confirmed what I knew and disproved God), but truth it would have to be, above all. I needed defensible answers and opinions worth believing in.

And I knew that at this point, there was no turning back. In the words of my old buddy from debate, French philosopher Jean-Paul Sartre:

What is not possible is not to choose. I can always choose, but I must know that if I do not choose, that is still a choice… there is not one [choice] which is not creative, at the same time, of an image of man such as he believes he ought to be. To choose between this or that is [to] affirm the value of that which is chosen; for we are unable ever to choose the worse. What we choose is always the better; and nothing can be better for us unless it is better for all.

There is an alternative universe in which I never wrote this book. A universe in which I decided to remain agnostic, without venturing out into the -ish where I found myself at the outset of my journey. A universe in which I would never know what I truly believed, or why I believed it. Looking back now, I would never wish such a universe upon myself — or anyone else. To that aim, it’s now time to journey forth together.

In my eyes, an unexamined faith — whether in Christianity, atheism, or even agnosticism — is no faith at all.

Let’s rock and roll.

Our awareness is all that is alive and maybe sacred in any of us. Everything else about us is dead machinery.

― Kurt Vonnegut

Part Two: Psych!

And Consciousness, and Soul Power

THE WAY I SEE IT, the rise of consciousness is among the three most important things that have ever happened. Ever, meaning, in the history of the universe.

First, everything began. The universe was smaller than the period at the end of this sentence, then exploded and became so large that we aren’t entirely convinced that it’s not infinite. This one is a given. Without the beginning, nothing would have happened.

Second, life began. The universe designed a carbon-based, self-replicating 3D printer from scratch. Again: amazing, enabling, self-evident. A natural second on our list of the most important things ever.

But even after those two things had happened, our universe was still only dead machinery. It was big and, at least on our planet, thriving — but it was just a machine. Nobody was home. The universe had expanded into infinity, but it couldn’t appreciate it; the universe had designed life, but it wasn’t truly alive. Until it designed consciousness.

It’s a relatively controversial third item, to be sure, but a natural one if you think about it. Once conscious, the universe figured out what it was. For the first time in billions of years, the universe had a mirror — a way to look back at its creation and to reflect upon how incredible its infinite expanses had become.

When I was in high school, I fell in love with psychology and the study of the brain, that miraculous bulb of dread and dream that so fundamentally changed the universe into which it was born. I learned the classics in AP Psych: Milgram’s shocks, Stanford’s prisoners, Pavlov’s dogs. At Michigan, I declared a psych major and focused on cognitive science and psychopathology. The latter — the study of broken minds — was fascinating, even if my own consciousness got in the way of truly enjoying it: I caught a bad case of “med school student syndrome,” the disorder that causes you to diagnose yourself with each new disorder you learn. Oops. (The great irony is that med school student syndrome is, of course, one of those very disorders.) Despite my reflexive pathologies, I loved my time studying psych at Michigan. The brain is fascinating, and seemingly infinite.

Most college love stories are short-lived, however, and my affair with psychology was no different. Psych was academically fascinating, but the career prospects were, shall we say, less than. So, when I left college, I chose business — exploiting my economics major and statistics minor to secure an offer in management consulting — and more or less left psych in the dust.

As a newly minted consultant, I found life decidedly non-academic. From day one, you’re asked to head to a client site and advise people who have been in their careers longer than you’ve been alive. You have to get up to speed fast, no matter how complicated the industry or business problem in which you find yourself. I loved consulting — it was fast-paced, and I learned a whole lot in a very short period of time. But, even so, I still needed an outlet for my academic energy. I still read every night before bed — no matter how late I got back to my hotel — and despite posting periodic life updates on LinkedIn, I had yet to fully quench my thirst for reading, writing, and wondering. When I realized that I could rekindle my relationship with psychology through the process of writing this book, I was pumped! My degree would come in handy after all, and in a (nearly) real-life context, no less.

My realization that psychology and the study of consciousness had a lot to do with religion came, strangely enough, from the Bible. I had just finished John Lennox’s Seven Days That Divide the World, a short book about how a modern scientist might interpret, and still believe in, Genesis. The book was intriguing enough to make me want to do my own research, and I ambitiously set off to read the entire Bible, despite not having picked up the good book in years. One night after work, I started in the beginning and was immediately blindsided by inspiration. In fact, I got so side-tracked that I never made it further than the 26th verse of Genesis 1:

Then God said, “Let Us make man in Our image, according to Our likeness; and let them rule over the fish of the sea and over the birds of the sky and over the cattle and over all the earth, and over every creeping thing that creeps on the earth.” God created man in His own image, in the image of God He created him; male and female He created them. God blessed them; and God said to them, “Be fruitful and multiply, and fill the earth, and subdue it; and rule over the fish of the sea and over the birds of the sky and over every living thing that moves on the earth.”

I was struck by the sheer insistence that we were made “in God’s image.” That it was thrice repeated was enough to make me wonder: what is God’s image? What do we have that other animals do not?

The only logical answer that I could surmise was that our conscious minds distinguished us. Was there really any alternative? We are, as far as we know, the only conscious beings in the universe. Other organisms are more prolific (bacteria, ants), longer-lived (turtles, aspen trees), and larger (blue whales, fungi). Some animals even have larger brains — but our brains have more gray matter than any other species. That’s the brain stuff that makes up our cortexes and is the seat of our higher functions like, say, consciousness. More simply, it seems to me a patently ridiculous idea that God would have made us superficially look like him — I don’t think that God has arms or legs or a beard — but far more believable that we should think as he thinks: with intelligence, with a moral sense, and with conscious self-awareness.

Whether or not we are “made in God’s image” is a question that science probably could never answer — but whether we are conscious is an entirely different story. That is a question that modern psychology can handle, and from my perspective, a weak point in the armor of the religious story.

What if I proved that consciousness could be entirely explained by science? In that case, science wouldn’t need God’s help, and the claim that a supernatural being had implanted self-awareness in our brains would be fanciful, but untrue. Similarly, what if consciousness were purely physical? Part of the religious claim also seemed to be that this God-like mental resemblance could survive our physical deaths. Everyone who believes in God “shall never perish, but have eternal life.” If our selves were confined to this purely physical, materialistic consciousness, would that disprove the claim that God staked over our immortal souls?

I wasn’t sure, but the possibility of finding hard physical evidence against such crucial pieces of the religious belief system — and in a subject that I understood well, no less — was enticing. So, just like that, I jumped back into the science of psychology that I loved, intending (perhaps naively) to disprove God’s story.

The Delusion

The story of religion, science, and consciousness starts back in the day. And I mean way back in the day, when we thought that brains were pretty worthless.

The ancient Egyptians famously pulled out their Pharaohs’ brains before mummification. In the thousands of years after the last Pharaoh (Cleopatra), humanity managed to discover algebra, refraction, supernovae, magnetism, and circulation, but even then, our barbarically incorrect understanding of the brain persisted. In 1377, the Bethlehem Royal Hospital (better known as “Bedlam”) became one of the first institutions to treat mentally ill patients. Patients at the hospital were treated more like prisoners of war than people in need of help: doctors would strap patients to “The Chair” and spin them until they passed out from dizziness, or subject patients to “trepanation,” drilling holes in their heads à la Saw VI (just because).

Perhaps the first notable breakthrough in understanding the brain mercifully came in the 17th century, courtesy of Frenchman and philosopher René Descartes — the man behind cogito ergo sum: “I think, therefore I am.”

Descartes was a dualist, which means that he thought that the mind and the body were two fundamentally different substances. The body was tangible and interacted with the world through feet on pavement and fingers on guitar strings. The mind, conversely, only thought — to Descartes, thinking was the “principal attribute” of the mind. Minds were not physical objects, then, but ephemeral thought-bearers.

As it turns out, this “one idea per millennium” quota for consciousness research seems to still be the trend. If you agree with Descartes’ dualism, even today, you are in the majority. One Belgian study found that 60% of people believe that the mind and body are distinct.

I imagine that even more people implicitly believe in dualism than would admit it outright — take the modern perceptions of mental and physical health, for instance. On a work project a few years ago, a Senior Consultant on my team sprained a bone in his foot and had to wear a cast over his business casual attire for a couple of months. My team was very supportive of (let’s call him) Kevin and his limited mobility: we’d drop him off at the front door after our daily carpools, and we’d (normally) remember to take the elevator with him instead of the stairs. Contrast that with how a modern business might handle a breakdown in mental health. I have no personal anecdotes for this one, but I can’t imagine that most companies allow for “depressed days.” Even family members and otherwise loving, supporting friends might tell someone who suffers from anxiety to just “shake it off.” I can’t imagine what would have happened if we had said the same thing to Kevin. The stigma of mental illness is almost surely a result of an implicit dualistic mindset; how else could the two types of disease be treated so differently?

The way that I saw it, modern psychologists, in contrast to the general public, thought that while it is definitely a special kind of matter, the brain is still just matter. Mental illnesses are physical illnesses because the brain is just as physical an object as your femur. Neurotransmitter levels can be out of whack just as your sinuses can be clogged. This faux-distinction between matter and mind is a “dualistic delusion”: we think our minds are distinct from our brains. But time has taught the scientific community that dualism isn’t true.

As far as modern science knows, there is no way for an intangible mind (or “consciousness,” or “soul”) to interact with a physical brain. There isn’t a receiver in our brains that can tune in to the channels of an intangible ether. Psychologists know that the brain is such a highly dispersed and decentralized network that there isn’t even a command center that could use “soul signals,” even if they existed and could be received. Scientists used to think that there was perhaps a “homunculus,” or “little man,” observing the internal theater of our mind’s eye and calling the shots. If there were, how could that brain-based little man even affect the body?

Descartes realized that such a question could be a problem for dualism when he uncovered what is now known as the “mind-body problem”:

[It is unclear] how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts — being as it is merely a conscious substance. For the determination of movement seems always to come about from the moving body’s being propelled — to depend on the kind of impulse it gets from what sets it in motion, or again, on the nature and shape of this latter thing’s surface. Now the first two conditions involve contact, and the third involves that the impelling thing has extension; but you utterly exclude extension from your notion of soul, and contact seems to me incompatible with a thing’s being immaterial.

The mind-body problem is a devastating philosophical problem for dualism, and the trouble it causes is supported by physical evidence. Some “neural” activities, for example, never even make it to the brain. If you put a hand on a hot stove, your reflexes will pull your hand away before your conscious mind has even recognized that your body is in pain. Mr. Homunculus would be a powerless dude, if he did exist.

The progression above is more or less the history of psychology as I understood it, painted with extremely broad strokes. And, if you squint, it seems to suggest a diverging trend.

On one hand, time has taught us that our brains (or more generally, our nervous systems) are amazingly powerful. That trend is uniformly positive, and shows the perceived importance of our brains steadily rising over time. On the other hand, it feels like consensus over what is happening between our ears is following a parabolic curve. Scientifically, we know that our brains are just stuff. Intuitively, that idea makes absolutely no sense. How could my conscious experience of the world be the inevitable result of neurons bumping around in my head? How could dumb atoms careening around my cranium form anything resembling free will, or a consciousness, or a soul that could survive physical death?

Those questions, it turns out, are extremely hard to answer. In fact, the latter question has been appropriately deemed the “hard problem of consciousness,” and stems from the simple fact that, to you and everyone else in the world but me, my brain is ineffable.

What Is It Like?

Matt Murdock understands ineffability. He is a lawyer by day, a Marvel hero at night — and he’s blind. In the premiere season of his show Daredevil (highly, highly recommended), Matt, questioned by his soon-to-be love interest, explains both his blindness and his other super-senses in one fell swoop:

I guess you have to think of it as more than just five senses. I can’t see, not like everyone else, but I can feel. Things like balance and direction. Micro-changes in air density, vibrations, blankets of temperature variations. Mix all that with what I hear, subtle smells. All of the fragments form a sort of… impressionistic painting.

“Ok, but what does that look like? Like what do you actually see?”

A world on fire.

Murdock’s world is difficult to imagine. We intuitively understand super-sight, or super-hearing, or super-touch — but to be able to do all of those so well that we could not only walk around a busy downtown street but also hear the heartbeat of the guy we’re chasing while jiu-jitsuing a charging baddie is enough to challenge our understanding of what it would be like to be Matt Murdock.

His metaphor helps. We can feel the warmth of the flames, understand the blur that must exist around the edges, and hear the subtle pops and crackles that give us hints of what is engulfed before us. But even then, it’s just a metaphor and inevitably falls short of the truth. As a potential consciousness-shifter, consider what it would really be like to be blind. It’s normal to imagine blindness as darkness, but close your eyes (after finishing this paragraph). Blindness isn’t the darkness that you see in front of your closed eyes; blindness is the absolute nothingness that you “see” out of the back of your head. Or out of the bottom of your feet. Blindness isn’t like darkness at all; it’s like nothingness.

All that is caveated with “as far as I know.” I only know what it is like to be a seeing, hearing, smelling, touching, tasting Christian Keil, and that’s ineffability: I can never know what’s going on in your head, and you can never know what’s going on in mine. I can guess what it’s like to experience something totally foreign to me, like, say, echolocation, but I will never experience how it feels to be a bat. As silly as that sounds, I found that scenario to be more or less the foundation of modern consciousness research. Thanks to NYU professor Thomas Nagel, psychologists now wonder: “What is it like to be a bat?”

Bats echolocate; sharks detect electrical fields; dragonflies see ultraviolet light. Examples of animal super-senses are well-worn. We all intellectually know that other animals are sensitive to the world in ways that we are not, but again, the distance between the intellectual and the experiential is vast. Even if I can imagine that echolocation is similar to being really good at hearing, I’ll never really know the ease of flying through a dark cave without a second thought. That ease is a “something extra” that only comes with the conscious experience of being a bat. Scientists usually refer to these “somethings extra” as “qualia,” the unfathomable, indescribable extra feelings that accompany conscious perception. Qualia are notoriously difficult to succinctly define, which is the point — and the problem. Philosopher and cognitive scientist David Chalmers explains,

When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience. …Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? […] Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

All in all, the story of psychology was already becoming more convoluted than I originally bargained for. I wanted to find the simple truth about consciousness and exorcise any supernatural influence from the world between our ears. But there were just so many open questions posed by the complicated story above.

How could physical phenomena — just light frequencies bouncing against your retina — bring about ineffable qualia? Any physical explanation of the redness of red seemed doomed to fail. But wouldn’t that mean that the scientific explanation of consciousness as merely physical was lacking in some way? Would that mean that dualism really wasn’t a delusion at all?

What I really wanted to know was whether what I had identified as the scientific answer — my grand history of psychology that culminated in a rejection of dualism — was really what all psychologists believed. What was that one, unifying truth of consciousness that I could use to contrast, and ultimately disprove, the religion story?

In my subsequent research, I found four candidates.

Consciousness is Computation

The first theory I happened upon was inspired by Alan Turing, the mathematician who broke Nazi codes, became the godfather of the modern computer, and, in his free time, dreamed up one of the best thought experiments in the history of consciousness research. In 1950, Turing posed a difficult question in the philosophical journal Mind: “Can machines think?” Turing wasn’t the first to ask that question, but he was the first to devise a simple way to answer it.

The “Turing Test,” as it would become known, is straightforward. On one side of a chatroom — think AIM — is a panel of judges, and on the other, both chatbots and real people. The goal is for the bots to convince the judges that they are human. This may seem impossible, but amazingly, the Turing Test has already been passed. Eugene Goostman, a bot claiming to be a young boy from Ukraine, convinced a panel of judges that he was a real boy. The question to ask, then: is Eugene an “it” or a “he”?
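If it helps to see the setup stripped to its bones, here is a toy sketch in Python (the judge functions and canned replies are stand-ins I made up, not anyone's real chatbot or judging panel): the judge chats blindly with a hidden partner, then guesses what it was.

```python
import random

def run_imitation_game(judge_ask, judge_guess, bot_reply, human_reply, rounds=5):
    """One toy session of Turing's imitation game: a judge chats blindly with a
    hidden partner, then guesses whether that partner was a bot or a human."""
    partner_is_bot = random.choice([True, False])   # hide the partner's identity
    respond = bot_reply if partner_is_bot else human_reply

    transcript = []
    for _ in range(rounds):
        question = judge_ask(transcript)            # the judge types a message
        transcript.append((question, respond(question)))

    return judge_guess(transcript) == partner_is_bot  # did the judge get it right?

# A bot "passes" if, over many sessions, judges do no better than a coin flip.
judge_was_right = run_imitation_game(
    judge_ask=lambda transcript: "What did you have for breakfast?",
    judge_guess=lambda transcript: True,            # this judge always says "bot"
    bot_reply=lambda q: "Oh, the usual. Why do you ask?",
    human_reply=lambda q: "Toast and too much coffee.",
)
```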

Turing argued in his paper that if a bot were to pass his test, then we couldn’t say that the bot couldn’t think. The double-negative is annoying, but necessary: it’s the conclusion of ineffability. Even though we know the code that built Eugene, we don’t have any evidence to deny Eugene’s claim that he is conscious. How could we tell him that he isn’t just a normal boy, when he says that he is? And — perhaps more interestingly — why doesn’t his opinion matter?

The ultimate proposition is that if computers can be conscious, perhaps we are just very complex computers running the program of “consciousness” for ourselves. Those computers — we — would be nothing more than physical stuff, even though we call ourselves conscious.

What, really, is the difference between us and Eugene? Eugene uses his chat window as an input, and his complex background programming translates those inputs into the action of writing a response back to his interlocutor. Our brains take in sensory inputs, translate them into thoughts, and turn those thoughts into actions. We are more complex than Eugene, of course, but is that a difference of degree or of kind? Could we just be computers, following the instructions of our DNA and experiencing consciousness as any sufficiently complex machine should?

Some believe that machine-consciousness is impossible — like John Searle, a philosopher who invented a thought experiment to prove his point.

Suppose that you are locked in a room with nothing but a set of instructions to process written Chinese text. Someone slips you a page of Chinese symbols under the door. Your job is to follow the instructions, identify symbols simply by their shapes, and thereby write a letter (in Chinese) that you can then slip back under the door in response. From the point of view of your Chinese compatriot outside the room, your answers are indistinguishable from a native Chinese speaker; nobody looking at your answers can tell that you don’t speak a word of Chinese.

The point, according to Searle, is that a machine might be able to process instructions without actually understanding them. And it’s a compelling argument — at least at first blush.

If I ever met Searle, however, I’d ask him one question: what is the analog to consciousness in his room? Is it just the person — who, granted, does not speak Chinese — or “the room” as a unit? If the latter, then I would say that the machine of “the room” (including both the instructions and the person) does understand Chinese. When you bring the analog back to the brain, it sounds silly to say that just because one part of the room (the person) doesn’t understand Chinese, therefore the whole room doesn’t understand Chinese. As we know, there is no homunculus. Our brains are distributed systems, with knowledge residing in particular regions of the brain but still wholly integrated with the rest of the system. If a brain can be said to understand Chinese, it would appear that Searle’s Chinese Room can as well. And, if so, computers are no better than brains at avoiding ineffability.

Perhaps computers don’t have qualia, as Searle would undoubtedly argue. Perhaps Eugene really is just an impostor, and doesn’t have an interior life of his own. But, if you ask him if he has qualia, he says yes, and you believe him, where is the disconnect? And wouldn’t you be mad if someone accused you of being a computer?

Turing suggests that there might not be a difference between computation and consciousness, and if that is so, the idea that we are somehow more than physical stuff — a hypothesis in which religion holds a serious stake — might be in serious trouble.

Consciousness is a Quantum Decoder

This second theory requires a two-minute intro to quantum mechanics: please make sure your seatbelt is fastened, and keep your hands and arms inside the vehicle at all times.

Let me start by saying that quantum mechanics makes no sense, and somehow, that’s the point. It shouldn’t, not to us macro-people living in a macro-world. Our world behaves according to the “classical” laws of physics that we all know and intuitively understand — force equals mass times acceleration, momentum is conserved, etc. This makes the normally-sized world predictable, or “determined”: we know how pool balls will bounce off of each other once we strike the cue.

If you shrink those same pool balls down to the size of an electron, though, predictability is lost. It’s difficult to say where they will be — or even where they are. One of the fundamental ideas of quantum mechanics — “superposition” — says that things don’t even necessarily have to be in one place or another. They can simultaneously be here, and there, and traveling from here to there — with all states “superposed” over each other and equally true even when they’re mutually exclusive. This isn’t just analogy. At the quantum level, things can literally be in multiple places (or in quantum-speak, exist in multiple “states”) at the same time.

The ultimate implications of superposition, and really all other quantum phenomena, are largely unknown even to those who know the most; we Muggles should probably be excused for mistaking quantum physics for hieroglyphics.

Amazingly, though, scientists have started to make some sense of this counterintuitive quantum world. Quantum mechanical calculations are now present, according to biophysicist Werner Loewenstein, in “transistors, tunnelers, magnetic resonance imaging in hospitals, [and] superconductors.” He estimates that thirty percent of the U.S. gross national product today depends on our understanding of quanta.

So, some of it must make some sense, right? As it just so happens, superposition is an intriguing possibility for explaining consciousness.

The modern computers that descend from Alan Turing’s famous Nazi-code-cracking machines work in standard units called “bits” — the ones and zeroes behind the digital world. At the most basic level, computers are just bit-manipulators. All else is interface. Bits are static and binary: they’re either “1” or “0”, and they stay that way until deliberately altered. In almost all conceivable cases, these binary and static “limitations” are hardly limiting at all: modern supercomputers can process nearly all conceivable computational tasks in a reasonable amount of time. Some extreme calculations, however, can still take more power (or more time) than our bit-based computers can handle, e.g., factoring huge numbers. The largest such number ever factored, known as “RSA-768,” is 232 digits long and took over two years of computation by 1,000 processors to solve.

In contrast, scientists believe that fully quantum computers based on quantum bits (or “qubits”) could factor the same number around one hundred million times faster than a standard computer — meaning, in under one second. The secret is superposition. Where a bit has to be one or zero, a qubit can be both one and zero, with both states held simultaneously in place. This makes qubits exponentially more efficient than bits — a quantum computer with 30 qubits can represent the same amount of information as a classical computer with 2^30 (about 1.1 billion) bits.
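To get a feel for why that exponent matters, here’s a quick back-of-the-envelope sketch of my own (just arithmetic, nothing official): describing an n-qubit state on a classical machine means tracking 2^n complex amplitudes, and each additional qubit doubles the bill.

```python
# Rough illustration: the classical bookkeeping needed to describe n qubits.
# Every extra qubit doubles the number of amplitudes you have to track.

BYTES_PER_AMPLITUDE = 16  # one complex number stored as two 8-byte floats

for n_qubits in (30, 40, 50, 60):
    amplitudes = 2 ** n_qubits
    memory_gb = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n_qubits} qubits: {amplitudes:,} amplitudes (~{memory_gb:,.0f} GB)")

# 30 qubits already takes ~17 GB to describe classically; every additional
# 10 qubits multiplies that figure by roughly a thousand.
```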

Today, such super-powerful quantum computers are still impossible to build because we don’t yet know how to translate quantum bits into “classical” results. But just because we don’t know how to make them doesn’t mean they don’t exist. As crazy as it sounds, one modern theory of consciousness is that our brains are real-life quantum computers.

When I first discovered this theory, I was skeptical because we hardly understand quantum theory or consciousness, and “not understanding” is an unreliable common denominator. “We don’t understand quantum computing” and “we don’t understand consciousness” aren’t enough to conclude that the two are the same thing. In a way, this is the same sort of reasoning that Mary Shelley used when she created Frankenstein: we didn’t understand biological life in 1818, nor did we understand electricity, so she figured that perhaps they could be the same thing. We now know that electricity and sewn-together body parts do not a life make; a similar progression might eventually hold for quantum computing and consciousness.

As I read more, however, I saw that the theory was receiving some scattered support from the physics community. Results are still inconclusive, but scientists are actively searching for a link between decoding superposed qubits and consciousness. Even Roger Penrose — mathematical physicist, member of the Royal Society, and best bud of Stephen Hawking — proposed his own theory that “microtubules” within our brain cells could do the necessary quantum decoding.

In a remarkably full-circle way, the brain-as-quantum-computer theory could actually help explain the ineffable sensation of conscious free will — and offer a response to the common objection to the theory that we are simple, non-quantum computers. That first theory, you may think, doesn’t do justice to our brains because consciousness certainly feels like more than just computation. But could the unpredictability of the quantum world explain that sensation? What if our free will is just a uniquely human ability to collapse quantum states of superposition into classical physical results like “choosing to do a cartwheel”?

“Willfully decoding superposed cerebral qubits” might not sound like any idea of consciousness that you’ve heard before — but it has the scientific community excited. And if it’s true, the idea of a non-physical consciousness would then be false: we wouldn’t need a soul, because we’d have our “microtubules.”

Consciousness is a Universal Constant

And now: exhale. I promise that that’s the last of the super sciencey stuff — at least for now. This third answer is, shall we say, less than sciencey. There are plenty of folks who believe that it’s true, but it seems less like a real solution, and more like a “why not” kind of answer. This theory, as I came to understand it, is probably best explained by analogy.

I am a millennial, which is to say that I got a lot of participation trophies growing up. Baseball in the summer, football in the fall, basketball in the winter, golf in the spring; at the end of every season — thanks to the doting Baby Boomers who ran my sports leagues — I would invariably walk away with some hardware, whether I had won or lost every game. It was only later, somewhere around the early 2010s, that people started to realize that when everybody’s “special,” nobody is. If everybody has the same label (e.g., “participant!”), that label is effectively meaningless. This is a realization that society made together, uniting us all in our general disdain for the Age of the Participation Trophy. Those meaningless trophies can make kids entitled, less likely to work for distinction, etc. It’s enough to make you wonder whether giving out no trophies would be even more effective than giving them out to everyone.

Importantly, though, this logic only works in one direction. If you’re a kid who gets a trophy, you would be right to wonder if it means anything at all. But, if you’re a kid who doesn’t get a trophy, you would probably be wrong to think that you were a winner. Odds are good that you’re just a big ol’ loser. Tough luck! Maybe you’re in some progressive league that actually took my philosophical musing above to heart. But probably not.

The theory of consciousness as a constant is like the kid who didn’t get a trophy but called himself a winner. It uses absence as evidence of everpresence (try saying that three times fast) — which usually doesn’t work out in the end.

The way that this theory tried to make that argument goes like this: consciousness is very difficult to find in the world; we can’t pin it down. It’s not in any particular atom, it’s not in any particular area of the brain, and it’s not even necessarily in the brain at all. Yet we know that it must exist. So, proponents wonder, could consciousness just be everywhere? Perhaps we can’t isolate it because it’s everpresent — a constant in the universe like gravity, or magnetism. Eureka!

This idea is very new age and groovy. If consciousness is a constant, we don’t own it or control it; rather, we can tap into the force of consciousness that exists all around us. If that sounds like a religious idea to you, you’re not alone. Just listen to how proponents talk about the theory, like quantum physics Ph.D. Amit Goswami:

The source [of creativity and intentionality] is consciousness itself… the subject that we become in a creative experience. The subject, that creative self, which sometimes we call by a very holy name, like the Holy Spirit — the spirit or the spiritual in us — is that which is the source to which the creativity become apparent. The insight comes, and the insight comes in the form of new meaning.

Or, if you prefer a Ph.D. in systems theory, Dr. Ervin Laszlo:

We are bodies in the sense of what I call the external read-out. We are more than just the body because when we sense the world, we sense the world not as a body, we sense it through our body — but we sense it as a mind. It comes to us as our consciousness… We are part of a much larger whole and if you consider this newer approach from the new science, you will see that the individual body is a complex wave, very much part of the whole. It is just an illusion that we are separate.

In my eyes, the kid who assumes that what he can’t see must be everywhere is wrong. He just didn’t get a trophy (or a conscious mind). Why can’t the conclusion just as easily be that consciousness is nowhere? Yet, the theory has support — and is another candidate answer, in the end.

Consciousness is a Pattern

This final theory was proposed by my favorite nonfiction author, and I was quickly drawn to it because it wonderfully balanced its awe for the ineffability of consciousness with its scientific insistence on the verifiable truth. The idea: that consciousness is just a complicated, looping pattern of brain activity.

I met Douglas Hofstadter, a professor of cognitive science, by reading his best-selling book Gödel, Escher, Bach. In his 700-page magnum opus, Hofstadter considers the “Eternal Golden Braid” that runs through minds, mathematics, machines, and music — or, less alliteratively and more directly, he observes that some interesting and counterintuitive conclusions can come from systems that loop back on themselves. The three geniuses in his title illustrate the universal perspective Hofstadter brings to this hypothesis: in the book, he unites the mathematics of Kurt Gödel, the impossible paintings of M.C. Escher, and the contrapuntal canons of J.S. Bach in one fell swoop. In a later book, I Am A Strange Loop, Hofstadter extends his analysis of self-reflexive systems to include our brains and the experience of consciousness. His idea, put too simply, is that consciousness is just a strange loop of neural activity: unique in its pattern, but still fundamentally the same type of stuff as other neural behavior. If he’s right, we wouldn’t need microtubules or consciousness fields — our brains would be enough.

Although I strongly, strongly recommend that you all go out and buy GEB or I Am A Strange Loop to get the full effect of Hofstadter’s brilliant, playful, empathetic style, I’ll do my best to show my own version of his theories here — both as a teaser, and to show how his ideas have fared after marinating for a few years in my own strange loop. I did more research after learning his hypothesis, and happened upon a three-part structure as the best way to explain the story.

Part one is awareness. All species of life are aware of the world to some extent through their senses. Algae have photoreceptive spots, Venus flytraps have feelers, bats have echolocation. There are certainly degrees of awareness, and as species become more aware, they are (generally) more likely to thrive in the world.

Part two is cognition. Awareness gathers information from the environment; cognition processes that information into useful inputs that can guide an animal’s behavior. Cognition includes functions like memory, attention, judgment, problem solving, language, concept formation, and social behavior. All of these skills are externally-oriented and modular. I either have my long-term memory or I have Alzheimer’s, but not both. I can either speak Spanish, or no lo comprendo. The more cognitive skills that a species can gather, then, the more likely they are to thrive in the world.

Part three is consciousness, and it is fundamentally different from — if related to — parts one and two. Understood simply, consciousness (as a strange loop) is the ability of the brain to perceive, understand, and manipulate itself. Where awareness and cognition are external, consciousness is internal — it’s a reflexive recognition that I, a strangely-looping conscious mind, exist. Whenever we ask ourselves the question “Am I conscious right now?” the answer is always yes. But what happens when we don’t ask that question? It is perhaps easier to see that such loops exist in others than it is to see them in ourselves.

Young children can only recognize themselves in the mirror — or, they loop and think “I exist” for the first time — when they’re 15–18 months old. It takes even longer for children to realize that similar “loops” are present in others. This realization, that not only am “I” a person but also that other people have their own internal realities, is more simply known as the Theory of Mind. The next time you see your three-year-old cousin, tell her the “Sally-Anne” riddle:

Sally has a basket in front of her, while Anne has a box. Sally puts a marble into her basket and then leaves the room. While she is gone, Anne takes the marble from Sally’s basket and places it in the box. When Sally returns, where will she look for the marble?

Your cousin will almost surely say the box — which of course is incorrect, because Sally would assume that the marble is where she left it. Your cousin hasn’t yet gained the skill to know that everything in her own head isn’t in Sally’s head as well. Most children develop Theory of Mind when they are three to four years old.
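If the looping language feels abstract, here is a toy sketch of my own (not anything from the psychology literature) of what passing the Sally-Anne test requires: keeping a model of Sally’s belief that is allowed to disagree with your own knowledge of reality.

```python
# A toy model of the Sally-Anne test. Passing it requires tracking Sally's
# (false) belief as a separate thing from what you know actually happened.

reality = {"marble": "box"}            # Anne moved the marble while Sally was away
sallys_belief = {"marble": "basket"}   # Sally never saw the move

def answer_without_theory_of_mind():
    # The three-year-old answers from their own knowledge of the world.
    return reality["marble"]

def answer_with_theory_of_mind():
    # The older child models Sally's belief instead of their own.
    return sallys_belief["marble"]

print(answer_without_theory_of_mind())  # box    (the wrong answer)
print(answer_with_theory_of_mind())     # basket (the right answer)
```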

Eventually, after years of Theorizing about Mind, most children (and adults) don’t even recognize that they are describing a looping thought pattern when they say “I think that Sally thinks that the ball is in the basket.” But, of course, those types of statements are loops. I know that you know that John stole the ball. John thinks that I know that you know that John stole the ball. And so on, ad infinitum.

Looping, it seems, is an infinitely extensible skill. There isn’t a “part four” for that very reason. Loops loop for as long as they need to, and are limited only by the computational power of the brain. Luckily, we only need one loop — this time an internal one — to realize our own consciousness. To paraphrase Descartes: I am a Strange Loop, therefore I am. Or, in Hofstadter’s words: “In the end, we self-perceiving, self-inventing, locked-in mirages are little miracles of self-reference.”

In full-disclosure mode: the above is largely Christian Keil Original Material. I couldn’t find any scientific studies to link “looping” of the Sally-Anne variety to Hofstadter’s Strange Loops — or really any mention of Hofstadter’s theories in the mainstream consciousness literature at all. Even so, I loved the theory, which makes it worth mentioning here. I found its logic sound and its applications compelling; but perhaps the thing that I loved most about the theory was less proof and more poetry (as Hofstadter would love to hear, I’m sure): I loved the parallelism between the brain and consciousness itself.

The human brain is unique not because of its materials — 99% of the brain is made of just oxygen, carbon, hydrogen, and nitrogen — but because of its intensely interconnected and decentralized structure. As mentioned before, elephant brains are far heavier than human brains, weighing in at over eleven pounds, but hold only eleven billion cortical neurons. Human brains have over 23 billion cortical neurons despite weighing just three pounds on average. The power of the human brain lies in its efficient pattern, not its raw particles.

Isn’t that mantra — patterns, not particles — parallel to what Hofstadter suggested is true of consciousness? There might not be a special field floating in the ether, nor a bunch of microtubules decoding quantum mechanics, but we do have brains that can perceive themselves. And, as the construction of a brain from humble chemicals suggests, consciousness is no less exceptional for being just a pattern.

If you didn’t like the three theories above, this one may feel better — we aren’t just particles, we are patterns. Miraculous, golden patterns woven into the same braid as mathematics, machines, and music. We might not be any more than the physical patterns in our brains, but that doesn’t mean that we shouldn’t be in awe of (and humbled by) the miracle of our existence.

Or Are We Just Worms?

You may have noticed it around the time that I started talking about parallelism and patterns: the above theories, however interesting and in vogue, have a surprising dearth of hard, physical evidence. I set out to find one indisputable theory backed by the weight of hard science — but was left with the above: four “soft” theories, and no clear way to weigh their relative merits.

The harder the science, the more concrete its results. The best science is based on objective observation by impartial observers: it uses repeatable experiments to generate evidence that can be used to support or deny falsifiable theories. Good science, then, can be said to reduce: to take things that are fluffy (e.g., “souls exist”) and make them solid (e.g., “human brains have more grey matter than any other species”). The science webcomic xkcd described the order of reduction in the sciences as moving from sociology to psychology to biology to chemistry to physics to math — author Randall Munroe did so with tongue in cheek, but he summarized how I think the chain basically goes. With each reduction, you move closer to the hard truth.

As I found, however, the study of consciousness is soft — what with the Chinese Room, entirely fictional quantum computers, free-floating consciousness, and patterns trumping particles. This softness seems to stem from the problem of ineffability that I found even before diving into the four candidate theories detailed above. An old Yiddish proverb (that I once heard Malcolm Gladwell use) describes the challenge well:

To a worm in horseradish, the world is horseradish.

We are stuck in our own heads, which means that we are stuck inside the very things that we hope to understand. We can’t even imagine a non-conscious way of interacting with the world; we’re so steeped in consciousness that some even suggest that the world is consciousness! The father of modern psychology, William James, once likened the study of consciousness to “trying to turn up the gas quickly enough to see how the darkness looks.” We, as conscious humans, may simply be outgunned.

And I, also a conscious human (and not a chatbot, as far as you know), came away from my search for the one scientific truth of consciousness empty-handed. Religion had withstood this first advance almost entirely unscathed — I couldn’t say that psychology disproved the Christian idea that we (or at least our souls) are non-physical. If anything, my favorite theory suggested that perhaps it wasn’t the hard particles that mattered, but the patterns of our brain activity — patterns that theoretically could be copied to outlast our physical deaths.

Rationalizing this failure was difficult. Why couldn’t I find one answer to rule them all? After taking some time to reflect on my predicament, I (to use a psych concept) externalized the blame. It wasn’t a problem with my research, but rather with the study of consciousness itself. Psychology was just too soft.

If I wanted to really disprove the religious worldview, I thought that I would need to do so with a science that offered more concrete results: think indisputable theories, years of hard research, and scientific consensus. Or, think biology.

Biology was certainly a more concrete science than psychology; of the human anatomy, we surely know the least about our brains. Biology was also the epicenter of possibly the most direct (and epic) battle in the war between science and religion: evolution. With one faith-based answer in creationism and one scientific answer in evolution, I thought that biology might prove a more fertile battleground. And in retrospect, I was right. But not for the reason that you might think.

[I don’t] think we came from monkeys, by the way… That’s another piece of garbage. What the hell’s it based on? We couldn’t’ve come from anything — fish, maybe, but not monkeys. I don’t believe in the evolution of fish to monkeys to men. Why aren’t monkeys changing into men now?

― John Lennon

Part Three: Darwin

And the Mysterious Origin of Life on Earth

WE ARE ALREADY LIVING IN THE FUTURE. Thanks to modern medicine, we are living 50% longer than we were 100 years ago. Violence is at an all-time low. Transglobal communication is commonplace and often taken for granted. And, as Twitter ex-CEO Dick Costolo said at my college commencement, we now have the Internet in our pants.

That latter point shouldn’t be underestimated. Not only do smartphones give us the ability to pull from the collective knowledge of the entire world at any moment, this technology — that just 50 years ago would have taken up an airplane hangar — can now fit in the front pocket of your skinny jeans. What a time to be alive.

As my parents often remind me, however, there may be a dark side to the accelerated pace of modern life: our attention spans are shrinking. We no longer read or write long-form prose; instead, we process information 140 characters at a time. We don’t wait for anything anymore; we call ahead to skip the Chipotle line and binge-watch the new season of House of Cards in a single weekend. That we are losing our patience in an instantly-gratifying world is an easy argument to make. Far more challenging, however, is precisely diagnosing the problem. Is it purely sloth? Or has modern technology actually turned our brains into mush?

Regardless of the diagnosis, one thing is clear: with the world zooming past, it’s hard to focus on anything other than the immediate future and past. It took the phone system 75 years of laying cable to reach 50 million users; it took the first Angry Birds app just 35 days to reach the same milestone. I think it would surprise my sister to know that when I was born, the Internet didn’t even exist yet. The world is accelerating, and only very few among us can see further back (or further forward) than the length of an average iPhone development cycle.

I mention the futility of our modern condition thanks to pants-technology because it seems to me that it is becoming increasingly difficult to imagine what will happen (or has happened) to the world in the long term. As much has been noticed by people like Stewart Brand, founder of The Long Now Foundation — an organization that works to counter what Brand calls our “pathologically short attention span[s]” by encouraging the long view of the world. But I’m sure that even Brand would admit that the problem still exists: we can’t see into the past, or the future, because things are moving too fast now for us to care about the long term.

Of course, perhaps Brand and I are just projecting our own attentional shortcomings — but I am guessing not. It’s difficult to understand things that happen over thousands or (god forbid) millions of years. So, naturally, that’s what we will try to do in this chapter.

Evolution, the patron saint of scientific theory, is a theory of change so gradual that it’s hardly noticeable over the scale of an entire lifetime. Understanding the 35-day proliferation of Angry Birds? Easy. Understanding the 14-billion-year development of real birds from inert chemicals? Far more difficult.

The challenge of diving into a subject so complex — and so politically loaded, given evolution’s never-ending debate with creationism — was daunting, but I took solace in my hypothesis that I had science on my side. There really could only be one truth of what happened at the beginning of life, and I felt that evolution was the clear answer. I’d just have to let science do the talking and I would be able to prove once and for all that there was no room for God in the explanation of the creation of life.

Theory and Fact

As I started parsing my way through the evolution-creationism debate, one maddening misperception jumped out at me — and it’s worth addressing immediately. Many folks on the creation side of the aisle didn’t understand that in science, “theories” are facts, not guesses.

President Ronald Reagan was one of those folks:

Well, if [evolution] is a theory, it is a scientific theory only, and it has in recent years been challenged in the world of science and is not yet believed in the scientific community to be as infallible as it once was believed. But if it was going to be taught in the schools, then I think that also the biblical theory of creation, which is not a theory but the biblical story of creation, should also be taught.

Calling something “a scientific theory only” is nonsense. To be accepted as a scientific theory means to be in company with things like “Cell Theory,” the controversial idea that cells exist, and the “Theory of Universal Gravitation,” the ludicrous idea that things fall when you drop them. To a non-scientist, “theory” might easily be confused with “hypothesis,” but the two are opposites. Hypotheses are formed at the very beginning of scientific inquiry, while theories are realized at the very end of repeatedly successful experimentation. That evolution has been deemed a Capital-T Theory should not bring about skepticism, but assurance. Think about it this way: if you believe that you have cells, you should also trust in the Theory of Evolution.

And yet, many still have their doubts. When I began my research, I didn’t understand why: I thought that the facts of evolution were all but undeniable, and in that respect I was correct. Nothing that I found in my research gave me reason to doubt my initial confidence. If you are a scientifically literate adult, you should believe in evolution.

That being said, my mindset on evolution has changed; I also learned that it’s important to be precise about what one means by “evolution.” Not all invocations of “evolution” are created equal; this point is crucial. Here’s what I mean:

Evolution, in three words, is descent with modification. In more words, it’s the way that biological information changes over many interconnected generations. We used to have monkeys, and now we have people — no matter what John Lennon believed, that much is a fact. Genetic change has happened over time.

The real brilliance of the evolutionary hypothesis is in its falsifiability — by defending the existence of a single hereditary line from a universal common ancestor to the present diversity of life, evolution is committed to a very precise (and continuous) timeline. If that timeline could be found to fit with all of the evidence that we find, evolution would be plausible; but even one species out of place could be enough to disprove the theory’s bold claims. As geneticist J.B.S. Haldane said, “I will give up my belief in evolution if someone finds a fossil rabbit in the Precambrian.” In spite of this vulnerability, however, scientists have yet to find a reason to doubt the evolutionary timeline. Evolution’s story is verified by both the digital history of genetics and the analog history of fossils.

In the early 1950s, James Watson and Francis Crick discovered what would become the founding idea of the new field of genetics: the double-helical structure of DNA. DNA is sometimes referred to as the “language in our cells” because it’s “written” in a genetic alphabet of four chemicals: Adenine, Thymine, Cytosine, and Guanine (A, T, C, and G). That language, traced back over past generations, tells a clear story about how modern human DNA came to be. Species that have similar DNA are likely to look like each other: genetically, all humans are 99.9% identical. We can calculate (and for the most part, have calculated) our genetic similarity to — or “evolutionary distance” from — every form of life. We share 90% of our DNA with chimps, 84% with dogs, 47% with fruit flies, and 24% with wine grapes. You heard it here first: humans are literally one-fourth wine.
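In case a claim like “84% identical” feels hand-wavy, here is a toy sketch of what percent identity means at its simplest. The snippets are invented, and real genome comparisons first align billions of letters with much cleverer algorithms.

```python
# Toy "evolutionary distance": the share of positions at which two equally
# long DNA snippets carry the same letter. Real comparisons must also handle
# insertions, deletions, and alignment, which this skips entirely.

def percent_identity(seq_a: str, seq_b: str) -> float:
    assert len(seq_a) == len(seq_b), "toy version: lengths must match"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

snippet_one = "ATCGGCTAATCGATCGGATT"  # made-up sequences, for illustration only
snippet_two = "ATCGGCTAGTCGATCGGTTT"

print(f"{percent_identity(snippet_one, snippet_two):.0f}% identical")  # 90% identical
```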

Our closest relatives, however, are the other primates. The great apes all have 24 pairs of chromosomes — the bundled “volumes” of DNA code — but humans have just 23 pairs. Evolution is committed to the idea that we and the apes descend from the same ancestors, which implies that there must have been a very specific change to our DNA as we evolved: we must have lost one chromosome pair completely or had two chromosomes fuse together to make one. Amazingly, scientists have found evidence of the latter. Francis Collins, the director of the Human Genome Project, writes:

…special sequences occur at the tips of all primate chromosomes. Those sequences generally do not occur elsewhere. But they are found right where evolution would have predicted, in the middle of our fused second chromosome. The fusion that occurred as we evolved from the apes has left its DNA imprint here.

Similar digital clues can be found in the massive history written in the language of our cells. We came from primates, primates came from mice, and so on and so forth back to the universal common ancestor.

If cell-language seems a bit intangible or abstract, don’t worry, because the exact same story is told through the analog history of the fossil record. We have discovered some billions of individual fossils, and they all confirm the evolutionary timeline. Paleontologists can use radiometric methods (e.g., carbon dating), and relative methods (e.g., finding geological “layers,” some of which must have come before others) to place fossils on the timeline of the Earth, and not a single fossil has ever been found out of place.

If you need even more evidence to believe in the Theory of Evolution, here’s the kicker: both the digital and analog histories perfectly align. Evolution is one consistent timeline that extends from today back to the beginning of life. It’s a Capital-T Theory; there really isn’t a debate.

The upshot, of course, is that any other theories that contradict evolution are simply incorrect. In the 17th century, Irish archbishop James Ussher added up the ages of all of the people in the Bible, and calculated that the Earth must have been created on the evening of October 22nd, 4004 BC. This hypothesis has become known as “Young Earth Creationism,” and it’s simply false. We know that our Earth is about 4.5 billion years old.

At this point in my journey, I was of the mind that you should always trust scientific proof over any other kind of justification. Thousands of objective observations and repeatable measurements felt more objectively true than the words of the Bible (or the shoddy calculations of an old archbishop). The key, of course, was “truth”: I felt that things couldn’t be true just because you wanted them to be. I had always seen truth as a concept that was invented to build objective consensus between people — e.g., the water hole is two miles northeast of our village. Relative truth wasn’t really a thing to me; I would have called those thoughts opinions, not objective facts. I understand that your opinion could very well be that evolution is false — but objectively, you’d be wrong. Evolution is a proven, objective fact, and it’s not going anywhere.

But before this starts to read like a weird chapter in which I just pat myself on the back for knowing a bunch about evolution, let me just say this: while I believe that evolution is undeniably true, the story doesn’t stop there. Evolution is the theory of the timeline, and explains that life moved from wine grapes to apes to humans. Crucially, however, it doesn’t say how that movement happened. For that, Darwin needed another hypothesis: natural selection.

Evolution and natural selection are often grouped together. But I quickly learned that natural selection was far more fraught with difficulty and ambiguity than its sister theory — a fact that proved troubling as I attempted to establish the dominance of evolution over creationism.

Two Simple Complications

I came to internalize the difference between the two hypotheses with a simple, if silly, metaphor: evolution is the path through the jungle, and natural selection is the machete. As we look back, we can see a well-defined trail cut through the dense underbrush. The question is… what blazed that trail? Could it have been just a dude with a machete? Or would it have taken more advanced technology?

To again define our terms, here’s a one-breath explanation of natural selection — or, as I like to put it, the science of babymaking. As Darwin discovered, natural selection takes just three ingredients: heritability, variation, and fitness. Heritability is DNA: a way for parents to pass on their traits. If parents couldn’t hand down their genes, there would be no “evolutionary line” connecting families. Variation makes that evolutionary line branch into a tree. As my blonde-haired, blue-eyed sister proves, children are not always exact replicas of their parents because DNA is a good, but not perfect, self-replicator. Every new human baby has ~64 DNA mutations, most benign. Some mutations, however, can affect the baby’s fitness, or the probability that it will one day have a baby of its own. Fitness determines which branches will thrive and which will wither away. Some mutations are “adaptive” (e.g., chiseled jaws), others are “maladaptive” (e.g., allergies). Adaptive branches make more babies and “succeed” in carrying on the evolutionary line.
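Those three ingredients are simple enough to simulate. Below is a minimal sketch of my own, with an invented numeric “trait” and made-up numbers throughout, showing that heritability, variation, and fitness together are enough to push a population steadily toward whatever the environment happens to favor.

```python
import random

# Darwin's three ingredients, in miniature: children inherit a parent's trait
# (heritability), copies come with small random errors (variation), and traits
# closer to an arbitrary "target" leave more descendants (fitness).

TARGET = 1.0        # the trait value this made-up environment favors
POP_SIZE = 100
GENERATIONS = 100

population = [random.uniform(-3.0, -2.0) for _ in range(POP_SIZE)]  # start far away

def fitness(trait):
    return -abs(trait - TARGET)  # closer to the target = fitter

for _ in range(GENERATIONS):
    # Fitness: only the fitter half of the population reproduces.
    survivors = sorted(population, key=fitness)[POP_SIZE // 2:]
    # Heritability + variation: each survivor leaves two slightly mutated children.
    population = [parent + random.gauss(0, 0.05) for parent in survivors for _ in range(2)]

average = sum(population) / len(population)
print(f"average trait after {GENERATIONS} generations: {average:.2f} (target {TARGET})")
```

Run it and the average crawls from around -2.5 up to the target; small, blind mutations plus differential babymaking do all the work.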

The rationale behind natural selection is solid to the point of seeming tautological. When Darwin proposed evolution and natural selection in the same book, the two concepts became forever paired: Darwin showed us a trail through the jungle, and a simple way to cut down trees. It was a persuasive story for a very long time, and only after 150 years of research did we learn enough to begin doubting Darwin’s pairing.

The complications for natural selection come in the form of two interrelated problems: complexity and time. The former was explicitly acknowledged by Darwin in On the Origin of Species:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. No doubt many organs exist of which we do not know the transitional grades, more especially if we look to much-isolated species… [but] we should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind.

Darwin’s theory relies on the power of small mutations to make big changes over enough time. Big mutations aren’t practically possible — the odds are better that your DNA would end up inviable than that your DNA would give you wings — so small mutations are all that natural selection can use. If a biological feature (like a wing) couldn’t have been formed by successive baby steps, then natural selection couldn’t have formed it. Lehigh University biochemist Michael Behe explains what such an “irreducibly complex” feature might look like:

The mousetraps that my family uses in our home to deal with unwelcome rodents consist of a number of parts. There are (1) a flat wooden platform to act as a base; (2) a metal hammer, which does the actual job of crushing the little mouse; (3) a wire spring… (4) a sensitive catch which releases when slight pressure is applied; and (5) a metal bar which holds the hammer back… If any one of the components of the mousetrap (the base, hammer, spring, catch, or holding bar) is removed, then the trap does not function. In other words, the simple little mousetrap has no ability to trap a mouse until several parts are all assembled.

To evolve by natural selection, a mousetrap would have to gain incremental benefits from its intermediate stages — for example, from having just a base, or just a hammer. But of course, a mousetrap missing any one of its parts is useless, so a mousetrap — per Darwin’s own admission — could never have evolved by natural selection.

The question, then: do irreducibly complex features exist in nature? Behe thinks so, and uses the eye as an example. An eye is made of dependent, interconnected parts: retina, pupil, lens, and so on. Any one of these features by itself wouldn’t give a creature an evolutionary advantage. Does this make the eye an evolutionary mousetrap?

The short answer is “no.” Scientists have found intermediate versions of eyes all throughout nature. The earliest precursor of the human eye is the photo-receptive spot present in organisms like algae. These spots are simple but effective in helping their owners avoid the shade and find the sun. Over time, these spots became indented (allowing animals to determine the direction of a light source), then had their openings constricted (allowing those directional light sources to focus), then developed a rudimentary lens (focusing light without reducing the total amount of light the eye could see). Each model gave a small advantage over the previous model, which kept the train of evolution rolling.

The long answer is also “no,” but with a catch: the more complex the feature, the longer it takes to form by random mutation. Could some evolutionary steps have happened too quickly for natural selection to have caused them? This is the second, and more damning, problem for natural selection; and here, I found a healthy debate in the scientific literature. The problem is twofold: first, generating the vast diversity of life from a single common ancestor in just four billion years; second, matching the incredible sprints documented in evolutionary history.

A group of Penn professors wrote of their opinion on the first issue in an aptly-named paper: “There’s plenty of time for evolution.” Their argument is that, although totally random mutation would not be able to generate the evolutionary timeline, natural selection is not random. Rather, natural selection “locks in” changes as they provide even the smallest amount of incremental fitness. They make an analogy to hacking a 12-character-long password. If you were forced to guess truly randomly, the task would be impossible; even assuming the password to be only letters, that’s 26^12 (nearly 100 quadrillion) possibilities. But, supposing instead that your letters would “lock” when they were correct, the task is fairly easy — just guess each letter consecutively (aaaaaaaaaaaa, then bbbbbbbbbbbb, …) and you would find the password in at most 26 tries. Brought to the scale of evolution — so, dealing with the countless genetic letters of DNA — the difference between changing 20,000 genes without locking and with locking is absurd. Without locking, you’d need 10^34,040 rounds of guessing. With it, you’d only need 390. If natural selection can lock in mutations as well as the Professors claimed, it would indeed have plenty of time.
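To see the size of that gap for yourself, here is a quick sketch of the password analogy. It’s my own toy version, with a 12-letter target and purely random guessing, not the Professors’ actual model.

```python
import random
import string

# The password analogy in code: a 12-letter target, a 26-letter alphabet, and
# a "locking" rule that keeps any letter once it happens to be guessed right.

ALPHABET = string.ascii_lowercase
TARGET = "evolutionary"  # 12 letters

# Blind guessing: every try is an independent 12-letter guess, so on average
# you would need 26^12 (~9.5e16) tries before stumbling onto the target.
print(f"blind guessing: ~{len(ALPHABET) ** len(TARGET):.1e} expected tries")

# Guessing with locking: re-guess only the letters that are not yet correct.
guess = [random.choice(ALPHABET) for _ in TARGET]
rounds = 0
while "".join(guess) != TARGET:
    rounds += 1
    for i, letter in enumerate(TARGET):
        if guess[i] != letter:
            guess[i] = random.choice(ALPHABET)
print(f"with locking: solved in {rounds} rounds")  # typically well under 200
```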

That bold of a claim was bound to get a visceral response, and it did. Most critics challenged the idea of locking, especially of “parallel” locking in which each letter of a password can be tested (and judged) individually. The most detailed critique came from Casey Luskin, the research coordinator for the Center for Science and Culture:

[Natural selection] does not have access to information about future benefits of a particular mutation, or where in the global fitness landscape a particular mutation is relative to a particular target. It can only assess mutations based on their current effect on fitness in the local fitness landscape.

Or, in other words, the individual letters of the password can’t know whether they are contributing to the fitness of the overall organism, or if incremental benefits are coming from other letters entirely. He continues:

[The Penn Professors] also make unrealistic biological assumptions that, in effect, simplify the search. […] In their model they represent each genetic locus as a single letter. By doing so, they ignore the enormous sequence complexity of actual genetic loci (typically hundreds or thousands of nucleotides long), and vastly oversimplify the search for functional variants. In similar fashion, they assume that each evolutionary “advance” requires a change to just one locus, despite the clear evidence that most biological functions are the product of multiple gene products working together.

In other words, the Penn Professors might give natural selection too much credit. Natural selection can’t read the future and know that individual letter mutations are on the right track to large-scale, multi-letter adaptations, which means that natural selection can’t lock. If that’s true — and after reading these papers I strongly suspected it might be — natural selection might have a very serious problem with time.

The second facet of the time problem only compounds the first. Evolution was far from a steady, methodical process; the evolutionary history is littered with events commonly called “discontinuities,” or “punctuated equilibria.” The most famous was the Cambrian Explosion, a period of 20 million years or less — only 0.4% of the Earth’s history — when all of the major body plans and broad types of animals emerged. We went from single-celled organisms to complex, fully-formed creatures in an extremely short period of time. In just 20 million years, arms, legs, gills, internal organs, and nervous systems emerged — and then, all of a sudden, nature appeared to have discovered every possible option and, as quickly as it began, body plan development stopped for good. Both that astounding acceleration and abrupt stop are still unexplained; scientists have theories to explain punctuated equilibria, but I couldn’t find any convincing, consensus answers.

These two problems made me more skeptical of natural selection than I had been previously. Don’t get me wrong — on a scale of one to Darwin, I was still probably about an eight. I had my doubts, but still believed that natural selection likely accounted for most of the answer. But “most” is less than “all.” My mindset was changing.

From Zero to One…

As if waiting, silently, for the opportune moment to strike, a tough realization hit me as soon as I admitted to myself that natural selection wasn’t all it was cracked up to be. That realization: that even if natural selection explained all of evolution, and even if evolution was undeniably true, I still only had half of the story. Or, more honestly, probably even less than half, because I wasn’t yet back to the true beginning of life.

Evolution explains how life got from one to many. But what of creation, or how life got from zero to one? That was the goal of my research: to disprove God’s claim to the initial creation of life. But, amazingly, no scientists from my research to that point had even mentioned the move from zero to one. The only mention of the creation of life from non-life, in fact, was in the form of “spontaneous generation” — an old and entirely discredited theory originally proposed by Aristotle. In Aristotle’s own words:

Such fish… arise all from one of two sources, from mud, or from sand and from decayed matter that rises thence as a scum; for instance, the so-called froth of the small fry comes out of sandy ground. This fry is incapable of growth and of propagating its kind; after living for a while it dies away and another creature takes its place, and so, with short intervals excepted, it may be said to last the whole year through.

To those who believed in spontaneous generation, moving from zero to one wasn’t confusing, mysterious, or even interesting because it happened all the time — fish from pond scum, maggots from dead animals, and so on. As God effortlessly made Adam from dust, Mother Nature created life using only the earthly elements at her disposal whenever she felt like it.

This idea lasted for thousands of years; people still believed in spontaneous generation when Darwin published his Origin of Species. It wasn’t until 1862 that Louis Pasteur, the namesake of “pasteurization,” finally disproved the theory. After a particularly strong experimental result, Pasteur boldly but accurately declared:

Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment. There is no known circumstance in which it can be confirmed that microscopic beings came into the world without germs, without parents similar to themselves.

Pasteur’s fundamental insight was simple: that life always comes from other life. Life, too complicated to arise by chance, can’t come from just an opportune mixture of elements.

But here’s the paradox: what was the origin of life if not an instance of spontaneous generation? Biologists are committed to the idea that all life comes from other life, but simultaneously hold that the original genesis of life was natural. Is that not a contradiction?

Biologists must not think so. As members of the scientific community, they believe that the first species of life on Earth had no outside help. But how? I had to know: what were the odds of the first form of life spontaneously bringing itself into existence? If you look into the evidence, that move from zero to one — while technically not impossible — would have been incredibly challenging.

First, life needed to find a suitable planet. The good news: life had a trillion trillion planets in the universe to choose from. The bad news: life is picky. Scientists have identified some 150 criteria for habitability; a planet must have the right atmosphere, size, moons, rotation speed, orbit speed, star around which to orbit… even the existence of an asteroid-catching megaplanet like Jupiter was a box that had to be checked. Author Eric Metaxas writes,

…without the Jovian giant where it is, comets and comet debris would strike us about a thousand times more frequently. [Jupiter has] 318 times the gravity [of Earth]. So most of the comets that come anywhere near Jupiter are pulled toward it. It absorbs many of them into its gaseous depths without so much as a hiccup [or] just deflects them away from us and out of our solar system entirely.

The importance of avoiding asteroids is nothing to laugh at — the early solar system was so violent that any life would have evaporated as soon as it emerged. For example, the early Earth went through a phase of “heavy bombardment” during which there were more than 22,000 asteroid impacts: 40 of them larger than 620 miles in diameter (nearly the distance from New York to Chicago), and several larger than 3,100 miles wide (think California to Maine). That equates to an extinction event as great as or greater than the one that killed the dinosaurs every 1,000 years. If that is how it looked even with a huge deflector like Jupiter, it’s hard to imagine how violent a planet might be without one; suffice it to say that these 150 criteria are all must-haves, not nice-to-haves. Without every single one fulfilled, life could not have arisen.

In the aggregate, these 150 criteria, by Metaxas’ calculation, create one huge problem. Using what he calls “conservative” estimates, Metaxas calculates that we could expect to find a life-supportive planet once out of every 10^73 planets we searched. That chance is ridiculously small: in words, one in ten trillion trillion trillion trillion trillion trillion. The number of planets we’d have to search to find life is far greater than the number of planets we have in the entire universe.

The odds of any one planet out of our universe’s trillion trillion being a hospitable home, then, work out to about one in 10^49. That number, one divided by 10 followed by forty-nine zeroes, represents the odds that one planet — just one! — in our universe could support life. To think that it did so already boggles the mind, but finding a planet was just the first hurdle that life had to clear.
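For what it’s worth, the arithmetic connecting those two numbers is just division, taking Metaxas’ estimates at face value rather than anything I can verify independently:

```python
# Metaxas' figures, taken at face value: roughly 10^24 planets exist, and you
# would expect one habitable planet per ~10^73 planets searched.
planets_in_universe = 10 ** 24
planets_searched_per_habitable_one = 10 ** 73

# The expected number of habitable planets in the whole universe; since it is
# tiny, it is also roughly the chance that even one such planet exists.
expected_habitable_planets = planets_in_universe / planets_searched_per_habitable_one
print(f"{expected_habitable_planets:.0e}")  # 1e-49, i.e. one in 10^49
```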

Life would also need the right ingredients — like those that graduate student Stanley Miller and Nobel-winning professor Harold Urey combined in their famous “primordial soup.” Miller and Urey cooked up a mixture of liquids and gases that they thought were abundant in the primordial Earth (hydrogen, ammonia, methane, and water), stimulated the mixture with electrical sparks to simulate lightning, and let the solution sit for a week. To their amazement, their soup was full of amino acids — the building blocks of life — when they returned! If the early Earth did, in fact, look like Miller and Urey thought it did, their experiment would be evidence that the formation of life from non-life could be possible.

Unfortunately, however, the primordial Earth probably never looked like Miller and Urey’s soup. Physicist Paul Davies writes,

…geologists no longer think that the early atmosphere resembled the gas mixture in Miller’s flask… methane and ammonia were unlikely ever to have been present in abundance. And if Earth once had substantial hydrogen in its atmosphere, it wouldn’t have lasted long.

Even on Earth, life would have had trouble.

Not to kick a dead horse (or, I guess to do exactly that), but finding the proper planet with the proper elements still would not have been life’s biggest challenge.

We know that it takes a village to raise a strand of DNA, and DNA’s little helpers go by a number of different names — like “enzymes” or “metabolism” — but at the most basic level, these supporting cast members can all be understood as specialized proteins. Each serves a unique function in speeding up and automating the process of DNA replication. (DNA itself is just the information, or the instructions that these proteins carry out.) For example, there’s the DNA splitter — helicase — that “unzips” the DNA strand into two complementary strands (and often graces the backs of edgy AP Chemistry t-shirts). The DNA message is then copied into messenger RNA, which carries it out of the nucleus of the cell and into the hands of transfer RNA and the ribosome, a kind of 3D protein printer, which together translate that message into working proteins. It’s not a simple process, and DNA itself is inert through all of it; DNA takes no action other than to passively direct and instruct.
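For the visually inclined, here is a cartoonishly simplified sketch of that last step (transcription and translation), with a codon table trimmed down to four of its real 64 entries and an invented DNA snippet:

```python
# A toy run through the "village": DNA is transcribed into messenger RNA, and
# the ribosome reads that message three letters (one codon) at a time, chaining
# amino acids into a protein. Only a tiny excerpt of the real codon table is shown.

CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna: str) -> str:
    return dna.replace("T", "U")  # the mRNA copy of the coding strand

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna), 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate(transcribe("ATGTTTGGCTAA")))  # ['Met', 'Phe', 'Gly']
```

Notice that the DNA never does anything in this sketch; the functions acting on it do all the work, which is exactly the chicken-and-egg trouble coming up next.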

How, then, did DNA arise in the first place? This was a criticism that I didn’t expect to see but saw everywhere: the spontaneous generation of DNA is the chicken-and-egg paradox to end all chicken-and-egg paradoxes. DNA requires proteins to replicate — but those proteins can only be created by ribosomes using the instructions of DNA. DNA needs proteins; proteins need DNA. Neither could have come before the other.

If this is true — and I believe that it is (despite the attempts by some scientists to posit other intermediate replicators like RNA to break the paradox) — then it’s far more damning than the odds that life could have found a planet. The odds of life arising spontaneously when it needed a village but couldn’t have had one would be zero.

…And Apparently Back to Zero Again

But of course the odds weren’t zero. Life exists; that’s the most insane thing about this whole line of research. No matter what we think should have happened, we know what did. Life emerged from non-life. We know that life went from zero to one because life exists — that statement is so undeniable that it’s almost silly, but if you think about what scientists know about the beginning of life on Earth, it’s more awe-inspiring than anything else.

Life needed a planet, but shouldn’t have been able to find one. The odds that it faced were roughly equivalent to the odds of you entering the lottery and winning six times in a row, or getting struck by lightning eight times in the next twelve months. Life had just one try to get it right, and astoundingly found perhaps the only planet of the trillion trillion in the universe that could have supported it.

And even the planet that it found was far from ideal: it didn’t have the minerals necessary to support life. Life needed lots of carbon, but shouldn’t have been able to find it. Between the end of the Late Heavy Bombardment and the first known species, life had only 350 million years (not long at all when you consider the 13,800 million years that our universe has existed) to accumulate enough raw materials — again, materials that didn’t exist on the planet on which it had emerged.

That life did all of this without any help is unbelievable. No proteins, no metabolism, no enzymes to unzip its genes. Just itself, alone, on a barren, un-hydrogenated planet in an infinitely expansive yet unmistakably empty universe.

But somehow, someway, life went from zero to one, and spontaneously emerged. It shouldn’t have, but it did.

Is that answer supposed to satisfy me? Because it doesn't. There are some scientists — and perhaps even some of you readers — who are satisfied by knowing that life found a way to emerge by itself. Scientists call this mindset the "Anthropic Principle." More or less, the idea is that there's nothing interesting to see because we already know the ending to the story. Some who believe in the Principle even make the claim that the ending of the story (somehow) forced the beginning to turn out as it did. In academic speak, Professors John Barrow and Frank Tipler argue that:

The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so.

Or, more plainly, Richard Dawkins writes:

Thus our presence selects out from this vast array only those universes that are compatible with our existence. Although we are puny and insignificant on the scale of the cosmos, this makes us in a sense the lords of creation.

These folks are champions of science — the people whom I exalted as the beacons of rational, measured thought — and that’s how they decided to rationalize the improbability of the beginning of life. I mean, seriously?

How is the Anthropic Principle remotely verifiable or scientific? How could you test the probability of a “cosmological quantity” if you only have one universe and one emergent tree of life to study? How can you falsify the claim that physical values are restricted by allowing carbon-based life without being able to roll the dice again and observe the creation of a new universe? How could a conscious, living thing select a universe that was created billions of years before it ever existed? How could someone believe that a probability so small that nobody even knows how to pronounce its denominator is uneventful? Is that a rational way to respond to such an improbability?

I thought not. Life couldn’t have just emerged, at least given what we currently know about the early universe. But what would that mean about the relative prospects of evolution and creationism?

I still didn’t think that God was responsible for the creation of life, but after realizing how ridiculously improbable the emergence of life was and learning that the response of many in the scientific community was just to shrug their shoulders and carry on, I was more open to the idea than I had been in the past. At the very least, I was willing to ask the question: what if God actually were the answer? What if life needed a kick start, or a source of divine lightning for its soup? If someone won the lottery six times in a row, they would be arrested; and in this case, who would be our primary perp for stacking the odds in favor of life if not God?

Of course, the scientific mindset rejects such poetry. I was unwilling to accept the religious conclusion without more proactive evidence for God — but before I had time to brood too long, my research on the Anthropic Principle quickly led me into what would become the final stage of my journey through science, faith, and the world in between: the world of physics.

My thoughts at the end of my trip through consciousness were that I needed a harder science, and that while biology hadn’t proved the savior that I needed, I could still hope that my luck could change with physics: perhaps the most basic, reductive discipline there is. I was notably less confident that physics would come through with a piece of evidence to disprove God — seeing as I was sitting in the batter’s box with an unpromising 0–2 count — but I still had hope, nonetheless.

Physics, it seemed, would be the final frontier for the conflict a-brewin’ in my understanding of the world between science and religion — and exploring that frontier brought me all the way back to the explosive world of the very, very beginning.

Don’t worry if your theory doesn’t agree with the observations, because they are probably wrong.

― Sir Arthur Eddington

Part Four: Explosions

And Epistemology

THE STORY OF THE VERY BEGINNING is surprisingly dramatic and found its own beginning with (who else but?) Albert Einstein.

Fresh off of his discovery of special relativity, the theory better known by its hallmark equation E = mc², Einstein began to work on applying his new physics to the universe as a whole. Relativity was already ground-breaking and had nearly no historical precedent, but Einstein, being as ambitious as he was, set out to take his "special" theory, which ignored gravity, and make it "general," able to hold at even the grandest scales. After years of harrowing work, promising dead ends, and an uncountable number of those two-story-high blackboards, Einstein finally made the math work. There was only one problem: his new solution predicted something that he believed to be false.

In the early 1900s, Einstein (and the rest of the educated world) believed that the universe was static, neither expanding nor contracting. With his new equation, Einstein could have been the first to realize that this wasn’t the case — but after an uncharacteristically bad judgment call, he decided to edit his original math, adding in a “cosmological constant” to make the numbers sing to his pre-defined melody and predict a static universe. His new equation — albeit inelegant and fudged — was celebrated in the scientific community, and smoothly joined the scientific canon as so many of Einstein’s theories had done before.
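
For the curious, the "edit" amounted to a single extra term. In modern notation (not the form Einstein originally wrote), his field equations with the added cosmological constant Λ read:

$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}$$

The Λ term is the fudge: tuned just so, it props up a static universe; set it to zero and you are back to the original equations, which cannot hold a matter-filled universe still.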

Enter: Edwin Hubble. You probably know him for his eponymous telescope, as well you should. The man was a master of the monster lens. He made a number of casual discoveries over his storied career: he was the first person to realize that the Milky Way wasn't the only galaxy in the universe; he single-handedly tripled the size of the observed universe by spotting a variable star roughly a million light-years away; he literally wrote the book on how to classify far-off galaxies. Perhaps his most impactful observation of all, however, came serendipitously enough — while observing galaxies one night as he often did, he noticed that one particular galaxy was off-color. To be precise, it was redshifted, which, given what we know about light, would imply that it must have been moving away from Earth. But that was impossible! As everyone and Einstein knew, the universe was static. To confirm his suspicion that the redshift he observed was phony, he trained his telescope on a few other known galaxies to measure their colors — and was astounded by the result. Every galaxy he observed — every single one — was moving away from Earth and away from every other galaxy.

Imagine the universe as a balloon and galaxies as dots drawn on the balloon in Sharpie. As the balloon inflates, the dots move further away from one another. This, Hubble realized, was an analog for the expansion of the universe. Our universe wasn’t static, it was expanding. Which meant that Einstein was wrong.

Hubble’s discovery would come to have a profound impact on the way that we understand the world. First, it proved to us that Einstein was fallible — Einstein later called the cosmological constant the “greatest blunder of his life.” But far more importantly, it implied a very particular trajectory for the universe over time. If the universe is currently expanding, it must have been smaller in the past. As you play the tape of the universe in reverse, then, the balloon shrinks. Play that tape all the way back to the beginning, and the balloon would shrink and shrink and shrink until it couldn’t shrink any further. Eventually, the universe would become trapped by its own gravity, and collapse into an infinitely dense, infinitely hot point: a phenomenon that scientists now call a “singularity.”
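
To see how "rewinding the tape" turns an expansion rate into an age, here is a back-of-the-envelope sketch in Python. The 70 km/s per megaparsec figure is a commonly quoted ballpark for the Hubble constant, and the real calculation has to account for how the expansion rate has changed over time, so treat this as a rough estimate rather than the official measurement.

```python
# Rough "rewind the expansion" estimate of the universe's age.
# If every galaxy's recession speed follows v = H0 * d (Hubble's law),
# then naively rewinding gives a time of d / v = 1 / H0 for everything
# to have been in the same place.

H0_KM_PER_S_PER_MPC = 70.0    # approximate Hubble constant
KM_PER_MPC = 3.086e19         # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

hubble_time_seconds = KM_PER_MPC / H0_KM_PER_S_PER_MPC
hubble_time_years = hubble_time_seconds / SECONDS_PER_YEAR

print(f"Roughly {hubble_time_years / 1e9:.1f} billion years")
# Roughly 14.0 billion years -- in the same ballpark as the measured 13.8
```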

In the years since the discovery of the universe’s expansion, scientists have learned much about that singularity, and the (at times, incredibly violent) expansion of the universe that followed. The resulting story of the very beginning, after being whittled and shaped by years of experimental research, is perhaps best told by Bill Bryson, a Fellow of the Royal Society:

And so, from nothing, our universe begins.

In a single blinding pulse, a moment of glory much too swift and expansive for any form of words, the singularity assumes heavenly dimensions, space beyond conception. In the first lively second (a second that many cosmologists will devote careers to shaving into ever-finer wafers) is produced gravity and the other forces that govern physics. In less than a minute the universe is a million billion miles across and growing fast. There is a lot of heat now, ten billion degrees of it, enough to begin the nuclear reactions that create the lighter elements-principally hydrogen and helium, with a dash (about one atom in a hundred million) of lithium. In three minutes, 98 percent of all the matter there is or will ever be has been produced. We have a universe. It is a place of the most wondrous and gratifying possibility, and beautiful, too. And it was all done in about the time it takes to make a sandwich.

The fact that scientists know what happened in the first minutes of the universe is unbelievable, considering that our universe is over seven million billion minutes (or 13.8 billion years) old. Amazingly, scientists can go back even further than that. Using the laws of thermodynamics, gravity, air pressure, and more, modern scientists now have an understanding of what the universe looked like all the way down to the first hundredth of a second after the Big Bang.

That fact is patently ridiculous. I can hardly remember what I had for lunch today; to think that there is any evidence left of what happened billions of years ago is unbelievable.

But even so — is that evidence enough? In the context of the debate between science and religion, the real question is the beginning. Not what came at t=0.01, but what came at t=0, and before. Is it not fair to ask what caused the Big Bang? In the words of MIT physicist Alan Guth, “[The Big Bang Theory is] not really a theory of a bang at all. It is really only the theory of the aftermath of a bang.” As much seems self-evident because the singularity came before the Bang; something had to explode. But what was that something? And where did it come from?

We know what happened, but the Big Bang Theory by itself can’t tell us why or how it was brought about — we have a gunshot without a perp, an explosion without a cause. And, as the religious in the audience are undoubtedly already thinking, that’s exactly where God might enter the picture.

Could God have been the cause of the Big Bang? The story of the very beginning is of obvious importance to religion — the first verse of the Bible stakes God’s claim to the creation of the heavens and the earth — which may make the debate about what came first an even more contentious battleground than that of the beginning of life.

I was still committed to finding evidence against the religious worldview, even at this late stage of my journey, and I had faith that physics, the “hardest” and coolest branch of science, might be able to come through for me in the clutch. A Ph.D. co-worker of mine once said that every scientist he knows wants to be a theoretical physicist. Or, he added, maybe a surfer. But probably a theoretical physicist first.

No more need to dance around, team. My research introduced me to four answers from modern physics to the question of what came first; let’s compare them to the one answer from modern religion to figure this debate out once and for all.

Crazy, Tiny, Many, and Contained

As far as theories of the very beginning go, there is not one more pop-famous than quantum physics. Which really makes no sense: how does the theory behind some of the most forbidding equations in all of science become the most popular in the land?

Part of the answer is the people; Stephen Hawking has made as much his life's goal. He started by selling ten million copies of A Brief History of Time, a book about quantum physics simple enough to find a suitable home in airport book stores. Hawking has been introduced recently to the non-airport-book-store-browsing public as well; The Theory of Everything, starring Oscar-winning actor Eddie Redmayne as a young Hawking, marked the pinnacle of a recent surge in physics-centric programming (e.g., Interstellar, Gravity, Cosmos) led by Hawking and other advocates of science like Neil deGrasse Tyson.

The physics personalities are probably part of the answer, but I think that the staying power of quantum mechanics really comes from the ideas that the discipline champions.

Quantum physics is the science of the very, very small, and the world it describes is exciting: it captures your imagination, and often feels like a true scientific frontier in a world in dire need of mystery and discovery. As I started to learn about the quantum world, it struck me as incredibly unpredictable: no matter my intuition, quantum physics would nearly always return the opposite result. Small stuff is just weird.

Cleverly, quantum physicists have found ways to exploit the weirdness of the very small to explain what may have happened in the very beginning. Perhaps the fundamental tenet of quantum theory (did I really just say that?) is Heisenberg's Uncertainty Principle. In normal-person English, the principle says that the more accurately you measure how fast something is moving, the less accurately you can measure its position, and vice versa. For example, say you're a photographer and want to take a picture of an arrow flying through the air. If you increase your shutter speed, your picture will freeze the arrow — giving it a definite location, but losing the information about how fast (and in which direction) it's moving. Conversely, if you let your shutter stay open longer, you'll see a motion blur and get a sense of speed and direction, but you'll lose any chance of saying that the arrow is in one definite place or another. You can have one or the other but not both; Heisenberg proved that what is true for cameras is true of reality, and quantum physicists quantified that uncertainty as "Planck's Constant": 0.000663 nonillionths of a meter-squared kilogram per second, to be exact, or 6.63 × 10⁻³⁴ in scientific notation. (Best of luck understanding that unit of measurement.)
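
To get a feel for how small that fuzziness is in practice, here is a quick sketch. It uses the standard textbook form of the principle, Δx · Δp ≥ ħ/2, with the reduced Planck constant; the one-nanometer confinement is an arbitrary choice of mine, purely for illustration.

```python
# How fast must an electron "jitter" if you pin down its position to
# within one nanometer? Heisenberg: delta_x * delta_p >= hbar / 2.
HBAR = 1.055e-34           # reduced Planck constant, in kg*m^2/s
ELECTRON_MASS = 9.109e-31  # kg

delta_x = 1e-9             # confine the electron to about one nanometer
min_delta_p = HBAR / (2 * delta_x)          # minimum momentum uncertainty
min_delta_v = min_delta_p / ELECTRON_MASS   # ...expressed as a velocity

print(f"Velocity uncertainty: at least {min_delta_v:,.0f} m/s")
# About 58,000 m/s -- 58 kilometers per second, just from being pinned down
```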

The trickeration that might explain the very beginning is that the state of being perfectly at rest is impossible according to Heisenberg. Imagine a pendulum at rest. If it were truly not moving at all, you’d know where it was (the bottom) and its motion (zero), but such certainty breaks the rules. Quantum physicists say, then, that perfect rest is “unstable.” Even a resting pendulum must vibrate back and forth. Similarly, even a completely “empty” field would see particle and anti-particle pairs (like protons and antiprotons; yes, antiprotons exist) vibrate into and out of existence. That, quantum physicists argue, could have been how it all started — with one particularly good vibration that “took,” and “inflated” into the universe that we know today.

Got it? If not, don't worry. Hawking's book might have sold 10 million copies, but I'd estimate that approximately 0.000663% of its readers actually understood what they were reading. I was surely part of the majority; Hawking lost me at "imaginary time."

Luckily, however, Richard Feynman, perhaps the most famous quantum physicist of all time, famously said that “nobody understands quantum mechanics.” So, we’re not alone.

Another modern answer to the question of the very beginning that — thankfully — is far easier to understand than quantum physics is string theory. According to the theory, the smallest things are not molecules, nor atoms, nor quarks, but tiny, vibrating strings that exist in eleven dimensions. Of course, to leave room for the four-dimensional world we experience (height-width-depth-time), the extra dimensions have to "curl up" alongside our four; to describe how, scientists appeal to objects called branes. If you forget what a brane is, Wikipedia has a helpful reminder:

Branes can be described as objects of certain categories, such as the derived category of coherent sheaves on a complex algebraic variety, or the Fukaya category of a symplectic manifold.

So, yeah. I was lying about the “easier” thing.

Our job here is only made harder because there isn’t even just one string theory. Some scientists like the “branes” description — whatever that may be — while some prefer other numbers of dimensions and other ways that the strings “fold” to create the world. Stephen Hawking explains that:

…there appeared to be at least five different theories and millions of ways the extra dimensions could be curled up… String theorists now believe that the five different string theories [are just] approximations.

So, there are at least five string theories and millions of ways (within each theory) that the strings could have folded into the four dimensions that created our universe. This seems like a theory that laypeople like us are years from being able to truly understand. The string-theory coal needs far more time before physicists have diamonds ready for sale in the airport bookstore.

One fun string-theory fact for the road: strings have their own Planck measurement, this time the "Planck length," often described as the smallest physically meaningful distance — and the hypothesized width of a string. Measuring in at 1.6 × 10⁻³⁵ meters, these strings would be roughly a trillion trillion times smaller than the smallest things that our most powerful electron microscopes can currently see. Strings are tiny, and fully grasping how they could have created our expansive universe must challenge even the most powerfully imaginative among us.
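
That comparison is easy to sanity-check. The half-angstrom resolution figure below (5 × 10⁻¹¹ meters) is my own rough stand-in for what the best electron microscopes can resolve; the Planck length comes from the paragraph above.

```python
# How many Planck lengths fit across the smallest thing we can image?
PLANCK_LENGTH_M = 1.6e-35        # meters, from above
MICROSCOPE_RESOLUTION_M = 5e-11  # meters; a rough figure for top electron microscopes

ratio = MICROSCOPE_RESOLUTION_M / PLANCK_LENGTH_M
print(f"{ratio:.1e}")  # ~3.1e+24 -- on the order of a trillion trillion
```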

The third modern theory of the beginning is less vibrational than the two before — and (actually, this time) this one isn’t as crazy. The multiverse theory asks whether our universe might be one of many. Theoretical physicist Alan Guth, who (according to the Internet) does not surf, was the first to realize the power of this idea on December 7th, 1979, the day of his “SPECTACULAR REALIZATION” that:

This kind of supercooling can explain why the universe today is so incredibly flat — and thereby resolve the fine-tuning paradox pointed out by Bob Dicke in his Einstein day lectures.

The problem Guth was trying to solve, as he noted, was the “fine-tuning” of the universe. That fine-tuning is worth an aside here.

As you can imagine, the odds of an explosion as radical as the Big Bang producing order are exceedingly low. Explosions blow things apart; they don’t put them back together again. So, how did the Bang result in an orderly universe?

The sheer number of variables that the Bang had to balance to produce a stable universe is staggering. For example, if mass were bound together slightly more tightly than it is now (via the strong nuclear force), we'd only have very heavy elements. Or if mass were bound together more loosely, we'd only have very light elements. Neither all heavy nor all light elements does a universe make. Similarly, the precise tuning of the electromagnetic force is notable for its minuscule margin of error — if it were off by just one part in ten quadrillion (10¹⁶), our universe couldn't exist. Compared to gravity, however, that margin is spacious. Eric Metaxas, the lover of crazy small numbers whom we met earlier, explains that if gravity had been off by just:

one part in 10,000,000,000,000,000,000,000,000,000,000,000,000,000, the universe would not exist. But somehow, it is just what it needs to be. Statistically this is quite impossible, but once again there it is, and here we are… John Lennox says that the accuracy needed to hit a number that precisely is "the kind of accuracy a marksman would need to hit a coin at the far side of the observable universe, twenty billion light-years away."

I can’t help but ask: how good of a shot is Lennox? That analogy is ridiculous, but so are those numbers. The real question, of course, is how the Big Bang managed to strike this balance so perfectly. Just one fraction off and the shot misses.

This was Alan Guth's SPECTACULAR REALIZATION: that perhaps if there were many, many universes, the odds that just one of them happened to work out might be more believable. It would take an ungodly number of universes to make stability probable — but how else to explain the fine-tuned result of the Big Bang? That, in a nutshell, is the value of the multiverse theory. Hitting the lottery by buying one ticket is nearly impossible, but if you buy out your local gas station, your odds will be (slightly) higher.
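
The lottery logic is easy to sketch. The one-in-a-million "fine-tuning" probability below is a made-up stand-in (any realistic estimate would be unimaginably smaller); the point is only to show how the odds of at least one winner change as the number of tickets, or universes, grows.

```python
# Odds that at least one universe "works out," given many independent tries.
# p is a stand-in probability; real fine-tuning estimates are far, far smaller.
p = 1e-6  # chance that any single universe is hospitable (illustrative only)

for n_universes in [1, 1_000, 1_000_000, 10_000_000]:
    at_least_one = 1 - (1 - p) ** n_universes
    print(f"{n_universes:>10,} universes -> {at_least_one:.1%} chance of a winner")

# 1 universe -> ~0.0%; a million universes -> ~63.2%; ten million -> ~100.0%
```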

But just because the theory is helpful does not mean that it’s true. Scientists aren’t in agreement on the truth of the multiverse theory — really, most at this point seem to be stuck making crazy, arbitrary analogies and debating about inflation, the mechanism for how multiverses might form. The strongest statements that you’ll find on the subject say that “there’s nothing preventing inflation from being true,” which is less than convincing if you ask me.

All of this uncertainty is enough to make a guy wonder if there might be an easy way out — a theory that could trump the others, or transcend their crazy, tiny, many-faceted minutiae. Luckily, I found another candidate in the form of a theory backed by Stephen Hawking, no less. Unluckily, it didn't do much by way of explanation (and was trapped in a book written by Hawking, with whom I had a history).

After some effort, though, I managed to understand the gist. The first facet of the theory is an appeal to the notion of spacetime — which, if you've seen the movie Interstellar, you know to be the modern discovery that what we perceive as "time" is really a whole lot like the other three dimensions of space. Spacetime can be warped, manipulated, and skewed. Most notably, time warps can occur around objects with crazy-strong gravity like black holes. The physics behind those time warps is above my pay grade, but the basic idea is that time (as we know it) goes much, much more slowly around black holes. In Interstellar, one hour on a planet near a black hole equates to seven years on Earth because time is literally being warped by the gravity of the black hole, just as space warps in the more intuitive way. Hence, spacetime.
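
For the curious, the simplest version of that slowdown is the formula for a non-spinning (Schwarzschild) black hole; the movie's planet actually orbits a rapidly spinning one, which this formula doesn't capture, so the numbers here are purely illustrative.

```python
import math

# Gravitational time dilation near a non-spinning black hole:
# (time far away) / (time near the hole) = 1 / sqrt(1 - rs / r),
# where rs is the Schwarzschild radius and r is the distance from the center.

def slowdown_factor(r_over_rs: float) -> float:
    """How many seconds pass far away for each second spent hovering at r = r_over_rs * rs."""
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

for r_over_rs in [10, 2, 1.1, 1.01]:
    print(f"r = {r_over_rs:>5} * rs  ->  {slowdown_factor(r_over_rs):,.2f}x slower")

# Hover ever closer to the horizon and the factor grows without bound;
# Interstellar's one-hour-equals-seven-years needs a factor of roughly 61,000.
```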

The second half of the theory is to imagine that time, like space, may have been created in the Big Bang. If it were, Hawking argues, there would be no boundary to time — even though it would be finite. The concept of “unbounded but finite” is hard to understand initially, but just think of Hawking’s time as the surface of a basketball. If you were an ant walking on that basketball, you’d never find a place that the basketball “stops.” There are no bounds to the ball as far as you are concerned. But of course, the ball is not infinite — it has a definite size and shape. The ball — like time — has a finite beginning, but no boundary; if you think of it more as spacetime than time, and imagine space beginning with the Big Bang, this idea makes a certain amount of sense. It would mean, in Hawking’s words, that “the laws of physics hold everywhere.” No matter where, and no matter when.

He explains:

Since events before the Big Bang have no observational consequences, one may as well cut them out of the theory, and say that time began at the Big Bang. Events before the Big Bang, are simply not defined, because there’s no way one could measure what happened at them.

[…] One wouldn’t have to appeal to something outside the universe, to determine how the universe began. Instead, the way the universe started out at the Big Bang would be determined by the state of the universe in imaginary time. Thus, the universe would be a completely self-contained system. It would not be determined by anything outside the physical universe, that we observe.

You can never come to a place where the laws don’t hold; the universe is self-contained, and the question of “what came before” is nonsensical because nothing came before time was invented.

I like the idea of this theory — it’s clean, complete, and supported by someone much smarter than me. But, from my layman’s perspective, there are still questions left to answer. Where did the rubber for the basketball come from in the first place? Why can’t we just imagine some other point outside of the ball, and ask why there is nothing there? And if the universe is self-contained, wouldn’t that be devastating for physicists? With nothing but the universe, what could have created the matter within the singularity before the Big Bang?

That final question highlights an assumption that I learned is troublesome for all four candidate theories of the very beginning. What could have created the universe? It couldn’t have been something in the universe — or else that something would have had to create itself — but if it wasn’t something in the universe, what the heck was it? And what created that thing? We want to find something powerful to the point of self-sufficiency, able to pull itself up by its own bootstraps and defy what we know about causality. But, even if something fit that definition — like, I don’t know, God — how could we believe in it? Wouldn’t it almost definitionally be beyond our comprehension?

Same Song, Second Verse

There was a sizeable part of me that, after wading through the physics above, devolved into nihilist doubt and existential angst. Maybe I was just never meant to know the answers to things like the physics of the quantum world, or whether we are just basketball-riding ants.

Is there a way that any one person could really know the answers to all of these questions with any reasonable degree of certainty? The sane, logical answer is no. How could we expect anybody to take the time? Hawking might understand it all, but that can’t be the general expectation; it took me the better part of a few months of reading to realize that unless this was my job, I would never truly understand quanta, strings, multiverses, or basketball-universes. So, in lieu of dragging you through more (informed, but undeniably amateur) point-counterpoint, let me take a step back here to talk about what I think these theories mean — speaking not as any sort of authority, but instead as a person who, like you, is just doing his best to make sense of the incredibly complicated story that physics tells of the very beginning.

First, the obvious: there are four theories here, not one. Scientists would like nothing more than to have a Theory of Everything, but that idea is still mythical; there are unresolved questions all over physics. If you search for "List of Unsolved Questions in Physics" on Wikipedia, you'll see what I mean. That page makes you wonder what is solved. Some questions, in fact, sound more like contradictions — for example, there's a fundamental disagreement between quantum mechanics (the theory of the small) and general relativity (Einstein's theory of the big, the one that used to contain the cosmological constant). Neither can live while the other survives. Similarly, it seems like it can't be the case that quanta and strings and multiverses and basketballs will all end up being correct. So, where's the truth?

The only conclusion that I can come to is that there must be something missing from the entire set. If one theory was true in the same way that the Big Bang Theory is true, we would have already seen consensus from the scientific community — and while scientists seem to be optimistic about the potential of these candidates, the theories very obviously lack the same kind of hard evidence that make evolution and the Big Bang so undeniable.

For example, take the theory of the quantum vibrational beginning, with particles and anti-particles popping into and out of existence. On one hand, it’s easy to see why the theory is so widely popular — mathematically, it’s elegant and intuitive that “zero” might be able to produce “one” and “negative one” while keeping the sum of the field still equal to zero. But we have no hard evidence, just math; and is math enough? It wasn’t enough for our early calculations of retrograde motion.

When ancient astronomers looked at the stars, they noticed that some planets seemed to double back on their otherwise perfect, logical orbits. (That's actually how they came to be known as "planets" — the Greek planetes means "wanderer.") These astronomers were clever, though, and devised incredibly complicated mathematical models that could calculate the retrograde orbits. But, of course, the planets weren't doubling back on themselves; they never actually wandered. Astronomers just hadn't realized that the planets, Earth included, orbit the Sun, and that the apparent backtracking is simply what you see when one orbiting planet overtakes another. The math worked to explain retrograde motion, but the theory wasn't true. What if the same pattern holds for quantum mechanics?
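
You can watch the illusion happen with a few lines of simulation. Put Earth and Mars on simple circular orbits around the Sun (real orbits are ellipses, so this is only a sketch), track the direction from Earth to Mars over time, and count how often that direction drifts backwards.

```python
import math

# Toy heliocentric model: does Mars ever *appear* to move backwards from Earth?
EARTH_RADIUS_AU, EARTH_PERIOD_YR = 1.000, 1.000
MARS_RADIUS_AU, MARS_PERIOD_YR = 1.524, 1.881

def position(radius, period, t):
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

def apparent_direction(t):
    """Direction from Earth to Mars (radians), as an earthbound observer sees it."""
    ex, ey = position(EARTH_RADIUS_AU, EARTH_PERIOD_YR, t)
    mx, my = position(MARS_RADIUS_AU, MARS_PERIOD_YR, t)
    return math.atan2(my - ey, mx - ex)

STEP_YR = 0.005
retrograde_days = 0.0
for i in range(int(3 / STEP_YR)):  # watch the sky for three years
    drift = apparent_direction((i + 1) * STEP_YR) - apparent_direction(i * STEP_YR)
    drift = (drift + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    if drift < 0:  # apparent direction moving backwards: retrograde
        retrograde_days += STEP_YR * 365.25

print(f"Mars appears to move backwards on roughly {retrograde_days:.0f} days out of ~1096")
# Real Mars spends about 70 days in retrograde around each opposition (~every 26 months).
```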

The other theories are no better off; in each case, we seem to distinctly lack primary, observational evidence. Strings, for one, are fundamentally impossible to ever observe. I didn’t even know there was an eleventh dimension, which seems like a problem. But even if it wasn’t, I feel like we’d hit a natural limit before our microscopes could see at the Planck scale. Similarly, how could we ever see another universe in the multiverse, or confirm that there isn’t anything outside of our basketball? Alan Guth, the SPECTACULAR REALIZER himself, even admitted his similar doubts, saying: “I’m a fan of the multiverse, but I wouldn’t claim it is true.”

On an even more fundamental level, I am not sure that these theories even answer the question of what came first. They take us one step beyond the Bang, at best, but is it unreasonable to ask what came before the strings, the quantum fields, or the multiverse?

Even posing this question reminds me of the good old days — when I was an inquisitive nerdling who wouldn’t accept any answer he was given. At a certain point, there’s just nothing else that someone can say — but I don’t think that makes my questions about the very beginning any less fair. David Albert, professor at Columbia University, agrees that our current quantum theories:

…have nothing whatsoever to say on the subject of where those fields came from, or of why the world should have consisted of the particular kinds of fields it does, or of why it should have consisted of fields at all, or of why there should have been a world in the first place. Period. Case closed. End of story.

In the end, we have a bit of a paradox on our hands. Do the theories of physics hit rock bottom, or don’t they? If they do not — and our theories are forced to go on forever and ever — we aren’t much better off than turtle-lady, the star of my favorite physics story of all time:

A famous scientist once gave a lecture on physics, cosmology, and the structure of the solar system, and opened the floor for questions afterwards. After a few normal back-and-forths, an old woman came to the microphone.

“Your theory that the sun is the center of the solar system, and the Earth is a ball which rotates around it has a very convincing ring to it, but it’s wrong,” she said, “I’ve got a better theory.”

The scientist asked what that might be. The woman replied that the Earth’s crust is actually just the shell of a giant turtle — and the scientist was surprised. “If your theory is correct, madam,” he asked, “what does that turtle stand on?”

“That’s a very good question,” she replied “but I have an answer: the first turtle stands on the back of a second, far larger, turtle, who stands directly under him.”

“And what does that turtle stand on?” he continued.

To this, the woman laughed. “It’s no use, mister — it’s turtles all the way down.”

If we don't have that ultimate warrant, we have a string of justifications with no base, or a train with no engine, or knowledge built by standing on the shoulders of giants… in freefall. It really can't be that there is an infinite number of turtles.

But would we really be better off with the alternative: one final answer to rule them all? At the very beginning of my journey, I wanted nothing else but such an answer — a concrete, “hard” answer that could prove beyond a shadow of a doubt that the religious worldview was doomed. Through my research, however, I started to realize how impossible it would be to find that kind of an answer.

In psychology, I got caught in the infinite loops of our conscious minds: we couldn’t escape the horseradish; finding hard evidence was precluded by the ineffability of our minds. In biology, I found a similar but slightly different story: we had some hard evidence, but it all pointed us in the direction of an infinitely improbable event — one that probably could never be explained by brute physical fact. Or, understood differently, perhaps this is what a hard answer really looked like (i.e., “life just happened”), and I didn’t like what I saw.

And now, even in physics, perhaps the hardest science of them all, I couldn’t find an acceptable rock-bottom truth. Could the answer be that there wasn’t such an answer to find?

It may have taken me the better part of three years, but it was only after all of my research — through all three domains of science — that I remembered a passage from Gödel, Escher, Bach, the book by Douglas Hofstadter that I have already mentioned as one of my favorites.

In GEB, Hofstadter tells the story of Principia Mathematica, a book that claimed to establish (and seemed to succeed in establishing) a system for judging the truth of any mathematical statement using just a few simple rules. If your statement was valid according to the rules, it was a true statement of mathematics. In the language we've been using, Principia Mathematica (or PM) would be the hardest source of truth imaginable: it explained everything, and was formed using purely mathematical rules.

Kurt Gödel, however, didn’t believe that such a “hard” source of truth could really be as totalizing as it claimed.

In a truly ingenious way, Gödel found a way to render the classic brain buster “This sentence is false” in the language of Principia Mathematica as his devastating, and simple, statement “G”:

“G” cannot be proved within the Principia Mathematica.

The statement was legitimate according to the rules, as even the authors of PM recognized, but had a horrifying implication: If the statement G could be proved within PM, then it negated itself — and PM had a contradiction written into its very rules. But, if G couldn't be proved within PM — meaning, the statement G is true — then PM is fundamentally incomplete. Gödel proved that (assuming, as everyone did, that PM was consistent) the latter possibility must be the case: that G was a true mathematical statement, even though it couldn't be proved to be so within the Principia Mathematica.

The conclusion that Gödel drew, and later generalized, was that any sufficiently powerful system must be incomplete in this same way. No "hard" system can justify itself; such a system cannot be both consistent and complete.
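
For readers who want the precise, modern statement of the result being gestured at here (phrased for any formal system, not just PM), it is usually written something like this:

$$\text{If } F \text{ is consistent, effectively axiomatized, and strong enough for basic arithmetic, then}$$
$$\exists\, G_F:\quad F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F \qquad \text{(incompleteness)}$$
$$F \nvdash \operatorname{Con}(F) \qquad \text{(}F\text{ cannot prove its own consistency)}$$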

Math is about as hard a system as it gets, and it was still vulnerable. Other systems, like psychology, or biology, or physics — or even science more generally — can fall prey to the same trap. If they’re hard enough, they are incomplete.

I knew of Gödel’s conclusion even before I started my psychology research, but hadn’t fully put it together until the very end of my journey: finding that final, ultimate warrant that can justify the whole train above it might just be as impossible as proving PM to be complete.

So, then, what are we to make of the beginning? Can our questions about the beginning ever truly stop? Or will our turtles have to go all the way down?

Believing the Crazy

To wrap up our discussion on physics before considering faith, truth, and everything else, I can hear your counter to my physics nihilism above: sure, we can technically never get down to the ultimate truth, but does that mean that we don't understand what happened in the beginning? Do we really need to go any further than understanding the beginning to a reasonable degree of certainty? The theories of the very beginning that we have might not be perfect, or get us back to the ultimate truth, but they still do a good job of helping us make sense of the world before the Big Bang.

I hear that argument completely: it was my gut reaction to this entire debacle, and it’s a sentiment that many, many scientists might agree with. If there’s one thing that I have learned from my (now encyclopedic) knowledge of the science vs. religion debates on YouTube, it’s that scientists hate when technicalities get in the way of a work-in-progress, or when intermediate theories like those four I discussed above are treated as final. The truth is that good science takes time, and demanding a totalizing theory immediately is unfair. Give it time, and science will find the way to the truth — even if it’s not quite there yet.

That was my first take, and my opinion as I worked my way through psychology, biology, and physics. In every case, I was forgiving. I found answers that contradicted what I believed, but excused them for one reason or another. The science wasn’t hard enough for psych; I didn’t know enough about biology; the theories hadn’t had enough time in physics. Excuses, all across the board. Why, I wondered, was I so willing to give science a free pass, but so brutal in demanding that religion stand up to my every question and challenge?

Granted: a super-intelligent, all-powerful man in the clouds is a crazy idea that probably warrants skepticism. But is it really any less crazy to believe in the alternatives? In physics, are we really supposed to believe that our universe is made of vibrating micro-strings? Or that our universe is just one of an infinite number in the multiverse, and we happened to randomly find ourselves the one universe that worked out?

Richard Swinburne said that “to postulate a trillion-trillion other universes, rather than one God, in order to explain the orderliness of our universe, seems the height of irrationality.”

Try this on for size: The strings that make the universe are invisible. They exist in another dimension, in a reality so far beyond ours that it’s easy to think of them as entirely separate from our world; yet, other people in positions of authority tell us that they are the reason the world exists in the first place. If we are to believe them, we have to think that a mysterious, indescribable action a few dimensions away brought our world into being. We will never see these strings, no matter how hard we look, and no matter how far our technology brings us. We still think that they exist, however, because they help us make sense of the rest of the world.

Replace each instance of “strings” in that paragraph with “God,” and you’re not too far away from one of the most popular quotes that the prolific Christian author C.S. Lewis ever wrote: “I believe in Christianity as I believe that the sun has risen: not only because I see it, but because by it I see everything else.”

Would it really be so crazy to believe in God after uncovering such crazy science?

It is paradoxical, yet true, to say that the more we know, the more ignorant we become in the absolute sense, for it is only through enlightenment that we become conscious of our limitations. Precisely one of the most gratifying results of intellectual evolution is the continuous opening up of new and greater prospects.

― Nikola Tesla

Part Five: Faith

And Truth, and Everything Else

WHEN I WAS A FRESHMAN at Michigan — and a fresh enrollee in ECON 101 — I knew everything about economics. It was all so simple. Are prices too low? Demand exceeds supply, and market forces will push prices up. Are prices too high? Supply exceeds demand, and prices fall towards equilibrium. I was unstoppable. Invincible. Future PhD. And then I got to ECON 301. In just a sentence, I was put back in my place: “The first thing to know about micro,” my new professor began, “is that everything you learned in ECON 101 was a lie.”

As it turns out, the laws of economics aren’t that simple. (Which, in retrospect, explains why nobody in this country actually understands the economy.) Real people don’t have demand curves, and as any business-person can tell you, most companies aren’t flying, they’re falling with style. Economics is more of an art than a science: the entire institution depends on people behaving rationally, and nobody is fully rational.

I mention the rise and fall of my economics stardom to reinforce one of my favorite quotes of all time. It’s a paradox, said prolific inventor Nikola Tesla, that the more that you know, the more “ignorant” you become. If you have ever gotten past 101 knowledge — whether in economics, religion, science, or anything else — you know that Tesla is profoundly right, because you know what it’s like to understand the depths of what you don’t know.

The primary difference between the Christian Keil who began this project and the Christian Keil who exists now is that the former was pretty certain that he knew the truth, and the latter isn’t quite sure anymore. That seems backwards, but I wouldn’t trade places with old Christian for anything in the world.

I might not know the final answer, but I do know exactly what I think about how psychology, biology, and physics relate to my potential faith in God. And, perhaps more importantly, I know what it’s like to have a strong opinion, consider the evidence against it, and totally change my mind. If there’s one thing that I’ve learned, it’s that modern science looks a whole lot more like religious faith than I’d thought. And that thought, by itself, is worth the journey that it took to get here.

Ignorance and Uncertainty

When I began my journey, I was ignorant: I didn’t know why I believed what I believed. I had opinions but no proof, and didn’t understand the scientific world in which I trusted. As I began to learn about that world, I became uncertain. That transition was a surprise, because ignorance and uncertainty seemed to be the same thing. It wasn’t until much later that I came to understand the subtle difference between the two concepts: ignorance is bad, and uncertainty is just uncomfortable.

That first claim shouldn't be a hard sell. Ignorance, to me, is nothing more than fear: fear of failure, fear of the unknown, and fear of wasted effort. I have harbored all three. It's easy to feel comfortable by narrowing your horizons, especially when the possibility of failure looms large. If you have your beliefs already, why question them? Doing so will decrease your confidence, and may even leave you in a liminal state where you can't say for sure what you believe. Saying "I'm not sure yet" — and admitting your uncertainty — is uncomfortable, and scary. But at least it's honest.

Perhaps the hallmark of adulthood, or at least young adulthood, is pretending that you know the answers to things that you're not sure about. Of course I know how to make a grilled cheese sandwich, I say as I toast two slices of bread and melt Kraft singles between them in the microwave. Of course I know that April 15th is tax day, I say, having yet to give Uncle Sam even a passing thought on April 9th. And what is true of grilled cheese and taxes is certainly true of the bigger questions, like religion and politics and morality. This may very well be my own projection, but I would be very surprised if there didn't exist a silent majority that fits somewhere between the extremes that society so often defines for these huge issues. Not everybody is 100% for or against the death penalty, and not everybody fits into either pro-life or pro-choice. There is an entire spectrum of belief, and most of it must fall between the two extremes.

To me, choosing between one extreme or another is often a matter of practical necessity, not honest truth. I’m not certain what I believe about either issue, and even though I would publicly profess to be anti-death penalty and pro-choice, my decision is far more about how I want to be perceived than what I believe to be the true value of a human life. How could I possibly know what a life is actually worth? Another truism of adulthood is that the world is painted in shades of gray: everybody silently knows (even if few will admit) that most of us have no real clue about these kinds of issues. We have our opinions, and they can be strong and well-founded. But we still have no real idea of the underlying truth.

If you’re uncertain about what you believe, you’re not alone.

From my experience, it is far, far better to admit your uncertainty — if only to yourself — than to stay on the (admittedly enticing) plateau of ignorance where your beliefs are unfounded. Uncertainty means expanding your horizons, being open to all of the evidence, and taking the time to put together disparate and often contradictory sources of information and opinion. And, for me, it meant being brutally honest with myself about the shortcomings of my current view of the world.

My particular move from ignorance to uncertainty took the better part of a few years, and coincided with my slow realization that I really didn’t understand modern science as well as I thought I did. I held on for dear life, making excuses and exceptions galore, but in the end I could only come to one conclusion: that there are certain areas in which science doesn’t have all of the answers.

The natural extension of my conclusion — that God might be found in the gaps in the scientific explanation of the world — is not my own; Dietrich Bonhoeffer called this idea the “God of the Gaps” hypothesis. Candidate “gaps” have been proposed to exist in physics, chemistry, biology, psychology, paleontology, geology, and elsewhere. The list goes on and on — wherever science has yet to explain a particular phenomenon, “gappers” find weakness. But, as I found almost immediately upon starting my research on science and religion, the God of the Gaps hypothesis is almost universally despised among scientists and believers.

The objection from the scientists is predictable: they believe that with time, all gaps in the record will be closed. In the early 1900s, it seemed that we might forever have a "gap" in human knowledge of the very small because we could never see inside individual cells. But then, in 1931, scientists invented the electron microscope, allowing up to ten-million-times magnification. That's small enough to look into cells, organelles, viruses, and even individual atoms themselves. The religious argument against God-of-the-Gaps is a reaction to the historical fact that if a gap can be filled, it will be. "If that's how you want to invoke your evidence for God," says Neil deGrasse Tyson, "then God is an ever-receding pocket of scientific ignorance."

For a very long time, I also cast aside the God of the Gaps hypothesis. If even believers weren’t willing to stand by it, I didn’t see any reason to do so myself. But then, I happened upon scientist-believer John Lennox’s book, God’s Undertaker. Lennox’s reframing of the classic argument set my imagination afire:

…there would appear to be two kinds of gaps: those that are closed by science and those that are revealed by science — perhaps we could call them ‘bad gaps’ and ‘good gaps’, respectively. As an example of a bad gap, we might think of Newton’s suggestion that God occasionally had to tweak some of the orbits of the planet to bring them into line. That kind of gap we could expect to be closed by science because it falls within the explanatory power of science to settle. Good gaps will be revealed by science as not being within its explanatory power. They will be those (few) places where science as such points beyond itself to explanations that are not within its purview… This is not a gap of ignorance, but a gap in principle… revealed by our knowledge, and not by our ignorance, of science.

What if Lennox is right? In that case, the criticisms of both scientists and believers could be answered. God could stay constant; and if constant, he would have to be accounted for in any scientific theory of the universe. The question, of course, is whether such good gaps exist — and after my research on consciousness, the beginning of life, and the beginning of the universe, I am inclined to think that they might.

The hard problem of consciousness may just be too hard for the tools of science to handle. We can see into our own brains, but not those of others, so we may never truly confirm what qualia really are. Are they the sole result of neurons and neurotransmitters? Or do they need some soul power as well?

Similarly, the preponderance of the biological evidence says that life must always come from other life. Yet of course, life must have moved from zero to one at some point in history, even though it would have needed to search 1224 times more planets than we have in the universe to have a 50% chance of finding a home. Wouldn’t it have needed some outside help to beat those incredible odds against spontaneous generation?

And finally, the gap at the very beginning of the universe feels truly "good." We know exactly what happened back to the first hundredth of a second after the Big Bang, and exactly nothing about what happened before it. Even the theories we do have are susceptible to the devastatingly simple question "well, what came before that?" A prime mover must exist (for how else could the universe have started?), and must not exist (for how could a prime mover come to be?). We have an asymptote at the beginning of time with science on one side and philosophy on the other. Or, in other words, a well-defined good gap.

These conclusions, all taken together, have made me deeply uncertain about my own beliefs. I don’t think that God definitely resides in any of these gaps, but he very well might; the scientific story in which I trusted is, after a detailed review, patchwork, and far less than the totalizing worldview that I once hoped could disprove God.

Uncertainty and Belief

It took me a concerted effort to get to uncertainty, and once I was there, I didn’t like it. I moved from a world of (ignorant, unjustified) certainty to one of doubt — and at the end of my exploration through psychology, biology, and physics, I was skeptical in a borderline destructive way. I had placed so much trust in science, but after digging into the evidence — and making a fairly grand gesture by digging so deeply — I found myself either a solipsist or a nihilist. At the end of my journey through physics, in particular, I was left wondering whether any absolute truth existed — or if we would be stuck like turtle-lady, clinging to crazy theories that purported to explain more than they could.

That wasn’t where I hoped to end, and it would have been very easy to stop there, let down and skeptical. I could hold to my belief that uncertainty was a better place to be intellectually, but in practice, it stunk.

No matter what your particular journey, I am sure that you will find dark days like those I experienced; they’re an inevitable consequence of learning how much you don’t know. It also may be inevitable when considering the biggest questions that we know to ask. I often felt outgunned, trying to comprehend our seemingly infinite universe with my decidedly finite brain.

Imagine that you owned a string the length of the equator — about 25,000 miles, or 132,000,000 feet long — and decided to tie it around the planet. (A natural choice.) Once tied, the string would be totally taut to the ground with no room underneath. Also imagine that I, thinking I'm a hotshot, came along with a very slightly longer string: say, 132,000,003 feet long. My string would be about 0.0000023% longer than yours. How much slack do you think would be under my rope if we wrapped mine around the equator as well? Intuition says almost none. Distribute those three feet around the Earth, and you'll be lucky to fit a big toe under the rope. But amazingly, my string would sit nearly six inches further off the ground than yours. This is after distributing the extra length around the whole equator: almost six full inches of clearance. I still can't really believe it, even knowing the answer — my "common sense" just isn't fit to answer questions at such a grand scale. And, of course, in the grand scheme of things, the Earth is actually fairly small; the observable universe is 93 billion light-years across, an incomprehensible 22 quintillion (22,000,000,000,000,000,000) times larger than the circumference of Earth. And even the largest numbers are nothing compared to the infinities that abound in religion.
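
The geometry behind that surprise is a one-liner: circumference and radius are locked together by C = 2πr, so adding three feet of circumference always adds 3/(2π) feet of radius, no matter how large the circle already is. A quick check:

```python
import math

# Add three feet to a string wrapped around any circle and see how far
# the string lifts off the surface. The planet's size never enters into it.
extra_circumference_feet = 3.0
lift_feet = extra_circumference_feet / (2 * math.pi)  # from C = 2*pi*r

print(f"{lift_feet * 12:.1f} inches of slack, all the way around")
# 5.7 inches of slack, all the way around -- whether it's Earth or a basketball
```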

It’s easy to feel like this is all just a bit unfair. Why should we mortal, fallible, downright small beings ever be asked to understand the infinite universe? This question was (one of) the devil(s) on my shoulder throughout my journey.

As I worked through psychology, biology, and physics, my uncertainty grew and grew and grew — but luckily for all of us, that is only half of the story, for as I learned about science, I was also learning about faith. At the beginning of this journey, I thought faith was fluff — a concept invented by the religious to mask their inability or unwillingness to understand the hard evidence. But in the end, against all odds, faith became the hero of my story.

Faith, as I now understand it, is how you get from uncertainty to belief, and how you find truth in an increasingly uncertain and ambiguous world. You might remember the New Atheist refrain that faith is belief without evidence, or even in the face of it, but faith seems to be quite the opposite, at least when done right. The haters are correct that faith can be a leap — that it pushes beyond what you can concretely know for a fact — but real faith, I think, is justifiable, honest, and true. True to you, a particular person, of course — but still deeply true, nonetheless.

An analogy will help: think of your life as a scatterplot, and every experience of your life as a single point on that graph. As you go through life, your plot will fill up with points and you may begin to develop a coherent picture of the underlying trends and patterns beneath the randomness. You’ll notice, perhaps, a correlation between the onset of fall and the onset of your allergies, or a correlation between the effort that you put into a task and the reward that you take from it. Your experiences will teach you about the world, and the patterns that arise can help to inform your beliefs about the world (e.g., “I have fall allergies,” “hard work is worth it”).

Of course, the real world is rarely linear. You’ll miraculously make it through one fall season without needing Claritin; you’ll work hard on an assignment for school only to be let down by a poor grade. That’s where faith comes in — to salvage coherent meaning from the noise that results from the uncertainty and non-linearity of life.
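
To push the analogy one step further, here is "drawing the trend through the noise" in its most literal form: an ordinary least-squares line fit to noisy, made-up data. Every number below is invented purely to illustrate the metaphor.

```python
import random

# Fit a straight trend line through noisy "experiences" (ordinary least squares).
random.seed(0)
xs = list(range(20))                                   # e.g., years of experience
ys = [2.0 * x + 5.0 + random.gauss(0, 8) for x in xs]  # a real trend, buried in noise

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"Fitted trend: y = {slope:.2f}x + {intercept:.2f}   (underlying trend: y = 2x + 5)")
# No single point settles anything; the direction only emerges from the whole plot.
```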

The scatterplot of my faith began with a single point — the event during which the Power Team told me that believing in God would give me superpowers. My sample size of one offered unanimous support in favor of God, so I believed. As I grew older, however, the trend broke down. I realized that God probably didn't lead to superpowers, and he might not lead to the truth either. I saw so much variability in my scatterplot, in fact, that I panicked: I disliked the uncertainty of a challenged, doubting faith, and quickly (and prematurely) drew a trend line pointed towards disbelief. I declared a mostly unfounded belief in atheism, and closed myself to future points and future experiences in my life. It was easier, simpler, and cleaner to do so; but of course, not the right way to go about developing mature, honest beliefs.

As I progressed through the events detailed in the first chapter of this book, I erased my shoddily drawn line and instead started scouring the world for more data points — hopefully those that would confirm the trend towards non-belief that I originally hypothesized as a moody teenager. But, during my journey through science and religion, I learned that — once again — I was in an uncertain world. My scatterplot had plenty of micro-trends (e.g., “I believe that Evolution and the Big Bang are Capital-T Theories,” “The odds were heavily stacked against the emergence of life”), but lacked one macro-trend pointing towards either religious belief or disbelief. Over time, such a macro-trend might emerge for me; I sincerely hope that one does.

In a very full-circle way, I suppose that I’m actually wishing here for a faith like my dad portrayed in our original table-side debate. As my dad grew up, religion consistently gave him the answers that helped him make sense of the world. Accordingly, a strong trend developed; one that pointed him towards God. Whether his confidence was built using the same sorts of reasons that I am looking for now is no matter; the important thing is that over time, his faith was strengthened with each new point.

Of course, there was still variability — days where work wasn’t going so well, days when he wondered if the universe was really fair, days when his eldest son showed him Wikipedia articles that contradicted his faith — and he embraced that variability with an open mind. He didn’t shut down in those times, he simply accepted the uncertainty while also recognizing the heavy weight of the trend that his experiences had already built.

It makes total sense, when you think about it, that I wasn't able to persuade him that day. He had spent his entire life thinking about faith and gathering evidence to build up the "denominator" of his experience. Wouldn't it make sense that a single point (just a numerator of "1") wouldn't have as big a sway on him as it might, for example, on my brother? He never said that my evidence couldn't change his mind, just that it likely wouldn't. He wasn't dogmatic, I now realized: he was just mature. His faith was strong — as any healthy faith developed over a lifetime should be — and he had grown resistant to outliers.
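If it helps to see the arithmetic behind that "denominator," here is a minimal sketch in Python, with entirely made-up numbers and names of my own choosing, purely to illustrate the point: one contrary data point barely budges a trend built from thousands of experiences, but it overwhelms a trend built from just one.

```python
import random

def belief(points):
    """The 'trend' of a life, reduced to one number: the average of all points so far."""
    return sum(points) / len(points)

random.seed(2016)

# Dad: decades of experiences that mostly supported his faith (values near +1).
dad = [1.0 + random.gauss(0, 0.3) for _ in range(5000)]

# My brother: a single supporting experience.
brother = [1.0]

# One dramatic piece of contrary evidence (a Wikipedia article at the dinner table, say).
outlier = -5.0

print(round(belief(dad), 3), "->", round(belief(dad + [outlier]), 3))          # barely moves
print(round(belief(brother), 3), "->", round(belief(brother + [outlier]), 3))  # swings wildly
```

The first printed line hardly changes (roughly 1.0 before and after); the second swings from 1.0 to -2.0. That, in miniature, is the difference between a faith built on a lifetime of points and one built on a single Power Team performance.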

Now, I want that kind of faith. Resilient, well-reasoned, and still fundamentally open.

A Scientific Faith

The greatest part about that kind of faith, in my mind, is that I wouldn't have to sacrifice any degree of scientific belief to get there: in fact, this brand of faith is exactly the same as the faith that scientists use every day. How else could they extrapolate from finite observations to universal laws? And really, how else could they justify the scientific method? The method may have worked in the past, but no scientific experiment (which would itself rely on the method) could justify the scientific method without being hopelessly circular.

This was an unexpected conclusion, and one that I initially resisted. Science was the antithesis of faith, at least as I saw it in the beginning — so if I was really about to believe that science was fundamentally faith-based, I'd have to be pretty certain of my new conclusion. But the more that I thought about it, the stronger my convictions became. Even the most hardcore believer in science would have to admit two things: the reliance of the scientific method on induction, and the (at least current) incompleteness of the scientific picture of the world.

Science is inherently inductive. To deduce means to work from a set of definitions, rules, and tautologies (e.g., 1+1=2, à la Principia Mathematica), and to combine those rules in new ways to produce new solutions. Deductive solutions, because they work from mathematical truths, are unerringly true once proven. Unfortunately, however, we don't have direct access to any of the true definitions or rules behind the universe. We can guess or infer them from our experiences in the world, but as soon as we do, we've moved into induction: constructing general laws from specific experiences.

The distinction is key, as physicist Paul Davies explains:

Everybody agrees that the workings of nature exhibit striking regularities. The orbits of the planets, for example, are described by simple geometric shapes, and their motions display distinct mathematical rhythms. Patterns and rhythms are also found within atoms and their constituents. Even everyday structures such as bridges and machines generally behave in an ordered and predictable manner. On the basis of such experiences, scientists use inductive reasoning to argue that these regularities are lawlike… [but] inductive reasoning has no absolute security. Just because the sun has risen every day of your life, there is no guarantee that it will therefore rise tomorrow. The belief that it will — that there are indeed dependable regularities of nature — is an act of faith, but one which is indispensable to the progress of science.

The claim that the sun may not rise tomorrow — although strictly true — is a bit petty and impractical; it might require faith to believe in the sunrise, but only the very smallest amount imaginable. I don't mean to say that there aren't any facts in the world — if you really behaved as if the physical laws of the universe were malleable, you'd be both insane and completely debilitated by your doubt. In other contexts, however, induction is much less secure. Even by the most generous calculation, humanity has only ever ventured through 1/44,000,000,000,000th of the observable universe. Why, then, should we assume that our physical laws apply everywhere?

In situations of absolute (or practically absolute) truth, we don't need faith at all, but in situations of complete uncertainty, it's all we have. So, where do we draw the line, and what do we believe? When do we know enough to know "the truth"? That is the real question of faith, and it's a huge one.

The closest I have to an answer comes from philosopher David Hume:

When anyone tells me that he saw a dead man restored to life, I immediately consider with myself whether it be more probable that this person should either deceive or be deceived or that the fact which he related should really have happened. I weigh the one miracle against the other and according to the superiority which I discover, I pronounce my decision. Always I reject the greater miracle. If the falsehood of his testimony would be more miraculous than the event he relates, then and not till then, can he pretend to command my belief or opinion.
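Put in looser, more modern terms (this is my gloss, not Hume's), the rule is a comparison of two probabilities: believe the report of a resurrection only if P(the witness deceives or is deceived) is smaller than P(the dead man really rose).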

But, even with that definition, how do we define “miraculous” in a universally applicable way?

The difficulty of the grand question of faith — of when enough is enough to construct a valid and defensible belief — extends as much to science as it does to religion. The necessity of faith to religion is self-evident: if you want to believe in an infinitely kind and infinitely wise supernatural being, you're going to need to reason your way, by induction, beyond the hard evidence. That isn't to say that you can't have hard evidence for God; in fact, quite the opposite: if you were to truly believe in God, you would probably need a lot of proactive evidence to suggest he exists.

The same reliance on faith can be seen in science. When do we know enough about these invisible strings to say that they truly exist? How can we be sure that our minds aren't skewing our interpretations of the data on consciousness? As I've shown throughout this book, the issues with which science has the most trouble are often the issues that mean the most to us as thinking people living in the world. Sure, science does an amazing job with the Big Bang — but if we want to know about the very beginning, and where we came from in the first place, we need to move past the evidence, extrapolate from our laws of mathematics and science, and find an answer using the tools of faith.

It can be difficult for those of us predisposed to trust science to admit, but science often depends on faith, especially when it matters most.

We all need faith to make sense of our messy, uncertain world — whether we are scientists, believers, or anything in between.

Finding Faith, and Finding Truth

I’ve used this word a lot, but this really has been a journey, hasn’t it? I was born into belief, then fell out of faith. I realized that I didn’t fit cleanly into any one category, and became agnostic-ish. I worked my way through science to try to prove the truth of atheism, and found that science and religion really aren’t as different as they may seem. As we’re nearing the end of our time together, it’s worth asking: what do I believe now? What’s the conclusion? I suppose that’s an answer that I owe you — but I fear that it’s not the answer that you may have expected as you started reading this book.

The truth is that I still can’t side with either Christianity or atheism. I don’t know the final answer. I’m still probably some version of agnostic-ish — but that isn’t to say that I haven’t changed while writing this book, because I absolutely have.

In the beginning, I thought that science was a hard barrier to belief and that any scientist who knew their stuff could never be a Christian. Now, I don’t think that’s the case at all. 50% of scientists believe. There are (potentially) good gaps in the scientific stories of consciousness, the beginning of life, and the beginning of the universe. At the absolute least, I no longer think that science stands between me and religious faith. In fact, it might even help me believe, as crazy as that may seem.

Science and religion, as I understand them now, are just two different ways to do the same thing: make sense of our crazy, messy world. Our plots are scattered; the world is uncertain, unpredictable, and dynamic; the trends are almost never linear. Science says to use the information in the world, and to find the inductive truth. Religion also helps us understand the world, but asks us to think of the truth beyond, or even behind it. But no matter if you prefer to stick to the information in the world or learn — as I am still learning — to consider the possibility of some deep truth beyond it, you’ll need faith. You’ll never have all of the evidence — and faith, whether scientific or religious, can help you stay true to what you know while also believing in something bigger.

Do you remember that scene in Indiana Jones and the Last Crusade where Indy comes to the bottomless pit on his quest for the Holy Grail? By then, he’s already outsmarted two challenges — the spinning blades and the word puzzle on the floor — and the expectation is that he’s getting close to his grail, but then… boom! Bottomless pit. No way across, no way around, and no way back. Just emptiness in front of him.

In a much less dramatic way, that is exactly the moment when you need faith: when you're facing an entirely new situation or a new piece of evidence. Your powers of induction can tell you what to believe, but the uncertainty is still very real. You have no direct evidence for the new experience, just as Indy has no clue what to do when he faces his bottomless pit.

And what does Indy do? He takes a step into the void and — amazingly — finds solid ground waiting for him in the form of an invisible bridge. His faith carried him across. Honestly, if it were me, I probably wouldn't have taken the step without throwing some sand into the chasm first — but that's just me. It's scary to step into uncertain situations, down roads never traveled, where we are totally without direction. But by traveling with faith, you can act with some degree of certainty, confidence, and calm. You can know that you've done all that you can to prepare for the looming void of uncertainty, and what more could you want?

The world is messy. It’s uncertain and painted in shades of gray; it’s unpredictable and dynamic. As you experience more, your understanding of the world will inevitably change. The only thing that can get in the way of that understanding — and the strengthening of your faith that will result — is yourself. You can do as I did, and prematurely draw a trend in your scatterplot out of fear. But if you embrace the uncertainty, stay honest, and stay open to the world, you can find your truth.

Even after all of this, I’m still looking for my truth. Religion has cleared the hurdle of science, at least for now, but I will still need to address other reservations before I consider myself a true believer. I still don’t know how I feel about the crazy stories of the Bible, and I’m more or less committed to the idea that the Bible is largely metaphorical. I don’t know how I feel about the morality of the Bible either, especially that of the Old Testament. And even in the realm of science, I’m not sure if I know of any proactive reasons to believe that God might be behind the gaps; surely something must be there, but why would we assume that something to be the Abrahamic God?

These reservations are challenges, yes, but I know an honest way to address them: I’ll stay open, consider the evidence, and, yes, probably keep up my hot streak of ordering books from Amazon.

As the bell sounds here, I find myself a strong believer in science, a growing believer in the power of religious thinking, and a supreme believer in the power of faith to make sense of an uncertain world. If I had to describe my newfound self in a word, I might choose faithful — even over agnostic-ish.

I believe in the power of evidence to change open minds, and sincerely believe that no person (or God) could reasonably ask us to do any more than to find our truth — to consider all of the experiences of our lives and to come to the best conclusion we can given the evidence that we have. No one person will ever have all of the evidence, so no person could be expected to have all of the truth, but we can hope to attain a representative sample of it — especially if we care enough to proactively search for the truth, as I (over)did here with science. I may not have all of the answers, but I do have some.

This isn’t the ending that I expected, and honestly, it’s not the ending that I wanted. This book ends in medias res: in the middle of my journey, as a check-in en route to an unknown destination. I am not sure where I’m headed, but I am sure that if you reach out to me in a year, I’ll likely have a totally different take on the issues in this book, because I’m still moving. As of this writing, I’m reading a book about the origins of life. How could I possibly say that the next chapter I read won’t give me some new piece of evidence and change my mind?

The fact that my faith is still in flux is vaguely terrifying, but it’s also exhilarating because I know that I’m on the right path. I’m moving away from ignorance and unthinking belief, and I’m trudging through the uncertain middle in the hopes that faith will set me free. I’m embracing my uncertainty. I’m finding ways to reconcile the faith of my family with the science that I love. And, throughout it all, I’m refusing to give in to fear: the fear of uncertainty, the fear of failure, the fear of admitting that I may have been wrong.

Now that this chapter of my journey is coming to a close, I’m newly empowered: I know what I believe about science and religion, and I’m ready to freely address the other hurdles that stand between me and religious faith. That said, you probably shouldn’t hold your breath in anticipation of Agnostic-ish II: The Reckoning. I’ve done enough talking about myself in this book to last a lifetime.

Now, the burden is on you. What if you had to write that sequel? Would it change how you spent the next week, or month, or year? If you think about it, you will write your own sequel, whether you put it down on paper or not. It may be something that you consciously undertake, or you may keep on keeping on like normal. But every day is another page in your book — or another dot on your scatterplot — and it’s now up to you to decide what those pages will hold.

Regardless of your particular sequel, I sincerely hope that you take some of the lessons that I learned in this book to heart. If this book can help even one person find their faith — whether in science, religion, or anything in between — it will be a success in my eyes.

Not to self-aggrandize, but my journey is proof that the fear that you may feel as you step away from the faith of your parents, or the atheism of your youth, or the agnosticism of your indecision can be beaten. It’s proof that you can know what you believe, even as you face down the biggest issues imaginable. And it’s proof that as you begin your own journey through science, you are not alone.

If you stay open to the world, and refuse to be satisfied with unthinking faith, you can — and will — find your truth. That’s the moral of my story.

And now, it’s your turn.
