ALSO BY ROBERT M. SAPOLSKY

Behave: The Biology of Humans at Our Best and Worst
Monkeyluv: And Other Essays on Our Lives as Animals
A Primate’s Memoir
The Trouble with Testosterone and Other Essays on the Biology of the Human Predicament
Why Zebras Don’t Get Ulcers: A Guide to Stress, Stress-Related Diseases, and Coping
Stress, the Aging Brain, and the Mechanisms of Neuron Death

PENGUIN PRESS
An imprint of Penguin Random House LLC
penguinrandomhouse.com

Copyright © 2023 by Robert M. Sapolsky

Penguin Random House supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin Random House to continue to publish books for every reader.

The English translation by Daniel Kahn, of the Yiddish poem “Mayn Rue Platz” by Morris Rosenfeld, on this page is used by permission.

This page constitutes an extension of this copyright page.

Library of Congress Cataloging-in-Publication Data
Names: Sapolsky, Robert M., author.
Title: Determined : a science of life without free will / Robert M. Sapolsky.
Description: New York : Penguin Press, 2023. | Includes bibliographical references and index.
Identifiers: LCCN 2023023790 (print) | LCCN 2023023791 (ebook) | ISBN 9780525560975 (hardcover) | ISBN 9780525560982 (ebook)
Subjects: LCSH: Free will and determinism.
Classification: LCC BJ1461 .S325 2023 (print) | LCC BJ1461 (ebook) | DDC 123/.5—dc23/eng/20230705
LC record available at https://lccn.loc.gov/2023023790
LC ebook record available at https://lccn.loc.gov/2023023791

ISBN 9780593656723 (international edition)

Cover design: Pete Garceau
Designed by Alexis Farabaugh, adapted for ebook by Cora Wigen

To L, and to B & R,
Who make it all seem worth it.
Who make it worth it.

CONTENTS

1. Turtles All the Way Down
2. The Final Three Minutes of a Movie
3. Where Does Intent Come From?
4. Willing Willpower: The Myth of Grit
5. A Primer on Chaos
6. Is Your Free Will Chaotic?
7. A Primer on Emergent Complexity
8. Does Your Free Will Just Emerge?
9. A Primer on Quantum Indeterminacy
10. Is Your Free Will Random?
10.5. Interlude
11. Will We Run Amok?
12. The Ancient Gears within Us: How Does Change Happen?
13. We Really Have Done This Before
14. The Joy of Punishment
15. If You Die Poor

Acknowledgments
Appendix: Neuroscience 101
Notes
Illustration Credits
Index

my brain: click them
me: why?
my brain: you gotta

1

Turtles All the Way Down

When I was in college, my friends and I had an anecdote that we retold frequently; it went like this (and our retelling was so ritualistic that I suspect this is close to verbatim, forty-five years later):

So, it seems that William James was giving a lecture about the nature of life and the universe.
Afterward, an old woman came up and said, “Professor James, you have it all wrong.”

To which James asked, “How so, madam?”

“Things aren’t at all like you said,” she replied. “The world is on the back of a gigantic turtle.”

“Hmm,” said James, bemused. “That may be so, but where does that turtle stand?”

“On the back of another turtle,” she answered.

“But madam,” said James indulgently, “where does that turtle stand?”

To which the old woman responded triumphantly: “It’s no use, Professor James. It’s turtles all the way down!”[*]

Oh, how we loved that story, always told it with the same intonation. We thought it made us seem droll and pithy and attractive.

We used the anecdote as mockery, a pejorative critique of someone clinging unshakably to illogic. We’d be in the dinner hall, and someone had said something nonsensical, where their response to being challenged had made things worse. Inevitably, one of us would smugly say, “It’s no use, Professor James!” to which the person, who had heard our stupid anecdote repeatedly, would inevitably respond, “Screw you, just listen. This actually makes sense.”

Here is the point of this book: While it may seem ridiculous and nonsensical to explain something by resorting to an infinity of turtles all the way down, it actually is much more ridiculous and nonsensical to believe that somewhere down there, there’s a turtle floating in the air. The science of human behavior shows that turtles can’t float; instead, it is indeed turtles all the way down.

Someone behaves in a particular way. Maybe it’s wonderful and inspiring, maybe it’s appalling, maybe it’s in the eye of the beholder, or maybe just trivial. And we frequently ask the same basic question: Why did that behavior occur?

If you believe that turtles can float in the air, the answer is that it just happened, that there was no cause besides that person having simply decided to create that behavior. Science has recently provided a much more accurate answer, and when I say “recently,” I mean in the last few centuries. The answer is that the behavior happened because something that preceded it caused it to happen. And why did that prior circumstance occur? Because something that preceded it caused it to happen. It’s antecedent causes all the way down, not a floating turtle or causeless cause to be found. Or as Maria sings in The Sound of Music, “Nothing comes from nothing, nothing ever could.”[*]

To reiterate, when you behave in a particular way, which is to say when your brain has generated a particular behavior, it is because of the determinism that came just before, which was caused by the determinism just before that, and before that, all the way down. The approach of this book is to show how that determinism works, to explore how the biology over which you had no control, interacting with environment over which you had no control, made you you. And when people claim that there are causeless causes of your behavior that they call “free will,” they have (a) failed to recognize or not learned about the determinism lurking beneath the surface and/or (b) erroneously concluded that the rarefied aspects of the universe that do work indeterministically can explain your character, morals, and behavior.

Once you work with the notion that every aspect of behavior has deterministic, prior causes, you observe a behavior and can answer why it occurred: as just noted, because of the action of neurons in this or that part of your brain in the preceding second.[*] And in the seconds to minutes before, those neurons were activated by a thought, a memory, an emotion, or sensory stimuli. And in the hours to days before that behavior occurred, the hormones in your circulation shaped those thoughts, memories, and emotions and altered how sensitive your brain was to particular environmental stimuli. And in the preceding months to years, experience and environment changed how those neurons function, causing some to sprout new connections and become more excitable, and causing the opposite in others.

And from there, we hurtle back decades in identifying antecedent causes. Explaining why that behavior occurred requires recognizing how during your adolescence a key brain region was still being constructed, shaped by socialization and acculturation. Further back, there’s childhood experience shaping the construction of your brain, with the same then applying to your fetal environment. Moving further back, we have to factor in the genes you inherited and their effects on behavior.

But we’re not done yet. That’s because everything in your childhood, starting with how you were mothered within minutes of birth, was influenced by culture, which means as well by the centuries of ecological factors that influenced what kind of culture your ancestors invented, and by the evolutionary pressures that molded the species you belong to. Why did

suppose a defendant says, “I did it. I knew there were other things I could do, but I intended to do it, planned it in advance. I not only knew that X could have been the outcome, I wanted that to happen.” Good luck convincing someone that the defendant lacked free will.

But the point of this chapter is that even if either or both of these are the case, I still think that free will doesn’t exist. To appreciate why, time for a Libet-style thought experiment.

THE DEATH OF FREE WILL IN THE SHADOW OF INTENT

You have a friend doing research for her doctorate in neurophilosophy, and she asks you to be a test subject. Sure. She’s upbeat because she’s figured out how to both get another data point for her study and simultaneously accomplish something else that she’s keen on—win-win. It involves ambulatory EEG, out of the lab, like in the bungee jumping study. You’re out there now, wired up with the leads, electromyography being done on your hand, a clock in view.

As with the classic Libet, the motoric action involved is to move your index finger. Hey, aren’t we decades past that sort of really artificial scenario? Fortunately, the study is more sophisticated than that, thanks to your friend’s careful experimental design—you’ll be making a simple movement, but with a nonsimple consequence. Don’t plan ahead to make this movement, you’re told, do it spontaneously, and note on the clock what time it is when you first consciously intend to. All set? Now, when you feel like it, pull a trigger and kill this person.

Maybe the person is an enemy of the Fatherland, a terrorist blowing up bridges in one of the gloriously occupied colonies. Maybe it’s the person behind the cash register in the liquor store you’re robbing. Maybe they’re a terminally ill loved one in unspeakable pain, begging you to do this. Maybe it’s someone who is about to harm a child; maybe it is the infant Hitler, cooing in his crib.

You are free to choose not to shoot. You’re disillusioned with the regime’s brutality and refuse; you think killing the clerk ups the ante too much if you’re caught; despite your loved one begging, you just can’t do it. Or maybe you’re Humphrey Bogart, your friend is Claude Rains, you’re confusing reality with story line and figure that if you let Major Strasser escape, the story doesn’t end and you’ll get to star in a sequel to Casablanca.[*]

But suppose you have to pull the trigger or else there’ll be no readiness potential to detect and your friend’s research will be slowed down. Nonetheless, you still have options. You can shoot the person. You can shoot but intentionally miss. You can shoot yourself rather than comply.[*] As a major plot twist, you can shoot your friend.

It makes intuitive sense that if you want to understand what you wind up doing with your index finger on that trigger, you should explore Libetian concerns, studying particular neurons and particular milliseconds in order to understand the instant you feel you have chosen to do something, the instant your brain has committed to that action, and whether those two things are the same. But here’s why these Libetian debates, as well as a criminal justice system that cares only about whether someone’s actions are intentional, are irrelevant to thinking about free will. As first aired at the beginning of this chapter, that is because neither asks a question central to every page of this book: Where did that intent come from in the first place?

If you don’t ask that question, you’ve restricted yourself to a domain of a few seconds. Which is fine by many people. Frankfurt writes, “The questions of how the actions and his identifications with their springs are caused are irrelevant to the questions of whether he performs the actions freely or is morally responsible for performing them.” Or in the words of Shadlen and Roskies, Libetian-ish neuroscience “can provide a basis for accountability and responsibility that focuses on the agent, rather than on prior causes” (my emphasis).

Where does intent come from? Yes, from biology interacting with environment one second before your SMA warmed up. But also from one minute before, one hour, one millennium—this book’s main song and dance. Debating free will can’t start and end with readiness potentials or with what someone was thinking when they committed a crime.[*] Why have I spent page after page going over the minutiae of the debates about what Libet means before blithely dismissing all of it with “And yet I think that is irrelevant”? Because Libet is viewed as the most important study ever done exploring the neurobiology of whether we have free will. Because virtually every scientific paper on free will trots out Libet early on. Because maybe you were born at the precise moment that Libet published his first study and now, all these years later, you’re old enough that your music is called “classic” rock and you have started to make little middle-aged grunting sounds when you get up from a chair . . . and they’re still debating Libet. And as noted before, this is like trying to understand a movie solely by watching its final three minutes.[33]

This charge of myopia is not meant to sound pejorative. Myopia is central to how we scientists go about finding out new things—by learning more and more about less and less. I once spent nine years on a single experiment; this can become the center of a very small universe. And I’m not accusing the criminal justice system of myopically focusing solely on whether there was intent—after all, where intent came from, someone’s history and potential mitigating factors, are considered when it comes to sentencing.

Where I am definitely trying to sound pejorative and worse is when this ahistorical view of judging people’s behavior is moralistic. Why would you ignore what came before the present in analyzing someone’s behavior? Because you don’t care why someone else turned out to be different from you.

As one of the few times in this book where I will knowingly be personal, this brings me to the thinking of Daniel Dennett of Tufts University. Dennett is one of the best-known and most influential philosophers out there, a leading compatibilist who has made his case both in technical work within his field and in witty, engaging popular books.

He implicitly takes this ahistorical stance and justifies it with a metaphor that comes up frequently in his writing and debates. For example, in Elbow Room: The Varieties of Free Will Worth Wanting, he asks us to imagine a footrace where one person starts off way behind the rest at the starting line. Would this be unfair? “Yes, if the race is a hundred-yard dash.” But it is fair if this is a marathon, because “in a marathon, such a relatively small initial advantage would count for nothing, since one can reliably expect other fortuitous breaks to have even greater effects.” As a succinct summary of this view, he writes, “After all, luck averages out in the long run.”[34]

No, it doesn’t.[*] Suppose you’re born a crack baby. In order to counterbalance this bad luck, does society rush in to ensure that you’ll be raised in relative affluence and with various therapies to overcome your neurodevelopmental problems? No, you are overwhelmingly likely to be born into poverty and stay there. Well then, says society, at least let’s make sure your mother is loving, is stable, has lots of free time to nurture you with books and museum visits. Yeah, right; as we know, your mother is likely to be drowning in the pathological consequences of her own miserable luck in life, with a good chance of leaving you neglected, abused, shuttled through foster homes. Well, does society at least mobilize then to counterbalance that additional bad luck, ensuring that you live in a safe neighborhood with excellent schools? Nope, your neighborhood is likely to be gang-riddled and your school underfunded.

You start out a marathon a few steps back from the rest of the pack in this world of ours. And counter to what Dennett says, a quarter mile in, because you’re still lagging conspicuously at the back of the pack, it’s your ankles that some rogue hyena nips. At the five-mile mark, the rehydration tent is almost out of water and you can get only a few sips of the dregs. By ten miles, you’ve got stomach cramps from the bad water. By twenty miles, your way is blocked by the people who assume the race is done and are sweeping the street. And all the while, you watch the receding backsides of the rest of the runners, each thinking that they’ve earned, they’re entitled to, a decent shot at winning. Luck does not average out over time and, in the words of Levy, “we cannot undo the effects of luck with more luck”; instead our world virtually guarantees that bad and good luck are each amplified further.

In the same paragraph, Dennett writes that “a good runner who starts at the back of the pack, if he is really good enough to DESERVE winning, will probably have plenty of opportunity to overcome the initial disadvantage” (my emphasis). This is one step above believing that God invented poverty to punish sinners.

Dennett has one more thing to say that summarizes this moral stance. Switching sports metaphors to baseball and the possibility that you think there’s something unfair about how home runs work, he writes, “If you don’t like the home run rule, don’t play baseball; play some other game.” Yeah, I want another game, says our now-adult crack baby from a few paragraphs ago. This time, I want to be born into a well-off, educated family of tech-sector overachievers in Silicon Valley who, once I decide that, say, ice-skating seems fun, will get me lessons and cheer me on from my first wobbly efforts on the ice. Fuck this life I got dumped into; I want to change games to that one.

Thinking that it is sufficient to merely know about intent in the present is far worse than just intellectual blindness, far worse than believing that it is the very first turtle on the way down that is floating in the air. In a world such as we have, it is deeply ethically flawed as well.

Time to see where intent comes from, and how the biology of luck doesn’t remotely average out in the long run.[35]

3

Where Does Intent Come From?

Because of our fondness for all things Libetian, we sit you in front of two buttons; you must push one of them. You’re given only hazy information about the consequences of pushing each button, beyond being told that if you pick the wrong button, thousands of people will die.
Now pick.

No free will skeptic insists that sometimes you form your intent, lean way over to push the appropriate button, and suddenly, the molecules comprising your body deterministically fling you the other way and make you push the other button.

Instead, the last chapter showed how the Libetian debate concerns when exactly you formed that intent, when you became conscious of having formed it, whether neurons commanding your muscles had already activated by then, when it was that you could still veto that intention. Plus, questions about your SMA, frontal cortex, amygdala, basal ganglia—what they knew and when they knew it. Meanwhile, in parallel in the courtroom next door, lawyers argue over the nature of your intent.

The last chapter concluded by claiming that all these minutiae of milliseconds are completely irrelevant to why there is no free will. Which is why we didn’t bother sticking electrodes into your brain just before seating you. They wouldn’t reveal anything useful.

This is because the Libetian Wars don’t ask the most fundamental question: Why did you form the intent that you did?

This chapter shows how you don’t ultimately control the intent you form. You wish to do something, intend to do it, and then successfully do so. But no matter how fervent, even desperate, you are, you can’t successfully wish to wish for a different intent. And you can’t meta your way out—you can’t successfully wish for the tools (say, more self-discipline) that will make you better at successfully wishing what you wish for. None of us can.

Which is why it would tell us nothing to stick electrodes in your head to monitor what neurons are doing in the milliseconds when you form your intent. To understand where your intent came from, all that needs to be known is what happened to you in the seconds to minutes before you formed the intention to push whichever button you choose. As well as what happened to you in the hours to days before. And years to decades before. And during your adolescence, childhood, and fetal life. And what happened when the sperm and egg destined to become you merged, forming your genome. And what happened to your ancestors centuries ago when they were forming the culture you were raised in, and to your species millions of years ago. Yeah, all that.

Understanding this turtleism shows how the intent you form, the person you are, is the result of all the interactions between biology and environment that came before. All things out of your control. Each prior influence flows without a break from the effects of the influences before. As such, there’s no point in the sequence where you can insert a freedom of will that will be in that biological world but not of it.

Thus, we’ll now see how who we are is the outcome of the prior seconds, minutes, decades, geological periods before, over which we had no control. And how bad and good luck sure as hell don’t balance out in the end.

SECONDS TO MINUTES BEFORE

We ask our first version of the question of where that intent came from: What sensory information flowing into your brain (including some you’re not even conscious of) in the preceding seconds to minutes helped form that intent?[*] This can be obvious—“I formed the intent to push that button because I heard the harsh demand that I do so, and saw the gun pointed in my face.”

But things can be subtler. You view a picture of someone holding an object, for a fraction of a second; you must decide whether it was a cell phone or a handgun. And your decision in that second can be influenced by the pictured person’s gender, race, age, and facial expression. We all know real-life versions of this experiment resulting in police mistakenly shooting an unarmed person, and about the implicit bias that contributed to that mistake.[1]

Some examples of intent being influenced by seemingly irrelevant stimuli have been particularly well studied.[*] One domain concerns how sensory disgust shapes behavior and attitudes. In one highly cited study, subjects rated their opinions about various sociopolitical topics (e.g., “On a scale of 1 to 10, how much do you agree with this statement?”). And if subjects were sitting in a room with a disgusting smell (versus a neutral one), the average level of warmth both conservatives and liberals reported for gay men decreased. Sure, you think—you’d feel less warmth for anyone if you’re gagging. However, the effect was specific to gay men, with no change in warmth toward lesbians, the elderly, or African Americans. Another study showed that disgusting smells make subjects less accepting of gay marriage (as well as about other politicized aspects of sexual behavior). Moreover, just thinking about something disgusting (eating maggots) makes conservatives less willing to come into contact with gay men.[2]

Then there’s a fun study where subjects were either made uncomfortable (by placing their hand in ice water) or disgusted (by placing their thinly gloved hand in imitation vomit).[*] Subjects then recommended punishment for norm violations that were purity related (e.g., “John rubbed someone’s toothbrush on the floor of a public restroom” or the supremely distinctive “John pushed someone into a dumpster which was swarming with cockroaches”) or violations unrelated to purity (e.g., “John scratched someone’s car with a key”). Being disgusted by fake puke, but not being icily uncomfortable, made subjects more selectively punitive about purity violations.[3]

How can a disgusting smell or tactile sensation change unrelated moral assessments? The phenomenon involves a brain region called the insula (aka the insular cortex). In mammals, it is activated by the smell or taste of rancid food, automatically triggering spitting out the food and the species’s version of barfing. Thus, the insula mediates olfactory and gustatory disgust and protects from food poisoning, an evolutionarily useful thing.

But the versatile human insula also responds to stimuli we deem morally disgusting. The insula’s “this food’s gone bad” function in mammals is probably a hundred million years old. Then, a few tens of thousands of years ago, humans invented constructs like morality and disgust at moral norm violations. That’s way too little time to have evolved a new brain region to “do” moral disgust. Instead, moral disgust was added to the insula’s portfolio; as it’s said, rather than inventing, evolution tinkers, improvising (elegantly or otherwise) with what’s on hand. Our insula neurons don’t distinguish between disgusting smells and disgusting behaviors, explaining metaphors about moral disgust leaving a bad taste in your mouth, making you queasy, making you want to puke. You sense something disgusting, yech . . . and unconsciously, it occurs to you that it’s disgusting and wrong when those people do X. And once activated this way, the insula then activates the amygdala, a brain region central to fear and aggression.[4]

Naturally, there is the flip side to the sensory disgust phenomenon—sugary (versus salty) snacks make subjects rate themselves as more agreeable and helpful individuals and rate faces and artwork as more attractive.[5]

Ask a subject, Hey, in last week’s questionnaire you were fine with behavior A, but now (in this smelly room) you’re not. Why? They won’t explain how a smell confused their insula and made them less of a moral relativist. They’ll claim some recent insight caused them, bogus free will and conscious intent ablaze, to decide that behavior A isn’t okay after all.

It’s not just sensory disgust that can shape intent in seconds to minutes; beauty can as well. For millennia, sages have proclaimed how outer beauty reflects inner goodness. While we may no longer openly claim that, beauty-is-good still holds sway unconsciously; attractive people are judged to be more honest, intelligent, and competent; are more likely to be elected or hired, and with higher salaries; are less likely to be convicted of crimes, then getting shorter sentences. Jeez, can’t the brain distinguish beauty from goodness? Not especially. In three different studies, subjects in brain scanners alternated between rating the beauty of something (e.g., faces) or the goodness of some behavior. Both types of assessments activated the same region (the orbitofrontal cortex, or OFC); the more beautiful or good, the more OFC activation (and the less insula activation). It’s as if irrelevant emotions about beauty gum up cerebral contemplation of the scales of justice. Which was shown in another study—moral judgments were no longer colored by aesthetics after temporary inhibition of a part of the PFC that funnels information about emotions into the frontal cortex.[*] “Interesting,” the subject is told. “Last week, you sent that other person to prison for life. But just now, when looking at this other person who had done the same thing, you voted for them for Congress—how come?” And the answer isn’t “Murder is definitely bad, but OMG, those eyes are like deep, limpid pools.” Where did the intent behind the decision come from? The fact that the brain hasn’t had enough time yet to evolve separate circuits for evaluating morality and aesthetics.[6]

Next, want to make someone more likely to choose to clean their hands? Have them describe something crummy and unethical they’ve done. Afterward, they’re more likely to wash their hands or reach for hand sanitizer than if they’d been recounting something ethically neutral they’d done. Subjects instructed to lie about something rate cleansing (but not noncleansing) products as more desirable than do those instructed to be honest. Another study showed remarkable somatic specificity, where lying orally (via voice mail) increased the desire for mouthwash, while lying by hand (via email) made hand sanitizers more desirable. One neuroimaging study showed that when lying by voice mail boosts preference for mouthwash, a different part of the sensory cortex activates than when lying by email boosts the appeal of hand sanitizers. Neurons believing, literally, that your mouth or hand, respectively, is dirty.

Thus, feeling morally soiled makes us want to cleanse. I don’t believe there’s a soul for such moral taint to weigh on, but it sure weighs on your frontal cortex; after disclosing an unethical act, subjects are less effective at cognitive tasks that tap into frontal function . . . unless they got to wash their hands in between. The scientists who first reported this general phenomenon poetically named it the “Macbeth effect,” after Lady Macbeth, washing her hands of that imaginary damned spot caused by her murderousness.[*] Reflecting that, induce disgust in subjects, and if they can then wash their hands, they judge purity-related norm violations less harshly.[7]

Our judgments, decisions, and intentions are also shaped by sensory information coming from our bodies (i.e., interoceptive sensation). Consider one study concerning the insula confusing moral and visceral disgust. If you’re ever on a ship in rough waters and are heaving over the rail, it’s guaranteed that someone will sidle over and smugly tell you that they’re feeling great because they ate some ginger, which settles the stomach. In the study, subjects judged the wrongness of norm violations (e.g., a morgue worker touching the eye of a corpse when no one is looking; drinking out of a new toilet); consuming ginger beforehand lessened disapproval. Interpretation? First, hearing about that illicit eyeball touching pushes your stomach toward lurching, thanks to your weird human insula. Your brain then decides your feelings about that behavior based in part on lurching severity—less lurching, thanks to ginger, and funeral home shenanigans don’t seem as bad.[*],[8]

Particularly interesting findings regarding interoception concern hunger. One much-noted study suggested that hunger makes us less forgiving. Specifically, across more than a thousand judicial decisions, the longer it had been since judges had eaten, the less likely they were to grant a prisoner parole. Other studies also show that hunger changes prosocial behavior. “Changes”—decreasing prosociality, as with the judges, or increasing it? It depends. Hunger seems to have different effects on how charitable subjects say they are going to be, versus how charitable they actually are,[*] or where subjects have either only one or multiple chances to be naughty or nice in an economic game. But as the key point, people don’t cite blood glucose levels when explaining why, say, they were nice just now and not earlier.[9]

In other words, as we sit there, deciding which button to push with supposed freely chosen intent, we are being influenced by our sensory environment—a foul smell, a beautiful face, the feel of vomit goulash, a gurgling stomach, a racing heart. Does this disprove free will? Nah—the effects are typically mild and only occur in the average subject, with plenty of individuals who are exceptions. This is just the first step in understanding where intentions come from.[10]

MINUTES TO DAYS BEFORE

The choice you’d seemingly freely make about the life-or-death button-pressing task can also be powerfully influenced by events in the preceding minutes to days. As one of the most important routes, consider the scads of different types of hormones in our circulation—each secreted at a different rate and affecting the brain in varied ways from one individual to the next, all without our control or awareness. Let’s start with one of the usual suspects when it comes to hormones altering behavior, namely testosterone.

How does testosterone (T) in the preceding minutes to days play a role in determining whether you kill that person? Well, testosterone causes aggression, so the higher the T level, the more likely you’ll be to make the more aggressive decision.[*] Simple. But as a first complication, T doesn’t actually cause aggression.

For starters, T rarely generates new patterns of aggression; instead, it makes preexisting patterns more likely to happen. Boost a monkey’s T levels, and he becomes more aggressive to monkeys already lower-ranking than him in the dominance hierarchy, while brown-nosing his social betters as per usual. Testosterone makes the amygdala more reactive, but only if neurons there are already being stimulated by looking at, say, the face of a stranger. Moreover, T lowers the threshold for aggression most dramatically in individuals already prone toward aggression.[11]

The hormone also distorts judgment, making you more likely to interpret a neutral facial expression as threatening. Boosting your T levels makes you more likely to be overly confident in an economic game, resulting in being less cooperative—who needs anyone else when you’re convinced you’re fine on your own?[*] Moreover, T tilts you toward more risk-taking and impulsivity by strengthening the ability of the amygdala to directly activate behavior (and weakening the ability of the frontal cortex to rein it in—stay tuned for the next chapter).[*] Finally, T makes you less generous and more self-centered in, for example, economic games, as well as less empathic toward and trusting of strangers.[12]

A pretty crummy picture. Back to your deciding which button to press. If T is having particularly strong effects in your brain at the time, you become more likely to perceive threat, real or otherwise, less caring about others’ pain, and more likely to fall into aggressive tendencies that you already have.

What factors determine whether T has strong effects in your brain? Time of day matters, as T levels are nearly twice as high during the daily circadian peak as during the trough. Whether you’re sick, are injured, just had a fight, or just had sex all influence T secretion. It also depends on how high your average T levels are; they can vary fivefold among healthy individuals of the same sex, even more so in adolescents. Moreover, the brain’s sensitivity to T also varies, with T receptor numbers in some brain regions varying up to tenfold among individuals. And why do individuals differ in how much T their gonads make or how many receptors there are in particular brain regions? Genes and fetal and postnatal environment matter. And why do individuals differ in the extent of their preexisting tendencies toward aggression (i.e., how the amygdala, frontal cortex, and so on differ)? Above all, because of how much life has taught them at a young age that the world is a menacing place.[*],[13]

Testosterone is not the only hormone that can influence your button-pressing intentions. There’s oxytocin, acclaimed for having prosocial effects among mammals. Oxytocin enhances mother-infant bonding in mammals (and enhances human-dog bonding). The related hormone vasopressin makes males more paternal in the rare species where males help parent. These species also tend to form monogamous pair bonds; oxytocin and vasopressin strengthen the bond in females and males, respectively. What’s the nuts-and-bolts biology of why males in some rodent species are monogamous and others not? Monogamous species are genetically prone toward higher concentrations of vasopressin receptors in the dopaminergic “reward” part of the brain (the nucleus accumbens). The hormone is released during sex, the experience with that female feels really really pleasurable because of the higher receptor number, and the male sticks around. Amazingly, boost vasopressin receptor levels in that part of the brain in males from polygamous rodent species, and they become monogamous (wham, bam, thank . . . weird, I don’t know what just came over me, but I’m going to spend the rest of my life helping this female raise our kids).[14]

Oxytocin and vasopressin have effects that are the polar opposite of T’s. They decrease excitability in the amygdala, making rodents less aggressive and people calmer. Boost your oxytocin levels experimentally, and you’re more likely to be charitable and trusting in a competitive game. And showing how this is the endocrinology of sociality, you wouldn’t have the response to oxytocin if you thought you were playing against a computer.[15]

As an immensely cool wrinkle, oxytocin doesn’t make us warm and fuzzy and prosocial to everyone. Only to in-group members, people who count as an Us.
In one study in the Netherlands, subjects had to decide if it was okay to kill one person to save five; oxytocin had no effects when the potential victim had a Dutch name but made subjects more likely to sacrifice someone with a German or Middle Eastern name (two groups that evoke negative connotations among the Dutch) and increased implicit bias against those two groups. In another study, while oxytocin made team members more cooperative in a competitive game, as expected, it made them more preemptively aggressive to opponents. The hormone even enhances gloating over strangers’ bad luck.[16]

Thus, the hormone makes us nicer, more generous, empathic, trusting, loving . . . to people who count as an Us. But if it is a Them, who looks, speaks, eats, prays, loves differently than we do, forget singing “Kumbaya.”[*]

On to individual differences related to oxytocin. The hormone’s levels vary manyfold among different individuals, as do levels of receptors for oxytocin in the brain. Those differences arise from the effects of everything from genes and fetal environment to whether you woke up this morning next to someone who makes you feel safe and loved. Moreover, oxytocin receptors and vasopressin receptors each come in different versions in different people. Which flavor you were handed at conception influences parenting style, stability of romantic relationships, aggressiveness, sensitivity to threat, and charitableness.[17]

Thus, the decisions you supposedly make freely in moments that test your character—generosity, empathy, honesty—are influenced by the levels of these hormones in your bloodstream and the levels and variants of their receptors in your brain.

One last class of hormones. When an organism is stressed, whether mammal, fish, bird, reptile, or amphibian, it secretes from the adrenal gland hormones called glucocorticoids, which do roughly the same things to the body in all these cases.[*] They mobilize energy from storage sites in the body, like the liver or fat cells, to fuel exercising muscle—very helpful if you are stressed because, say, a lion is trying to eat you, or if you’re that lion and will starve unless you predate something. Following the same logic, glucocorticoids increase blood pressure and heart rate, delivering oxygen and energy to those life-saving muscles that much faster. They suppress reproductive physiology—don’t waste energy, say, ovulating, if you’re running for your life.[18]

As might be expected, during stress, glucocorticoids alter the brain. Amygdala neurons become more excitable, more potently activating the basal ganglia and disrupting the frontal cortex—all making for fast, habitual responses with low accuracy in assessing what’s happening. Meanwhile, as we’ll see in the next chapter, frontal cortical neurons become less excitable, limiting their ability to make the amygdala act sensibly.[19]

Based on these particular effects in the brain, glucocorticoids have predictable effects on behavior during stress. Your judgments become more impulsive. If you’re reactively aggressive, you become more so, if anxious, more so, if depressive, ditto. You become less empathic, more egoistic, more selfish in moral decision-making.[20]

The workings of every bit of this endocrine system will reflect whether you’ve been stressed recently by, say, a mean boss, a miserable morning’s commute, or surviving your village being pillaged. Your gene variants will influence the production and degradation of glucocorticoids, as well as the number and function of glucocorticoid receptors in different parts of your brain. And the system would have developed differently in you depending on things like the amount of inflammation you experienced as a fetus, your parents’ socioeconomic status, and your mother’s parenting style.[*]

Thus, three different classes of hormones work over the course of minutes to hours to alter the decision you make. This just scratches the surface; Google “list of human hormones,” and you’ll find more than seventy-five, most affecting behavior. All rumbling below the surface, influencing your brain without your awareness. Do these endocrine effects over the course of minutes to hours disprove free will? Certainly not on their own, because they typically alter the likelihood of certain behaviors, rather than cause them. On to our next turtle heading all the way down.[21]

WEEKS TO YEARS BEFORE

So hormones can change the brain over the course of minutes to hours. In those cases, “change the brain” isn’t some abstraction. As a result of a hormone’s actions, neurons might release packets of neurotransmitter when they otherwise wouldn’t; particular ion channels might open or close; the number of receptors for some messenger might change in a specific brain region. The brain is structurally and functionally malleable, and your pattern of hormone exposure this morning will have altered your brain now, as you contemplate the two buttons.

The point of this section is that such “neuroplasticity” is small potatoes compared with how the brain can change in response to experience over longer periods. Synapses might permanently become more excitable, more likely to send a message from one neuron to the next. Pairs of neurons can form entirely new synapses, or disconnect existing ones. Branchings of dendrites and axons might expand or contract. Neurons can die; others are born.[*] Particular brain regions might expand or atrophy so dramatically that you can see the changes on a brain scan.[22]

Some of this neuroplasticity is immensely cool but tangential to free-will squabbles. If someone goes blind and learns to read braille, her brain remaps—i.e., the distribution and excitability of synapses to particular brain regions change. Result? Reading braille with her fingertips, a tactile experience, stimulates neurons in the visual cortex, as if she were reading printed text. Blindfold a volunteer for a week and his auditory projections start colonizing the snoozing visual cortex, enhancing his hearing. Learn a musical instrument and the auditory cortex remaps to devote more space to the instrument’s sound. Persuade some wildly invested volunteers to practice a five-finger exercise on the piano two hours a day for weeks, and their motor cortex remaps to devote more space to controlling finger movements in that hand; get this—the same thing happens if the volunteer spends that time imagining the finger exercise.[23]

But then there’s neuroplasticity relevant to free will–lessness. Developing post-traumatic stress disorder after trauma transforms the amygdala. Synapse number increases along with the extent of the circuitry by which the amygdala influences the rest of the brain. The overall size of the amygdala increases, and it becomes more excitable, with a lower threshold for triggering fear, anxiety, and aggression.[24]

Then there’s the hippocampus, a brain region central to learning and memory. Suffer from major depression for decades and the hippocampus shrinks, disrupting learning and memory. In contrast, experience two weeks of rising estrogen levels (i.e., be in the follicular stage of your ovulatory cycle), and the hippocampus beefs up. Likewise, if you enjoy exercising regularly or are stimulated by an enriching environment.[25]

Moreover, experience-induced changes aren’t limited to the brain. Chronic stress expands the adrenal glands, which then pump out more glucocorticoids, even when you’re not stressed. Becoming a father reduces testosterone levels; the more nurturing you are, the bigger the drop.[26]

How’s this for how unlikely the subterranean biological forces on your behavior can be over weeks to months—your gut is filled with bacteria, most of which help you digest your food. “Filled with” is an understatement—there are more bacteria in your gut than cells in your own body,[*] of hundreds of different types, collectively weighing more than your brain. As a burgeoning new field, the makeup of the different species of bacteria in your gut over the previous weeks will influence things like appetite and food cravings . . . and gene expression patterns in your neurons . . . and proclivity toward anxiety and the ferocity with which some neurological diseases spread through your brain. Clear out all of a mammal’s gut bacteria (with antibiotics) and transfer in the bacteria from another individual, and you’ll have transferred those behavioral effects. These are mostly subtle effects, but who would have thought that bacteria in your gut were influencing what you mistake for free agency?

The implications of all these findings are obvious. How will your brain function as you contemplate the two buttons? It depends in part on events during previous weeks to years. Have you been barely managing to pay the rent each month? Experiencing the emotional swell of finding love or of parenting? Suffering from deadening depression? Working successfully at a stimulating job? Rebuilding yourself after combat trauma or sexual assault? Having had a dramatic change in diet? All will change your brain and behavior, beyond your control, often beyond your awareness. Moreover, there will be a metalevel of differences outside your control, in that your genes and childhood will have regulated how easily your brain changes in response to particular adult experiences—there is plasticity as to how much and what kind of neuroplasticity each person’s brain can manage.[27]

Does neuroplasticity show that free will is a myth? Not by itself. Next turtle.[28]

BACK TO ADOLESCENCE

As will be familiar to any reader who is, was, or will be an adolescent, this is one complex time of life. Emotional gyrations, impulsive risk-taking and sensation seeking, the peak time of life for extremes of both pro- and antisocial behavior, for individuated creativity and for peer-driven conformity; behaviorally, it is a beast unto itself.

Neurobiologically as well. Most research examines why adolescents behave in adolescent ways; in contrast, our purpose is to understand how features of the adolescent brain help explain button-pushing intentions in adulthood. Conveniently, the same hugely interesting bit of neurobiology is relevant to both. By early adolescence, the brain is a fairly close approximation of the adult version, with adult densities of neurons and synapses, and the process of myelinating the brain already achieved. Except for one brain region which, amazingly, won’t fully mature for another decade. The region? The frontal cortex, of course. Maturation of this region lags way behind the rest of the cortex—to some degree in all mammals, and dramatically so in primates.[29]

Some of that delayed maturation is straightforward. Starting with fetal brain building, there’s a steady increase in myelination up to adult levels, including in the frontal cortex, just with a huge delay. But the picture is majorly different when it comes to neurons and synapses. At the start of adolescence, the frontal cortex has more synapses than in the adult. Adolescence and early adulthood consist of the frontal cortex pruning synapses that turn out to be superfluous, poky, or plain wrong, as the region gets progressively leaner and meaner. As a great demonstration of this, while a thirteen-year-old and a twenty-year-old may perform equally on some test of frontal function, the former needs to mobilize more of the region to accomplish this.

So the frontal cortex—with its roles in executive function, long-term planning, gratification postponement, impulse control, and emotion regulation—isn’t fully functional in adolescents. Hmm, what do you suppose that explains? Just about everything in adolescence, especially when adding the tsunamis of estrogen, progesterone, and testosterone flooding the brain then. A juggernaut of appetites and activation, constrained by the flimsiest of frontal cortical brakes.[30]

For our purposes, the main point about delayed frontal maturation isn’t that it produces kids who got really bad tattoos but the fact that adolescence and early adulthood involve a massive construction project in the brain’s most interesting part. The implications are obvious. If you’re an adult, your adolescent experiences of trauma, stimulation, love, failure, rejection, happiness, despair, acne—the whole shebang—will have played an outsize role in constructing the frontal cortex you’re working with as you contemplate those buttons. Of course, the enormous varieties of adolescent experiences will help produce enormously varied frontal cortexes in adulthood.

A fascinating implication of the delayed maturation is important to remember when we get to the section on genes. By definition, if the frontal cortex is the last part of the brain to develop, it is the brain region least shaped by genes and most shaped by environment. This raises the question of why the frontal cortex matures so slowly. Is it intrinsically a tougher building project than the rest of the cortex? Are there specialized neurons, neurotransmitters unique to the region that are tough to synthesize, distinctive synapses that are so fancy that they require thick construction manuals? No, virtually nothing unique like that.[*],[31]

Thus, delayed maturation isn’t made inevitable by the complexity of frontal construction, as if the frontal cortex would develop faster if only it could. Instead, the delay actively evolved, was selected for. If this is the brain region central to doing the right thing when it’s the harder thing to do, no genes can specify what counts as the right thing. It has to be learned the long, hard way, by experience. This is true for any primate, navigating social complexities as to whether you hassle or kowtow to someone, align with them or stab them in the back.

If that’s the case for some baboon, just imagine humans. We have to learn our culture’s rationalizations and hypocrisies—thou shalt not kill, unless it’s one of them, in which case here’s a medal. Don’t lie, except if there’s a huge payoff, or it’s a profoundly good act (“Nope, no refugees hiding in my attic, no siree”). Laws to be followed strictly, laws to be ignored, laws to be resisted. Reconciling acting as if each day is your last with today being the first day of the rest of your life. On and on. Reflecting that, while frontocortical maturation finally tops out around puberty in other primates, we need another dozen years. This suggests something remarkable—the genetic program of the human brain evolved to free the frontal cortex from genes as much as possible. Much more to come about the frontal cortex in the next chapter.

Next turtle.[32]

AND CHILDHOOD

So adolescence is the final phase of frontal cortical construction, with the process heavily shaped by environment and experience. Moving further back into childhood, there are massive amounts of construction of everything in the brain,[*] a process of a smooth increase in the complexity of neuronal circuitry and of myelination. Naturally, this is paralleled by growing behavioral complexity. There’s maturation of reasoning skills and of cognition and affect relevant to moral decision-making (e.g., transitioning from obeying laws to avoid punishment to obeying because where would society be without people obeying them?). There’s maturation of empathy (with growing capacities to empathize with someone’s emotional rather than physical state, about abstract pain, about pains you’ve never experienced, about pain for people totally different from you). Impulse control is also maturing (from successfully restraining yourself for a few minutes from eating a marshmallow in order to then be rewarded with two marshmallows, to staying focused on your eighty-year project to get into the nursing home of your choice).

In other words, simpler things precede more complicated things. Child-development researchers have typically framed these trajectories of maturation as coming in “stages” (for example, Harvard psychologist Lawrence Kohlberg’s canonical stages of moral development). Predictably, there are huge differences as to what particular maturational stage different kids are at, the speed of stage transitions, and the stage carried stably into adulthood.[*],[33]

Speaking to our interests, you have to ask where individual differences in maturation come from, how much control we have over that process, and how it helps generate the you that is you, contemplating the buttons. What sorts of influences affect maturation? An overlapping list of the most usual suspects, with incredibly brief summaries:

1. Parenting, of course. Differences in parenting styles were the focus of highly influential work originating with Berkeley psychologist Diana Baumrind. There’s authoritative parenting, where high levels of demands and expectation are placed on the child, coupled with lots of flexibility in responding to the child’s needs; this is usually the style aspired to by neurotic middle-class parents. Then there’s authoritarian parenting (high demand, low responsiveness—“Do this because I said so”), permissive parenting (low demand, high responsiveness), and negligent parenting (low demand, low responsiveness). And each tends to produce a different sort of adult. As we’ll see in the next chapter, parental socioeconomic status (SES) is also enormously important; for example, low familial SES predicts stunted maturation of the frontal cortex in kindergarteners.[34]

2. Peer socialization, with different peers modeling different behaviors with varying allure. The importance of peers has often been underappreciated by developmental psychologists but is no surprise to any primatologist. Humans invented a novel way to transmit information across generations, where an adult expert intentionally directs information at young’uns—i.e., a teacher. In contrast, the usual among primates is kids learning by watching their somewhat older peers.[35]

3. Environmental influences. Is the neighborhood park safe? Are there more bookstores or liquor stores? Is it easy to buy healthy food? What’s the crime rate? All the usual.

4. Cultural beliefs and values, which influence these other categories. As we’ll see, culture dramatically influences parenting style, the behaviors modeled by peers, the sorts of physical and social communities that are constructed. Cultural variability in overt and covert rites of passage, the brands of places of worship, whether kids aspire to earn lots of merit badges versus getting skilled at harassing out-group members.

A pretty straightforward list. And, of course, there are loads of individual differences in childhood patterns of hormone exposure, nutrition, pathogen load, and so on. All converging to produce a brain that, as we’ll see in chapter 5, has to be unique.

The huge question then becomes, How do different childhoods produce different adults? Sometimes, the most likely pathway seems pretty clear without having to get all neurosciencey. For example, a study examining more than a million people across China and the U.S. showed the effects of growing up in clement weather (i.e., mild fluctuations around an average of seventy degrees). Such individuals are, on the average, more individualistic, extroverted, and open to novel experience. Likely explanation: the world is a safer, easier place to explore growing up when you don’t have to spend significant chunks of each year worrying about dying of hypothermia and/or heatstroke when you go outside, where average income is higher and food stability greater. And the magnitude of the effect isn’t trivial, being equal to or greater than that of age, gender, the country’s GDP, population density, and means of production.[36]

The link between weather clemency in childhood and adult personality can be framed biologically in the most informative way—the former influences the type of brain you’re constructing that you will carry into adulthood. As is almost always the case. For example, lots of childhood stress, by way of glucocorticoids, impairs construction of the frontal cortex, producing an adult less adept at helpful things like impulse control. Lots of exposure to testosterone early in life makes for the construction of a highly reactive amygdala, producing an adult more likely to respond aggressively to provocation.

The nuts and bolts of how this happens revolves around the massively trendy field of “epigenetics,” revealing how early life experience causes long-lasting changes in gene expression in particular brain regions. Now, this is not experience changing genes themselves (i.e., changing DNA sequences), but instead changing their regulation—whether some gene is always active, never active, or active in one context but not another; a lot is known by now about how this works. As one celebrated example, if you’re a baby rat growing up with an atypically inattentive mother,[*] epigenetic changes in the regulation of one gene in your hippocampus will make it harder for you to recover from stress as an adult.[37]

Where do differences in rodential mothering style come from? Obviously, from one second, one minute, one hour before in that rat mom’s biological history. Knowledge about epigenetic bases of this has grown at breakneck speed, showing, for example, how some epigenetic changes in the brain can have multigenerational consequences (e.g., helping to explain why being a rat, monkey, or human abused in childhood increases the odds of being an abusive parent).
Just to show the scale of epigenetic complexity, differences in mothering styles in monkeys cause epigenetic changes in more than a thousand genes expressed in the offspring’s frontal cortex.[38]

If you had to compress the variability in all those facets of childhood influences into a single axis, it would be easy—how lucky was the childhood you were handed? This massively important fact has been formalized into an Adverse Childhood Experience (ACE) score. What count as adverse experiences in this measure? A logical checklist: physical, emotional, or sexual abuse; physical or emotional neglect; and household dysfunction (domestic violence, substance abuse or mental illness in the home, an incarcerated household member, losing a parent to separation or abandonment). For each of these you experienced, you get a point on the checklist, where the unluckiest have scores approaching an unimaginable ten and the luckiest luxuriate around zero.

This field has produced a finding that should floor anyone holding out for free will. For every step higher in one’s ACE score, there is roughly a 35 percent increase in the likelihood of adult antisocial behavior, including violence; poor frontocortical-dependent cognition; problems with impulse control; substance abuse; teen pregnancy and unsafe sex and other risky behaviors; and increased vulnerability to depression and anxiety disorders. Oh, and also poorer health and earlier death.[39]

You’d get the same story if you flipped the approach 180 degrees. As a child, did you feel loved and safe in your family? Was there good modeling about sexuality? Was your neighborhood crime-free, your family mentally healthy, your socioeconomic status reliable and good? Well then, you’d be heading toward a high RLCE score (Ridiculously Lucky Childhood Experiences), predictive of all sorts of important good outcomes.

Thus, essentially every aspect of your childhood—good, bad, or in between, factors over which you had no control—sculpted the adult brain you have while contemplating those buttons. How’s this for an example outside of someone’s control—because of the randomness of month of birth, some kids can be as much as six months older or younger than the average of their peer group. Older kindergarteners, for example, are typically more cognitively advanced. Result—they get more one-on-one attention and praise from teachers, so that by first grade their advantage is even greater, so that by second grade . . . And in the UK, which has an August 31 cutoff for kindergarten, this “relative age effect” produces a major skew in educational attainment. Luck evens out over time, my ass.[*],[40]

Does the role of childhood invalidate free will? Nope—the likes of ACE scores are about adult potential and vulnerability, not inevitable destiny, and there are plenty of people whose adulthoods are radically different from what you’d expect, given their childhoods. This is just another piece of the sequence of influences.[41]

BACK TO THE WOMB

If you couldn’t control what family you landed in at birth, you sure had no control over which womb you hung out in for nine influential months. Environmental influences begin long before birth.
The biggest source of these influences is what’s in the maternal circulation, which will help determine what’s in the fetus—levels of a huge array of different hormones, immune factors, inflammatory molecules, pathogens, nutrients, environmental toxins, illicit substances, all of which regulate brain function in adulthood. Not surprisingly, the general themes echo those of childhood. Lots of glucocorticoids from Mom marinating your fetal brain, thanks to maternal stress, and there’s increased vulnerability to depression and anxiety in your adulthood. Lots of androgens in your fetal circulation (coming from Mom; females secrete androgens, though to a lesser extent than do males) make you more likely as an adult of either sex to show spontaneous and reactive aggression, poor emotion regulation, low empathy, alcoholism, criminality, even lousy handwriting. A shortage of nutrients for the fetus, caused by maternal starvation, and there’s increased risk of schizophrenia in adulthood, along with a variety of metabolic and cardiovascular diseases.[*],[42]

The implications of fetal environmental effects? Another route toward how lucky or unlucky you’re likely to be in the world that awaits you.[43]

BACK TO YOUR VERY BEGINNING: GENES

Down to the next turtle. If you didn’t choose the womb you grew in, you certainly didn’t choose the unique mixture of genes you inherited from your parents. Genes have plenty to do with decision-making crossroads, and in more interesting ways than commonly believed.

We start with an unbelievably superficial primer on genes, to position us to appreciate things when we get to genes and free will.

First, what are genes, and what do they do? Our bodies are filled with thousands of different types of proteins doing dizzyingly varied jobs. Some are “cytoskeletal” proteins that give different cell types their distinctive shapes. Some are messengers—many neurotransmitters, hormones, and immune messengers are proteins. It’s proteins that make up enzymes that construct those messengers and that tear them apart when they’re obsolete; virtually all receptors for messengers throughout the body are made of protein.

Where does all this proteinaceous versatility come from? Each type of protein is constructed from a distinctive sequence of different types of amino acid building blocks; the sequence determines the shape of the protein; the shape determines function. A “gene” is the stretch of DNA that specifies the sequence/shape/function of a particular protein. Each of our approximately twenty thousand genes codes for the production of a unique protein.[*]

How does a gene “decide” when to initiate the construction of the protein it codes for, and whether there will be one or ten thousand copies made? Implicit in this question is the popular view of genes as the be-all and end-all, the code of codes in regulating what goes on in your body. As it turns out, genes decide nothing; they’re out at sea. Saying that a gene decides when to generate its associated protein is like saying that the recipe decides when to bake the cake that it codes for.

Instead, genes are turned on and off by environment. What is meant here by environment?
It can be the environment within a single cell—a cell is</p><p>running low on energy, which generates a messenger molecule that</p><p>activates the genes that code for proteins that boost energy production.</p><p>Environment can encompass the entire body—a hormone is secreted and is</p><p>carried in the circulation to target cells at the other end of the body, where it</p><p>binds to its distinctive receptors; as a result, particular genes are turned on</p><p>or off. Or environment can take the form of our everyday usage, namely</p><p>events happening in the world around us. These different versions of</p><p>environment are linked. For example, living in a stressful, dangerous city</p><p>will produce chronically elevated levels of glucocorticoids secreted by your</p><p>adrenal glands, which will activate particular genes in neurons in the</p><p>amygdala, making those cells more excitable.[*]</p><p>How do different environmentally activated messengers turn on different</p><p>genes? Not every stretch of DNA contributes to the code in a gene; instead,</p><p>long stretches don’t code for anything. Instead, they are the on/off switches</p><p>for activating nearby genes. Now for a wild fact—only about 5 percent of</p><p>DNA constitutes genes. The remaining 95 percent? The dizzyingly complex</p><p>on/off switches, the means by which various environmental influences</p><p>regulate unique networks of genes, with multiple types of switches on a</p><p>single gene and multiple genes being regulated by the same type of switch.</p><p>In other words, most DNA is devoted to gene regulation rather than to</p><p>genes themselves. Moreover, evolutionary changes in DNA are usually</p><p>more consequential when they alter on/off switches rather than the gene. As</p><p>another measure of the importance of the regulation, the more complex the</p><p>organism, the greater the percentage of its DNA is devoted to gene</p><p>regulation.[*]</p><p>Where have we gotten in this primer? Genes code for workhorse</p><p>proteins; genes don’t decide when they are active but are, instead, regulated</p><p>by environmental signals; the evolution of DNA is disproportionately about</p><p>gene regulation rather than about genes.</p><p>So environmental signals have activated some gene, leading to the</p><p>production of its protein; the newly made proteins then do their usual thing.</p><p>As a next key point, the same protein can work differently in different</p><p>environments. Such “gene/environment interactions” are less important in</p><p>species that inhabit only one type of environment. But they’re plenty</p><p>relevant in species that inhabit multiple types of environments—species</p><p>like, say, us. We can live in tundra, desert, or rain forest; in an urban</p><p>megalopolis of millions or in small hunter-gatherer bands; in capitalist or</p><p>socialist societies, polygamous or monogamous cultures. When it comes to</p><p>humans, it can be silly to ask what a particular gene does—only what it</p><p>does in a particular environment.</p><p>What might gene/environment interactions look like? Suppose someone</p><p>has a gene variant related to aggression; depending on the environment, that</p><p>can result in an increased likelihood of street brawling or of playing chess</p><p>really aggressively. Or a gene related to risk-taking that, depending on</p><p>environment, will influence whether you rob a store or gamble on founding</p><p>a start-up. 
Or a gene related to addiction that, depending on environment,</p><p>produces a Brahmin drinking too much Scotch in his club or someone</p><p>desperately stealing to get money for heroin.[*]</p><p>Final bit of the primer. Most genes come in more than one flavor, with</p><p>people inheriting their particular variants from their parents. Such gene</p><p>variants code for slightly different versions of their protein, with some</p><p>being better at their job than others.[*]</p><p>Where have we gotten? People differing in the flavors of genes they</p><p>possess, those genes being regulated differently in different environments,</p><p>producing proteins whose effects vary in different environments. We now</p><p>consider how genes relate to this free-will obsession of ours.</p><p>It’s button time; how will your brain be influenced in that moment by the</p><p>flavors of particular genes you inherited? Consider the neurotransmitter</p><p>serotonin—differing profiles of serotonin signaling among people help</p><p>explain individual differences related to mood, levels of arousal, tendency</p><p>toward compulsive behavior, ruminative thoughts, and reactive aggression.</p><p>And how can individual differences in gene variants contribute to</p><p>differences in serotonin signaling? Easily—different flavors exist for the</p><p>genes coding for the proteins that synthesize serotonin, that remove it from</p><p>the synapse, and that degrade it,[*] plus variants in the genes that code more</p><p>than a dozen different types of serotonin receptors.[44]</p><p>Same story with the neurotransmitter dopamine. To barely scratch the</p><p>surface, individual differences in dopamine signaling are relevant to reward,</p><p>anticipation, motivation, addiction, gratification postponement, long-term</p><p>planning, risk-taking, novelty seeking, salience of cues, and ability to focus</p><p>—you know, things pertinent to our judging, say, whether someone could</p><p>have transcended their dire circumstances if only they could have shown</p><p>some self-discipline. And the genetic sources of dopaminergic differences</p><p>among people? Genetic variants related to dopamine’s synthesis,</p><p>degradation, and removal from the synapse,[*] as well as in the various</p><p>dopamine receptors.[45]</p><p>We could go on now to the neurotransmitter norepinephrine. Or enzymes</p><p>that synthesize and degrade various hormones and hormone receptors. Or</p><p>pretty much anything pertinent to brain function. There’s usually extensive</p><p>individual variation in every relevant gene, and you weren’t consulted as to</p><p>which you’d choose to inherit.</p><p>What about the flip side—a bunch of people all have the identical gene</p><p>variant but live in different environments? You get precisely what was</p><p>discussed above, namely dramatically different effects of the gene variant</p><p>depending on environment. For example, one variant of the gene whose</p><p>protein breaks down serotonin will increase your risk of antisocial</p><p>behavior . . . but only if you were severely abused during childhood. A</p><p>variant of a dopamine receptor gene makes you either more or less likely to</p><p>be generous, depending on whether you grew up with or without secure</p><p>parental attachment. That same variant is associated with poor gratification</p><p>postponement . . . if you were raised in poverty. One variant of the gene that</p><p>directs dopamine synthesis is associated with anger . . . 
but only if you were</p><p>sexually abused as a kid. One version of the gene for the oxytocin receptor</p><p>is associated with less sensitive parenting . . . but only when coupled with</p><p>childhood abuse. On and on (and with many of the same relationships being</p><p>seen in other primate species as well).[46]</p><p>Dang, how can environment cause genes to work so differently, even in</p><p>diametrically opposite ways? Just to start to put all the pieces together,</p><p>because different environments will cause different sorts of epigenetic</p><p>changes in the same gene or genetic switch.</p><p>Thus, people have all these different versions of all of these, and these</p><p>different versions work differently, depending on childhood environment.</p><p>Just to put some numbers to it, humans have roughly twenty thousand genes</p><p>in our genome; of those, approximately 80 percent are active in the brain—</p><p>sixteen thousand. Of those genes, nearly all come in more than one flavor</p><p>(are “polymorphic”). Does this mean that in each of those genes, the</p><p>polymorphism consists of one spot in that gene’s DNA sequence that can</p><p>differ among individuals? No—there are actually an average of 250 spots in</p><p>the DNA sequence of each gene . . . which adds up to there being individual</p><p>variability in approximately four million spots in the sequence of DNA that</p><p>codes for genes active in the brain.[*],[47]</p><p>Does behavior genetics</p><p>disprove free will? Not on its own—as a familiar</p><p>theme, genes are about potentials and vulnerabilities, not inevitabilities, and</p><p>the effects of most of these genes on behavior are relatively mild.</p><p>Nonetheless, all these effects on behavior arise from genes you didn’t</p><p>choose, interacting with a childhood you didn’t choose.[48]</p><p>BACK CENTURIES: THE SORT OF PEOPLE YOU</p><p>COME FROM</p><p>The Libetian buttons beckon. What does your culture have to do with the</p><p>intent you will act upon? Tons. Because from your moment of birth, you</p><p>were subject to a universal, which is that every culture’s values include</p><p>ways to make their inheritors recapitulate those values, to become “the sort</p><p>of people you come from.” As a result, your brain reflects who your</p><p>ancestors were and what historical and ecological circumstances led them to</p><p>invent those values surrounding you. If a fairly tunnel-visioned</p><p>neurobiologist became dictator of the world, anthropology would be</p><p>defined as “the study of the ways that different groups of people attempt to</p><p>shape brain construction in their children.”</p><p>Cultures produce dramatically different behaviors with consistent</p><p>patterns. One of the most studied contrasts concerns “individualist” versus</p><p>“collectivist” cultures. The former emphasize autonomy, personal</p><p>achievement, uniqueness, and the needs and rights of the individual; it’s</p><p>looking out for number one, where your actions are “yours.” Collectivist</p><p>cultures, in contrast, espouse harmony, interdependence, and conformity,</p><p>where the needs of the community guide behavior; the priority is that your</p><p>actions make the community proud, because you are “theirs.” Most studies</p><p>of these contrasts compare individuals from the poster child of individualist</p><p>cultures, the United States, with those from the textbook collectivist</p><p>cultures of East Asia. The differences make sense. People from the U.S. 
are</p><p>more likely to use first-person-singular pronouns, to define themselves in</p><p>personal rather than relational terms (“I’m a lawyer” versus “I’m a parent”),</p><p>to organize memory around events rather than social relations (“the summer</p><p>I learned to swim” versus “the summer we became friends”). Ask subjects</p><p>to draw a sociogram—a diagram with circles representing themselves and</p><p>the people who matter in their lives, connected by lines—Americans</p><p>typically place themselves in the biggest circle, in the center. Meanwhile, an</p><p>East Asian’s circle typically is no bigger than the others, and is not front</p><p>and center. The American goal is to distinguish yourself by getting ahead of</p><p>everyone else; the East Asian is to avoid being distinguishable.[*] And from</p><p>these differences come major differences as to what count as norm</p><p>violations and what you do about them.[49]</p><p>Naturally, this reflects different workings of the brain and body. On</p><p>average, in East Asian individuals, the dopamine “reward” system activates</p><p>more when looking at a calm versus excited facial expression; for</p><p>Americans, it’s the opposite. Show subjects a picture of a complex scene.</p><p>Within milliseconds, East Asians typically scan the entire scene as a whole,</p><p>remembering it; Americans focus on the person in the center of the picture.</p><p>Force an American to tell you about times that other people influenced</p><p>them, and they secrete glucocorticoids; someone East Asian will secrete the</p><p>stress hormone when forced to tell you about times they influenced other</p><p>people.[50]</p><p>Where do these differences come from? The standard explanations for</p><p>American individualism include (a) not only are we a nation of immigrants</p><p>(as of 2017, ~37 percent immigrants or children of), but it’s not random</p><p>who emigrates; instead, immigrating is a filtering process selecting for</p><p>people willing to leave their world and culture behind, sustain an arduous</p><p>journey to a place with barriers impeding their entry, and labor at the most</p><p>shit jobs when granted admission; and (b) most of American history has</p><p>been spent with an expanding western border settled by similarly tough,</p><p>individualist pioneers. Meanwhile, the standard explanation for East Asian</p><p>collectivism is ecology dictating the means of production—ten millennia of</p><p>rice farming, which demands massive amounts of collective labor to turn</p><p>mountains into terraced rice paddies, collective planting and harvesting of</p><p>each person’s crops in sequence, collective construction and maintenance of</p><p>massive and ancient irrigation systems.[*],[51]</p><p>A fascinating exception that proves the rule concerns parts of northern</p><p>China where the ecosystem precludes rice growing, producing millennia of</p><p>the much more individualistic process of wheat farming. Farmers from this</p><p>region, and even their university student grandchildren, are as</p><p>individualistic as Westerners. 
As one finding that is beyond cool, Chinese</p><p>from rice regions accommodate and avoid obstacles (in this case, walking</p><p>around two chairs experimentally placed to block the way in Starbucks);</p><p>people from wheat regions remove obstacles (i.e., moving the chairs apart).</p><p>[52]</p><p>Thus, cultural differences arising centuries, millennia, ago, influence</p><p>behaviors from the most subtle and minuscule to dramatic.[*] Another</p><p>literature compares cultures of rain forest versus desert dwellers, where the</p><p>former tend toward inventing polytheistic religions, the latter, monotheistic</p><p>ones. This probably reflects ecological influences as well—life in the desert</p><p>is a furnace-blasted, desiccated singular struggle for survival; rain forests</p><p>teem with a multitude of species, biasing toward the invention of a</p><p>multitude of gods. Moreover, monotheistic desert dwellers are more warlike</p><p>and more effective conquerors than rain forest polytheists, explaining why</p><p>roughly 55 percent of humans proclaim religions invented by Middle</p><p>Eastern monotheistic shepherds.[53]</p><p>Shepherding raises another cultural difference. Traditionally, humans</p><p>make livings as agriculturalists, hunter-gatherers, or pastoralists. The last</p><p>are folks in deserts, grasslands, or plains of tundra, with their herds of goats,</p><p>camels, sheep, cows, llamas, yaks, or reindeer. Such pastoralists are</p><p>uniquely vulnerable. It’s hard to sneak in at night and steal someone’s rice</p><p>field or rain forest. But you can be a sneaky varmint and rustle someone’s</p><p>herd, stealing the milk and meat they survive on.[*] This pastoralist</p><p>vulnerability has generated “cultures of honor” with the following features:</p><p>(a) extreme but temporary hospitality to the stranger passing through—after</p><p>all, most pastoralists are wanderers themselves with their animals at some</p><p>point; (b) adherence to strict codes of behavior, where norm violations are</p><p>typically interpreted as insulting someone; (c) such insults demanding</p><p>retributive violence—the world of feuds and vendettas lasting generations;</p><p>(d) the existence of warrior classes and values where valor in battle</p><p>produces high status and a glorious afterlife. Much has been made of the</p><p>hospitality, conservatism (as in strictly conserving cultural norms), and</p><p>violence of the traditional culture of honor of the American South. The</p><p>pattern of violence tells a ton: murders in the South, which typically has the</p><p>highest rates in the country, are not about stickups gone wrong in a city;</p><p>they’re about murdering someone who has seriously tarnished your honor</p><p>(by conspicuously bad-mouthing you, failing to repay a debt, coming on to</p><p>your significant other . . .), particularly if living in a rural area.[*] Where</p><p>does the Southern culture of honor come from? A widely accepted theory</p><p>among historians makes this paragraph’s point perfectly—while colonial</p><p>New England filled with Pilgrims, and the mid-Atlantic with mercantile</p><p>folks like Quakers, the South was disproportionately peopled by wild-assed</p><p>pastoralists from northern England, Scotland, and Ireland.[54]</p><p>One last cultural comparison, between “tight” cultures (with numerous</p><p>and strictly enforced norms of behavior) and “loose” ones. What are some</p><p>predictors of a society being tight? 
A history of lots of cultural crises,</p><p>droughts, famines, and earthquakes, and high rates of infectious</p><p>diseases.[*]</p><p>And I mean it with “history”—in one study of thirty-three countries,</p><p>tightness was more likely in cultures that had high population densities back</p><p>in 1500.[*], [55]</p><p>Five hundred years ago!? How can that be? Because generation after</p><p>generation, ancestral culture influenced the likes of how much physical</p><p>contact mothers had with their children; whether kids were subject to</p><p>scarification, genital mutilation, and life-threatening rites of passage;</p><p>whether myths and songs were about vengeance or turning the other cheek.</p><p>Does the influence of culture disprove free will? Obviously not. As</p><p>usual, these are tendencies, amid lots of individual variation. Just consider</p><p>Gandhi, Anwar Sadat, Yitzhak Rabin, and Michael Collins, atypically</p><p>inclined toward peacemaking, assassinated by coreligionists atypically</p><p>inclined toward extremism and violence.[*],[56]</p><p>OH, WHY NOT? EVOLUTION</p><p>For various reasons, humans were sculpted by evolution over millions of</p><p>years to be, on the average, more aggressive than bonobos but less so than</p><p>chimps, more social than orangutans but less so than baboons, more</p><p>monogamous than mouse lemurs but more polygamous than marmosets.</p><p>’Nuff said.[57]</p><p>SEAMLESS</p><p>Where does intent come from? What makes us who we are at any given</p><p>minute? What came before.[*] This raises an immensely important point</p><p>first brought up in chapter 1, which is that the biology/environment</p><p>interactions of, say, a minute ago and a decade ago are not separate entities.</p><p>Suppose we are considering the genes someone inherited, back when they</p><p>were a fertilized egg, and what those genes have to do with that person’s</p><p>behavior. Well then, we are being geneticists thinking about genetics. We</p><p>could even make our club more exclusive and be “behavior geneticists,”</p><p>publishing our research only in a journal called, well, Behavior Genetics.</p><p>But if we are talking about the genes inherited that are relevant to the</p><p>person’s behavior, we’re automatically also talking about how the person’s</p><p>brain was constructed—because brain construction is primarily carried out</p><p>by the proteins coded for by “genes implicated in neurodevelopment.”</p><p>Similarly, if we are studying the effects of childhood adversity on adult</p><p>behavior, often best understood on the psychological or sociological level,</p><p>we’re implicitly also considering how the molecular biology of childhood</p><p>epigenetics helps explain adult personality and temperament. If we are</p><p>evolutionary biologists thinking about human behavior, by definition we’re</p><p>also being behavior geneticists, developmental neurobiologists, and</p><p>neuroplasticians (spell-check just went crazy). This is because evolving</p><p>means changes in what variants of genes you find in organisms and thus the</p><p>ways in which they shape brain construction. Study hormones and behavior,</p><p>and we’re also studying what fetal life had to do with the development of</p><p>the glands that secrete those hormones. So on and so on. Each moment</p><p>flowing from all that came before. 
And whether it’s the smell of a room,</p><p>what happened to you when you were a fetus, or what was up with your</p><p>ancestors in the year 1500, all are things that you couldn’t control.[*] A</p><p>seamless stream of influences that, as said at the beginning, precludes being</p><p>able to shoehorn in this thing called free will that is supposedly in the brain</p><p>but not of it. In the words of legal scholar Pete Alces, there is “no remaining</p><p>gap between nature and nurture for moral responsibility to fill.” Philosopher</p><p>Peter Tse hits the nail on the head when referring to the biological turtles all</p><p>the way down as a “responsibility destroying regress.”[*], [58]</p><p>This seamless stream shows why bad luck doesn’t get evened out, why it</p><p>amplifies instead. Have some particular unlucky gene variant, and you’ll be</p><p>unluckily sensitive to the effects of adversity during childhood. Suffering</p><p>from early-life adversity is a predictor that you’ll be spending the rest of</p><p>your life in environments that present you with fewer opportunities than</p><p>most, and that enhanced developmental sensitivity will unluckily make you</p><p>less able to benefit from those rare opportunities—you may not understand</p><p>them, may not recognize them as opportunities, may not have the tools to</p><p>make use of them or to keep you from impulsively blowing the opportunity.</p><p>Fewer of those benefits make for a more stressful adult life, which will</p><p>change your brain into one that is unluckily bad at resilience, emotional</p><p>control, reflection, cognition . . . Bad luck doesn’t get evened out by good.</p><p>It is usually amplified until you’re not even on the playing field that needs</p><p>to be leveled.</p><p>This is the view forcefully argued by philosopher Neil Levy in his 2011</p><p>book, Hard Luck: How Luck Undermines Free Will and Moral</p><p>Responsibility (Oxford University Press). He focuses on two categories of</p><p>luck. One, present luck, examines its role in the difference between driving</p><p>while so drunk that, when coupled with events in the seconds to minutes</p><p>before, you would have killed someone if they had happened to be crossing</p><p>the street, and the bad luck of being in that state and actually killing</p><p>someone. As we saw, whether this distinction is meaningful is often the</p><p>domain of legal scholars. More meaningful to Levy is what he calls</p><p>constitutive luck, the fortune, good or bad, that sculpted you up to this</p><p>moment. In other words, our world of one second before, one minute</p><p>before . . . (although he only passingly frames the idea biologically). And</p><p>when you recognize that that is all there is to explain who we are, he</p><p>concludes, “it is not ontology that rules out free will, it is luck (his</p><p>emphasis).”[*] In his view, not only does it make no sense to hold us</p><p>responsible for our actions; we also had no control over the formation of</p><p>our beliefs about the rightness and consequences of that action or about the</p><p>availability of alternatives. 
You can’t successfully believe something</p><p>different from what you believe.[*]</p><p>In the first chapter, I wrote about what is needed to prove free will, and</p><p>this chapter has added details to that demand: show me that the thing a</p><p>neuron just did in someone’s brain was unaffected by any of these</p><p>preceding factors—by the goings-on in the eighty billion neurons</p><p>surrounding it, by any of the infinite number of combinations of hormone</p><p>levels percolated that morning, by any of the countless types of childhoods</p><p>and fetal environments were experienced, by any of the two to the four</p><p>millionth power different genomes that neuron contains, multiplied by the</p><p>nearly as large range of epigenetic orchestrations possible. Et cetera. All out</p><p>of your control.</p><p>“Turtles all the way down” is a joke because the confident claim</p><p>presented to William James is not just absurd but immune to every</p><p>challenge he raises. It’s a highbrow version of the insult battles that would</p><p>go on in schoolyards in my youth: “You’re a sucky baseball player.” “I</p><p>know you are, but what am I?” “Now you’re being annoying.” “I know you</p><p>are, but what am I?” “Now you’re indulging in lazy sophistry.” “I know you</p><p>are . . .” If the old woman going at James were, at some point, to report that</p><p>the next turtle down floats in the air, the anecdote wouldn’t be funny; while</p><p>the answer is still absurd, the rhythm of the infinite regress has been broken.</p><p>Why did that moment just occur? “Because of what came before it.”</p><p>Then why did that moment just occur? “Because of what came before that,”</p><p>forever,[*] isn’t absurd and is, instead, how the universe works. The</p><p>absurdity amid this seamlessness is to think that we have free will and that</p><p>it exists because at some point, the state of the world (or of the frontal</p><p>cortex or neuron or molecule of serotonin . . .) that “came before that”</p><p>happened out of thin air.</p><p>In order to prove there’s free will, you have to show that some behavior</p><p>just happened out of thin air in the sense of considering all these biological</p><p>precursors. It may be possible to sidestep that with some subtle</p><p>philosophical arguments, but you can’t with anything known to science.</p><p>As noted in the</p><p>that behavior occur? Because of biological and environmental interactions,</p><p>all the way down.[*]</p><p>As a central point of this book, those are all variables that you had little</p><p>or no control over. You cannot decide all the sensory stimuli in your</p><p>environment, your hormone levels this morning, whether something</p><p>traumatic happened to you in the past, the socioeconomic status of your</p><p>parents, your fetal environment, your genes, whether your ancestors were</p><p>farmers or herders. Let me state this most broadly, probably at this point too</p><p>broadly for most readers: we are nothing more or less than the cumulative</p><p>biological and environmental luck, over which we had no control, that has</p><p>brought us to any moment. You’re going to be able to recite this sentence in</p><p>your irritated sleep by the time we’re done.</p><p>There are all sorts of aspects about behavior that, while true, are not</p><p>relevant to where we’re heading. For example, the fact that some criminal</p><p>behavior can be due to psychiatric or neurological problems. That some</p><p>kids have “learning differences” because of the way their brains work. 
That</p><p>some people have trouble with self-restraint, because they grew up without</p><p>any decent role models or because they’re still a teenager with a teenager’s</p><p>brain. That someone has said something hurtful merely because they’re</p><p>tired and stressed, or even because of a medication they’re taking.</p><p>All of these are circumstances where we recognize that sometimes,</p><p>biology can impinge on our behavior. This is essentially a nice humane</p><p>agenda that endorses society’s general views about agency and personal</p><p>responsibility but reminds you to make exceptions for edge cases: judges</p><p>should consider mitigating factors in criminals’ upbringing during</p><p>sentencing; juvenile murderers shouldn’t be executed; the teacher handing</p><p>out gold stars to the kids who are soaring in learning to read should do</p><p>something special too for that kid with dyslexia; college admissions officers</p><p>should consider more than just SAT cutoffs for applicants who have</p><p>overcome unique challenges.</p><p>These are good, sensible ideas that should be instituted if you decide that</p><p>some people have much less self-control and capacity to freely choose their</p><p>actions than average, and that at times, we all have much less than we</p><p>imagine.</p><p>We can all agree on that; however, we’re heading into very different</p><p>terrain, one that I suspect most readers will not agree with, which is</p><p>deciding that we have no free will at all. Here would be some of the logical</p><p>implications of that being the case: That there can be no such thing as</p><p>blame, and that punishment as retribution is indefensible—sure, keep</p><p>dangerous people from damaging others, but do so as straightforwardly and</p><p>nonjudgmentally as keeping a car with faulty brakes off the road. That it</p><p>can be okay to praise someone or express gratitude toward them as an</p><p>instrumental intervention, to make it likely that they will repeat that</p><p>behavior in the future, or as an inspiration to others, but never because they</p><p>deserve it. And that this applies to you when you’ve been smart or self-</p><p>disciplined or kind. Oh, as long as we’re at it, that you recognize that the</p><p>experience of love is made of the same building blocks that constitute</p><p>wildebeests or asteroids. That no one has earned or is entitled to being</p><p>treated better or worse than anyone else. And that it makes as little sense to</p><p>hate someone as to hate a tornado because it supposedly decided to level</p><p>your house, or to love a lilac because it supposedly decided to make a</p><p>wonderful fragrance.</p><p>That’s what it means to conclude that there is no free will. This is what</p><p>I’ve concluded, for a long, long time. And even I think that taking that</p><p>seriously sounds absolutely nutty.</p><p>Moreover, most people agree that it sounds that way. People’s beliefs</p><p>and values, their behavior, their answers to survey questions, their actions</p><p>as study subjects in the nascent field of “experimental philosophy,” show</p><p>that people believe in free will when it matters—philosophers (about 90</p><p>percent), lawyers, judges, jurors, educators, parents, and candlestick</p><p>makers. As well as scientists, even biologists, even many neurobiologists,</p><p>when push comes to shove. 
Work by psychologists Alison Gopnik at UC</p><p>Berkeley and Tamar Kushnir at Cornell shows that preschool kids already</p><p>have a robust belief in a recognizable version of free will. And such a belief</p><p>is widespread (but not universal) among a wide variety of cultures. We are</p><p>not machines in most people’s view; as a clear demonstration, when a driver</p><p>or an automated car makes the same mistake, the former is blamed more.[1]</p><p>And we are not alone in our faith in free will—research that we’ll look at in</p><p>a later chapter suggests that other primates even believe that there is free</p><p>will.[2]</p><p>This book has two goals. The first is to convince you that there is no free</p><p>will,[*] or at least that there is much less free will than generally assumed</p><p>when it really matters. To accomplish that, we’ll look at the way smart,</p><p>nuanced thinkers argue for free will, from the perspectives of philosophy,</p><p>legal thought, psychology, and neuroscience. I’ll be trying to present their</p><p>views to the best of my ability, and to then explain why I think they are all</p><p>mistaken. Some of these mistakes arise from the myopia (used in a</p><p>descriptive rather than judgmental sense) of focusing solely on just one</p><p>sliver of the biology of behavior. Sometimes this is because of faulty logic,</p><p>such as concluding that if it’s not possible to ever tell what caused X,</p><p>maybe nothing caused it. Sometimes the mistakes reflect unawareness or</p><p>misinterpretation of the science underlying behavior. Most interestingly, I</p><p>sense that mistakes arise for emotional reasons that reflect that there being</p><p>no free will is pretty damn unsettling; we’ll consider this at the end of the</p><p>book. So one of my two goals is to explain why I think all these folks are</p><p>wrong, and how life would improve if people stopped thinking like them.[3]</p><p>Right around here, one might ask of me, Where do you get off? As will</p><p>be seen, free-will debates often revolve around narrow issues—“Does a</p><p>particular hormone actually cause a behavior or just make it more likely?”</p><p>or “Is there a difference between wanting to do something and wanting to</p><p>want something?”—that are usually debated by specialized authorities. My</p><p>intellectual makeup happens to be that of a generalist. I’m a</p><p>“neurobiologist” with a lab that does things like manipulate genes in a rat’s</p><p>brain to change behavior. At the same time, I spent part of each year for</p><p>more than three decades studying the social behavior and physiology of</p><p>wild baboons in a national park in Kenya. Some of my research turned out</p><p>to be relevant to understanding how adult brains are influenced by the stress</p><p>of childhood poverty, and as a result, I’ve wound up spending time around</p><p>the likes of sociologists; another facet of my work has been relevant to</p><p>mood disorders, leading me to hang with psychiatrists. And for the last</p><p>decade, I’ve had a hobby of working with public defender offices on</p><p>murder trials, teaching juries about the brain. As a result, I’ve been</p><p>carpetbagging in a number of different fields related to behavior. Which I</p><p>think has made me particularly prone toward deciding that free will doesn’t</p><p>exist.</p><p>Why? 
Crucially, if you focus on any single field like these—</p><p>neuroscience, endocrinology, behavioral economics, genetics, criminology,</p><p>ecology, child development, or evolutionary biology—you are left with</p><p>plenty of wiggle room for deciding that biology and free will can coexist. In</p><p>the words of UC San Diego philosopher Manuel Vargas, “Claiming that</p><p>some scientific result shows the falsity of ‘free will’ . . . is either bad</p><p>scholarship or academic hucksterism.”[4] He is right, if in-your-face. As we</p><p>will see in the next chapter, most experimental neurobiology research about</p><p>free will is narrowly anchored by the result of one study that examined</p><p>events that happen in the brain a few seconds before a behavior occurs. And</p><p>Vargas</p><p>first chapter, the prominent compatibilist philosopher</p><p>Alfred Mele judged this requirement of free will as setting the bar “absurdly</p><p>high.” Some subtle semantics come into play; what Levy calls</p><p>“constitutive” luck is luck that is “remote” to Mele, “remote” as in so</p><p>detached in time—a whole million years before you decide, a whole minute</p><p>before you decide—that it doesn’t preclude free will and responsibility.</p><p>This is supposedly because the remoteness is so remote as to not be</p><p>remotely relevant, or because the consequences of that remote biological</p><p>and environmental luck are still filtered through some sort of immaterial</p><p>“you” at the end picking and choosing among the influences, or because</p><p>remote bad luck, á la Dennett, will be balanced out by good luck in the long</p><p>run and can thus be ignored. This is how some compatibilists arrive at the</p><p>conclusion that someone’s history is irrelevant. Levy’s wording of</p><p>“constitutive” luck suggests something very different, namely that not only</p><p>is history relevant but, in his words, “the problem of history is a problem of</p><p>luck.” It is why it is anything but an absurdly high bar or straw man to say</p><p>that free will can exist only if neurons’ actions are completely uninfluenced</p><p>by all the uncontrollable factors that came before. It’s the only requirement</p><p>there can be, because all that came before, with its varying flavors of</p><p>uncontrollable luck, is what came to constitute you. This is how you</p><p>became you.[59]</p><p>T</p><p>4</p><p>Willing Willpower: The Myth of Grit</p><p>he last two chapters were devoted to how you can believe in free</p><p>will by ignoring history. And you can’t—to repeat our emerging</p><p>mantra, all we are is the history of our biology, over which we had</p><p>no control, and of its interaction with environments, over which we also had</p><p>no control, creating who we are in the moment.</p><p>However, not all free-will fans deny the importance of history, and this</p><p>chapter dissects two ways in which it is invoked. The first, which we’ll</p><p>blow over relatively quickly, is a silly effort by some serious scholars to</p><p>incorporate history into the picture, as part of a larger strategy of saying,</p><p>“Yes, of course free will exists. Just not where you’re looking.” It happened</p><p>in the past. It’ll happen in your future. It happens wherever you’re not</p><p>looking in the brain. It happens outside you, floating on interactions</p><p>between people.</p><p>We’ll look at the second misuse of history more deeply. 
Those last two</p><p>chapters were about the damage caused if you decide that punishment and</p><p>reward are morally justifiable because history doesn’t matter when</p><p>explaining someone’s behavior. This chapter is about how it’s just as</p><p>destructive to conclude that history is relevant only to some aspects of</p><p>behavior.</p><p>WAS-NESS</p><p>Suppose you have some guy in a tough situation—being threatened by a</p><p>stranger who’s coming at him with a knife. Our guy pulls out a gun and</p><p>shoots once, leaving the assailant on the ground. What does our guy then</p><p>do? Does he conclude, “It’s over, he’s incapacitated, I’m safe?” Or does he</p><p>keep shooting? What if he waits eleven seconds before attacking the</p><p>assailant further? In the final scenario he is charged with premeditated</p><p>murder—if he had stopped after the first shot, it would have counted as</p><p>self-defense; but he had eleven seconds to think about his options, meaning</p><p>that his second round of shots was freely chosen and premeditated.</p><p>Let’s consider the guy’s history. He was born with fetal alcohol</p><p>syndrome, due to his mother’s drinking. She abandoned him when he was</p><p>five, resulting in a string of foster homes featuring physical and sexual</p><p>abuse. A drinking problem by thirteen, homeless at fifteen, multiple head</p><p>injuries from fights, surviving by panhandling and being a sex worker,</p><p>robbed numerous times, stabbed a month earlier by a stranger. An outreach</p><p>psychiatric social worker saw him once and noted that he might well have</p><p>PTSD. Ya think?</p><p>Someone has tried to kill you and you have eleven seconds to make a</p><p>life-or-death decision; there’s a well-understood neurobiology as to why</p><p>you readily make a terrible decision during this monumental stressor. Now,</p><p>instead, it’s our guy with a neurodevelopmental disorder due to fetal</p><p>neurotoxicity, repeated childhood trauma, substance abuse, repeated brain</p><p>injuries, and a recent stabbing in a similar situation. His history has resulted</p><p>in this part of his brain being enlarged, this other part atrophied, this</p><p>pathway disconnected. And as a result, there’s, like, zero chance that he’ll</p><p>make a prudent, self-regulated decision in those eleven seconds. And you’d</p><p>have done the same thing if life had handed you that brain. In this context,</p><p>“eleven seconds to premeditate” is a joke.[*]</p><p>Despite that, the compatibilist philosophers (and most prosecutors . . .</p><p>and judges . . . and juries) don’t think it’s a joke. Sure, life has thrown awful</p><p>things at the guy, but he’s had plenty of time in the past to have chosen to</p><p>not be the sort of person who would go back and put another bullet in the</p><p>assailant’s brain.</p><p>A great summary of this viewpoint is given by philosopher Neil Levy</p><p>(one that he does not agree with):</p><p>Agents are not responsible as soon as they acquire a set of active</p><p>dispositions and values; instead, they become responsible by</p><p>taking responsibility for their dispositions and values.</p><p>Manipulated agents are not immediately responsible for their</p><p>actions, because it is only after they have had sufficient time to</p><p>reflect upon and experience the effects of their new dispositions</p><p>that they qualify as fully responsible agents. 
The passing of time (under normal conditions) offers opportunities for deliberation and reflection, thereby enabling agents to become responsible for who they are. Agents become responsible for their dispositions and values in the course of normal life, even when these dispositions and values are the product of awful constitutive luck. At some point bad constitutive luck ceases to excuse, because agents have had time to take responsibility for it.[1]

Sure, maybe no free will just now, but there was relevant free will in the past.

As implied in Levy’s quote, the process of freely choosing what sort of person you become, despite whatever bad constitutive luck you’ve had, is usually framed as a gradual, usually maturational process. In a debate with Dennett, incompatibilist Gregg Caruso outlined chapter 3’s essence—we have no control over either the biology or the environment thrown at us. Dennett’s response was “So what? The point I think you are missing is that autonomy is something one grows into, and this is indeed a process that is initially entirely beyond one’s control, but as one matures, and learns, one begins to be able to control more and more of one’s activities, choices, thoughts, attitudes, etc.” This is a logical outcome of Dennett’s claim that bad and good luck average out over time: Come on, get your act together. You’ve had enough time to take responsibility, to choose to catch up to everyone else in the marathon.[2]

A similar view comes from the distinguished philosopher Robert Kane, of the University of Texas: “Free will in my view involves more than merely freedom of action. It concerns self-formation. The relevant question for free will is this: How did you get to be the kind of person you now are?” Roskies and Shadlen write, “It is plausible to think that agents might be held morally responsible even for decisions that are not conscious, if those decisions are due to policy settings which are expressions of the agent [in other words, acts of free will in the past].”[3]

Not all versions of this idea require gradual acquisition of past-tense free will. Kane believes that “choose what sort of person you’re going to be” happens at moments of crisis, at major forks in the road, at moments of what he calls “Self-Forming Actions” (and he proposes a mechanism by which this supposedly occurs, which we’ll touch on briefly in chapter 10). In contrast, psychiatrist Sean Spence, of the University of Sheffield, believes that those I-had-free-will-back-then moments happen when life is at its optimal, rather than in crisis.[4]

Whether that free-will was-ness was a slow maturational process or occurred in a flash of crisis or propitiousness, the problem should be obvious. Was was once now.
If the function of a neuron right now is</p><p>embedded in its neuronal neighborhood, effects of hormones, brain</p><p>development, genes, and so on, you can’t go away for a week and then</p><p>show that the function a week prior wasn’t embedded after all.</p><p>A variant on this idea is that you may not have free will now about now,</p><p>you have free will now about who you are going to be in the future.</p><p>Philosopher Peter Tse, who calls this second-order free will, writes how the</p><p>brain can “cultivate and create new types of options for itself in the future.”</p><p>Not just any brains, however. Tigers, he notes, can’t have this sort of free</p><p>will (e.g., choosing that they’re going to become vegans). “Humans, in</p><p>contrast, bear a degree of responsibility for having chosen to become the</p><p>kind of chooser who they now are.” Combine this with Dennett’s</p><p>retrospective view and we have something akin to the idea that somewhere</p><p>in the future, you will have had free will in the past—I will freely choosed.</p><p>[5]</p><p>Rather than there being free will, “just not when you’re looking,” there’s</p><p>free will, “just not where you’re looking”—you may have shown that free</p><p>will isn’t coming from the area of the brain you’re studying; it’s coming</p><p>from the area you aren’t. Roskies writes, “It is possible that an</p><p>indeterministic event elsewhere in the larger system affects the firing of</p><p>[neurons in brain region X], thus making the system as a whole</p><p>indeterministic, even though the relation between [neuronal activity in brain</p><p>region X] and behavior is deterministic.” And neuroscientist Michael</p><p>Gazzaniga moves the free will outside the brain entirely: “Responsibility</p><p>exists at a different level of organization: the social level, not in our</p><p>determined brains.” There are two big problems with this: First, it isn’t free</p><p>will and responsibility just because, on the social level, everyone says it is</p><p>—that’s a central point of this book. Second, sociality, social interactions,</p><p>organisms being social with each other, are as much an end product of</p><p>biology interacting with environment as is the shape of your nose.[6]</p><p>Throw down the gauntlet from chapter 3—present me with the neuron,</p><p>right here, right now, that caused that behavior, independent of any other</p><p>current or historical biological influence. The answer can’t be “Well, we</p><p>can’t, but that happened before.” Or “That’s going to occur, but not yet.” Or</p><p>“That’s occurring right now but not here—instead, over there; no, not that</p><p>there, that other there. . . .” It’s turtles in every place and time; there are no</p><p>cracks in the process by which was generates is in which to squeeze free</p><p>will.</p><p>We move now to probably the most important topic in this half of the</p><p>book, a way to erroneously see free will that isn’t there.</p><p>WHAT YOU WERE GIVEN AND WHAT YOU DO</p><p>WITH IT</p><p>Kato and Finn (names changed to protect their identities) have a good thing</p><p>going, backing each other in a fight and serving as each other’s wingman in</p><p>the sex department. Each has a fairly dominant personality, and working</p><p>together, they’re unstoppable.</p><p>I’m watching them racing across a field. Kato got the head start, but Finn</p><p>is catching up. They’re trying to run down a gazelle, which is tearing away</p><p>from them. Kato and Finn are baboons, intent on a meal. 
If they do catch</p><p>the gazelle, which seems increasingly likely, Kato will eat first, as he is</p><p>number two in the hierarchy, Finn, number three.</p><p>Finn is still catching up. I note a subtle shift in his running, something I</p><p>can’t describe, but having observed Finn for a long time, I know what’s</p><p>coming next. “Idiot, you’re going to blow it,” I think. Finn has seemingly</p><p>decided, “Screw it with this waiting for the leftovers. I want first dibs on the</p><p>best parts.” He accelerates. “What fools these baboons be,” I think. Finn</p><p>leaps on Kato’s back, biting him, knocking him over so that Finn can get</p><p>the gazelle himself. Naturally, he trips over Kato in the process and sprawls</p><p>ass over teakettle. They get up, glowering at each other, the gazelle long</p><p>gone; end of their cooperative coalition. With Kato no longer willing to</p><p>back him up in a fight, Finn is soon toppled by Bodhi, number four in the</p><p>hierarchy, followed by being trounced by number five, Chad.</p><p>Some baboons are just that way. They’re full of potential—big,</p><p>muscular, with sharp canines—but go nowhere in the hierarchy because</p><p>they never miss an opportunity to miss an opportunity. They break up their</p><p>coalition with an impulsive act, like Finn did. They can’t keep themselves</p><p>from challenging the alpha male for a female, and get pummeled. They’re</p><p>in a bad mood and can’t stop themselves from displacing aggression by</p><p>biting the wrong nearby female, then get chased out of the troop by her irate</p><p>high-ranking relatives. Major underachievers that can resist anything except</p><p>temptation.</p><p>We are replete with human examples, always featuring the word</p><p>squander. Athletes who squander their natural talents by partying. Smart</p><p>kids squandering their academic potential with drugs[*] or indolence.</p><p>Dissipated jet-setters who squander their families’ fortunes on crackpot</p><p>vanity projects—according to one study, 70 percent of family fortunes are</p><p>lost by the second generation of inheritors. From Finn on, squanderers all.[7]</p><p>And then there are the people who overcame bad luck with spectacular</p><p>tenacity and grit. Oprah, growing up wearing potato sack dresses. Harland</p><p>Sanders, eventually the Colonel, who failed to sell his fried chicken recipe</p><p>to 1,009 restaurants before striking gold. Marathoner Eliud Kibet, who</p><p>collapsed a few meters from the finish line and crawled to the end; fellow</p><p>Kenyan Hyvon Ngetich, who crawled the final fifty meters of her marathon;</p><p>Japanese runner Rei Iida, who fell, fracturing her leg, and crawled the final</p><p>two hundred meters to the finish line. Nobel laureate geneticist Mario</p><p>Capecchi, who was a homeless street kid in World War II Italy. Then, of</p><p>course, there’s Helen Keller and Anne Sullivan with the w-a-t-e-r. Desmond</p><p>Doss, an unarmed conscientious objector medic, who returned under enemy</p><p>fire to carry seventy-five injured servicemen to safety in the Battle of</p><p>Okinawa. Five-foot-three Muggsy Bogues playing in the NBA. Madeleine</p><p>Albright, future secretary of state, who, as a teenage Czechoslovakian</p><p>refugee, sold bras in a Denver department store. 
The Argentinian guy</p><p>working as a janitor and bouncer who put his nose to the grindstone and</p><p>became the pope.</p><p>Whether considering Finn and the squanderers or Albright selling bras,</p><p>we are moths pulled to the flame of the most entrenched free-will myth.</p><p>We’ve already examined versions of partial free will—not now but in the</p><p>past; not here but where you’re not looking. This is another version of</p><p>partial free will—yes, there are our attributes, gifts, shortcomings, and</p><p>deficiencies over which we had no control, but it is us, we agentic, free,</p><p>captain-of-our-own-fate selves who choose what we do with those</p><p>attributes. Yes, you had no control over that ideal ratio of slow- to fast-</p><p>twitch fibers in your leg muscles that made you a natural marathoner, but</p><p>it’s you who fought through the pain at the finish line. Yes, you didn’t</p><p>choose the versions of glutamate receptor genes you inherited that gave you</p><p>a great memory, but you’re responsible for being lazy and arrogant. Yes,</p><p>you may have inherited genes that predispose you to alcoholism, but it’s</p><p>you who commendably resists the temptation to drink.</p><p>A stunningly clear statement of this compatibilist dualism concerns Jerry</p><p>Sandusky, the Penn State football coach who was sentenced to sixty years</p><p>in prison in 2012 for being a horrific serial child molester. Soon after this, a</p><p>provocative CNN piece ran under the title “Do Pedophiles Deserve</p><p>Sympathy?” Psychologist James Cantor of the University of Toronto</p><p>reviewed the neurobiology of pedophilia.</p><p>The wrong mix of genes,</p><p>endocrine abnormalities in fetal life, and childhood head injury all increase</p><p>the likelihood. Does this raise the possibility that a neurobiological die is</p><p>cast, that some people are destined to be this way? Precisely. Cantor</p><p>concludes correctly, “One cannot choose to not be a pedophile.”</p><p>But then he does an Olympian leap across the Grand Canyon–size false</p><p>dichotomy of compatibilism. Does any of that biology lessen the</p><p>condemnation and punishment that Sandusky deserved? No. “One cannot</p><p>choose to not be a pedophile, but one can choose to not be a child molester”</p><p>(my emphasis).[8]</p><p>The following table formalizes this dichotomy. On the left are things that</p><p>most people accept as outside our control—biological stuff. Sure,</p><p>sometimes we have trouble remembering that. We praise, single out, the</p><p>chorus member who is an anchor of reliability because of their perfect pitch</p><p>(which is a biologically heritable trait).[*] We praise a basketball player’s</p><p>dunk, ignoring that being seven-foot-two has something to do with it. We</p><p>smile more at someone attractive, are more likely to vote for them in an</p><p>election, less likely to convict them of a crime. Yeah, yeah, we agree</p><p>sheepishly when this is pointed out, they obviously didn’t choose the shape</p><p>of their cheekbones. 
We’re usually pretty good at remembering that the biological stuff on the left is out of our control.[9]

“Biological stuff” | Do you have grit?
Having destructive sexual urges | Do you resist acting upon them?
Being a natural marathoner | Do you fight through the pain?
Not being all that bright | Do you triumph by studying extra hard?
Having a proclivity toward alcoholism | Do you order ginger ale instead?
Having a beautiful face | Do you resist concluding that you’re entitled to people being nice to you because of it?

And then on the right is the free will you supposedly exercise in choosing what you do with your biological attributes, the you who sits in a bunker in your brain but not of your brain. Your you-ness is made of nanochips, old vacuum tubes, ancient parchments with transcripts of Sunday-morning sermons, stalactites of your mother’s admonishing voice, streaks of brimstone, rivets made out of gumption. Whatever that real you is composed of, it sure ain’t squishy biological brain yuck.

When viewed as evidence of free will, the right side of the chart is a compatibilist playground of blame and praise. It seems so hard, so counterintuitive, to think that willpower is made of neurons, neurotransmitters, receptors, and so on. There seems a much easier answer—willpower is what happens when that nonbiological essence of you is bespangled with fairy dust.

And as one of the most important points of this book, we have as little control over the right side of the chart as over the left. Both sides are equally the outcome of uncontrollable biology interacting with uncontrollable environment.

To understand the biology of the right side of the chart, time to focus on the fanciest part of the brain, the frontal cortex, which was lightly touched on in the last two chapters.

DOING THE RIGHT THING WHEN IT’S THE HARDER THING TO DO

Bragging for the frontal cortex, it’s the newest part of the brain; we primates have, proportionately, more of it than other mammals; when you examine gene variants that are unique to primates, a disproportionate percentage of them are expressed in the frontal cortex. Our human frontal cortex is proportionately bigger and/or more complexly wired than that of any other primate. As noted in the last chapter, it’s the last part of the brain to fully mature, not being fully constructed until your midtwenties; this is outrageously delayed, given that most of the brain is up and running within a few years of birth. And as a major implication of this delay, a quarter century of environmental influences shape how the frontal cortex is being put together. It’s one of the hardest-working parts of the brain, in terms of energy consumption. It has a type of neuron found nowhere else in the brain. And the most interesting part of the frontal cortex—the prefrontal cortex (PFC)—is proportionately even larger than the rest of the frontal cortex, and more recently evolved.[*], [10]

As a reminder, the PFC is central to executive function, decision-making. We saw this in chapter 2, where, way up in the chain of Libetian commands, there was the PFC making decisions up to ten seconds before subjects first became aware of that intent.
What the PFC is most about is</p><p>making tough decisions in the face of temptation—gratification</p><p>postponement, long-term planning, impulse control, emotional regulation.</p><p>The PFC is essential for getting you to do the right thing when it is the</p><p>harder thing to do. Which is so pertinent to that false dichotomy between</p><p>what attributes fate hands you and what you do with them.</p><p>THE COGNITIVE PFC</p><p>As a warm-up, let’s examine “doing the right thing” in the cognitive realm.</p><p>It’s the PFC that inhibits you from doing something the habitual way when</p><p>you’re supposed to be doing it in a novel manner. Sit someone in front of a</p><p>computer and say to them, “Here’s the rule—when a blue light flashes on</p><p>the screen, hit the button on the left as fast as possible; red light, hit the</p><p>button on the right.” Have them do that a bunch of times, get the hang of it.</p><p>“Now reverse that—blue light, button on the right; red, left.” Have them do</p><p>that awhile. “Now switch back again.” Each time the rule changes, the PFC</p><p>is in charge of “Remember, blue now means . . .”</p><p>Now, quick, say the months of the year backward. The PFC activates,</p><p>suppressing the overlearned response—“Remember, September-August this</p><p>time, not September-October.” More frontal activation predicts a better</p><p>performance here.</p><p>One of the best ways to appreciate these frontal functions is to examine</p><p>people with a damaged PFC (as after certain types of strokes or dementias).</p><p>There are huge problems with “reversal” tasks like these. It’s too hard to do</p><p>that right thing when it is a change from the usual.</p><p>Thus, the PFC is for learning a new rule, or a new variant of a rule.</p><p>Implied in that is that the functioning of the PFC can change. Once that</p><p>novel rule persists and has stopped being novel, it becomes the task of</p><p>other, more automatic brain circuitry. Few of us need to activate the PFC to</p><p>pee nowhere but in the bathroom; but we sure did when we were three.</p><p>“Doing the right thing” requires two different skills from the PFC.</p><p>There’s sending the decisive “do this” signal along the path from the PFC to</p><p>the frontal cortex to the supplementary motor area (the SMA of chapter 2)</p><p>to the motor cortex. But even more important, there is the “and don’t do</p><p>that, even if that’s the usual” signal. Even more than sending excitatory</p><p>signals to the motor cortex, the PFC is about inhibiting habitual brain</p><p>circuits. To hark back again to chapter 2, the PFC is central to showing that</p><p>we lack both free will and the conscious veto power of free won’t.[11]</p><p>THE SOCIAL PFC</p><p>Obviously, the crowning achievement of millions of years of frontocortical</p><p>evolution is not reciting months backward. It’s social—it’s suppressing the</p><p>emotionally easier thing to do. The PFC is the center of our social brain.</p><p>The bigger the average size of the social group in a primate species, the</p><p>greater a percentage of the brain is devoted to the PFC; the bigger the size</p><p>of some human’s texting network, the larger a particular subregion of the</p><p>PFC and its connectivity with the limbic system. So does sociality enlarge</p><p>the PFC, or does a large PFC drive sociality? 
At least partially the former—</p><p>take individually housed monkeys and put them together in big, complex</p><p>social groups, and a year later, everyone’s PFC will have enlarged;</p><p>moreover, the individual who emerges at the top of the hierarchy shows the</p><p>largest increase.[*], [12]</p><p>Neuroimaging studies show the PFC reining in more emotional brain</p><p>regions in the name of doing (or thinking) the right thing. Stick a volunteer</p><p>in a brain scanner and flash up pictures</p><p>of faces. And in a depressing, well-</p><p>replicated finding, flash up the face of someone of another race and in about</p><p>75 percent of subjects, there is activation of the amygdala, the brain region</p><p>central to fear, anxiety, and aggression.[*] In under a tenth of a second.[*]</p><p>And then the PFC does the harder thing. In most of those subjects, a few</p><p>seconds after the amygdala activates, the PFC kicks in, turning off the</p><p>amygdala. It’s a delayed frontocortical voice—“Don’t think that way. That’s</p><p>not who I am.” And who are the folks in which the PFC doesn’t muzzle the</p><p>amygdala? People whose racism is avowedly, unapologetically explicit</p><p>—“That is who I am.”[13]</p><p>In another experimental paradigm, a subject in a brain scanner plays an</p><p>online game with two other people—each is represented by a symbol on the</p><p>screen, forming a triangle. They toss a virtual ball around—the subject</p><p>presses one of two buttons, determining which of the two symbols the ball</p><p>is tossed to; the other two toss it to each other, toss it back to the subject.</p><p>This goes on for a while, everyone having a fine time, and then, oh no, the</p><p>other two people stop tossing the ball to the subject. It’s the middle-school</p><p>nightmare: “They know I’m a dork.” The amygdala rapidly activates, along</p><p>with the insular cortex, a region associated with disgust and distress. And</p><p>then, after a delay, the PFC inhibits these other regions—“Get this in</p><p>perspective; this is just a stupid game.” In a subset of individuals, however,</p><p>the PFC doesn’t activate as much, and the amygdala and insular cortex just</p><p>keep going, as the subject feels more subjective distress. Who are these</p><p>impaired individuals? Teenagers—the PFC isn’t up to the task yet of</p><p>dismissing social ostracism as meaningless. There you have it.[*], [14]</p><p>More of the PFC reining in the amygdala. Give a volunteer a mild shock</p><p>now and then; the amygdala majorly wakes up each time. Now condition</p><p>the volunteer: just before each shock, show them a picture of some object</p><p>with completely neutral associations—say, a pot, a pan, a broom, or a hat.</p><p>Soon the mere sight of that previously innocuous object activates the</p><p>amygdala.[*] The next day, show the subject a picture of that object that</p><p>activates a conditioned fear response in them. Amygdala activation. Except</p><p>today, there’s no shock. Do it again, and again. Each time, no shock. And</p><p>slowly you “extinguish” the fear response; the amygdala stops reacting.</p><p>Unless the PFC isn’t working. Yesterday it was the amygdala that learned</p><p>“brooms are scary.” Today it is the PFC that learns, “but not today,” and</p><p>calms down the amygdala.[*],[15]</p><p>More insight into the PFC comes from brilliant studies by neuroscientist</p><p>Josh Greene of Harvard. Subjects in a brain scanner play repeated rounds of</p><p>a chance guessing game with a 50 percent success rate. 
Then comes the</p><p>fiendishly clever manipulation. Tell subjects there’s been a computer glitch</p><p>so that they can’t enter their guess; that’s okay, they’re told, we’ll show you</p><p>the answer and you can just tell us whether you were right. In other words,</p><p>an opportunity to cheat. Throw in enough of those there-goes-that-</p><p>computer-glitch-again opportunities, and you can tell if someone starts</p><p>cheating—their success rate averages above 50 percent. What happens in</p><p>the brains of cheaters when temptation arises? Massive activation of the</p><p>PFC, the neural equivalent of the person wrestling with whether to cheat.[16]</p><p>And then for the profound additional finding. What about the people</p><p>who never cheated—how do they do it? Maybe their astonishingly strong</p><p>PFC pins Satan to the mat each time. Major willpower. But that’s not what</p><p>happens. In those folks, the PFC doesn’t stir. At some point after “don’t pee</p><p>in your pants” no longer required the PFC to flex its muscles, an equivalent</p><p>happened in such individuals, generating an automatic “I don’t cheat.” As</p><p>framed by Greene, rather than withstanding the siren call of sin thanks to</p><p>“will,” this instead represents a state of “grace.” Doing the right thing isn’t</p><p>the harder thing.</p><p>The frontal cortex reins in inappropriate behavior in additional ways.</p><p>One example involves a brain region called the striatum that has to do with</p><p>automatic, habitual behaviors, exactly the sort of things that the amygdala</p><p>can take advantage of by activating. The PFC sends inhibitory projections</p><p>to the striatum as a backup plan—“I warned the amygdala not to do it, but if</p><p>that hothead does it anyway, don’t listen to it.”[17]</p><p>What happens to social behavior if the PFC is damaged? A syndrome of</p><p>“frontal disinhibition.” We all have thoughts—hateful, lustful, boastful,</p><p>petulant—we’d be mortified if anyone knew. Be frontally disinhibited and</p><p>you say and do exactly those things. When one of those diseases[*] occurs in</p><p>an eighty-year-old, it’s off to a neurologist. When it’s a fifty-year-old, it’s</p><p>usually a psychiatrist. Or the police. As it turns out, a substantial percentage</p><p>of people incarcerated for violent crime have a history of concussive head</p><p>trauma to the PFC.[18]</p><p>COGNITION VERSUS EMOTION, COGNITION AND</p><p>EMOTION, OR COGNITION VIA EMOTION?</p><p>Thus, the frontal cortex isn’t just this cerebral, eggheady brain region</p><p>weighing the pluses and minuses of each decision, sending nice rational</p><p>Libetian commands to the motor cortex—i.e., an excitatory role. It’s also an</p><p>inhibitory, rule-bound goody-goody telling more emotional parts of the</p><p>brain not to do something because they’re going to regret it. And basically,</p><p>those other brain regions think of the PFC as this moralizing pain with a</p><p>stick up its butt, especially when it turns out to be right. This generates a</p><p>dichotomy (spoiler alert: it’s false), that there is a major fault line between</p><p>thought and emotion, between the cortex, captained by the PFC, and the</p><p>part of the brain that processes emotions (broadly called the limbic system,</p><p>containing the amygdala along with other structures[*] related to sexual</p><p>arousal, maternal behavior, sadness, pleasure, aggression . . .).</p><p>A picture of a war of wills between the PFC and the limbic system</p><p>certainly makes sense by now. 
After all, it’s the former telling the latter to</p><p>stop those implicit racist thoughts, to put a stupid game in perspective, to</p><p>resist cheating. And it’s the latter that runs wild with crazy stuff when the</p><p>PFC is silent—e.g., during REM sleep, when you’re dreaming. But it’s not</p><p>always the two regions wrestling.[*] Sometimes they simply have different</p><p>purviews. The PFC handles April 15; the limbic system, February 14. The</p><p>former makes you grudgingly respect Into the Woods; the latter makes you</p><p>tearful during Les Mis, despite knowing that you’re being manipulated. The</p><p>former is engaged when juries decide guilt or innocence; the latter, when</p><p>they decide how much to punish the guilty.[19]</p><p>But—and this is a truly key point—rather than the PFC and limbic</p><p>system either being in opposition or ignoring each other, they are usually</p><p>intertwined. In order to do the correct, harder thing, the PFC requires a huge</p><p>amount of limbic, emotional input.</p><p>To appreciate this, we must sink deeper into minutiae, considering two</p><p>subregions of the PFC.</p><p>The first is the dorsolateral PFC (dlPFC), the definitive rational decider</p><p>in the frontal cortex. Like a Russian nesting doll, the cortex is the newest</p><p>part of the brain to evolve, the frontal cortex is the newest part of the cortex,</p><p>the PFC is the newest part of the frontal cortex, and the dlPFC is the newest</p><p>part of the PFC. The dlPFC is the last part of the PFC to fully mature.</p><p>The dlPFC is the essence of the PFC as tight-assed superego. It’s the</p><p>most active part of the PFC during “count the months backward” tasks, or</p><p>when considering temptation. It is fiercely utilitarian—more dlPFC activity</p><p>during a moral-judgment task predicts that the subject chooses to kill an</p><p>innocent person to save five.[20]</p><p>What happens when the dlPFC is silenced is really informative. This can</p><p>be done experimentally with an immensely cool technique called</p><p>transcranial magnetic stimulation (TMS—introduced on page 26 in the</p><p>footnote), in which a strong magnetic pulse to the scalp can temporarily</p><p>activate or inactivate the small patch of cortex just below. Activate the</p><p>dlPFC this way, and subjects become more utilitarian in deciding to</p><p>sacrifice one to save many. Inactivate the dlPFC, and subjects become more</p><p>impulsive—they rate a lousy offer in an economic game as unfair but lack</p><p>the self-control needed to hold out for a better reward. This is all about</p><p>sociality—manipulating the dlPFC has no effect if subjects think their</p><p>opponent is a computer.[*], [21]</p><p>Then there are people who have sustained selective damage to their</p><p>dlPFC. The outcome is just what you’d expect—impaired planning or</p><p>gratification postponement, perseveration on strategies that offer immediate</p><p>reward, plus poor executive control over socially inappropriate behavior. A</p><p>brain with no voice saying, “I wouldn’t do that if I were you.”</p><p>The other key subregion of the PFC is called the ventromedial PFC</p><p>(vmPFC), and to savagely simplify, it’s the opposite of the dlPFC. That</p><p>cerebral dlPFC is mostly getting inputs from other cortical regions,</p><p>canvassing the outer districts to find out their well-considered thoughts. 
But</p><p>the vmPFC carries in information from the limbic system, that brain region</p><p>that’s swoony or overwrought with emotion—the vmPFC is how the PFC</p><p>finds out what you’re feeling.[*]</p><p>What happens if the vmPFC is damaged? Great things, if you’re not big</p><p>on emotion. For that crowd, we are at our best when we are rational,</p><p>optimizing machines, thinking our way to our best moral decisions. In this</p><p>view, the limbic system gums up decision-making by being all sentimental,</p><p>sings too loud, dresses flamboyantly, has unsettling amounts of armpit hair.</p><p>In this view, if we just could get rid of the vmPFC, we’d be calmer, more</p><p>rational, and function better.</p><p>As a deeply significant finding, someone with vmPFC damage makes</p><p>terrible decisions, but of a very different type from those with dlPFC</p><p>damage. For starters, people with vmPFC damage have trouble making</p><p>decisions, because they’re not getting gut feelings about how they should</p><p>decide. When we are making a decision, the dlPFC is musing</p><p>philosophically, running thought experiments about what decision to make.</p><p>What the vmPFC is reporting to the dlPFC are the results of a feel</p><p>experiment. “How will I feel if I do X and Z then happens?” And without</p><p>that gut-feeling input, it’s immensely hard to make decisions.[22]</p><p>Moreover, the decisions made can be wrong by anyone’s standards.</p><p>People with vmPFC damage don’t shift their behavior based on negative</p><p>feedback. Suppose subjects are repeatedly choosing between two tasks, one</p><p>of which is more rewarding. Switch which task is the more rewarding one,</p><p>and people typically shift their strategy accordingly (even if they’re not</p><p>consciously aware of the change in reward rates). But with vmPFC damage,</p><p>the person can even say that it’s the other task that is now more</p><p>rewarding . . . while sticking with the previous task. Without a vmPFC, you</p><p>still know what negative feedback means, but not how it feels.[23]</p><p>As we saw, dlPFC damage produces inappropriate, emotionally</p><p>disinhibited behaviors. But without a vmPFC, you desiccate into heartless</p><p>detachment. This is the person who, meeting someone, says, “Hello, good</p><p>to meet you. I see that you’re quite overweight.” And when castigated later</p><p>by their mortified partner will ask with calm puzzlement, “What’s wrong?</p><p>It’s true.” Unlike most people, those with vmPFC damage don’t advocate</p><p>harsher punishment for violent versus nonviolent crimes, don’t alter game</p><p>play if they think they’re playing against a computer rather than a human,</p><p>and don’t distinguish between a loved one and a stranger when deciding</p><p>whether to sacrifice them in order to save five people. The vmPFC is not</p><p>the vestigial appendix of the PFC, where emotion is like appendicitis,</p><p>inflaming a sensible brain. Instead, it’s essential.</p><p>So the PFC does the harder thing when it’s the right thing to do. But as a</p><p>crucial point, right is used in a neurobiological and instrumental sense</p><p>rather than a moral one.</p><p>Consider lying, and the obvious role the PFC plays in resisting the</p><p>temptation to lie. But you also use the PFC to lie competently; pathological</p><p>liars, for example, have atypically complex wiring in the PFC. Moreover,</p><p>lying competently is value-free, amoral. 
A child schooled in situational ethics lies about how she loves the dinner that Grandma made. A Buddhist monk plays liar’s dice superbly. A dictator fabricates the occurrence of a massacre as an excuse to invade a country. A spawn of Ponzi defrauds investors. As with much about the frontal cortex, it’s context, context, context.

With this tour of the PFC complete, we return to the hideously destructive false dichotomy between your attributes, those natural gifts and weaknesses that you just happen to have, and your supposedly freely chosen choices as to what you do with those attributes.

“Biological stuff” | Do you have grit?
Having destructive sexual urges | Do you resist acting upon them?
Being a natural marathoner | Do you fight through the pain?
Not being all that bright | Do you triumph by studying extra hard?
Having a proclivity toward alcoholism | Do you order ginger ale instead?
Having a beautiful face | Do you resist concluding that you’re entitled to people being nice to you because of it?

THE SAME EXACT STUFF

Look once again at the actions in the right column, those crossroads that test our mettle. Do you resist acting on your destructive sexual urges? Do you fight through the pain, work extra hard to overcome your weaknesses? You can see where this is heading. If you want to finish this paragraph and then skip the rest of the chapter, here are the three punch lines: (a) grit, character, backbone, tenacity, strong moral compass, willing spirit winning out over weak flesh, are all produced by the PFC; (b) the PFC is made of biological stuff identical to the rest of your brain; (c) your current PFC is the outcome of all that uncontrollable biology interacting with all that uncontrollable environment.

Chapter 3 explored the biological answer to the question, Why did that behavior just occur?, the answer being, because of what came a second before, and a minute before, and . . . Now we ask the more focused question of why that PFC functioned the way it did just now. And it’s the same answer.

THE LEGACY OF THE PRECEDING SECONDS TO AN HOUR

You sit there, alert, on task. Each time the blue light comes on, you rapidly hit the button on the left; red light, button on the right. Then, the rule reverses—blue right, red left. Then it reverses again, and then again . . .

What’s going on in your brain during this task? Each time a light flashes, your visual cortex briefly activates. An instant later, there’s brief activation of the pathway carrying that information from the visual cortex to the PFC. An instant later, the pathways from there to your motor cortex and then from your motor cortex to your muscles activate. What’s happening IN the PFC? It’s sitting there having to focus, repeating, “Blue left, red right” or “Blue right, red left.” It’s working hard the entire time, chanting which rule is in effect. When you’re trying to do the right, harder thing, the PFC becomes the most expensive part of the brain.

Expensive. Nice metaphor. But it’s not a metaphor. Any given neuron in the PFC is firing nonstop, each action potential triggering waves of ions flowing across membranes and then having to be corralled and pumped back to where they started.
And those action potentials can occur a hundred</p><p>times a second while you’re concentrating on the rule that is now in place.</p><p>Those PFC neurons consume mammoth amounts of energy.</p><p>You can demonstrate this with brain-imaging techniques, showing how a</p><p>working PFC consumes tons of glucose and oxygen from the bloodstream,</p><p>or by measuring how much biochemical cash is available in each neuron at</p><p>any given time.[*] Which leads to the main point</p><p>of this section—when the</p><p>PFC doesn’t have enough energy on board, it doesn’t work well.</p><p>This is the cellular underpinning of concepts like “cognitive load” or</p><p>“cognitive reserve,” alluded to in chapter 3.[*] As your PFC works hard on a</p><p>task, those reserves are depleted.[24]</p><p>For example, place a bowl of M&M’s in front of someone dieting.</p><p>“Here, have all you want.” They’re trying to resist. And if the person has</p><p>just done something frontally demanding, even some idiotically irrelevant</p><p>red light / blue light task, the person snacks on more candy than usual. In</p><p>the words of part of the charming title of a paper on the subject, “Deplete us</p><p>not into temptation.” Same thing in reverse—deplete frontal reserve by</p><p>sitting for fifteen minutes resisting those M&M’s, and afterward you’ll be</p><p>lousy at red light / blue light.[25]</p><p>PFC function and self-regulation go down the tubes if you’re terrified or</p><p>in pain—the PFC is using up energy dealing with the stress. Recall the</p><p>Macbeth effect, where reflecting on something unethical you once did</p><p>impairs frontal cognition (unless you’ve relieved yourself of that</p><p>burdensome soiling by washing your hands). Frontal competence even</p><p>declines if it’s keeping you from being distracted by something positive—</p><p>patients are more likely to die as a result of surgery if it is the surgeon’s</p><p>birthday.[26]</p><p>Fatigue also depletes frontal resources. As the workday progresses,</p><p>doctors take the easier way out, ordering up fewer tests, being more likely</p><p>to prescribe opiates (but not a nonproblematic drug like an anti-</p><p>inflammatory, or physical therapy). Subjects are more likely to behave</p><p>unethically and become less morally reflective as the day progresses, or</p><p>after they’ve struggled with a cognitively challenging task. In an immensely</p><p>unsettling study of emergency room doctors, the more cognitively</p><p>demanding the workday (as measured by patient load), the higher the levels</p><p>of implicit racial bias by the end of the day.[27]</p><p>It’s the same with hunger. Here’s one study that should stop you in your</p><p>tracks (and was first referred to in the last chapter). The researchers studied</p><p>a group of judges overseeing more than a thousand parole board decisions.</p><p>What best predicted whether a judge granted someone parole versus more</p><p>jail time? How long it had been since they had eaten a meal. Appear before</p><p>the judge soon after she’s had a meal, and there was a roughly 65 percent</p><p>chance of parole; appear a few hours after a meal, and there was close to a 0</p><p>percent chance.[*], [28]</p><p>What’s that about? It’s not like judges would get light-headed by late</p><p>afternoon, slurring their words, getting all confused, and jailing the court</p><p>stenographer. 
Nobel laureate psychologist Daniel Kahneman, in discussing</p><p>this study, suggests that as the hours since a meal creep by, and the PFC</p><p>becomes less adept at focusing on the details of each case, the judge</p><p>becomes more likely to default into the easiest, most reflexive thing, which</p><p>is sending the person back to jail. Important support for this idea comes</p><p>from a study in which subjects had to make judgments of increasing</p><p>complexity; as this progressed, the more sluggish the dlPFC became during</p><p>deliberating, the more likely subjects were to fall back on a habitual</p><p>decision.[29]</p><p>Why is denying parole the easy, habitual response to fall back on?</p><p>Because it’s less demanding of the PFC. Someone is facing you who has</p><p>done bad things but has been behaving himself in jail. It takes a mighty</p><p>energetic PFC to try to understand, to feel, what the prisoner’s life—filled</p><p>with horrible luck—has been like, to view the world from his perspective,</p><p>to search his face and see those hints of change and potential beneath the</p><p>toughness. It takes a lot of frontal effort for a judge to walk in a prisoner’s</p><p>shoes before deciding on his parole. And reflecting that, across all those</p><p>judicial decisions, judges averaged a longer length of time before deciding</p><p>to parole the person rather than before sending them back to jail.[*],[*], [30]</p><p>Thus, events in the world around you will be modulating the ability of</p><p>your PFC to resist those M&M’s, or a quick, easy judicial decision. Another</p><p>relevant factor is the brain chemistry of just how tempting the temptation is.</p><p>This has a lot to do with the neurotransmitter dopamine being released into</p><p>the PFC from neurons originating back in the nucleus accumbens in the</p><p>limbic system. What is the dopamine doing in the PFC? Signaling the</p><p>salience of a temptation, how much your neurons are imagining how great</p><p>M&M’s taste. The more of a dopamine dump in the PFC, the stronger the</p><p>salience signal of the temptation, the more of a challenge it is for the PFC to</p><p>resist. Boost dopamine levels in your PFC, and you’ll suddenly have trouble</p><p>keeping a lid on your impulses.[*] And exactly as you’d expect, there’s a</p><p>whole world of factors out of your control influencing the amount of</p><p>dopamine that is going to be soaking your PFC (i.e., understanding the</p><p>dopamine system also requires a one-second-before, one-century-before . . .</p><p>analysis).[31]</p><p>In those seconds to hours before, sensory information modulates PFC</p><p>function without your awareness. Have a subject smell a vial of sweat from</p><p>someone frightened, and her amygdala activates, making it harder for the</p><p>PFC to rein it in.[*] How’s this for rapidly altering frontal function—take an</p><p>average heterosexual male and expose him to a particular stimulus, and his</p><p>PFC becomes more likely to decide that jaywalking is a good idea. What’s</p><p>the stimulus? The proximity of an attractive woman. I know, pathetic.[*],[32]</p><p>Thus, all sorts of things often out of your control—stress, pain, hunger,</p><p>fatigue, whose sweat you’re smelling, who’s in your peripheral vision—can</p><p>modulate how effectively your PFC does its job. Usually without your</p><p>knowing it’s happening. No judge, if asked why she just made her judicial</p><p>decision, cites her blood glucose levels. 
Instead, we’re going to hear a</p><p>philosophical discourse about some bearded dead guy in a toga.</p><p>To ask a question derived from the last chapter, do findings like these</p><p>prove that there’s no such thing as freely chosen grit? Even if the sizes of</p><p>these effects were enormous (which they rarely are, although 65 percent</p><p>versus nearly 0 percent parole rates in the judge/hunger study sure isn’t</p><p>minor), not on their own. We now zoom out more.</p><p>THE LEGACY OF THE PRECEDING HOURS TO DAYS</p><p>This lands us in the realm of what hormones have been doing to the PFC</p><p>when you need to show what would be interpreted as some agentic grit.</p><p>As a reminder from the last chapters, elevations of testosterone during</p><p>this time frame make people more impulsive, more self-confident and risk-</p><p>taking, more self-centered, less generous or empathic, and more likely to</p><p>react aggressively to a provocation. Glucocorticoids and stress make people</p><p>poorer at executive function and impulse control and more likely to</p><p>perseverate on a habitual response to a challenge that isn’t working, instead</p><p>of changing strategies. Then there’s oxytocin, which enhances trust,</p><p>sociality, and social recognition. Estrogen enhances executive function,</p><p>working memory, and impulse control and makes people better at rapidly</p><p>switching tasks when needed.[33]</p><p>Lots of these hormonal effects play out in the PFC. Have a horribly</p><p>stressed morning, and by noon, glucocorticoids will have changed gene</p><p>expression in the dlPFC, making it less excitable and less able to couple to</p><p>the amygdala and calm it down. Meanwhile, stress and glucocorticoids</p><p>make that emotional vmPFC more excitable and more impervious to</p><p>negative feedback about social behavior. Stress also causes release in the</p><p>PFC of a neurotransmitter called norepinephrine (sort of the brain’s</p><p>equivalent of adrenaline), which also disrupts the dlPFC.[34]</p><p>In that time span, testosterone will have changed the expression of genes</p><p>in neurons in another part of the PFC (called the orbitofrontal cortex),</p><p>making them more sensitive to an inhibitory neurotransmitter, quieting the</p><p>neurons, and decreasing their ability</p><p>to talk sense to the limbic system.</p><p>Testosterone also reduces the coupling between one part of the PFC and a</p><p>region implicated in empathy; this helps explain why the hormone makes</p><p>people less accurate at assessing someone’s emotions by looking at their</p><p>eyes. Meanwhile, oxytocin has its prosocial effects by strengthening the</p><p>orbitofrontal cortex and by changing the rates at which the vmPFC utilizes</p><p>the neurotransmitters serotonin and dopamine. Then there’s estrogen, which</p><p>not only increases the number of receptors for the neurotransmitter</p><p>acetylcholine but even changes the structure of neurons in the vmPFC.[*],</p><p>[35]</p><p>Please tell me that you haven’t been writing down and starting to</p><p>memorize these factoids. 
The point is the mechanistic nature of all this.</p><p>Depending on where you are in your ovulatory cycle, if it’s the middle of</p><p>the night or day, if someone gave you a wonderful hug that’s left you still</p><p>tingling, or someone gave you a threatening ultimatum that’s left you still</p><p>trembling—gears and widgets in your PFC will be working differently.</p><p>And, as before, rarely with large enough effects to spell doom for the myth</p><p>of grit all on their own. Just another piece.</p><p>THE LEGACY OF THE PRECEDING DAYS TO YEARS</p><p>Chapter 3 covered how over this time span, the structure and function of the</p><p>brain can change dramatically. Recall how years of depression can cause the</p><p>hippocampus to atrophy, how the sort of trauma that produces PTSD can</p><p>enlarge the amygdala. Naturally, neuroplasticity in response to experience</p><p>occurs in the PFC as well. Suffer from major depression or, to a lesser</p><p>extent, a major anxiety disorder for years, and the PFC atrophies; the longer</p><p>the mood disorder persists, the greater the atrophy. Prolonged stress or</p><p>exposure to stress levels of glucocorticoids accomplishes the same; the</p><p>hormone suppresses the level or efficacy of a key neuronal growth factor</p><p>called BDNF[*] in the PFC, causing dendritic spines and dendritic branches</p><p>to retract so much that the layers of the PFC thin out. This impairs PFC</p><p>function, including a really unhelpful twist: As noted, when activated, the</p><p>amygdala helps initiate the body’s stress response (including the secretion</p><p>of glucocorticoids). The PFC works to end this stress response by calming</p><p>down the amygdala. Elevated glucocorticoid levels impair PFC function;</p><p>the PFC isn’t as good at calming the amygdala, resulting in the person</p><p>secreting ever higher levels of glucocorticoids, which then impair . . . A</p><p>vicious cycle.[36]</p><p>The list of other regulators stretches out. Estrogen causes PFC neurons</p><p>to form thicker, more complex branches connecting to other neurons;</p><p>remove estrogen entirely and some PFC neurons die. Alcohol abuse</p><p>destroys neurons in that orbitofrontal cortex, causing it to shrink; the more</p><p>shrinkage, the more likely an abstinent alcoholic is to relapse. Chronic</p><p>cannabis use decreases blood flow and activity in both the dlPFC and the</p><p>vmPFC. Exercise aerobically on a regular basis, and genes related to</p><p>neurotransmitter signaling are turned on in the PFC, more BDNF growth</p><p>factor is made, and coupling of activity among various PFC subregions</p><p>becomes tighter and more efficient; roughly the opposite happens with</p><p>eating disorders. The list goes on and on.[37]</p><p>Some of these effects are subtle. If you want to see something unsubtle,</p><p>watch what happens days to years after the PFC is damaged by a traumatic</p><p>brain injury (TBI—à la Phineas Gage), or frontotemporal dementia redux.</p><p>Extensive damage to the PFC increases the likelihood long after of</p><p>disinhibited behavior, antisocial tendencies, and violence, a phenomenon</p><p>that has been called “acquired sociopathy”[*]—remarkably, such individuals</p><p>can tell you that, say, murder is wrong; they know, but they just can’t</p><p>regulate their impulses. 
Roughly half the people incarcerated for violent</p><p>antisocial criminality have a history of TBI, versus about 8 percent of the</p><p>general population; having had a TBI increases the likelihood of recidivism</p><p>in prison populations. Moreover, neuroimaging studies reveal elevated rates</p><p>of structural and functional abnormalities in the PFC among prisoners with</p><p>a history of violent, antisocial criminality.[*],[38]</p><p>Then there’s the effect of decades of experiencing racial discrimination,</p><p>which is a predictor of poor health in every corner of the body. African</p><p>Americans with more severe histories of suffering discrimination (based on</p><p>the score from a questionnaire, after controlling for PTSD and trauma</p><p>history) have greater resting levels of activity in the amygdala and greater</p><p>coupling between the amygdala and the downstream brain regions that it</p><p>activates. If the subjects in that miserable social-exclusion paradigm (where</p><p>the other two players stop throwing the virtual ball to you) are African</p><p>American, the more the ostracizing is attributed to racism, the more vmPFC</p><p>activation there is. In another neuroimaging study, performance on a frontal</p><p>task declined in subjects primed with pictures of spiders (versus birds);</p><p>among African American subjects, the more of a history of discrimination,</p><p>the more spiders activated the vmPFC and the more performance declined.</p><p>What are the effects of a history of prolonged discrimination? A brain that</p><p>is in a resting state of don’t-let-your-guard-down vigilance, that is more</p><p>reactive to perceived threat, and a PFC burdened by a torrent of reporting</p><p>from the vmPFC about this constant state of dis-ease.[39]</p><p>To summarize this section, when you try to do the harder thing that’s</p><p>better, the PFC you’re working with is going to be displaying the</p><p>consequences of whatever the previous years have handed you.</p><p>THE LEGACY OF THE TIME OF PIMPLES</p><p>Take the previous paragraph, replace the previous years with adolescence,</p><p>underline the entire section, and you’re all set. Chapter 3 provided the basic</p><p>facts: (a) when you’re an adolescent, your PFC still has a ton of</p><p>construction ahead of it; (b) in contrast, the dopamine system, crucial to</p><p>reward, anticipation, and motivation, is already going full blast, so the PFC</p><p>hasn’t a prayer of effectively reining in thrill seeking, impulsivity, craving</p><p>of novelty, meaning that adolescents behave in adolescent ways; (c) if the</p><p>adolescent PFC is still a construction site, this time of your life is the last</p><p>period that environment and experience will have a major role in</p><p>influencing your adult PFC;[*] (d) delayed frontocortical maturation has to</p><p>have evolved precisely so that adolescence has this influence—how else are</p><p>we going to master discrepancies between the letter and the spirit of laws of</p><p>sociality?</p><p>Thus, adolescent social experience, for example, will alter how the PFC</p><p>regulates social behavior in adults. How? Round up all the usual suspects.</p><p>Lots of glucocorticoids, lots of stress (physical, psychological, social)</p><p>during adolescence, and your PFC won’t be its best self in adulthood. 
There will be fewer synapses and less complex dendritic branching in the mPFC and orbitofrontal cortex, along with permanent changes in how PFC neurons respond to the excitatory neurotransmitter glutamate (due to persistent changes in the structure of one of the main glutamate receptors). The adult PFC will be less effective in inhibiting the amygdala, making it harder to unlearn conditioned fear and less effective at inhibiting the autonomic nervous system from overreacting to being startled. Impaired impulse control, impaired PFC-dependent cognitive tasks. The usual.[40]

Conversely, an enriched, stimulating environment during adolescence has great effects on the resulting adult PFC and can reverse some of the effects of childhood adversity. For example, an enriched environment during adolescence causes permanent changes in gene regulation in the PFC, producing higher adult levels of neuronal growth factors like BDNF. Furthermore, while prenatal stress causes reductions in BDNF levels in the adult PFC (stay tuned), adolescent enrichment can reverse this effect. All changes that improve the PFC’s ability for impulse control and gratification postponement. So if you want to be better at doing the harder thing as an adult, make sure you pick the right adolescence.[41]

FURTHER BACK

Now go back to the paragraph you underlined, discussing “whatever adolescence has handed you,” replace adolescence with childhood, and underline the paragraph eighteen more times. Whaddaya know, the sort of childhood you had shapes the construction of the PFC at the time and the sort of PFC you’ll have in adulthood.[*]

For example, no surprise, childhood abuse produces kids with a smaller PFC, with less gray matter and with changes in circuitry: less communication among different subregions of the PFC, less coupling between the vmPFC and the amygdala (and the bigger the effect, the more prone the child is to anxiety). Synapses in the brain are less excitable; there are changes in the numbers of receptors for various neurotransmitters and changes in gene expression and patterns of epigenetic marking of genes—along with impaired executive function and impulse control in the child. Many of these effects occur in the first half decade or so of life. One might raise a cart-and-horse issue—the assumption in this section is that abuse causes these changes in the brain. What about the possibility that kids who already have these differences behave in ways that make them more likely to be abused? This is highly unlikely—the abuse typically precedes the behavioral changes.[42]

Unsurprising as well is that these changes in the PFC in childhood can persist into adulthood. Childhood abuse produces an adult PFC that is smaller, thinner, and with less gray matter, altered PFC activity in response to emotional stimuli, altered levels of receptors for various neurotransmitters, weakened coupling between both the PFC and dopaminergic “reward” regions (predicting increased depression risk), and weakened coupling with the amygdala as well, predicting more of a tendency to respond to frustration with anger (“trait anger”).
And once</p><p>again, all of these changes are associated with an adult PFC that isn’t at its</p><p>best.[43]</p><p>Thus, childhood abuse produces a different adult PFC. And grimly,</p><p>having been abused as a child produces an adult with an increased</p><p>likelihood of abusing their own child; at one month of age, PFC circuitry is</p><p>already different in children whose mothers were abused in childhood.[44]</p><p>These findings concern two groups of people—abused in childhood or</p><p>not. What about looking at the full spectrum of luck? How about the effects</p><p>of childhood socioeconomic status on our realm of supposed grit?</p><p>No surprise, the socioeconomic status of a child’s family predicts the</p><p>size, volume, and gray matter content of the PFC in kindergarteners. Same</p><p>thing in toddlers. In six-month-olds. In four-week-olds. You want to scream</p><p>at how unfair life can be.[45]</p><p>All the individual pieces of these findings flow from that.</p><p>Socioeconomic status predicts how much a young child’s dlPFC activates</p><p>and recruits other brain regions during an executive task. It predicts more</p><p>responsiveness of the amygdala to physical or social threat, a stronger</p><p>activation signal carrying this emotional response to the PFC via the</p><p>vmPFC. And such status predicts every possible measure of frontal</p><p>executive function in kids; naturally, lower socioeconomic status predicts</p><p>worse PFC development.[46]</p><p>There are hints as to the mediators. By age six, low status is already</p><p>predicting elevated glucocorticoid levels; the higher the levels, the less</p><p>activity in the PFC on average.[*] Moreover, glucocorticoid levels in kids</p><p>are influenced not only by the socioeconomic status of the family but by</p><p>that of the neighborhood as well.[*] Increased amounts of stress mediate the</p><p>relationship between low status and less PFC activation in kids. As a related</p><p>theme, lower socioeconomic status predicts a less stimulating environment</p><p>for a child—all those enriching extracurricular activities that can’t be</p><p>afforded, the world of single mothers working multiple jobs who are too</p><p>exhausted to read to their child. As one shocking manifestation of this, by</p><p>age three, your average high-socioeconomic status kid has heard about</p><p>thirty million more words at home than a poor kid, and in one study, the</p><p>relationship between socioeconomic status and the activity of a child’s PFC</p><p>was partially mediated by the complexity of language use at home.[47]</p><p>Awful. Given the start of constructing the frontal cortex during this</p><p>period, it wouldn’t be crazy to predict that childhood socioeconomic status</p><p>predicts things in adults. Childhood status (independent of the status</p><p>achieved in adulthood) is a significant predictor of glucocorticoid levels, the</p><p>size of the orbitofrontal cortex, and performance of PFC-dependent tasks in</p><p>adulthood. Not to mention incarceration rates.[48]</p><p>Miseries like childhood poverty and childhood abuse are incorporated in</p><p>someone’s Adverse Childhood Experiences (ACE) score. As we saw in the</p><p>last chapter, it queries whether someone experienced or witnessed physical,</p><p>emotional, or sexual childhood abuse, physical or emotional neglect, or</p><p>household dysfunction, including divorce, spousal abuse, or a family</p><p>member mentally ill, incarcerated, or struggling with substance abuse. 
With</p><p>each increase in someone’s ACE score, there’s an increased likelihood of a</p><p>hyperreactive amygdala that has expanded in size and a sluggish PFC that</p><p>never fully developed.[49]</p><p>Let’s push the bad news one step further, into chapter 3’s realm of</p><p>prenatal environmental effects. Low socioeconomic status for a pregnant</p><p>woman or her living in a high-crime neighborhood both predict less cortical</p><p>development at the time of the baby’s birth. Even back when the child was</p><p>still in utero.[*] And naturally, high levels of maternal stress during</p><p>pregnancy (e.g., loss of a spouse, natural disasters, or maternal medical</p><p>problems that necessitate treatment with lots of synthetic glucocorticoids)</p><p>predict cognitive impairment across a wide range of measures, poorer</p><p>executive function, decreased gray matter volume in the dlPFC, a</p><p>hyperreactive amygdala, and a hyperreactive glucocorticoid stress response</p><p>when those fetuses become adults.[*],[50]</p><p>An ACE score, a fetal adversity score, last chapter’s Ridiculously Lucky</p><p>Childhood Experience score—they all tell the same thing. It takes a certain</p><p>kind of audacity and indifference to look at findings like these and still</p><p>insist that how readily someone does the harder things in life justifies</p><p>blame, punishment, praise, or reward. Just ask those fetuses in the womb of</p><p>a low-socioeconomic-status woman, already paying a neurobiological price.</p><p>THE LEGACY OF THE GENES YOU WERE HANDED,</p><p>AND THEIR EVOLUTION</p><p>Genes have something to do with the sort of PFC you have. Big shocker—</p><p>as described in the last chapter, the growth factors, enzymes that generate or</p><p>break down neurotransmitters, receptors for neurotransmitters and</p><p>hormones, etc., etc., are all made of protein, meaning that they are coded for</p><p>by genes.</p><p>The notion that genes have something to do with all this can be totally</p><p>superficial and uninteresting. Differences between the type of genes</p><p>possessed by particular species help explain why a frontal cortex occurs in</p><p>humans but not in barnacles in the sea or heather on the hill. The types of</p><p>genes possessed by humans help explain why the frontal cortex (like the</p><p>rest of the cortex) consists of six layers of neurons and isn’t bigger than</p><p>your skull. However, the sort of genetics that interests us when “genes”</p><p>come into the picture concerns the fact that that particular gene can come in</p><p>different flavors, with these variants differing from one person to the next.</p><p>Thus, in this section, we’re not interested in genes that help form a frontal</p><p>cortex in humans but don’t exist in fungi. We’re interested in the variation</p><p>in versions of genes that helps explain variation in the volume of the frontal</p><p>cortex, its level of activity (as detected with EEG), and performance on</p><p>PFC-dependent tasks.[*] In other words, we’re interested in the variants</p><p>of</p><p>those genes that help explain why two people differ in their likelihood of</p><p>stealing a cookie.[51]</p><p>Nicely, the field has progressed to the point of understanding how</p><p>variants of specific genes relate to frontal function. 
A bunch of them relate to the neurotransmitter serotonin; for example, there's a gene that codes for a protein that removes serotonin from the synapse, and which version of that gene you have influences the tightness of coupling between the PFC and amygdala. Variation in a gene related to the breakdown of serotonin in the synapse helps predict people's performance on PFC-dependent reversal tasks. Variation in the gene for one of the serotonin receptors (there are a lot) helps predict how good people are at impulse control.[*] Those are just about the genetics of serotonin signaling. In a study of the genomes of thirteen thousand people, a complex cluster of gene variants predicted an increased likelihood of impulsive, risky behavior; the more of those variants someone had, the smaller their dlPFC.[52]

A crucial point about genes related to brain function (well, pretty much all genes) is that the same gene variant will work differently, sometimes even dramatically differently, in different environments. This interaction between gene variant and variation in environment means that, ultimately, you can't say what a gene "does," only what it does in each particular environment in which it has been studied. And as a great example of this, variation in the gene for one type of serotonin receptor helps explain impulsivity in women . . . but only if they have an eating disorder.[53]

The section on adolescence considered why dramatic delayed maturation of the PFC evolved in humans and how that makes that region's construction so subject to environmental influences. How do genes code for freedom from genes? In at least two ways. The first, straightforward, way involves the genes that influence how rapidly PFC maturation occurs.[*] The second way is subtler and elegant—genes relevant to how sensitive the PFC will be to different environments. Consider an (imaginary) gene, coming in two variants, that influences how prone someone is to stealing. A person, on their own, has the same low likelihood, regardless of variant. However, if there's a peer group egging the person on, one variant results in a 5 percent increase in likelihood of succumbing, the other 50 percent. In other words, the two variants produce dramatic differences in sensitivity to peer pressure.

Let's frame this sort of difference more mechanically. Suppose you have an electrical cord that plugs into a socket; when it's plugged in, you don't steal. The socket is made of an imaginary protein that comes in two variants, which determine how wide the slots are that the plug plugs into. In a silent, hermetically sealed room, a plug remains in the socket, regardless of variant. But if a group of taunting, peer-pressuring elephants thunders past, the plug is ten times more likely to vibrate out of the loose-slot socket than the tight one.

And that turns out to be something like a genetic basis for being freer from genes. Work by Benjamin de Bivort at Harvard concerns a gene coding for a protein called teneurin-A, which is involved in synapse formation between neurons. The gene comes in two variants that influence how tightly a cable from one neuron plugs into a teneurin-A socket on the other (to simplify enormously).
Have the loose-socket variant, and the</p><p>result will be more variability in synaptic connectiveness. Or stated our</p><p>way, the loose-socket variant codes for neurons that are more sensitive to</p><p>environmental influences during synapse formation. It’s not known yet if</p><p>teneurins work this way in our brains (these were studies of flies—yes,</p><p>environmental influences even affect synapse formation in flies), but things</p><p>conceptually similar to this have to be occurring in umpteen dimensions in</p><p>our brains.[54]</p><p>THE CULTURAL LEGACY BEQUEATHED TO YOUR</p><p>PFC BY YOUR ANCESTORS</p><p>As we saw in the previous chapter’s overview, different sorts of ecosystems</p><p>generate different sorts of cultures, which affects a child’s upbringing from</p><p>virtually the moment of birth, tilting the brain construction toward ways</p><p>that make it easier for them to fit into the culture. And thus pass its values</p><p>on to the next generation . . .</p><p>Of course, cultural differences majorly influence the PFC. Essentially all</p><p>the studies done concern comparisons between Southeast Asian collectivist</p><p>cultures valuing harmony, interdependence, and conformity, and North</p><p>American individualist ones emphasizing autonomy, individual rights, and</p><p>personal achievement. And their findings make sense.[*]</p><p>Here’s one you couldn’t make up—in Westerners, the vmPFC activates</p><p>in response to seeing a picture of your own face but not your mother’s; in</p><p>East Asians, the vmPFC activates equally for both; these differences</p><p>become even more extreme if you prime subjects beforehand to think about</p><p>their cultural values. Study bicultural individuals (i.e., with one collectivist</p><p>culture parent, one individualist); prime them to think about one culture or</p><p>the other, and they then show that culture’s typical profile of vmPFC</p><p>activation.[55]</p><p>Other studies show differences in PFC and emotion regulation. A meta-</p><p>analysis of thirty-five studies neuroimaging subjects during social-</p><p>processing tasks showed that East Asians average higher activity in the</p><p>dlPFC than Westerners (along with activation of a brain region called the</p><p>temporoparietal junction, which is central to theory of mind); this is</p><p>basically a brain more actively working on emotion regulation and</p><p>understanding other people’s perspectives. In contrast, Westerners present a</p><p>picture of more emotional intensity, self-reference, capacity for strongly</p><p>emotional disgust or empathy—higher levels of activity in the vmPFC,</p><p>insula, and anterior cingulate. And these neuroimaging differences are</p><p>greatest in subjects who most strongly espouse their cultural values.[56]</p><p>There are also PFC differences in cognitive style. In general, collectivist-</p><p>culture individuals prefer and excel at context-dependent cognitive tasks,</p><p>while it’s context-independent tasks for individualistic-culture folks. And in</p><p>both populations, the PFC must work harder when subjects struggle with</p><p>the type of task less favored by their culture.</p><p>Where do these differences come from on a big-picture level?[*] As</p><p>discussed in the last chapter, East Asian collectivism is generally thought to</p><p>arise from the communal work demands of floodplain rice farming. 
Recent</p><p>Chinese immigrants to the United States already show the Western</p><p>distinction between activating your vmPFC when thinking about yourself</p><p>and activating it when thinking about your mother. This suggests that</p><p>people back home who were more individualistic were the ones more likely</p><p>to choose to emigrate, a mechanism of self-selection for these traits.[57]</p><p>Where do these differences come from on a smaller-picture level? As</p><p>covered in the last chapter, children are raised differently in collectivist</p><p>versus individualist cultures, with implications for how the brain is</p><p>constructed.</p><p>But in addition, there are probably genetic influences. People who are</p><p>spectacularly successful at expressing their culture’s values tend to leave</p><p>copies of their genes. In contrast, fail to show up with the rest of the village</p><p>during rice-harvesting day because you decided to go snowboarding, or</p><p>disrupt the Super Bowl by trying to persuade the teams to cooperate rather</p><p>than compete—well, such cultural malcontents, contrarians, and weirdos</p><p>are less likely to pass on their genes. And if these traits are influenced at all</p><p>by genes (which they are, as seen in the previous section), this can produce</p><p>cultural differences in gene frequencies. Collectivist and individualist</p><p>cultures differ in the incidence of gene variants related to dopamine and</p><p>norepinephrine processing, variants of the gene coding for the pump that</p><p>removes serotonin from the synapse, and</p><p>variants of the gene coding for the</p><p>receptor in the brain for oxytocin.[58]</p><p>In other words, there’s coevolution of gene frequencies, cultural values,</p><p>child development practices, reinforcing each other over the generations,</p><p>shaping what your PFC is going to be like.</p><p>THE DEATH OF THE MYTH OF FREELY CHOSEN</p><p>GRIT</p><p>We’re pretty good at recognizing that we have no control over the attributes</p><p>that life has gifted or cursed us with. But what we do with those attributes at</p><p>right/wrong crossroads powerfully, toxically invites us to conclude, with the</p><p>strongest of intuitions, that we are seeing free will in action. But the reality</p><p>is that whether you display admirable gumption, squander opportunity in a</p><p>murk of self-indulgence, majestically stare down temptation or belly flop</p><p>into it, these are all the outcome of the functioning of the PFC and the brain</p><p>regions it connects to. And that PFC functioning is the outcome of the</p><p>second before, minutes before, millennia before. The same punch line as in</p><p>the previous chapter concerning the entire brain. And invoking the same</p><p>critical word—seamless. As we’ve seen, talk about the evolution of the</p><p>PFC, and you’re also talking about the genes that evolved, the proteins they</p><p>code for in the brain, and how childhood altered the regulation of those</p><p>genes and proteins. A seamless arc of influences bringing your PFC to this</p><p>moment, without a crevice for free will to lodge in.</p><p>Here’s my favorite finding pertinent to this chapter. There’s a task that</p><p>can be done in two different ways: in version one, do some amount of work</p><p>and you get some amount of reward, but if you do twice as much work you</p><p>get three times as much of a reward. 
Version two: do some amount of work</p><p>and you get some amount of reward, but if you do three times as much</p><p>work, you get a hundred zillion times as much reward. Which version</p><p>should you do? If you think you can freely choose to exercise self-</p><p>discipline, choose version two—you’re going to choose to do a little bit</p><p>more work and get a huge boost in reward as a result. People usually prefer</p><p>version two, independent of the sizes of the rewards. A recent study shows</p><p>that activity in the vmPFC[*] tracks the degree of preference for version</p><p>two. What does that mean? In this setting, the vmPFC is coding for how</p><p>much we prefer circumstances that reward self-discipline. Thus, this is the</p><p>part of the brain that codes for how wisely we think we’ll be exercising free</p><p>will. In other words, this is the nuts-and-bolts biological machinery coding</p><p>for a belief that there are no nuts or bolts.[59]</p><p>Sam Harris argues convincingly that it’s impossible to successfully think</p><p>of what you’re going to think next. The takeaway from chapters 2 and 3 is</p><p>that it’s impossible to successfully wish what you’re going to wish for. This</p><p>chapter’s punchline is that it’s impossible to successfully will yourself to</p><p>have more willpower. And that it isn’t a great idea to run the world on the</p><p>belief that people can and should.</p><p>S</p><p>5</p><p>A Primer on Chaos</p><p>uppose that just before you started reading this sentence, you</p><p>reached to scratch an itch on your shoulder, noted that it’s becoming</p><p>harder to reach that spot, thought of your joints calcifying with age,</p><p>which made you vow to exercise more, and then you got a snack. Well,</p><p>science has officially weighed in—each of those actions or thoughts,</p><p>conscious or otherwise, and every bit of neurobiology underpinning it, was</p><p>determined. Nothing just got it into its head to be a causeless cause.</p><p>No matter how thinly you slice it, each unique biological state was</p><p>caused by a unique state that preceded it. And if you want to truly</p><p>understand things, you need to break these two states down to their</p><p>component parts, and figure out how each component comprising Just-</p><p>Before-Now gave rise to each piece of Now. This is how the universe</p><p>works.</p><p>But what if that isn’t? What if some moments aren’t caused by anything</p><p>preceding them? What if some unique Nows can be caused by multiple,</p><p>unique Just-Before-Nows? What if the strategy of learning how something</p><p>works by breaking it down to its component parts is often useless? As it</p><p>turns out, all of these are the case. Throughout the past century, the previous</p><p>paragraph’s picture of the universe was overturned, giving birth to the</p><p>sciences of chaos theory, emergent complexity, and quantum indeterminacy.</p><p>To label these as revolutions is not hyperbolic. When I was a kid, I read</p><p>a novel called The Twenty-One Balloons,[*] about a utopian society on the</p><p>island of Krakatoa built on balloon technology, destined to be destroyed by</p><p>the famed 1883 eruption of the volcano there. It was fantastic, and the</p><p>second I got to the end, I immediately flipped to the front to reread it. And</p><p>it was then almost a quarter century before I immediately flipped to the</p><p>front to reread a different book,[*] an introduction to one of these scientific</p><p>revolutions.</p><p>Staggeringly interesting stuff. 
This chapter, and the five after it, reviews</p><p>these three revolutions, and how numerous thinkers believe that you can</p><p>find free will in their crevices. I will admit that the previous three chapters</p><p>have an emotional intensity for me. I am put into a detached, professorial,</p><p>eggheady sort of rage by the idea that you can assess someone’s behavior</p><p>outside the context of what brought them to that moment of intent, that their</p><p>history doesn’t matter. Or that even if a behavior seems determined, free</p><p>will lurks wherever you’re not looking. And by the conclusion that</p><p>righteous judgment of others is okay because while life is tough and we’re</p><p>unfairly gifted or cursed with our attributes, what we freely choose to do</p><p>with them is the measure of our worth. These stances have fueled profound</p><p>amounts of undeserved pain and unearned entitlement.</p><p>The revolutions in the next five chapters don’t have that same visceral</p><p>edge. As we’ll see, there aren’t a whole lot of thinkers out there citing, say,</p><p>subatomic quantum indeterminacy when smugly proclaiming that free will</p><p>exists and they earned their life in the top 1 percent. These topics don’t</p><p>make me want to set up barricades in Paris, singing revolutionary anthems</p><p>from Les Mis. Instead, these topics excite me immensely because they</p><p>reveal completely unexpected structure and pattern; this enhances rather</p><p>than quenches the sense that life is more interesting than can be imagined.</p><p>These are subjects that fundamentally upend how we think about how</p><p>complex things work. But nonetheless, they are not where free will dwells.</p><p>This and the next chapter focus on chaos theory, the field that can make</p><p>studying the component parts of complex things useless. After a primer</p><p>about the topic in this chapter, the next will cover two ways people</p><p>mistakenly believe they’ve found free will in chaotic systems. First is the</p><p>idea that if you start with something simple in biology and, unpredictably,</p><p>out of that comes hugely complex behavior, free will just happened. Second</p><p>is the belief that if you have a complex behavior that could have arisen from</p><p>either of two different preceding biological states and there’s no way to ever</p><p>tell which one caused it, then you can get away with claiming that it wasn’t</p><p>caused by anything, that the event was free of determinism.</p><p>BACK WHEN THINGS MADE SENSE</p><p>Suppose that</p><p>X = Y + 1</p><p>If that is the case, then</p><p>X + 1 = ?</p><p>—and you were readily able to calculate that the answer is</p><p>(Y + 1) + 1.</p><p>Do X + 3 and you’ve instantly got (Y + 1) + 3. And here’s the crucial</p><p>point—after solving X + 1, you were able to then solve X + 3 without first</p><p>having to figure out X + 2. You were able to extrapolate into the future</p><p>without examining each intervening step. Same thing for X + a gazillion, or</p><p>X + sorta a gazillion, or X + a star-nosed mole.</p><p>A world like this has a number of properties:</p><p>As we just saw, knowing the starting state of a system (for example, X = Y + 1) lets you</p><p>accurately predict what X + whatever will equal, without the intervening steps. This</p><p>property runs in both directions. 
If you’re given (Y + 1) + whatever,</p><p>would correctly conclude that this “scientific result” (plus the spin-</p><p>offs it has generated in the subsequent forty years) doesn’t prove there’s no</p><p>free will. Similarly, you can’t disprove free will with a “scientific result”</p><p>from genetics—genes in general are not about inevitability but, rather,</p><p>about vulnerability and potential, and no single gene, gene variant, or gene</p><p>mutation has ever been identified that falsifies free will;[*] you can’t even</p><p>do it when considering all our genes at once. And you can’t disprove free</p><p>will from a developmental/sociological perspective by emphasizing the</p><p>scientific result that a childhood filled with abuse, deprivation, neglect, and</p><p>trauma astronomically increases the odds of producing a deeply damaged</p><p>and damaging adult—because there are exceptions. Yeah, no single result or</p><p>scientific discipline can do that. But—and this is the incredibly important</p><p>point—put all the scientific results together, from all the relevant scientific</p><p>disciplines, and there’s no room for free will.[*]</p><p>Why is that? Something deeper than the idea that if you examine enough</p><p>different disciplines, one -ology after another, you’re bound to eventually</p><p>find one that provides a slam dunk, falsifying free will all by itself. It is also</p><p>deeper than the idea that even though each discipline has a hole that</p><p>precludes it from falsifying free will, at least one of the other disciplines</p><p>compensates for it.</p><p>Crucially, all these disciplines collectively negate free will because they</p><p>are all interlinked, constituting the same ultimate body of knowledge. If you</p><p>talk about the effects of neurotransmitters on behavior, you are also</p><p>implicitly talking about the genes that specify the construction of those</p><p>chemical messengers, and the evolution of those genes—the fields of</p><p>“neurochemistry,” “genetics,” and “evolutionary biology” can’t be</p><p>separated. If you examine how events in fetal life influence adult behavior,</p><p>you are also automatically considering things like lifelong changes in</p><p>patterns of hormone secretion or in gene regulation. If you discuss the</p><p>effects of mothering style on a kid’s eventual adult behavior, by definition</p><p>you are also automatically discussing the nature of the culture that the</p><p>mother passes on through her actions. There’s not a single crack of daylight</p><p>to shoehorn in free will.</p><p>As such, the first half of the book’s point is to rely on this biological</p><p>framework in rejecting free will. Which brings us to the second half of the</p><p>book. As noted, I haven’t believed in free will since adolescence, and it’s</p><p>been a moral imperative for me to view humans without judgment or the</p><p>belief that anyone deserves anything special, to live without a capacity for</p><p>hatred or entitlement. And I just can’t do it. Sure, sometimes I can sort of</p><p>get there, but it is rare that my immediate response to events aligns with</p><p>what I think is the only acceptable way to understand human behavior;</p><p>instead, I usually fail dismally.</p><p>As I said, even I think it’s crazy to take seriously all the implications of</p><p>there being no free will. And despite that, the goal of the second half of the</p><p>book is to do precisely that, both individually and societally. 
Some chapters</p><p>consider scientific insights about how we might go about dispensing with</p><p>free-will belief. Others examine how some of the implications of rejecting</p><p>free will are not disastrous, despite initially seeming that way. Some review</p><p>historical circumstances that demonstrate something crucial about the</p><p>radical changes we’d need to make in our thinking and feeling: we’ve done</p><p>it before.</p><p>The book’s intentionally ambiguous title reflects these two halves—it is</p><p>both about the science of why there is no free will and the science of how</p><p>we might best live once we accept that.</p><p>STYLES OF VIEWS: WHOM I WILL BE</p><p>DISAGREEING WITH</p><p>I’m going to be discussing some of the common attitudes held by people</p><p>writing about free will. These come in four basic flavors:[*]</p><p>The world is deterministic and there’s no free will. In this view, if the</p><p>former is the case, the latter has to be as well; determinism and free will are</p><p>not compatible. I am coming from this perspective of “hard</p><p>incompatibilism.”[*]</p><p>The world is deterministic and there is free will. These folks are</p><p>emphatic that the world is made of stuff like atoms, and life, in the elegant</p><p>words of psychologist Roy Baumeister (currently at the University of</p><p>Queensland in Australia), “is based on the immutability and relentlessness</p><p>of the laws of nature.”[5] No magic or fairy dust involved, no substance</p><p>dualism, the view where brain and mind are separate entities.[*] Instead, this</p><p>deterministic world is viewed as compatible with free will. This is roughly</p><p>90 percent of philosophers and legal scholars, and the book will most often</p><p>be taking on these “compatibilists.”</p><p>The world is not deterministic; there’s no free will. This is an oddball</p><p>view that everything important in the world runs on randomness, a</p><p>supposed basis of free will. We’ll get to this in chapters 9 and 10.</p><p>The world is not deterministic; there is free will. These are folks who</p><p>believe, like I do, that a deterministic world is not compatible with free will</p><p>—however, no problem, the world isn’t deterministic in their view, opening</p><p>a door for free-will belief. These “libertarian incompatibilists” are a rarity,</p><p>and I’ll only occasionally touch on their views.</p><p>There’s a related quartet of views concerning the relationship between</p><p>free will and moral responsibility. The last word obviously carries a lot of</p><p>baggage with it, and the sense in which it is used by people debating free</p><p>will typically calls forth the concept of basic desert, where someone can</p><p>deserve to be treated in a particular way, where the world is a morally</p><p>acceptable place in its recognition that one person can deserve a particular</p><p>reward, another a particular punishment. As such, these views are:</p><p>There’s no free will, and thus holding people morally responsible for</p><p>their actions is wrong. Where I sit. (And as will be covered in chapter 14,</p><p>this is completely separate from forward-looking issues of punishment for</p><p>deterrent value.)</p><p>There’s no free will, but it is okay to hold people morally responsible for</p><p>their actions. This is another type of compatibilism—an absence of free</p><p>will and moral responsibility coexist without invoking the supernatural.</p><p>There’s free will, and people should be held morally responsible. 
This is</p><p>probably the most common stance out there.</p><p>There’s free will, but moral responsibility isn’t justified. This is a</p><p>minority view; typically, when you look closely, the supposed free will</p><p>exists in a very narrow sense and is certainly not worth executing people</p><p>about.</p><p>Obviously, imposing these classifications on determinism, free will, and</p><p>moral responsibility is wildly simplified. A key simplification is pretending</p><p>that most people have clean “yes” or “no” answers as to whether these</p><p>states exist; the absence of clear dichotomies leads to frothy philosophical</p><p>concepts like partial free will, situational free will, free will in only a subset</p><p>of us, free will only when it matters or only when it doesn’t. This raises the</p><p>question of whether the edifice of free-will belief is crumbled by one</p><p>flagrant, highly consequential exception and, conversely, whether free-will</p><p>skepticism collapses when the opposite occurs. Focusing on gradations</p><p>between yes and no is important, since interesting things in the biology of</p><p>behavior are often on continua. As such, my fairly absolutist stance on these</p><p>issues puts me way out in left field. Again, my goal isn’t to convince you</p><p>that there’s no free will; it will suffice if you merely conclude that there’s so</p><p>much less free will than you thought that you have to change your thinking</p><p>about some truly important things.</p><p>Despite starting by separating determinism / free will and free will /</p><p>moral responsibility, I follow the frequent convention of merging them into</p><p>one. Thus, my stance is that because the world is deterministic, there can’t</p><p>be free</p><p>you know then that</p><p>your starting point was X + whatever.</p><p>Implicit in that, there is a unique pathway connecting the starting and ending states; it is</p><p>also inevitable that X + 1 cannot equal (Y + 1) + 1 only some of the time.</p><p>As shown dealing with something like “sorta a gazillion,” the magnitude of uncertainty</p><p>and approximation in the starting state is directly proportional to the magnitude at the</p><p>other end. You can know what you don’t know, can predict the degree of unpredictability.</p><p>[1]</p><p>This relationship between starting states and mature states helped give</p><p>rise to what has been the central concept of science for centuries. This is</p><p>reductionism, the idea that to understand something complicated, break it</p><p>down into its component parts, study them, add your insights about each</p><p>component part together, and you will understand the complicated whole.</p><p>And if one of those component parts is itself too complicated to understand,</p><p>study its eensy subcomponent parts and understand them.</p><p>Reductionism like this is vital. If your watch, running on the ancient</p><p>technology of gears, stops working, you apply a reductive approach to</p><p>solving the problem. You take the watch apart, identify the one tiny gear</p><p>that has a broken tooth, replace it, and put the pieces back together, and the</p><p>watch runs. This approach is also how you do detective work—you arrive at</p><p>a crime scene and interview the witnesses. The first witness observed only</p><p>parts 1, 2, and 3 of the event. The second saw only 2, 3, and 4. The third,</p><p>only 3, 4, and 5. Bummer, no one saw everything that happened. 
But thanks</p><p>to a reductive mindset, you can solve the problem by taking the fragmentary</p><p>component parts—each of the three witnesses’ overlapping observations,</p><p>and combine them to understand the complete sequence.[*] Or as another</p><p>example, in the first season of the pandemic, the world waited for answers</p><p>to reductive questions like what receptor on the surface of a lung cell binds</p><p>the spike protein of SARS-CoV-2, allowing it to enter and sicken that cell.</p><p>Mind you, a reductive approach doesn’t apply to everything. If there’s a</p><p>drought, the sky dotted with puffy clouds that haven’t rained in a year, you</p><p>don’t first isolate a cloud, study its left half and then its right half and then</p><p>half of each half, and so on, until you find the itty-bitty gear in the center</p><p>that has a broken tooth. Nonetheless, a reductive approach has long been</p><p>the gold standard for scientifically exploring a complex topic.</p><p>And then, starting in the early 1960s, a scientific revolution emerged that</p><p>came to be called chaoticism, or chaos theory. And its central idea is that</p><p>really interesting, complicated things are often not best understood, cannot</p><p>be understood, on a reductive level. To understand, say, a human whose</p><p>behavior is abnormal, approach the problem as if this were a cloud that does</p><p>not rain, rather than as a watch that does not tick. And naturally, humans-as-</p><p>clouds generate all sorts of nearly irresistible urges for concluding that you</p><p>are observing free will in action.</p><p>CHAOTIC UNPREDICTABILITY</p><p>Chaos theory has its creation story. When I was a kid in the 1960s,</p><p>inaccurate weather prediction was mocked with trenchant witticisms like</p><p>“The weatherman on the radio [invariably, indeed, a man] said it’s going to</p><p>be sunny today, so better bring an umbrella.” MIT meteorologist Edward</p><p>Lorenz began using some antediluvian computer to model weather patterns</p><p>in an attempt to increase prediction accuracy. Stick variables like</p><p>temperature and humidity into the model and see how accurate the</p><p>predictions became. See if additional variables, other variables, different</p><p>weightings of variables,[*] improved predictability.</p><p>So Lorenz was studying a model on his computer using twelve variables.</p><p>Time for lunch; halt the program in the middle of its cranking out a time</p><p>course of predictions. Come back postlunch and, to save time, restart the</p><p>program at a point before you stopped it, rather than starting all over. Punch</p><p>in the values of those twelve variables at that time point, and let the model</p><p>resume its predicting. That’s what Lorenz did, which is when our</p><p>understanding of the universe changed.</p><p>One variable at that time point had a value of 0.506127. Except that on</p><p>the printout, the computer had rounded it down to 0.506; maybe the</p><p>computer hadn’t wanted to overwhelm this Human 1.0. In any case,</p><p>0.506127 became 0.506, and Lorenz, not knowing about this slight</p><p>inaccuracy, ran the program with the variable at 0.506, thinking that it was</p><p>actually 0.506127.</p><p>Thus, he was now dealing with a value that was a smidgen different from</p><p>the real one. 
And we know just what should have happened now, in our</p><p>supposedly purely linear, reductive world: the degree to which the starting</p><p>state was off from what he thought it was (i.e., 0.506 rather than 0.506127)</p><p>predicted how inaccurate his ending state would be—the program would</p><p>generate a point that was only a smidgen different from that same point</p><p>before lunch—if you superimposed the before- and after-lunch tracings,</p><p>you’d barely see a difference.</p><p>Lorenz let the program, still depending on 0.506 instead of 0.506127,</p><p>continue to run, and out came a result that was even more discrepant than</p><p>he had expected from the prelunch run. Weird. And with each successive</p><p>point, things got weirder—sometimes things seemed to have returned to the</p><p>prelunch pattern but would then diverge again, with the divergences</p><p>increasingly different, unpredictably, crazily so. And eventually rather than</p><p>the program generating something even remotely close to what he saw the</p><p>first time, the discrepancy in the two tracings was about as different as was</p><p>possible.</p><p>This is what Lorenz saw—the pre- and postlunch tracings superimposed,</p><p>a printout now with the status of a holy relic in the field (see figure on the</p><p>next page).</p><p>Lorenz finally spotted that slight rounding error introduced after lunch</p><p>and realized that this made the system unpredictable, nonlinear, and</p><p>nonadditive.</p><p>By 1963, Lorenz announced this discovery in a dense technical paper,</p><p>“Deterministic Non-periodic Flow,” in the highly specialized Journal of</p><p>Atmospheric Sciences (and in the paper, Lorenz, while beginning to</p><p>appreciate how these insights were overturning centuries of reductive</p><p>thinking, still didn’t forget where he came from. Will it ever be possible to</p><p>perfectly predict all of future weather? readers of the journal plaintively</p><p>asked. Nope, Lorenz concluded; the chance of this is “non-existent”). And</p><p>the paper has since been cited in other papers a staggering 26,000+ times.[2]</p><p>If Lorenz’s original program had contained only two weather variables,</p><p>instead of the twelve he was using, the familiar reductiveness would have</p><p>held—after a slightly wrong number was fed into the computer, the output</p><p>would have been precisely as wrong at every step for the rest of time.</p><p>Predictably so. Imagine a universe that consists of just two variables, the</p><p>Earth and the Moon, exerting their gravitational forces on each other. In this</p><p>linear, additive world, it is possible to infer precisely where they were at</p><p>any point in the past and predict precisely where each will be at any point in</p><p>the future;[*] if an approximation was accidentally introduced, the same</p><p>magnitude of approximation would continue forever. But now add the Sun</p><p>into the mix, and the nonlinearity happens. This is because the Earth</p><p>influences the Moon, which means that the Earth influences how the Moon</p><p>influences the Sun, which means that the Earth influences how the Moon</p><p>influences the Sun’s influence on the Earth. . . . And don’t forget the other</p><p>direction, Earth to Sun to Moon. The interactions among the three variables</p><p>make linear predictability impossible. 
Once you’ve entered the realm of</p><p>what is known as the “three-body problem,” with three or more variables</p><p>interacting, things have inevitably become unpredictable.</p><p>When you have a nonlinear system, tiny differences in a starting state</p><p>from one time to the next can cause them to diverge from each other</p><p>enormously, even exponentially,[*] something since termed “sensitive</p><p>dependence on initial conditions.” Lorenz noted that the unpredictability,</p><p>rather than hurtling off forever into the exponential stratosphere, is</p><p>sometimes bounded, constrained, and “dissipative.” In other words, the</p><p>degree of unpredictability oscillates erratically around the predicted value,</p><p>repeatedly a little more, a little less than predicted in the series of numbers</p><p>you are generating, the degree of discrepancy always different, forever</p><p>after. It’s like each data point you are getting is sort of attracted to what the</p><p>data point is predicted to be, but not enough to actually reach the predicted</p><p>value. Strange. And thus, Lorenz named these strange attractors.[*],[3]</p><p>So a tiny difference in a starting state can magnify unpredictably over</p><p>time. Lorenz took to summarizing this idea with a metaphor about seagulls.</p><p>A friend suggested something more picturesque, and by 1972 this was</p><p>formalized into the title of a talk given by Lorenz. Here’s another holy relic</p><p>of the field (see figure on the next page).</p><p>Thus was born the symbol of the chaos theory revolution, the butterfly</p><p>effect.[*], [4]</p><p>CHAOTICISM YOU CAN DO AT HOME</p><p>Time to see what chaoticism and sensitive dependence on initial conditions</p><p>look like in practice. This makes use of a model system that is so cool and</p><p>fun that I’ve even fleetingly wished that I could do computer coding, as it</p><p>would make it easier to play with it.</p><p>Start off with a grid, like the one on a piece of graph paper, where the</p><p>first row is your starting condition. Specifically, each of the boxes in the</p><p>row can be in one of two states, either open or filled (or, in binary coding,</p><p>either zero or one). There are 16,384 possible patterns for that row;[*] here’s</p><p>our randomly chosen one:</p><p>Time now to generate the second row of boxes that are open or filled,</p><p>that new pattern determined[*] by the pattern in row 1. We need a rule for</p><p>how to do this. Here’s the most boring possible example: in row 2, a box</p><p>that is underneath a filled box gets filled; a box underneath an open box</p><p>remains open. Applying that rule over and over, using row 2 as the basis for</p><p>row 3, 3 for 4, and so on, is just going to produce some boring columns. Or</p><p>impose the opposite rule, such that if a box is filled, the one below it in the</p><p>next row becomes open, while an open box spawns a filled one, and the</p><p>outcome isn’t all that exciting, producing sort of a lopsided checkered</p><p>pattern:</p><p>As the main point, starting with either of these rules, if you know the</p><p>starting state (i.e., the pattern in row 1), you can accurately predict what a</p><p>row anywhere in the future will look like. 
Our linear universe again.</p><p>Let’s go back to our row 1:</p><p>Now whether a particular row 2 box will be open or filled is determined</p><p>by the state of three boxes—the row 1 box immediately above and the row</p><p>1 box’s neighbor on each side.</p><p>Here’s a random rule for how the state of a trio of adjacent row 1 boxes</p><p>determines what happens in the row 2 box below: A row 2 box is filled if</p><p>and only if one of the trio of boxes above it is filled in. Otherwise, the row 2</p><p>box will remain open.</p><p>Let’s start with the second box from the left in row 2. Here is the row 1</p><p>trio immediately above it (i.e., the first three boxes of row 1):</p><p>One of three boxes is filled, meaning that the row 2 box we’re</p><p>considering will get filled:</p><p>Look at the next trio in row 1 (i.e., boxes 2, 3, and 4). Only one box is</p><p>filled, so box 3 in row 2 will also be filled:</p><p>In the row 1 trio of boxes 3, 4, and 5, two boxes (4 and 5) are filled, so</p><p>the next row 2 box is left open. And so on. The rule we are working with—</p><p>if and only if one box of the trio is filled, fill in the row 2 box in question—</p><p>can be summarized like this:</p><p>There are eight possible trios (two possible states for the first box of a</p><p>trio times two possible for the second box times two for the third), and only</p><p>trios 4, 6, and 7 result in the row 2 box in question being filled.</p><p>Back to our starting state, and using this rule, the first two rows will look</p><p>like this:</p><p>But wait—what about the first and last boxes of row 2, where the box</p><p>above has only one neighbor? We wouldn’t have that problem if row 1 were</p><p>infinitely long in both directions, but we don’t have that luxury. What do we</p><p>do with each of them? Just look at the box above it and the single neighbor,</p><p>and use the same rule—if one of those two is filled, fill in the row 2 box; if</p><p>both or neither of the two is filled, row 2 box is open. Thus, with that</p><p>addendum in place, the first 2 rows look like this:</p><p>Now use the same rule to generate row 3:</p><p>Keep going, if you have nothing else to do.</p><p>Now let’s use this starting state with the same rule:</p><p>The first 2 rows will look like this:</p><p>Complete the first 250 or so rows and you get this:</p><p>Take a different, wider random starting state, apply the same rule over</p><p>and over, and you get this:</p><p>Whoa.</p><p>Now try this starting state:</p><p>By row 2, you get this:</p><p>Nothing. With this particular starting state, row 2 is all open boxes, as</p><p>will be the case in every subsequent row. Row 1’s pattern is snuffed out.</p><p>Let’s describe what we’ve learned so far in a metaphorical way, rather</p><p>than using terms like input, output, and algorithm. With some starting states</p><p>and the reproduction rule used to produce each subsequent generation,</p><p>things can evolve into wildly interesting mature states, but you can also get</p><p>some that go extinct, like that last example.</p><p>Why the biology metaphors? 
Because this world of generating patterns</p><p>like this applies to nature (see figure on the next page).</p><p>We have just been exploring an example of a cellular automaton, where</p><p>you start with a row of cells that are either open or filled, supply a</p><p>reproduction rule, and let the process iterate.[*],[5]</p><p>An actual shell on the left, a computer-generated pattern on the right</p><p>The rule we’ve been following (if and only if one box of the trio above is</p><p>filled . . .) is called rule 22 in the cellular automata universe, which consists</p><p>of 256 rules.[*] Not all of these rules generate something interesting—</p><p>depending on the starting state, some produce a pattern that just repeats for</p><p>infinity in an inert, lifeless sort of way, or that goes extinct by the second</p><p>row. Very few generate complex, dynamic patterns. And of the few that do,</p><p>rule 22 is one of the favorites. People have spent their careers studying its</p><p>chaoticism.</p><p>What is chaotic about rule 22? We’ve now seen that, depending on the</p><p>starting state, by applying rule 22 you can get one of three mature patterns:</p><p>(a) nothing, because it went extinct; (b) a crystallized, boring, inorganic</p><p>periodic pattern; (c) a pattern that grows and writhes and changes, with</p><p>pockets of structure giving way to anything but, a dynamic, organic profile.</p><p>And as the crucial point, there is no way to take any irregular starting state</p><p>and predict what row 100, or row 1,000, or row any-big-number will look</p><p>like. You have to march through every intervening row, simulating it, to find</p><p>out. It is impossible to predict if the mature form of a particular starting</p><p>state will be extinct, crystalline, or dynamic or, if either of the latter two,</p><p>what the pattern will be; people with spectacular mathematical powers have</p><p>tried and failed. And this limit, paradoxically, extends to showing that you</p><p>can’t prove that somewhere a few baby steps before reaching infinity, that</p><p>the chaotic unpredictability will suddenly calm down into a sensible,</p><p>repeating pattern. We have a version of the three-body problem, with</p><p>interactions that are neither linear nor additive. You cannot take a reductive</p><p>approach, breaking things down to its component parts (the eight different</p><p>possible trios of boxes and their outcomes), and predict what you’re going</p><p>to get. This is not a system for generating clocks. It’s for generating clouds.</p><p>[6]</p><p>So we’ve just seen that knowing the irregular starting state gives you no</p><p>predictive power about the mature state—you’ll just have to simulate each</p><p>intervening step</p><p>to find out.</p><p>Now consider rule 22 applied to each of these four starting states (see</p><p>top figure on the next page).</p><p>Two of these four, once taken out ten generations, produce an identical</p><p>pattern for the rest of time. I dare you to stare at these four and correctly</p><p>predict which two it is going to be. It cannot be done.</p><p>Get some graph paper and crank through this, and you’ll see that two of</p><p>these four converge. 
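For readers who would rather spare the graph paper, here is a minimal sketch of the rule 22 procedure in Python (my own illustrative code, with function names of my own, not anything from the book). It automates the fill-a-box-only-if-exactly-one-of-the-trio-is-filled rule, treating the missing neighbors at the two edges as open boxes, as described above.

    # A minimal sketch of rule 22 as described above.
    # A row is a list of 0s (open boxes) and 1s (filled boxes).
    def rule22_next(row):
        padded = [0] + row + [0]   # boxes beyond the edges count as open
        # Fill a box if and only if exactly one of the trio above it is filled.
        return [1 if sum(padded[i:i + 3]) == 1 else 0 for i in range(len(row))]

    def rule22_run(start, generations):
        rows = [start]
        for _ in range(generations):
            rows.append(rule22_next(rows[-1]))
        return rows

    start = [0] * 30 + [1] + [0] * 30          # an arbitrary starting row
    for row in rule22_run(start, 30):
        print("".join("#" if box else "." for box in row))

Run it on different starting rows and compare their mature states: some go extinct, some crystallize, some keep churning; and, as the text says, nothing short of simulating every intervening row tells you in advance which fate a given start will meet, or whether two different starts will converge.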
In other words, knowing the mature state of a system</p><p>like this gives you no predictive power as to what the starting state was, or</p><p>if it could have arisen from multiple different starting states, another</p><p>defining feature of the chaoticism of this system.</p><p>Finally, consider the following starting state:</p><p>Which goes extinct by row 3:</p><p>Introduce a smidgen of a difference in this nonviable starting state,</p><p>namely that the open/filled status of just one of the twenty-five boxes</p><p>differs—box 20 is filled instead of open:</p><p>And suddenly, life erupts into an asymmetrical pattern (see figure on the</p><p>next page).</p><p>Let’s state this biologically: a single mutation, in box 20, can have major</p><p>consequences.</p><p>Let’s state this with the formalism of chaos theory: this system shows</p><p>sensitive dependence on the initial condition of box 20.</p><p>Let’s state it in a way that is ultimately most meaningful: a butterfly in</p><p>box 20 either did or didn’t flap its wings.</p><p>I love this stuff. One reason is because of the ways in which you can</p><p>model biological systems with this, an idea explored at length by Stephen</p><p>Wolfram.[*] Cellular automata are also inordinately cool because you can</p><p>increase their dimensionality. The version we’ve been covering is one-</p><p>dimensional, in that you start with a line of boxes and generate more lines.</p><p>Conway’s Game of Life (invented by the late Princeton mathematician John</p><p>Conway) is a two-dimensional version where you start with a grid of boxes</p><p>and generate each subsequent generation’s grid. And produce absolutely</p><p>astonishingly dynamic, chaotic patterns that are typically described as</p><p>involving individual boxes that are “living” or “dying.” All with the usual</p><p>properties—you can’t predict the mature state from the starting state—you</p><p>have to simulate every intervening step; you can’t predict the starting state</p><p>from the mature state because of the possibility that multiple starting states</p><p>converged into the same mature one (we’re going to return to this</p><p>convergence feature in a big way); the system shows sensitive dependence</p><p>on initial conditions.[7]</p><p>(There’s an additional realm classically discussed when introducing</p><p>chaoticism. I’ve sidestepped covering it here, however, because I’ve learned</p><p>the hard way from my classrooms that it is very difficult and/or I’m very</p><p>bad at explaining it. If interested, read up about Lorenz’s waterwheel,</p><p>period doubling, and the significance of period 3 for the onset of chaos.)</p><p>With this introduction to chaoticism in hand, we can now appreciate the</p><p>next chapter of the field—unexpectedly, the concepts of chaos theory</p><p>became really popular, sowing the seeds for a certain style of free-will</p><p>belief.</p><p>6</p><p>Is Your Free Will Chaotic?</p><p>THE AGE OF CHAOS</p><p>The upheaval in the early 1960s caused by chaos theory, strange attractors,</p><p>and sensitive dependence on initial conditions was rapidly felt throughout</p><p>the world, fundamentally altering everything from the most highfalutin</p><p>philosophical musings to the concerns of everyday life.</p><p>Actually, not at all. Lorenz’s revolutionary 1963 paper was mostly met</p><p>with silence. 
It took years for him to begin to collect acolytes, mostly a</p><p>group of physics grad students at UC Santa Cruz who supposedly spent a</p><p>lot of time stoned and studied things like the chaoticism of how faucets</p><p>drip.[*] Mainstream theorists mostly ignored the implications.</p><p>Part of the neglect reflected the fact that chaos theory is a horrible name,</p><p>insofar as it is about the opposite of nihilistic chaos and is instead about the</p><p>patterns of structure hidden in seeming chaos. The more fundamental</p><p>reason for chaoticism getting off to a slow start was that if you have a</p><p>reductive mindset, unsolvable, nonlinear interactions among a large number</p><p>of variables is a total pain to study. Thus, most researchers tried to study</p><p>complicated things by limiting the number of variables considered so that</p><p>things remained tame and tractable. And this guaranteed the incorrect</p><p>conclusion that the world is mostly about linear, additive predictability and</p><p>nonlinear chaoticism was a weird anomaly that could mostly be ignored.</p><p>Until it couldn’t be anymore, as it became clear that chaoticism lurked</p><p>behind the most interesting complicated things. A cell, a brain, a person, a</p><p>society, was more like the chaoticism of a cloud than the reductionism of a</p><p>watch.[1]</p><p>By the eighties, chaos theory had exploded as an academic subject (this</p><p>was around the time that the pioneering generation of renegade stoner</p><p>physicists began to be things like a professor at Oxford or the founder of a</p><p>company using chaos theory to plunder the stock market). Suddenly, there</p><p>were specialized journals, conferences, departments, and interdisciplinary</p><p>institutes. Scholarly papers and books appeared about the implications of</p><p>chaoticism for education, corporate management, economics, the stock</p><p>market, art and architecture (with the interesting idea that we find nature to</p><p>be more beautiful than, say, modernist office buildings, because the former</p><p>has just the right amount of chaos), literary criticism, cultural studies of</p><p>television (with the observation that, like chaotic systems, television</p><p>“dramas are both complex and simple at the same time”), neurology and</p><p>cardiology (in both of which, interestingly, too little chaoticism was</p><p>appearing to be a bad thing[*]). There were even scholarly articles about the</p><p>relevance of chaos theory to theology (including one with the wonderful</p><p>title “Chaos at the Marriage of Heaven and Hell,” in which the author</p><p>wrote, “Those of us who seek to engage modern culture in our theological</p><p>reflection cannot afford to overlook chaos theory”).[2]</p><p>Meanwhile, interest in chaos theory, accurate or otherwise, burst into the</p><p>general public’s consciousness as well—who could have predicted that?</p><p>There were the ubiquitous wall calendars of fractals. Novels, books of</p><p>poetry, multiple movies, TV episodes, numerous bands, albums, and songs</p><p>commandeered strange attractor or the butterfly effect in their titles.[*]</p><p>According to a Simpsons fandom site, in one episode during her baseball-</p><p>coaching period, Lisa is seen reading a book called Chaos Theory in</p><p>Baseball Analysis. And as my favorite, in the novel Chaos Theory, part of</p><p>the Nerds of Paradise Harlequin romance series, our protagonist has her</p><p>eyes on handsome engineer Will Darling. 
Despite his unbuttoned shirt, six-</p><p>pack, and insouciant bedroom eyes, it is understood that Will must still be a</p><p>nerd, since he wears glasses.[3]</p><p>The growing interest in chaos theory</p><p>generated the sound of a zillion butterfly</p><p>wings flapping. Given that, it was</p><p>inevitable that various thinkers began to</p><p>proclaim that the unpredictable, chaotic</p><p>cloud-ness of human behavior is where</p><p>free will runs free. Hopefully, the material</p><p>already covered, showing what chaoticism</p><p>is and isn’t, will help show how this</p><p>cannot be.</p><p>The giddy conclusion that chaoticism</p><p>proves free will takes at least two forms.</p><p>WRONG CONCLUSION #1:</p><p>THE FREELY CHOOSING CLOUD</p><p>For free-will believers, the crux of the issue is lack of predictability—at</p><p>innumerable junctures in our lives, including highly consequential ones, we</p><p>choose between X and not-X. And even a vastly knowledgeable observer</p><p>could not have predicted every such choice.</p><p>In this vein, physicist Gert Eilenberger writes, “It is simply improbable</p><p>that reality is completely and exhaustively mappable by mathematical</p><p>constructs.” This is because “the mathematical abilities of the species Homo</p><p>sapiens are in principle limited because of their biological basis. . . .</p><p>Because of [chaoticism], the determinism of Laplace[*] cannot be absolute</p><p>and the question</p><p>of the possibility of chance and freedom is open again!”</p><p>The exclamation mark at the end is Eilenberger’s; a physicist means</p><p>business if he’s putting exclamation marks in his writing.[4]</p><p>Biophysicist Kelly Clancy makes a similar point concerning chaoticism</p><p>in the brain: “Over time, chaotic trajectories will gravitate toward [strange</p><p>attractors]. Because chaos can be controlled, it strikes a fine balance</p><p>between reliability and exploration. Yet because it’s unpredictable, it’s a</p><p>strong candidate for the dynamical substrate of free will.”[5]</p><p>Doyne Farmer weighs in as well in a way I found disappointing, given</p><p>that he was one of the faucet-drip apostles of chaos theory and should know</p><p>better. “On a philosophical level, it struck me [that chaoticism was] an</p><p>operational way to define free will, in a way that allowed you to reconcile</p><p>free will with determinism. The system is deterministic, but you can’t say</p><p>what it’s going to do next.”[6]</p><p>As a final example, philosopher David Steenburg explicitly links the</p><p>supposed free will of chaos with morality: “Chaos theory provides for the</p><p>reintegration of fact and value by opening each to the other in new ways.”</p><p>And to underline this linkage, Steenburg’s paper wasn’t published in some</p><p>science or philosophy journal. It was in the Harvard Theological Review.[7]</p><p>So a bunch of thinkers find free will in the structure of chaoticism.</p><p>Compatibilists and incompatibilists debate whether free will is possible in a</p><p>deterministic world, but now you can skip the whole brouhaha because,</p><p>according to them, chaoticism shows that the world isn’t deterministic. 
As</p><p>Eilenberger summarizes, “But since we now know that the slightest,</p><p>immeasurably small differences in the initial state can lead to completely</p><p>different final states (that is, decisions), physics cannot empirically prove</p><p>the impossibility of free will.”[8] In this view, the indeterminism of chaos</p><p>means that, although it doesn’t help you prove that there is free will, it lets</p><p>you prove that you can’t prove that there isn’t.</p><p>But now to the critical mistake running through all of this: determinism</p><p>and predictability are very different things. Even if chaoticism is</p><p>unpredictable, it is still deterministic. The difference can be framed a lot of</p><p>ways. One is that determinism allows you to explain why something</p><p>happened, whereas predictability allows you to say what happens next.</p><p>Another way is the woolly-haired contrast between ontology and</p><p>epistemology; the former is about what is going on, an issue of</p><p>determinism, while the latter is about what is knowable, an issue of</p><p>predictability. Another is the difference between “determined” and</p><p>“determinable” (giving rise to the heavy-duty title of one heavy-duty paper,</p><p>“Determinism Is Ontic, Determinability Is Epistemic,” by philosopher</p><p>Harald Atmanspacher).[9]</p><p>Experts tear their hair out over how fans of “chaoticism = free will” fail</p><p>to make these distinctions. “There is a persistent confusion about</p><p>determinism and predictability,” write physicists Sergio Caprara and</p><p>Angelo Vulpiani. The first name–less philosopher G. M. K. Hunt of the</p><p>University of Warwick writes, “In a world where perfectly accurate</p><p>measurement is impossible, classical physical determinism does not entail</p><p>epistemic determinism.” The same thought comes from philosopher Mark</p><p>Stone: “Chaotic systems, even though they are deterministic, are not</p><p>predictable [they are not epistemically deterministic]. . . . To say that</p><p>chaotic systems are unpredictable is not to say that science cannot explain</p><p>them.” Philosophers Vadim Batitsky and Zoltan Domotor, in their</p><p>wonderfully titled paper, “When Good Theories Make Bad Predictions,”</p><p>describe chaotic systems as “deterministically unpredictable.”[10]</p><p>Here’s a way to think about this extremely important point. I just went</p><p>back to that fantastic pattern in the last chapter, on page 138, and estimated</p><p>that it is around 250 rows long and 400 columns wide. This means that the</p><p>figure consists of about 100,000 boxes, each now either open or filled. Get</p><p>a hefty piece of graph paper, copy the row 1 starting state from the figure,</p><p>and then spend the next year sleeplessly applying rule 22 to each successive</p><p>row, filling in the 100,000 boxes with your #2 pencil. And you will have</p><p>generated the same exact pattern as in the figure. Take a deep breath and do</p><p>it a second time, same outcome. Have a trained dolphin with an</p><p>extraordinary capacity for repetition go at it, same result. Row eleventy-</p><p>three would not be what it is because at row eleventy-two, you or the</p><p>dolphin just happened to choose to let the open-or-filled split in the road</p><p>depend on the spirit moving you or on what you think Greta Thunberg</p><p>would do. That pattern was the outcome of a completely deterministic</p><p>system consisting of the eight instructions comprising rule 22. 
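Here is a minimal sketch, in Python, of that graph-paper replay. The width, the wrap-around edges, and the single filled starting box are stand-ins (the actual figure on page 138 is larger and starts from a different first row); the point is only that rule 22 is a fixed lookup table, so the pattern comes out identical on every rerun.

```python
# A minimal sketch of replaying an elementary cellular automaton.
# Rule 22 in the standard numbering: a box is filled if and only if exactly
# one of (left neighbor, itself, right neighbor) was filled in the row above.
RULE = 22  # binary 00010110

def next_row(row, rule=RULE):
    n = len(row)
    return [
        (rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=79, rows=40):
    row = [0] * width
    row[width // 2] = 1            # one filled box in the starting row (an assumption)
    history = [row]
    for _ in range(rows - 1):
        row = next_row(row)
        history.append(row)
    return history

first_pass = run()
second_pass = run()                # "take a deep breath and do it a second time"
assert first_pass == second_pass   # same start, same eight instructions, same pattern
for row in first_pass[:16]:
    print("".join("#" if cell else "." for cell in row))
```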
At none of</p><p>the 100,000 junctures could a different outcome have resulted (unless a</p><p>random mistake occurred; as we’ll see in chapter 10, constructing an edifice</p><p>of free will on random hiccups is quite iffy). Just as the search for an</p><p>uncaused neuron will prove fruitless, likewise for an uncaused box.</p><p>Let’s frame this in the context of human behavior. It’s 1922, and you’re</p><p>presented with a hundred young adults destined to live conventional lives.</p><p>You’re told that in about forty years, one of the hundred is going to diverge</p><p>from that picture, becoming impulsive and socially inappropriate to a</p><p>criminal extent. Here are blood samples from each of those people, check</p><p>them out. And there’s no way to predict which person is above chance</p><p>levels.</p><p>It’s 2022. Same cohort with, again, one person destined to go off the rails</p><p>forty years hence. Again, here are their blood samples. This time, this</p><p>century, you use them to sequence everyone’s genome. You discover that</p><p>one individual has a mutation in a gene called MAPT, which codes for</p><p>something in the brain called the tau protein. And as a result, you can</p><p>accurately predict that it will be that person, because by age sixty, he will be</p><p>showing the symptoms of behavioral variant frontotemporal dementia.[11]</p><p>Back to the 1922 cohort. The person in question has started shoplifting,</p><p>threatening strangers, urinating in public. Why did he behave that way?</p><p>Because he chose to do so.</p><p>Year 2022’s cohort, same unacceptable acts. Why will he have behaved</p><p>that way? Because of a deterministic mutation in one gene.[*]</p><p>According to the logic of the thinkers just quoted, the 1922 person’s</p><p>behavior resulted from free will. Not “resulted from behavior we would</p><p>erroneously attribute to free will.” It was free will. And in 2022, it is not</p><p>free will. In this view, “free will” is what we call the biology that we don’t</p><p>understand on a predictive level yet, and when we do understand it, it stops</p><p>being free will. Not that it stops being mistaken for free will. It literally</p><p>stops being. There is something wrong if an instance of free will exists only</p><p>until there is a decrease in our ignorance. As the crucial point, our intuitions</p><p>about free will certainly work that way, but free will itself can’t.</p><p>We do something, carry out a behavior, and we feel like we’ve chosen,</p><p>that there is a Me inside separate from all those neurons, that agency and</p><p>volition dwell there. Our intuitions scream this, because we don’t know</p><p>about, can’t imagine, the subterranean forces of our biological history that</p><p>brought it about. It is a huge challenge to overcome those intuitions when</p><p>you still have to wait for science to be able to predict that behavior</p><p>precisely. But the temptation to equate chaoticism with free will shows just</p><p>how much harder it is to overcome those intuitions when science will never</p><p>be able to predict precisely the outcomes of a deterministic system.</p><p>WRONG CONCLUSION #2: A CAUSELESS FIRE</p><p>Most of the fascination with chaoticism comes from the fact that you can</p><p>start with some simple deterministic rules for a system and produce</p><p>something ornate and wildly unpredictable. We’ve now seen how mistaking</p><p>this for indeterminism leads to a tragic</p><p>downward spiral into a cauldron of</p><p>free-will belief. 
Time now for the other problem.</p><p>Go back to the figure at the top of page 141 with its demonstration with</p><p>rule 22 that two different starting states can turn into the identical pattern</p><p>and thus, it is not possible to know which of those two was the actual</p><p>source.</p><p>This is the phenomenon of convergence. It’s a term frequently used in</p><p>evolutionary biology. In this instance, it’s not so much that you can’t tell</p><p>which of two different possible ancestors a particular species arose from</p><p>(e.g., “Was the ancestor of elephants three-legged or five-legged? Who can</p><p>tell?”). It’s more when two very different sorts of species have converged</p><p>on the same solution to the same sort of selective challenge.[*] Among</p><p>analytical philosophers, the phenomenon is termed overdetermination—</p><p>when two different pathways could each separately determine the</p><p>progression to the same outcome. Implicit in this convergence is a loss of</p><p>information. Plop down in some row in the middle of a cellular automaton,</p><p>and not only can’t you predict what is going to happen, but you can’t know</p><p>what did happen, which possible pathway led to the present state.</p><p>This issue of convergence has a surprising parallel in legal history.</p><p>Thanks to negligence, a fire starts in building A. Nearby, completely</p><p>unrelated, separate negligence gives rise to a fire in building B. The two</p><p>fires spread toward each other and converge, burning down building C in</p><p>the center. The owner of building C sues the other two owners. But which</p><p>negligent person was responsible for the fire? Not me, each would argue in</p><p>court—if my fire hadn’t happened, building C would still have burned</p><p>down. And it worked, in that neither owner would be held responsible. This</p><p>was the state of things until 1927, when the courts ruled in Kingston v.</p><p>Chicago and NW Railroad that it is possible to be partially responsible for</p><p>what happened, for there to be fractions of guilt.[12]</p><p>Similarly, consider a group of soldiers lining up in a firing squad to kill</p><p>someone. No matter how much one is pulling a trigger in glorious</p><p>obedience to God and country, there’s often some ambivalence, perhaps</p><p>some guilt about mowing down someone or worry that fortunes will shift</p><p>and you’ll wind up in front of a firing squad. And for centuries, this gave</p><p>rise to a cognitive manipulation—one soldier at random was given a blank</p><p>rather than a real bullet. No one knew who had it, and thus every shooter</p><p>knew that they might have gotten the blank and thus weren’t actually a</p><p>killer. When lethal injection machines were invented, some states stipulated</p><p>that there’d be two separate delivery routes, each with a syringe full of</p><p>poison. Two people would press each of two buttons, and a randomizer in</p><p>the machine would infuse the poison from one syringe into the person and</p><p>dump the contents of the other into a bucket. And not keep a record of</p><p>which did which. Each person thus knew that they might not have been the</p><p>executor. Those are nice psychological tricks for defusing a sense of</p><p>responsibility.[13]</p><p>Chaoticism pulls for a related type of psychological trick. The feature of</p><p>chaoticism where knowing a starting state doesn’t allow you to predict what</p><p>will happen is a crushing blow to classic reductionism. 
But the inability to</p><p>ever know what happened in the past demolishes what’s called radical</p><p>eliminative reductionism, the ability to rule out every conceivable cause of</p><p>something until you’ve gotten down to the cause.</p><p>So you can’t do radical eliminative reductionism and decide what single</p><p>thing caused the fire, which button presser delivered the poison, or what</p><p>prior state gave rise to a particular chaotic pattern. But that doesn’t mean</p><p>that the fire wasn’t actually caused by anything, that no one shot the bullet-</p><p>riddled prisoner, or that the chaotic state just popped up out of nowhere.</p><p>Ruling out radical eliminative reductionism doesn’t prove indeterminism.</p><p>Obviously. But this is subtly what some free-will supporters conclude—</p><p>if we can’t tell what caused X, then you can’t rule out an indeterminism that</p><p>makes room for free will. As one prominent compatibilist writes, it is</p><p>unlikely that reductionism will rule out the possibilities of free will,</p><p>“because the chain of cause and effect contains breaks of the type that</p><p>undermine radical reductionism and determinism, at least in the form</p><p>required to undermine freedom.” God help me that I’ve gotten to the point</p><p>of examining the split hair of and, but chaotic convergence does not</p><p>undermine radical reductionism and determinism. Just the former. And in</p><p>the view of that writer, this supposed undermining of determinism is</p><p>relevant to “policies upon which we hinge responsibility.” Just because you</p><p>can’t tell which of two towers of turtles propping you up goes all the way</p><p>down doesn’t mean that you’re floating in the air.[14]</p><p>CONCLUSION</p><p>Where have we gotten at this point? The crushing of knee-jerk</p><p>reductionism, the demonstration that chaoticism shows just the opposite of</p><p>chaos, the fact that there’s less randomness than often assumed and, instead,</p><p>unexpected structure and determinism—all of this is wonderful. Ditto for</p><p>butterfly wings, the generation of patterns on sea shells, and Will Darling.</p><p>But to get from there to free will requires that you mistake a failure of</p><p>reductionism that makes it impossible to precisely describe the past or</p><p>predict the future as proof of indeterminism. In the face of complicated</p><p>things, our intuitions beg us to fill up what we don’t understand, even can</p><p>never understand, with mistaken attributions.</p><p>On to our next, related topic.</p><p>T</p><p>7</p><p>A Primer on Emergent Complexity</p><p>he previous two chapters can basically be distilled to the following:</p><p>—“Break it down to its component parts” reductionism doesn’t work for</p><p>understanding some vastly interesting things about us. Instead, in such chaotic</p><p>systems, minuscule differences in starting states amplify enormously in their</p><p>consequences.</p><p>—This nonlinearity makes for fundamental unpredictability, suggesting to many that</p><p>there is an essentialism that defies reductive determinism, meaning that the “there can’t</p><p>be free will because the world is deterministic” stance goes down the drain.</p><p>—Nope. 
Unpredictable is not the same thing as undetermined; reductive determinism is</p><p>not the only kind of determinism; chaotic systems are purely deterministic, shutting</p><p>down that particular angle of proclaiming the existence of free will.</p><p>This chapter focuses on a related domain of amazingness that seems to</p><p>defy determinism. Let’s start with some bricks. Granting ourselves some</p><p>artistic license, they can crawl around on tiny invisible legs. Place one brick</p><p>in a field; it crawls around aimlessly. Two bricks, ditto. A bunch, and some</p><p>start bumping in to each other. When that happens, they interact in boringly</p><p>simple ways—they can settle down next to each other and stay that way, or</p><p>one can crawl up on top of another. That’s all. Now scatter a hundred zillion</p><p>of these identical bricks in this field, and they slowly crawl around, zillions</p><p>sitting next to each other, zillions crawling on top of others . . . and they</p><p>slowly construct the Palace of Versailles. The amazingness is not that, wow,</p><p>something as complicated as Versailles can be built out of simple bricks.[*]</p><p>It’s that once you made a big enough pile of bricks, all those witless little</p><p>building blocks, operating with a few simple rules, without a human in</p><p>sight, assembled themselves into Versailles.</p><p>This is not chaos’s sensitive dependence on initial conditions, where</p><p>these identical building blocks actually all differed when viewed at a high</p><p>magnification, and you then butterflew to Versailles. Instead, put enough of</p><p>the same simple elements together, and they spontaneously self-assemble</p><p>into something flabbergastingly complex, ornate, adaptive, functional, and</p><p>cool. With enough quantity, extraordinary quality just . . . emerges, often</p><p>even unpredictably.[*], [1]</p><p>As it turns out, such emergent complexity occurs in realms</p><p>very pertinent</p><p>to our interests. The vast difference between the pile of gormless, identical</p><p>building blocks and the Versailles they turned themselves into seems to defy</p><p>conventional cause and effect. Our sensible sides think (incorrectly . . .) of</p><p>words like indeterministic. Our less rational sides think of words like</p><p>magic. In either case, the “self” part of self-assembly seems so agentive, so</p><p>rife with “be the palace of bricks that you wish to be,” that dreams of free</p><p>will beckon. An idea that this and the next chapter will try to dispel.</p><p>WHY WE’RE NOT TALKING ABOUT MICHAEL</p><p>JACKSON MOONWALKING</p><p>Let’s start with what wouldn’t count as emergent complexity.</p><p>Put a beefy guy in a faux military uniform carrying a sousaphone in the</p><p>middle of a field. His behavior is simple—he can walk forward, to the left,</p><p>or to the right, and does so randomly. Scatter a bunch of other</p><p>instrumentalists there, and the same thing happens, all randomly moving,</p><p>collectively making no sense. But toss three hundred of them onto the field</p><p>and out of that emerges a giant Michael Jackson moonwalking past the</p><p>fifty-yard line during the halftime performance.[*]</p><p>There are all these interchangeable, fungible marching band marchers</p><p>with the same minuscule repertoire of movements. Why doesn’t this count</p><p>as emergence? Because there’s a master plan. 
Not inside the sousaphonist</p><p>but in the visionary who fasted in the desert, hallucinating pillars of salt</p><p>moonwalking, then returned to the marching band with the Good News.</p><p>This is not emergence.</p><p>Here’s real emergent complexity: Start with one ant. It wanders</p><p>aimlessly on the field. As do ten of them. A hundred interact with vague</p><p>hints of patterns. But put thousands of them together and they form a</p><p>society with job specialization, construct bridges or rafts out of their bodies</p><p>that float for weeks, build flood-proof underground nests with passageways</p><p>paved with leaves, leading to specialized chambers with their own</p><p>microclimates, some suited for farming fungi and others for brood rearing.</p><p>A society that even alters its functions in response to changing</p><p>environmental demands. No blueprint, no blueprint maker.[2]</p><p>What makes for emergent complexity?</p><p>—There is a huge number of ant-like elements, all identical or coming in just a few</p><p>different types.</p><p>—The “ant” has a very small repertoire of things it can do.</p><p>—There are a few simple rules based on chance interactions with immediate neighbors</p><p>(e.g., “walk with this pebble in your little ant mandibles until you bump into another ant</p><p>holding a pebble, in which case, drop yours”). No ant knows more than these few rules,</p><p>and each acts as an autonomous agent.</p><p>—Out of the hugely complicated phenomena this can produce emerge irreducible</p><p>properties that exist only on the collective level (e.g., a single molecule of water cannot</p><p>be wet; “wetness” emerges only from the collectivity of water molecules, and studying</p><p>single water molecules can’t predict much about wetness) and that are self-contained at</p><p>their level of complexity (i.e., you can make accurate predictions about the behavior of</p><p>the collective level without knowing much about the component parts). As summarized</p><p>by Nobel laureate physicist Philip Anderson, “More is different.”[*],[3]</p><p>—These emergent properties are robust and resilient—a waterfall, for example,</p><p>maintains consistent emergent features over time despite the fact that no water molecule</p><p>participates in waterfall-ness more than once.[4]</p><p>—A detailed picture of the maturely emergent system can be (but is not necessarily)</p><p>unpredictable, which should have echoes of the previous two chapters. Knowing the</p><p>starting state and reproduction rules (à la cellular automata) gives you the means to</p><p>develop the complexity but not the means to describe it. Or, to use a word offered by a</p><p>leading developmental neurobiologist of the past century, Paul Weiss, the starting state</p><p>can never contain an “itinerary.”[*],[5]</p><p>—Part of this unpredictability is due to the fact that in emergent systems, the road you</p><p>are traveling on is being constructed at the same time and, in fact, your being on it is</p><p>influencing the construction process by constituting feedback on the road-making</p><p>process.[*] Moreover, the goal you are traveling toward may not even exist yet—you are</p><p>destined to interact with a target spot that may not exist yet but, with any luck, will be</p><p>constructed in time. 
In addition, unlike last chapter’s cellular automata, emergent</p><p>systems are also subject to randomness (jargon: “stochastic events”), where the sequence</p><p>of random events makes a difference.[*]</p><p>—Often the emergent properties can be breathtakingly adaptive and, despite that, there’s</p><p>no blueprint or blueprint maker.[6]</p><p>Here’s a simple version of the adaptiveness: Two bees leave their hive,</p><p>each flying randomly until finding a food source. They both do, with one</p><p>source being better. Each returns to the hive, neither bee knowing anything</p><p>about both food sources. Nonetheless, all the bees fly straight to the better</p><p>site.</p><p>Here’s a more complex example: An ant forages for food, checking eight</p><p>different places. Little ant legs get tired, and ideally the ant visits each site</p><p>only once, and in the shortest possible path of the 5,040 possible ones (i.e.,</p><p>seven factorial). This is a version of the famed “traveling salesman</p><p>problem,” which has kept mathematicians busy for centuries, fruitlessly</p><p>searching for a general solution. One strategy for solving the problem is</p><p>with brute force—examine every possible route, compare them all, and pick</p><p>the best one. This takes a ton of work and computational power—by the</p><p>time you’re up to ten places to visit, there are more than 360,000 possible</p><p>ways to do it, more than 80 billion with fifteen places to visit. Impossible.</p><p>But take the roughly ten thousand ants in a typical colony, set them loose on</p><p>the eight-feeding-site version, and they’ll come up with something close to</p><p>the optimal solution out of the 5,040 possibilities in a fraction of the time it</p><p>would take you to brute-force it, with no ant knowing anything more than</p><p>the path that it took plus two rules (which we’ll get to). This works so well</p><p>that computer scientists can solve problems like this with “virtual ants,”</p><p>making use of what is now known as swarm intelligence.[*], [7]</p><p>There’s the same adaptiveness in the nervous system. Take a</p><p>microscopic worm that neurobiologists love;[*] the wiring of its neurons</p><p>shows close to traveling-salesman optimization, in terms of the cost of</p><p>wiring them all up; same in the nervous system of flies. And in primate</p><p>brains as well; examine the primate cortex, identify eleven different regions</p><p>that wire up with each other. And of several million possible ways of doing</p><p>it, the developing brain finds the optimal solution. As we’ll see, in all these</p><p>cases, this is accomplished with rules that are conceptually similar to what</p><p>the traveling-salesmen ants do.[8]</p><p>Other types of adaptiveness also abound. A neuron “wants” to spread its</p><p>array of thousands of dendritic branches as efficiently as possible for</p><p>receiving inputs from other neurons, even competing with neighboring</p><p>cells. Your circulatory system “wants” to spread its thousands of branching</p><p>arteries as efficiently as possible in delivering blood to every cell in the</p><p>body. A tree “wants” to branch skyward most efficiently to maximize the</p><p>sunlight its leaves are exposed to. And as we’ll see, all three solve the</p><p>challenge with similar emergent rules.[9]</p><p>How can this be? 
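As a quick check on the brute-force arithmetic quoted a few paragraphs back, here is a one-loop sketch; it assumes the text's convention that n feeding sites leave (n - 1)! possible orderings to compare.

```python
from math import factorial

# Route counts for the foraging version of the traveling salesman problem:
# n sites to visit, (n - 1)! possible orderings to compare by brute force.
for n_sites in (8, 10, 15):
    print(f"{n_sites:>2} sites -> {factorial(n_sites - 1):>14,} routes")
# 8 sites  -> 5,040 routes
# 10 sites -> 362,880 routes
# 15 sites -> 87,178,291,200 routes (the "more than 80 billion")
```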
Time to look at examples of how emergence actually</p><p>emerges, using simple rules that work in similar ways in solving</p><p>optimization challenges for, among other things, ants, slime molds, neurons,</p><p>humans, and societies. This process will easily dispose of the first</p><p>temptation: to decide that emergence demonstrates indeterminacy. Same</p><p>answer as in the last chapter—unpredictable is not the same thing as</p><p>undetermined. Disposing of the second temptation is going to be more</p><p>challenging.</p><p>INFORMATIVE SCOUTS FOLLOWED BY RANDOM</p><p>ENCOUNTERS</p><p>Many examples</p><p>of emergence involve a motif that requires two simple</p><p>phases. In the first, “scouts” in a population explore an environment; when</p><p>they find some resource, they broadcast the news.[*] The broadcast must</p><p>include information about the quality of the resource, such as better</p><p>resources producing louder or longer signals. In the second phase, other</p><p>individuals wander randomly in their environment with a simple rule</p><p>regarding their response to the broadcast.</p><p>Back to honey bees as an example. Two bee scouts check out the</p><p>neighborhood for possible food sources. They each find one, come back to</p><p>the hive to report; they broadcast their news by way of the famed bee</p><p>waggle dance, where the features of the dance communicate the direction</p><p>and distance of the food. Crucially, the better the food source a scout found,</p><p>the longer it carries out one part of the dance—this is how quality is being</p><p>broadcast.[*] As the second phase, other bees wander about randomly in the</p><p>hive, and if they bump into a dancing scout, they fly away to check out the</p><p>food source the scout is broadcasting about . . . and then return to dance the</p><p>news as well. And because a better potential site = longer dancing, it’s more</p><p>likely that one of those random bees bumps into the great-news bee than the</p><p>good-news one. Which increases the odds that soon there will be two great-</p><p>news dancers, then four, then eight . . . until the entire colony converges on</p><p>going to the optimal site. And the original good-news scout will have long</p><p>since stopped dancing, bumped into a great-news dancer, and been recruited</p><p>to the optimal solution. Note—there is no decision-making bee that gets</p><p>information about both sites, compares the two options, picks the better</p><p>one, and leads everyone to it. Instead, longer dancing recruits bees that will</p><p>dance longer, and the comparison and optimal choice emerge implicitly;</p><p>this is the essence of swarm intelligence.[10]</p><p>Similarly, suppose the two scout bees discover two potential sites that</p><p>are equally good, but one is half as far from the hive as the other one. It will</p><p>take the local-news bee half the time to get to and back from its food source</p><p>that it takes the distant-news bee—meaning that the two, four, eight</p><p>doubling starts sooner, exponentially swamping the signal of distant-news</p><p>bee. Everyone soon heads to the closer source. Ants find the optimal site for</p><p>a new colony this way. Scouts go out, and each finds a possible site; the</p><p>better the site, the longer they stay there. Then the random wanderers</p><p>spread out with the rule that if you bump into an ant standing at a possible</p><p>site, maybe check the site out. 
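The bee version of this two-phase motif is easy to caricature in code. Here is a minimal sketch, with invented numbers for the colony size, the dance lengths, and the odds of a chance encounter; only the logic comes from the description above (better site, longer dance, more chance encounters, more dancers), and, unlike real scouts, a committed bee here never switches sites.

```python
import random

# Scout-broadcast / random-encounter motif, caricatured.
# All parameters are invented for illustration.
random.seed(1)

DANCE_ROUNDS = {"good": 3, "great": 9}   # dance length broadcasts site quality

def simulate(n_bees=200, rounds=200, p_meet=0.001):
    committed = [None] * n_bees   # which site, if any, each bee is sold on
    dancing = [0] * n_bees        # rounds of dancing left for each bee
    committed[0], dancing[0] = "good", DANCE_ROUNDS["good"]    # scout 1 reports back
    committed[1], dancing[1] = "great", DANCE_ROUNDS["great"]  # scout 2 reports back
    for _ in range(rounds):
        dancers = [i for i in range(n_bees) if dancing[i] > 0]
        for i in range(n_bees):
            if committed[i] is None and dancers:
                # the more bees currently dancing, the likelier a chance bump
                if random.random() < p_meet * len(dancers):
                    met = random.choice(dancers)
                    committed[i] = committed[met]            # go check out that site...
                    dancing[i] = DANCE_ROUNDS[committed[i]]  # ...then dance for it in turn
        dancing = [d - 1 if d > 0 else 0 for d in dancing]
    return committed

result = simulate()
print("recruited to good site: ", result.count("good"))
print("recruited to great site:", result.count("great"))
# No bee ever compares the two sites; the longer dance simply snowballs,
# and the colony typically converges on the better ("great") site.
```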
Once again, better quality translates into a</p><p>stronger recruitment signal, which becomes self-reinforcing. Work by my</p><p>pioneering colleague Deborah Gordon shows an additional layer of</p><p>adaptiveness. A system like this has various parameters—how far do ants</p><p>wander, how much longer do you stay at a good site versus a mediocre one,</p><p>and so on. She shows that these parameters vary in different ecosystems as</p><p>a function of how abundant food sources are, how patchily they are</p><p>distributed, and how costly foraging is (for example, foraging is more</p><p>expensive, in terms of water loss, for desert ants than for forest ants); the</p><p>better a colony has evolved to get these parameters just right for its</p><p>particular environment, the more likely it is to survive and leave</p><p>descendants.[*],[*],[11]</p><p>The two steps of scout broadcasters followed by recruitment of random</p><p>wanderers explains virtual ant traveling-salesman optimization. Place a</p><p>bunch of ants at each of the virtual foraging sites; each ant then picks a</p><p>route at random that involves visiting each site once, and leaves a</p><p>pheromone trail in the process.[*] How does better quality translate into a</p><p>stronger broadcast? The shorter the route, the thicker the pheromone trail</p><p>that is laid down by a scout; pheromones evaporate, and thus shorter,</p><p>thicker pheromone trails last longer. A second generation of ants shows up;</p><p>they wander randomly, with the rule that if they encounter a pheromone</p><p>trail, they join it, adding their own pheromones. As a result, the thicker and</p><p>therefore longer-lasting the trail, the more likely another ant is to join it and</p><p>amplify its recruiting message. And soon the less efficient routes for</p><p>connecting the sites evaporate away, leaving the optimized solution. No</p><p>need to gather data about the length of every possible route and have a</p><p>centralized authority compare them and then direct everyone to the best</p><p>solution. Instead, something that comes close to the optimal solution</p><p>emerges on its own.[*]</p><p>(Something worth pointing out: As we’ll see, these rich-get-richer</p><p>recruitment algorithms explain optimized behavior in us as well, along with</p><p>other species. But “optimal” is not meant in the value-laden sense of</p><p>“good.” Just consider rich-get-richer scenarios where, thanks to the</p><p>recruitment signaling of economic inequality, it’s literally the rich who get</p><p>richer.)</p><p>Next we turn to how emergence helps slime molds solve problems.</p><p>Slime molds are these slimy, moldy, fungal, amoeboid, single-cell</p><p>protists, just to make a bunch of taxonomic errors, that grow and spread like</p><p>a carpet over surfaces, looking for microorganisms to eat.</p><p>In a slime mold, zillions of single-cell amoebas have joined forces by</p><p>merging into a giant, cooperative single cell that oozes over surfaces in</p><p>search of food, apparently an efficient food-hunting strategy[*] (and as a</p><p>hint of the emergence pending, a single, independent slime mold cell can no</p><p>more ooze than a molecule of water can be wet). What used to be the</p><p>individual cells are interconnected by tubules that can stretch or contract,</p><p>depending on the direction of oozing (see figure on the next page).</p><p>Out of these collectivities emerge problem-solving capabilities. 
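Before following the slime mold into its maze, the virtual-ant scheme from a few paragraphs back deserves a sketch of its own, since it is the most explicitly algorithmic version of the motif. This is a toy in the general spirit of "virtual ants" rather than the exact setup described above: the eight site coordinates, the number of ants, the deposit and evaporation rates are all invented, and each ant here picks its next stop probabilistically, favoring short, heavily marked edges.

```python
import itertools, math, random

# Toy "virtual ants" for an eight-site foraging problem.
# Shorter tours lay more pheromone, pheromone evaporates, and later ants
# are biased toward short, heavily marked edges. All numbers are invented.
random.seed(2)
SITES = [(0, 0), (1, 5), (3, 1), (4, 6), (6, 2), (7, 7), (9, 3), (10, 8)]

def dist(a, b):
    return math.dist(SITES[a], SITES[b])

def tour_length(order):
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

def one_ant(pheromone):
    """Build a single tour from site 0, step by step, edge by edge."""
    unvisited, tour = list(range(1, len(SITES))), [0]
    while unvisited:
        here = tour[-1]
        weights = [pheromone[frozenset((here, s))] / dist(here, s) for s in unvisited]
        nxt = random.choices(unvisited, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def colony(n_ants=200, evaporation=0.1):
    pheromone = {frozenset(e): 1.0
                 for e in itertools.combinations(range(len(SITES)), 2)}
    best, best_len = None, float("inf")
    for _ in range(n_ants):
        tour = one_ant(pheromone)
        length = tour_length(tour)
        if length < best_len:
            best, best_len = tour, length
        for edge in pheromone:                    # old trails fade away...
            pheromone[edge] *= 1 - evaporation
        for i in range(len(tour) - 1):            # ...and short tours mark more
            pheromone[frozenset((tour[i], tour[i + 1]))] += 10.0 / length
    return best, best_len

ant_tour, ant_len = colony()
# Compare against brute force over all 5,040 orderings, as in the text.
brute = min(((0,) + p for p in itertools.permutations(range(1, len(SITES)))),
            key=tour_length)
print("virtual ants:", ant_tour, round(ant_len, 2))
print("brute force: ", list(brute), round(tour_length(brute), 2))
# No ant ever compares routes; the shorter tours simply persist in the
# pheromone record, and the colony typically lands at or near the optimum.
```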
Spritz a</p><p>dollop of slime mold into a little plastic well that leads to two corridors, one</p><p>with an oat flake at the end, the other with two oat flakes (beloved by slime</p><p>molds). Rather than sending out scouts, the entire slime mold expands to fill</p><p>both corridors, reaching both food sources. And within a few hours, the</p><p>slime mold retracts from the one–oat flake corridor and accumulates around</p><p>the two oats. Have two pathways of differing lengths leading to the same</p><p>food source; the slime mold initially fills both paths but eventually takes</p><p>only the shortest route. Same with a maze with multiple routes and dead</p><p>ends.[*],[12]</p><p>Initially, the slime mold fills every path (panel a); it then begins retracting from</p><p>superfluous paths (panel b), until eventually reaching the optimal solution (panel c).</p><p>(Ignore the various markings.)</p><p>As the tour de force of slime mold intelligence, Atsushi Tero at</p><p>Hokkaido University plopped a slime mold down into a strangely shaped</p><p>walled-off area with oat flakes at very specific locations. Initially, the mold</p><p>expanded, forming tubules connecting all the food sources to each other in</p><p>multiple ways. Eventually, most tubules retracted, leaving something close</p><p>to the shortest total path length of tubules connecting food sources. The</p><p>Traveling Slime Mold. Here’s the thing that makes the audience shout for</p><p>more—the wall outlines the coastline around Tokyo; the slime was plopped</p><p>onto where Tokyo would be, and the oat flakes corresponded to the</p><p>suburban train stations situated around Tokyo. And out of the slime mold</p><p>emerged a pattern of tubule linkages that was statistically similar to the</p><p>actual train lines linking those stations. A slime mold without a neuron to</p><p>its name, versus teams of urban planners.[13]</p><p>How do slime molds pull this off? A lot like ants and bees. Take the two</p><p>corridors leading to either one or two oat flakes. The slime mold initially</p><p>oozes into both corridors, and when food is found, tubules contract in the</p><p>direction of the food, pulling the rest of the slime mold toward it. Crucially,</p><p>the better the food source, the greater the contractile force generated on the</p><p>tubules. Then the tubules a bit farther away dissipate the force by</p><p>contracting</p><p>in the same orientation, increasing the force of contraction,</p><p>spreading outward until the whole slime mold has been pulled into the</p><p>optimal pathway. No part of the slime mold compares the two options and</p><p>makes a decision. Instead, the slime mold extensions into the two corridors</p><p>act as scouts, with the better route broadcast in a way that causes rich-get-</p><p>richer recruiting via mechanical forces.[14]</p><p>Now let’s consider a growing neuron. It extends a projection that has</p><p>branched into two scout arms (“growth cones”) heading toward two</p><p>neurons. Simplifying brain development to a single mechanism, each target</p><p>neuron is attracting the growth cone by secreting a gradient of “attractant”</p><p>molecules. One target is “better,” thus secreting more of the attractant,</p><p>resulting in a growth cone reaching it first—which causes a tubule inside</p><p>that growing neuron’s projection to bend in that direction, to be attracted to</p><p>that direction. Which makes the parallel tubule adjacent to it more likely to</p><p>do the same. 
Which increases the mechanical forces recruiting more and</p><p>more of these tubules. The other scout arm is retracted, and our growing</p><p>neuron has connected up with the better target.[*], [15]</p><p>Let’s look at our ant / bee / slime mold motif as applied to the</p><p>developing brain forming the cortex, the fanciest, most recently evolved</p><p>part of the brain.</p><p>The cortex is a six-layer-thick blanket over the surface of the brain, and</p><p>cut into cross section, each layer consists of different types of neurons (see</p><p>figure on the next page).</p><p>The multilayered architecture has lots to do with cortical function. In the</p><p>picture, think of that slab of cortex as being divided into six vertical</p><p>columns (best seen as the six dense clusters of neurons at the level of the</p><p>arrow). The neurons within any of these mini columns send lots of vertical</p><p>projections (i.e., axons) to each other, collectively working as a unit; for</p><p>example, in the visual cortex, one mini column might decode the meaning</p><p>of light falling on one spot of the retina, with the mini column next to it</p><p>decoding light on an adjacent spot.[*]</p><p>It’s ants redux in building a cortex. The first step in cortical development</p><p>is when a layer of cells at the bottom of each cross section of cortex sends</p><p>long, straight projections to the surface, serving as vertical scaffolding.</p><p>These are our ant scouts, called radial glia (ignore the letters in the diagram</p><p>on the next page). There is initially an excess of them, and the ones that</p><p>have blazed the less optimal, less direct paths are eliminated (through a</p><p>controlled type of cell death). As such, we have our first generation of</p><p>explorers, with the ones with the more optimal solution to cortex building</p><p>persisting longer.[16]</p><p>Radial glia radiating outward from the center of a cross section</p><p>You know what’s coming next. Newly born neurons wander randomly at</p><p>the base of the cortex until they bump into a radial glia. They then migrate</p><p>upward along the glial guide rail, leaving behind chemoattractant signals</p><p>that recruit more newbies to join the soon-to-be mini column.[*],[17]</p><p>Scouts, quality-dependent broadcasting, and rich-get-richer recruiting,</p><p>from insects and slime molds to your brain. All without a master plan, or</p><p>constituent parts knowing anything beyond their immediate neighborhood,</p><p>or any component comparing options and choosing the best one. With</p><p>remarkable prescience about these ideas in 1874, the biologist Thomas</p><p>Huxley wrote about the mechanistic nature of organisms, such that they</p><p>“only simulate intelligence as a bee simulates a mathematician.”[18]</p><p>Time for another motif in emergent systems.</p><p>FITTING INFINITELY LARGE THINGS INTO</p><p>INFINITELY SMALL SPACES</p><p>Consider the figure below. The top row consists of a single straight line.</p><p>Remove its middle third, producing the two lines that constitute the second</p><p>row; the length of those two together is two thirds the length of the original</p><p>line. Remove the middle third from each of those, producing four lines that,</p><p>collectively, are four ninths the total length of the original line. Do this</p><p>forever, and you generate something that seems impossible—an infinitely</p><p>large number of specks that have an infinitely short cumulative length.</p><p>Let’s do the same thing in two dimensions (below). 
Take an equilateral</p><p>triangle (#1). Generate another equilateral triangle on each face, using the</p><p>middle third as the base for the new triangle, resulting in a six-pointed star</p><p>(#2). Do the same to each of those points, producing an eighteen-pointed</p><p>star (#3), then a fifty-four-pointed star (#4), over and over. Do this forever</p><p>and you’ll generate a two-dimensional version of the same impossibility,</p><p>namely a shape whose increase in area from one iteration to the next is</p><p>infinitely small, while its perimeter is infinitely long:</p><p>Now three dimensions. Take a cube. Each of its faces can be thought of</p><p>as being a three-by-three grid of nine boxes. Take out the middle-most of</p><p>those nine boxes, leaving eight:</p><p>Now think of each of those remaining eight as a three-by-three grid, and</p><p>take out the middle-most box. Repeat that process forever, on all six faces</p><p>of the cube. And the impossibility achieved when you reach infinity is a</p><p>cube with infinitely small volume but infinitely large surface area (see</p><p>figure on the next page).</p><p>These are, respectively, called a Cantor set, a Koch snowflake, and a</p><p>Menger sponge. These are mainstays of fractal geometry, where you iterate</p><p>the same operation over and over, eventually producing something</p><p>impossible in traditional geometry.[19]</p><p>Which helps explain something about your circulatory system. Each cell</p><p>in your body is at most only a few cells away from a capillary, and the</p><p>circulatory system accomplishes this by growing around forty-eight</p><p>thousand miles of capillaries in an adult. Yet that ridiculously large number</p><p>of miles takes up only about 3 percent of the volume of your body. From</p><p>the perspective of real bodies in the real world, this begins to approach the</p><p>circulatory system being everywhere, infinitely present, while taking up an</p><p>infinitely small amount of space.[20]</p><p>Branching patterns in capillary beds</p><p>A neuron has a similar challenge, in that it wants to send out a tangle of</p><p>dendritic branches that can accommodate inputs at ten thousand to fifty</p><p>thousand synapses, all with the dendritic “tree” taking up as little space as</p><p>possible and costing as little as possible to construct:</p><p>A classic textbook drawing of an actual neuron</p><p>And of course, there are trees, forming real branches to generate the</p><p>maximal amount of surface area for foliage to absorb sunlight, while</p><p>minimizing the costs of growing it all.</p><p>The similarities and underlying mechanisms would be obvious to Cantor,</p><p>Koch, or Menger,[*] namely iterative bifurcation—something grows a</p><p>distance and splits in two; those two branches grow some distance and each</p><p>splits in two; those four branches . . . over and over, going from the aorta</p><p>down to forty-eight thousand miles of capillaries, from the first dendritic</p><p>branch in a neuron to two hundred thousand dendritic spines, from a tree</p><p>trunk to something like fifty thousand leafy branch tips.</p><p>How are bifurcating structures like these generated in biological</p><p>systems, on scales ranging from a single cell to a massive tree? Well, I’ll</p><p>tell you one way it doesn’t happen, which is to have specific instructions for</p><p>each bifurcation. In order to generate a bifurcating tree with 16 branch tips,</p><p>you have to generate 15 separate branching events. For 64 tips, 63</p><p>branchings. 
For 10,000 dendritic spines in a neuron, 9,999 branchings. You</p><p>can’t have one gene dedicated to overseeing each of those branching events,</p><p>because you’ll run out of genes (we only have about twenty thousand).</p><p>Moreover, as pointed out by Hiesinger, building a structure this way</p><p>requires a blueprint as complicated as the structure itself, raising the turtles</p><p>question: How is the blueprint generated, and how is the blueprint that</p><p>generated that blueprint generated . . . ? And it’s these sorts of</p><p>problems</p><p>writ large and larger for the circulatory system and for actual trees.</p><p>Instead, you need instructions that work the same way at every scale of</p><p>magnification. Scale-free instructions like this:</p><p>Step #1. Start with a tube of diameter Z (a tube because geometrically, a blood vessel</p><p>branch, a dendritic branch, and a tree branch can all be thought of that way).</p><p>Step #2. Extend that tube until it is, to pull a number out of a hat, four times longer than</p><p>its diameter (i.e., 4Z).</p><p>Step #3. At that point, the tube bifurcates, splits in two. Repeat.</p><p>This produces two tubes, each with a diameter of 1/2Z. And when those</p><p>two tubes are four times longer than that diameter (i.e., 2Z), they split in</p><p>two, producing four branches, each 1/4Z diameter, which will split in two</p><p>when each is 1Z (see figure on the following page).</p><p>While a mature tree sure seems immensely complex, the idealized</p><p>coding for it can be compressed into three instructions requiring only a</p><p>handful of genes to pull this off, rather than half your genome.[*] You can</p><p>even have the effects of those genes interact with the environment. Say</p><p>you’re a fetus inside someone living at high altitude, with low levels of</p><p>oxygen in the air and thus in your fetal circulation. This triggers an</p><p>epigenetic change (back to chapter 3) so that tubes in your circulation grow</p><p>only 3.9 times the width, instead of 4.0, before splitting. This will produce a</p><p>bushier spread of capillaries (I’m not sure if that would solve the high-</p><p>altitude problem—I’m making this up).[*]</p><p>So you can do this with just a handful of genes that can even interact</p><p>with the environment. But let’s turn this into the reality of real biological</p><p>tubes and what genes actually do. How can your genes code for something</p><p>abstract like “grow four times the diameter and then split, regardless of</p><p>scale”?</p><p>Various models have been proposed; here’s a totally beautiful one. Let’s</p><p>consider a fetal neuron that is about to generate a bifurcating tree of</p><p>dendrites (although this could be any of the other bifurcating systems we’ve</p><p>been covering). We start with a stretch of the neuron’s surface membrane</p><p>that is destined to be where the tree starts growing (see figure below, left).</p><p>Note that in this very artificial version, the membrane is made of two layers,</p><p>and in between the layers is some Growth Stuff (hatched), coded for by a</p><p>gene. The Growth Stuff triggers the area of the neuron just below to start</p><p>constructing a trunk that will rise from there (right):[21]</p><p>How much Growth Stuff was there at the beginning? 4Zs’ worth, which</p><p>will make the trunk grow 4Z in length before stopping. 
Why does it stop?</p><p>Critically, the inner layer of the growing front of the neuron grows a little</p><p>faster than the outer layer, such that right around a length of 4Z, the inner</p><p>layer touches the outer layer, splitting the pool of Growth Stuff in half. No</p><p>more Growth Stuff in the tip; things stop at 4Z. But crucially, there’s now</p><p>2Zs’ worth of Growth Stuff pooled on each side of the tip of the trunk (left).</p><p>Which triggers the area underneath to start growing (right):</p><p>Because these two branches are narrower, the inner layers touch the</p><p>outer layers after a length of only 2Z (below left), which splits the Growth</p><p>Stuff into four pools, each with 1Z’s worth. And so on (below right).[*],[22]</p><p>The key to this “diffusion-based geometry” model is the speed of growth</p><p>of the two layers differing. Conceptually, the outer layer is about growing,</p><p>the inner about stopping growing. Numerous other models produce</p><p>bifurcations just as emergently, with similar themes.[*] Wonderfully, two</p><p>genes, coding for molecules with growth and stopping-growth properties,</p><p>respectively, have been identified that are central to bifurcation in the</p><p>developing lung.[*],[23]</p><p>And the intensely cool thing is that these very different physiological</p><p>systems—neurons, blood vessels, the pulmonary system, and lymph nodes</p><p>—use some of the same genes, coding for the same proteins in the</p><p>construction process (a menagerie of proteins such as VEGF, ephrins,</p><p>netrins, and semaphorins). These are not genes used for, say, generating the</p><p>circulatory system. These are genes for generating bifurcating systems,</p><p>applicable to one single neuron and to vascular and pulmonary systems</p><p>using billions of cells.[24]</p><p>Aficionados will recognize that these bifurcating systems all form</p><p>fractals, where the relative degree of complexity is constant, no matter at</p><p>what scale of magnification you are considering the system (with the</p><p>recognition that unlike the fractals of mathematics, fractals in the body</p><p>don’t bifurcate forever—physical reality asserts itself at some point). We’re</p><p>now in very strange terrain, having to consider the molecules of the sort</p><p>mentioned in the previous paragraph being coded for by “fractal genes.”</p><p>Which means that there must be fractal mutations, disrupting normal</p><p>branching in everything from single neurons to entire organ systems; there</p><p>are some hints of these out there.[25]</p><p>These principles apply to nonbiological complexity as well—for</p><p>example, why rivers emptying into the sea bifurcate into river deltas. And it</p><p>even applies to cultures. Let’s consider one last emergent bifurcating tree,</p><p>one that shows either the deeply abstract ubiquity of the phenomenon or</p><p>how I’m running too far with a metaphor.</p><p>Look at the intensely bifurcated diagram below; don’t worry about what</p><p>the branch tips are—just note the branchings all over the place.</p><p>What is this tree? The perimeter represents the present. Each ring</p><p>represents one hundred years back into the past, reaching the year 0 AD at</p><p>the center, with a trunk going back millennia from there. And the branching</p><p>pattern? The history of the emergence of earth’s religions—a mass of</p><p>bifurcations, trifurcations, dead-end side branches, and so on. 
A partial</p><p>magnification:[26]</p><p>One tiny piece of the history of religious branching</p><p>What constitutes the diameter of each “tube” in this emergent history of</p><p>religions? Maybe measures of the intensity of religious belief—the number</p><p>of adherents, their cultural homogeneity, their collective wealth or power.</p><p>The wider the diameter, the longer the tube is likely to persist before</p><p>destabilizing, but in a scale-free way.[*] Would this be adaptive, in the same</p><p>sense as analyzing, say, bifurcating blood vessels? I think that right around</p><p>now, I should recognize that I’m on thin speculative ice and call it a day.</p><p>What has this section provided us? The same themes as in the prior</p><p>section about pathfinding ants, slime molds, and neurons—simple rules</p><p>about how components of a system interact locally, repeated a huge number</p><p>of times with huge numbers of those components, and out emerges</p><p>optimized complexity. All without centralized authorities comparing the</p><p>options and making freely chosen decisions.[*]</p><p>LET’S DESIGN A TOWN</p><p>You’re on the planning board for a new town, and after endless meetings,</p><p>you’ve collectively decided where it will be built, how big it will be.</p><p>You’ve laid out a grid of the streets, decided on locations for the schools,</p><p>hospitals, and bowling alleys. Time now to figure out where the stores will</p><p>go.</p><p>The Stores Committee first proposes that stores be randomly scattered</p><p>throughout town. Uh, that’s not ideal; people want stores conveniently</p><p>clustered. Right, says the committee, and then proposes that all the stores be</p><p>in a single cluster in the middle of town.</p><p>Uh, not quite right either. With this single cluster, there won’t be</p><p>convenient parking, and the stores in the center of this megamall will be so</p><p>inaccessible that they’ll go out of business—they’ll die from some</p><p>commercial equivalent of insufficient oxygen.</p><p>Next plan—have six malls of the same size, set equal distances from</p><p>each other. That’s good, but someone notices that all dozen coffee shops are</p><p>in the same mall; these shops will drive each other out of business, while</p><p>five malls will have no coffee shops.</p><p>Back to planning, paying attention now not just to “store-ness” but to the</p><p>type of store. In each mall,</p><p>one pharmacy, one market, two coffee shops.</p><p>Consider interactions between different types of stores. Separate the candy</p><p>shop and the dentist. The optometrist goes next to the bookstore. Get the</p><p>correct ratio of places for sinning—a gelato shop, a bar—to those for</p><p>repenting—a fitness center, a church. And whatever you do, don’t put the</p><p>store selling “God Bless America” sweatshirts next to the store selling</p><p>“God-Less America” ones.</p><p>Once that is implemented, there’s one last step, which is building major</p><p>thoroughfares that connect the malls to each other.</p><p>At last, the commercial districts in your town are planned, after all these</p><p>urban planning meetings filled with individuals with differing expertise,</p><p>careerism, personal agendas, cooperation taking a hit because one person</p><p>resents another for taking the last doughnut.</p><p>Take a beaker full of neurons. They’re newly born, so no axons or</p><p>dendrites yet, just rounded-up little cells destined for glory. 
Pour the</p><p>contents into a petri dish filled with a soup of nutrients that keep neurons</p><p>happy. The cells are now randomly scattered everywhere. Go away for a</p><p>few days, come back, look at those neurons under a microscope, and this is</p><p>what you see:</p><p>A bunch of neurons in a mall, er, I mean clumped together; to the far</p><p>right is the start of another cluster of cell bodies, with major thoroughfares</p><p>of projections linking the two, as well as to distant clusters outside the</p><p>picture.</p><p>No committee, no planning, no experts, no choices freely taken. Just the</p><p>same pattern as for the planned town, emerging from some simple rules:</p><p>—Each neuron that has been thrown randomly into the soup secretes a chemoattractant</p><p>signal; they’re all trying to get the others to migrate to them. Two neurons happen to be</p><p>closer than average to each other by chance, and they wind up being the first pair to be</p><p>clumped together in their neighborhood. This doubles the power of the attractant signal</p><p>emanating from there, making it more likely that they’ll attract a third neuron, then a</p><p>fourth . . . Thus, through a rich-get-richer scenario, this forms a nidus, the starting point</p><p>of a local cluster growing outward. Growing aggregates like these are scattered</p><p>throughout the neighborhood.</p><p>—Each clump of neurons reaches a certain size, at which point the chemoattractant stops</p><p>working. How would that work? Here’s one mechanism—as a ball of clumping neurons</p><p>gets bigger, the ones in the center are getting less oxygen, triggering them to start</p><p>secreting a molecule that inactivates chemoattractant molecules.</p><p>—All along, neurons have been secreting a second type of attractant signal in minuscule</p><p>amounts. It’s only when enough neurons have migrated into an optimally sized cluster</p><p>that there is collectively enough of the stuff to prompt the neurons in the cluster to start</p><p>forming dendrites, axons, and synapses with each other.</p><p>—Once this local network is wired up (detectable by, say, a certain density of synapses),</p><p>a chemorepellent is secreted, which now causes neurons to stop making connections to</p><p>their neighbors, and to instead start sending long projections to other clusters, following</p><p>a chemoattractant gradient to get there, forming the thoroughfares between clusters.[*]</p><p>This is a motif of how complex, adaptive systems, like neuronal</p><p>shopping malls, can emerge thanks to control over space and time of</p><p>attractant and repellent signals. This is the fundamental yin/yang polarity of</p><p>chemistry and biology—magnets attracting or repelling each other,</p><p>positively charged or negatively charged ions, amino acids attracted to or</p><p>repelled by water.[*] Long strings of amino acids form proteins, each with a</p><p>distinctive shape (and therefore function) that represents the most stable</p><p>formation for balancing the various attraction and repulsion forces.[*]</p><p>As just shown, constructing neuronal shopping malls in the developing</p><p>brain entailed two different types of attractant signals and one repellent one.</p><p>And things get fancier: Have a variety of attractant and repellent signals</p><p>that work individually or in combinations. Have emergent rules for which</p><p>part of a neuron a growing neuron forms a connection with. 
Have growth</p><p>cones with receptors that respond to only a subset of attractant or repellent</p><p>signals. Have an attractant signal pulling a growth cone toward it; however,</p><p>when it gets close, the attractant starts working as a repellent; as a result,</p><p>the growth cone swoops past—it’s how neurons make long-distance</p><p>projections, doing flybys of one signpost after another.[27]</p><p>Most neurobiologists spend their time figuring out minutiae like, say, the</p><p>structure of a particular receptor for a particular attractant signal. And then</p><p>there are those marching superbly to their own drummer, like Robin</p><p>Hiesinger, quoted earlier, who studies how brains develop with simple,</p><p>emergent informational rules like we’ve been looking at. Hiesinger, whose</p><p>review papers have puckish section titles like “The Simple Rules That</p><p>Can,” has shown things like the three simple rules needed for neurons in the</p><p>eye of a fly to wire up correctly. Simple rules about the duality of attraction</p><p>and repulsion, and no blueprints.[*] Time now for one last style of emergent</p><p>patterning.[28]</p><p>TALK LOCALLY, BUT DON’T FORGET TO ALSO</p><p>TALK GLOBALLY NOW AND THEN</p><p>Suppose you live in a thoroughly odd community. There is a total of 101</p><p>people in it, each in their own house. The houses are arranged in a straight</p><p>line, say, along a river. You live in the first house of this 101-house-long</p><p>line; how often do you interact with each of your 100 neighbors?</p><p>There are all sorts of potential ways. Maybe you talk only to your next-</p><p>door neighbor (figure A). Maybe, as a contrarian, you interact only with the</p><p>neighbor the farthest from you (figure B). Maybe the same amount with</p><p>each person (figure C), maybe randomly (figure D). Maybe you interact the</p><p>most with your immediate neighbor, X percent less with the neighbor after</p><p>that, and X percent of that less with the neighbor after that, decreasing at a</p><p>constant rate (figure E).</p><p>Then there’s a particularly interesting distribution where around 80</p><p>percent of your interactions occur with the twenty closest neighbors and the</p><p>remainder spread out across everyone else, with interactions a little less</p><p>likely with each step farther out (figure F).</p><p>This is the 80:20 rule—approximately 80 percent of interactions occur</p><p>among approximately 20 percent of the population. In the commercial</p><p>world, it’s sardonically stated as 80 percent of complaints come from 20</p><p>percent of the customers. Eighty percent of crime is caused by 20 percent of</p><p>the criminals. Eighty percent of the company’s work is due to the efforts of</p><p>20 percent of the employees. 
In the early days of the pandemic, a large</p><p>majority of COVID-19 infections were caused by the small subset of</p><p>infected super-spreaders.[29]</p><p>The 80:20 descriptor captures the spirit of what is known as a Pareto</p><p>distribution, of a type mathematicians call a “power law.” While it is</p><p>formally defined by features of the curve, it’s easiest to understand in plain</p><p>English: a power-law distribution is when the substantial majority of</p><p>interactions are very local, with a steep drop-off after that, and as you go</p><p>out further, interactions become rarer.</p><p>All sorts of weird things turn out to have power-law distributions, as</p><p>demonstrated by work pioneered by network scientist Albert-László</p><p>Barabási of Northeastern University. Of the hundred most common Anglo-</p><p>Saxon last names in the U.S., roughly 80 percent of people with those</p><p>names possess the twenty most common. Twenty percent of people’s</p><p>texting relationships account for about 80 percent of the texting. Twenty</p><p>percent of websites account for 80 percent of searches. About 80 percent of</p><p>earthquakes are of the lowest 20 percent of magnitude. Of fifty-four</p><p>thousand violent attacks throughout eight different insurgent wars, 80</p><p>percent of the fatalities arose from 20 percent of the attacks. Another study</p><p>analyzed the lives of 150,000 notable intellectuals over the</p><p>will, and thus holding people morally responsible for their actions is</p><p>not okay (a conclusion described as “deplorable” by one leading</p><p>philosopher whose thinking we’re going to dissect big time). This</p><p>incompatibilism will be most frequently contrasted with the compatibilist</p><p>view that while the world is deterministic, there is still free will, and thus</p><p>holding people morally responsible for their actions is just.</p><p>This version of compatibilism has produced numerous papers by</p><p>philosophers and legal scholars concerning the relevance of neuroscience to</p><p>free will. After reading lots of them, I’ve concluded that they usually boil</p><p>down to three sentences:</p><p>a. Wow, there’ve been all these cool advances in neuroscience, all reinforcing the</p><p>conclusion that ours is a deterministic world.</p><p>b. Some of those neuroscience findings challenge our notions of agency, moral</p><p>responsibility, and deservedness so deeply that one must conclude that there is no free</p><p>will.</p><p>c. Nah, it still exists.</p><p>Naturally, a lot of time will be spent examining the “nah” part. In doing</p><p>so, I’ll consider only a subset of such compatibilists. Here’s a thought</p><p>experiment for identifying them: In 1848 at a construction site in Vermont,</p><p>an accident with dynamite hurled a metal rod at high speed into the brain of</p><p>a worker, Phineas Gage, and out the other side. This destroyed much of</p><p>Gage’s frontal cortex, an area central to executive function, long-term</p><p>planning, and impulse control. In the aftermath, “Gage was no longer</p><p>Gage,” as stated by one friend. Formerly sober, reliable, and the foreman of</p><p>his work crew, Gage was now “fitful, irreverent, indulging at times in the</p><p>grossest profanity (which was not previously his custom) . . . obstinate, yet</p><p>capricious and vacillating,” as described by his doctor. Phineas Gage is the</p><p>textbook case that we are the end products of our material brains. 
Now, 170</p><p>years later, we understand how the unique function of your frontal cortex is</p><p>the result of your genes, prenatal environment, childhood, and so on (wait</p><p>for chapter 4).</p><p>Now the thought experiment: Raise a compatibilist philosopher from</p><p>birth in a sealed room where they never learn anything about the brain.</p><p>Then tell them about Phineas Gage and summarize our current knowledge</p><p>about the frontal cortex. If their immediate response is “Whatever, there’s</p><p>still free will,” I’m not interested in their views. The compatibilist I have in</p><p>mind is one who then wonders, “OMG, what if I’m completely wrong about</p><p>free will?,” ponders hard for hours or decades, and concludes that there’s</p><p>still free will, here’s why, and it’s okay for society to hold people morally</p><p>responsible for their actions. If a compatibilist has not wrestled through</p><p>being challenged by knowledge of the biology of who we are, it’s not worth</p><p>the time trying to counter their free-will belief.</p><p>GROUND RULES AND DEFINITIONS</p><p>What is free will? Groan, we have to start with that, so here comes</p><p>something totally predictable along the lines of “Different things to</p><p>different types of thinkers, which gets confusing.” Totally uninviting.</p><p>Nevertheless, we have to start there, followed by “What is determinism?”</p><p>I’ll do my best to mitigate the drag of this.</p><p>What Do I Mean by Free Will?</p><p>People define free will differently. Many focus on agency, whether a person</p><p>can control their actions, act with intent. Other definitions concern whether,</p><p>when a behavior occurs, the person knows that there are alternatives</p><p>available. Others are less concerned with what you do than with vetoing</p><p>what you don’t want to do. Here’s my take.</p><p>Suppose that a man pulls the trigger of a gun. Mechanistically, the</p><p>muscles in his index finger contracted because they were stimulated by a</p><p>neuron having an action potential (i.e., being in a particularly excited state).</p><p>That neuron in turn had its action potential because it was stimulated by the</p><p>neuron just upstream. Which had its own action potential because of the</p><p>next neuron upstream. And so on.</p><p>Here’s the challenge to a free willer: Find me the neuron that started this</p><p>process in this man’s brain, the neuron that had an action potential for no</p><p>reason, where no neuron spoke to it just before. Then show me that this</p><p>neuron’s actions were not influenced by whether the man was tired, hungry,</p><p>stressed, or in pain at the time. That nothing about this neuron’s function</p><p>was altered by the sights, sounds, smells, and so on, experienced by the man</p><p>in the previous minutes, nor by the levels of any hormones marinating his</p><p>brain in the previous hours to days, nor whether he had experienced a life-</p><p>changing event in recent months or years. And show me that this neuron’s</p><p>supposedly freely willed functioning wasn’t affected by the man’s genes, or</p><p>by the lifelong changes in regulation of those genes caused by experiences</p><p>during his childhood. Nor by levels of hormones he was exposed to as a</p><p>fetus, when that brain was being constructed. Nor by the centuries of</p><p>history and ecology that shaped the invention of the culture in which he was</p><p>raised. Show me a neuron being a causeless cause in this total sense. 
The</p><p>prominent compatibilist philosopher Alfred Mele of Florida State</p><p>University emphatically feels that requiring something like that of free will</p><p>is setting the bar “absurdly high.”[6] But this bar is neither absurd nor too</p><p>high. Show me a neuron (or brain) whose generation of a behavior is</p><p>independent of the sum of its biological past, and for the purposes of this</p><p>book, you’ve demonstrated free will. The point of the first half of this book</p><p>is to establish that this can’t be shown.</p><p>What Do I Mean by Determinism?</p><p>It’s virtually required to start this topic with the dead White male Pierre</p><p>Simon Laplace, the eighteenth-/nineteenth-century French polymath (it’s</p><p>also required that you call him a polymath, as he contributed to</p><p>mathematics, physics, engineering, astronomy, and philosophy). Laplace</p><p>provided the canonical claim for all of determinism: If you had a</p><p>superhuman who knew the location of every particle in the universe at this</p><p>moment, they’d be able to accurately predict every moment in the future.</p><p>Moreover, if this superhuman (eventually termed “Laplace’s demon”) could</p><p>re-create the exact location of every particle at any point in the past, it</p><p>would lead to a present identical to our current one. The past and future of</p><p>the universe are already determined.</p><p>Science since Laplace’s time shows that he wasn’t completely right</p><p>(proving that Laplace was not a Laplacian demon), but the spirit of his</p><p>demon lives on. Contemporary views of determinism have to incorporate</p><p>the fact that certain types of predictability turn out to be impossible (the</p><p>subject of chapters 5 and 6) and certain aspects of the universe are actually</p><p>nondeterministic (chapters 9 and 10).</p><p>Moreover, contemporary models of determinism must also accommodate</p><p>the role played by meta-level consciousness. What do I mean by this?</p><p>Consider a classic psychology demonstration of people having less freedom</p><p>in their choices than they assumed.[7] Ask someone to name their favorite</p><p>detergent, and if you have unconsciously cued them earlier with the word</p><p>ocean, they become more likely to answer, “Tide.” As an important</p><p>measure of where meta-level consciousness comes in, suppose the person</p><p>realizes what the researcher is up to and, wanting to show that they can’t be</p><p>manipulated, decides that they won’t say “Tide,” even if it is their favorite.</p><p>Their freedom has been just as constrained, a point in many of the coming</p><p>chapters. Similarly, wind up as an adult exactly like your parents or the</p><p>exact opposite of them, and you are equally unfree—in the latter case, the</p><p>pull toward adopting their behavior, the ability to consciously recognize that</p><p>tendency to do that, the mindset to recoil from that with horror and thus do</p><p>the opposite, are all manifestations of the ways that you became you outside</p><p>your control.</p><p>Finally, any contemporary view of determinism must accommodate a</p><p>profoundly important point, one that dominates the second half of the book</p><p>last two</p><p>millennia, determining how far each individual died from their birthplace—</p><p>80 percent of the individuals fell within 20 percent of the maximal distance.</p><p>[*] Twenty percent of words in a language account for 80 percent of the</p><p>usage. 
Eighty percent of craters on the Moon are in the smallest twentieth percentile of size. Actors get a Bacon number, where if you were in a movie with the prolific Kevin Bacon (1,600 people), your Bacon number is 1; if you were in a movie with someone who was in a movie with him, yours is 2; in a movie with someone who was in a movie with someone who was in a movie with Bacon, 3 (the most common Bacon number, held by ~350,000 actors), and so on. And starting with that modal number and increasing the Bacon number from there, there is a power-law distribution to the smaller and smaller number of actors.[*],[30]

I'd be hard-pressed to see something adaptive about power-law distributions in Bacon numbers or the size of lunar craters. However, power-law distributions in the biological world can be highly adaptive.[*],[31]

For example, when there's lots of food in an ecosystem, various species forage randomly, but when food is sparse, roughly 80 percent of foraging forays (i.e., moving in one direction looking for food, before trying a different direction) are within 20 percent of the maximal distance ever searched—this turns out to optimize the energy spent searching relative to the likelihood of finding food; cells of the immune system show the same when searching for a rare pathogen. Dolphins show an 80:20 distribution of within-family and between-family social interactions; the 80-ness means that family groups remain stable even after an individual dies, while the 20-ness allows for the flow of foraging information between families. Most proteins in our bodies are specialists, interacting with only a handful of other types of proteins, forming small, functional units. Meanwhile, a small percentage are generalists, interacting with scores of other proteins (generalists are switch points between protein networks—for example, if one source of energy is rare, a generalist protein switches to using a different energy source).[*],[32]

Then there are adaptive power-law relationships in the brain. What counts as adaptive or useful in how neuronal networks are wired? It depends on what kind of brain you want. Maybe one where every neuron synapses onto the maximal possible number of other neurons while minimizing the miles of axons needed. Maybe one that optimizes solving familiar, easy problems quickly or being creative in solving rare, difficult ones. Or maybe one that loses the minimal amount of function when the brain is damaged.

You can't optimize more than one of those attributes.
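One way to make the tension concrete: compare a purely local wiring diagram with one in which a small fraction of connections is allowed to go long range. Here is a minimal sketch (purely illustrative; it uses the networkx library's generic small-world generator, with a thousand nodes standing in for neurons, rather than anything brain-specific):

```python
import networkx as nx

n, k = 1000, 10  # 1,000 "neurons," each wired to its 10 nearest neighbors

# Purely local wiring: every connection stays within the neighborhood
all_local = nx.watts_strogatz_graph(n, k, p=0.0)

# Mostly local wiring, but about 1 percent of connections are rewired to distant nodes
mostly_local = nx.connected_watts_strogatz_graph(n, k, p=0.01, seed=1)

for label, g in [("all local", all_local), ("a few long-range", mostly_local)]:
    print(label,
          "| clustering:", round(nx.average_clustering(g), 2),
          "| average steps between two neurons:", round(nx.average_shortest_path_length(g), 1))
```

Keeping nearly everything local preserves the cheap, tightly clustered neighborhood modules; letting a rare subset of connections wander far away is what collapses the number of steps needed to get from anywhere to anywhere else.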
For example, if your brain cares only about solving familiar problems quickly, thanks to neurons being wired up in small, highly interconnected modules of similar neurons, you're screwed the first time something unpredictable demands some creativity.

While you can't optimize more than one attribute, you can optimize how differing demands are balanced, what trade-offs are made, to come up with the network that is ideal for the balance between predictability and novelty in a particular environment.[*] And this often turns out to have a power-law distribution where, say, the vast majority of neurons in cortical mini columns interact only with immediate neighbors, with an increasingly rare subset wandering out increasingly longer distances.[*] Writ large, this explains "brain-ness," a place where the vast majority of neurons form a tight, local network—the "brain"—with a small percentage projecting all the way out to places like your toes.[33]

Thus, on scales ranging from single neurons to far-flung networks, brains have evolved patterns that balance local networks solving familiar problems with far-flung ones being creative, all the while keeping down the costs of construction and the space needed. And, as usual, without a central planning committee.[*],[34]

EMERGENCE DELUXE

We've now seen a number of motifs that come into play in emergent systems—rich-get-richer phenomena where higher-quality solutions give off stronger recruiting signals, iterative bifurcation that inserts near-infinity into finite places, spatiotemporal control of attraction and repulsion rules, mathematical optimizing of the balance between different wiring needs—and there are many more.[*],[35]

Here are two last examples of emergence that incorporate a number of these motifs. One is startling in its implications; one is so charming that I can't omit it.

Charm first. Consider a toenail that is a perfect Platonic rectangle X units in height (after ignoring the curvature of a nail) (diagram A). Savage the perfection with some scissors, cutting off a triangle of toenail (diagram B). If the toenail universe did not involve emergent complexity, the toenail would now regrow as in diagram C. Instead, you get diagram D.

How? The top of a toenail thickens from bearing the brunt of contacting the outside world (e.g., the inside of your sock; a boulder; that damn coffee table, why don't we get rid of it, all we do is pile up junk on it), and once it thickens, it stops growing. After the cutting, only point a, at the original length (next diagram), retains the thickening. And as point b's regrowth brings it to the same height as point a, it now bears the brunt of the outside world and thickens (its further growth is probably also constrained by the thickness of point a adjacent to it). The same process occurs when point c arrives. . . . There's no comparative information involved; point c doesn't have to choose between emulating point b or emulating point d. Instead, the optimal solution emerges from the nature of toenail regrowth.

What inspired me to include this example?
A</p><p>man named Bhupendra Madhiwalla, then age</p><p>eighty-two, living in Mumbai, India, did that</p><p>experiment with a toenail of his, repeatedly</p><p>photographed the regrowth process and then emailed</p><p>pictures to me from out of the blue. Which made me</p><p>immensely happy.</p><p>Now the awesome final example. As a tautology,</p><p>studying the function of neurons in the brain tells</p><p>you about the function of neurons in the brain. But</p><p>sometimes more detailed information can be found</p><p>by growing neurons in petri dishes. These are</p><p>typically two-dimensional “monolayer” cultures,</p><p>where a slurry of individual neurons is plated down</p><p>randomly, then begin to connect with each other as a carpet. However, some</p><p>fancy techniques make it possible to grow three-dimensional cultures,</p><p>where the slurry of a few thousand neurons is suspended in a solution. And</p><p>these neurons, each floating on its own, find and connect up with each</p><p>other, forming clumps of brain “organoids.” And after months, these</p><p>organoids, barely large enough to be visible without a microscope, self-</p><p>organize into brain structures. A slurry of human cortical neurons starts</p><p>making radiating scaffolding,[*] constructing a primitive cortex with the</p><p>beginnings of separate layers, even the beginnings of cerebrospinal fluid.</p><p>And these organoids eventually produce synchronized brain waves that</p><p>mature similarly to the way they do in fetal and neonatal brains. A random</p><p>bunch of neurons, perfect strangers floating in a beaker, spontaneously</p><p>build themselves into the starts of our brains.[*] Self-organized Versailles is</p><p>child’s play in comparison.[36]</p><p>What has this tour shown us? (A) From molecules to populations of</p><p>organisms, biological systems generate complexity and optimization that</p><p>match what computer scientists, mathematicians, and urban planners</p><p>achieve (and where roboticists explicitly borrow swarm intelligence</p><p>strategies of insects[37]). (B) These adaptive systems emerge from simple</p><p>constituent parts having simple local interactions, all without centralized</p><p>authority, overt comparisons followed by decision-making, a blueprint, or a</p><p>blueprint maker.[*] (C) These systems have characteristics that exist only at</p><p>the emergent</p><p>level—a single neuron cannot have traits related to circuitry—</p><p>and whose behavior can be predicted without having to resort to reductive</p><p>knowledge about the component parts. (D) Not only does this explain</p><p>emergent complexity in our brains, but our nervous systems use some of the</p><p>same tricks used by the likes of individual proteins, ant colonies, and slime</p><p>molds. All without magic.</p><p>Well, that’s nice. Where does free will come into this?</p><p>8</p><p>Does Your Free Will Just Emerge?</p><p>FIRST, WHAT ALL OF US CAN AGREE ON</p><p>So emergence is about reductive piles of bricks producing spectacular</p><p>emergent states, ones that can be thoroughly unpredictable or that can be</p><p>predicted based on properties that exist only at the emergent level.</p><p>Reassuringly, no one thinks that free will lurks in the neuronal equivalent of</p><p>individual bricks (well, almost no one; wait for the next chapter). 
This is</p><p>nicely summarized by philosopher Christian List of Ludwig Maximilian</p><p>University in Munich: “If we look at the world solely through the lens of</p><p>fundamental physics or even that of neuroscience, we may not find agency,</p><p>choice, and mental causation,” and people rejecting free will “make the</p><p>mistake of looking for free will at the wrong level, namely the physical or</p><p>neurobiological one—a level at which it cannot be found.” Robert Kane</p><p>states the same: “We think we have to become originators at the micro-level</p><p>[to explain free will] . . . and we realize, of course, that we cannot do that.</p><p>But we do not have to. It is the wrong place to look. We do not have to</p><p>micro-manage our individual neurons one by one.”[1]</p><p>So these free-will believers accept that an individual neuron cannot defy</p><p>the physical universe and have free will. But a bunch of them can; to quote</p><p>List, “free will and its prerequisites are emergent, higher-level</p><p>phenomena.”[2]</p><p>Thus, a lot of people have linked emergence and free will; I will not</p><p>consider most of them because, to be frank, I can’t understand what they’re</p><p>suggesting, and to be franker, I don’t think the lack of comprehension is</p><p>entirely my fault. As for those who have more accessibly explored the idea</p><p>that free will is emergent, I think there are broadly three different ways in</p><p>which they go wrong.</p><p>PROBLEM #1: CHAOTIC MISSTEPS REDUX</p><p>We know the drill. Compatibilists and free-will-skeptic incompatibilists</p><p>agree that the world is deterministic but disagree about whether free will</p><p>can coexist with that. But if the world is indeterministic, you’ve cut the legs</p><p>out from under free-will skeptics. The chaos chapter showed how you get</p><p>there by confusing the unpredictability of chaotic systems with</p><p>indeterminism. You can see how folks drive off a cliff with the same</p><p>mistake about the unpredictability of many instances of emergent</p><p>complexity.</p><p>A great example of this is found in the work of List, a philosophy</p><p>heavyweight who made a big splash with his 2019 book, Why Free Will Is</p><p>Real. As noted, List readily recognizes that individual neurons work in a</p><p>deterministic way, while holding out for higher-level, emergent free will. In</p><p>this view, “the world may be deterministic at some levels and</p><p>indeterministic at others.”[3]</p><p>List emphasizes unique evolution, a defining feature of deterministic</p><p>systems, where any given starting state can produce only one given</p><p>outcome. Same starting state, run it over and over, and not only should you</p><p>get one mature outcome each time, but it better be the same one. List then</p><p>ostensibly proves the existence of emergent indeterminism with a model</p><p>that appears in various forms in a number of his publications:</p><p>The top panel represents a reductive, fine-grain scenario where</p><p>(progressing from left to right) five similar starting states each produce five</p><p>distinct outcomes. We then turn to the bottom panel, which is a state that</p><p>List says displays emergent indeterminism. How does he get there? 
The</p><p>bottom panel “shows the same system at a higher level of description,</p><p>obtained by coarse-graining the state space,” making use of “the usual</p><p>rounding convention.” And when you do that, those five different starting</p><p>states become the same, and that singular starting state can produce five</p><p>completely different paths, proving that it is indeterministic and</p><p>unpredictable.[4]</p><p>Er, maybe not. Sure, a system that is deterministic at the micro level can</p><p>be indeterministic at the macro in this way, but only if you’re allowed to</p><p>decide that five different (though similar) starting states are all actually the</p><p>same, merging them into a single higher-order simulation. This is the last</p><p>chapter all over again—when you’re Edward Lorenz, come back from</p><p>lunch and coarse-grain your computer program, decide that the morning’s</p><p>parameters can be rounded off with the usual rounding convention, and</p><p>you’re bit in the rear by a butterfly. Two things that are similar are not</p><p>identical, and you can’t decide that they are simply because that represents</p><p>the conventions of thinking.</p><p>Reflecting my biological roots, here’s a demonstration of the same point:</p><p>Here are six different molecules, all with similar structures.[*] Now let’s</p><p>coarse-grain ’em, decide that they are similar enough that we can consider</p><p>them to be the same, by the usual scale of rounding convention, and</p><p>therefore, they can be used interchangeably when we inject one of them into</p><p>someone’s body and see what happens. And if there isn’t always the same</p><p>exact effect, yeah, you’ve supposedly just demonstrated emergent</p><p>indeterminism.</p><p>But they’re not all the same. Consider the middle and bottom structures</p><p>in the first column. Majorly similar—just try remembering their structural</p><p>differences for a final exam. But if you coarse-grain them into being the</p><p>same, rather than just very similar, things are going to get really messy—</p><p>because the top molecule of the two is a type of estrogen, and the bottom is</p><p>testosterone. Ignore sensitive dependence on initial conditions, decide the</p><p>two molecules are the same by whatever you’ve deemed the usual</p><p>conventional rounding, and sometimes you get someone with a vagina,</p><p>sometimes a penis, sometimes sort of both. Supposedly proving emergent</p><p>indeterminism.[*]</p><p>It’s the last chapter redux; unpredictable is not the same thing as</p><p>indeterministic. Disperse armies of ants at ten feeding spots, and you can’t</p><p>predict just how close (and by what route) they are going to get to the</p><p>solution to the traveling-salesman problem out of the 360,000+ possibilities.</p><p>Instead, you’ll have to simulate what happens to their cellular automaton</p><p>step by step. Do it all again, same ants at the same starting points but with</p><p>one of those ten feeding spots in a slightly different location, and you might</p><p>get a different (but still remarkably close) approximation of the traveling-</p><p>salesman solution. 
Do it repeatedly, each time with one of the feeding</p><p>stations moved slightly, and you’re likely to get an array of great solutions.</p><p>Small differences in starting states can generate very different outcomes.</p><p>But an identical starting state can’t do that and supposedly prove</p><p>indeterminacy.</p><p>PROBLEM #2: ORPHANS RUNNING WILD</p><p>So much for the idea that in emergent systems the same starting state can</p><p>give rise to multiple outcomes. The next mistake is a broader one—the idea</p><p>that emergence means the reductive bricks that you start with can give rise</p><p>to emergent states that can then do whatever the hell they want.</p><p>This has been stated in a variety of ways, where terms like brain, cause</p><p>and effect, or materialism stand in for the reductive level, while terms like</p><p>mental states, a person, or I imply the big, emergent end product.</p><p>According to philosopher Walter Glannon, “although the brain generates</p><p>and sustains our mental states, it does not determine them, and this leaves</p><p>enough room for individuals to ‘will themselves to be’ through their choices</p><p>and actions.” “Persons,” he concludes, “are constituted by but not identical</p><p>to their brains.” Neuroscientist Michael Shadlen writes of emergent states</p><p>having a special status as a “consequence of their emergence as entities</p><p>orphaned from the chain of cause and effect that led to their implementation</p><p>in neural machinery” (italics mine). Adina Roskies relatedly writes,</p><p>“Macrolevel explanations are independent of the truth of determinism.</p><p>These same arguments suffice to explain why an agent still makes a choice</p><p>in a deterministic world, and why he or she is responsible for it.”[5]</p><p>This raises an important dichotomy. Philosophers with this interest</p><p>discuss “weak emergence,” which is where no matter how cool, ornate,</p><p>unexpected, and adaptive an emergent state is, it is still constrained by what</p><p>its reductive bricks can and can’t do. 
This is contrasted with “strong</p><p>emergence,” where the emergent state that emerges from the micro can no</p><p>longer be deduced from it, even in chaoticism’s sense of a stepwise manner.</p><p>The well-respected philosopher Mark Bedau, of Reed College, considers</p><p>the strong emergence that can do as it pleases with happy-go-lucky free will</p><p>to be close to theoretically impossible.[*] Strong emergence claims</p><p>“heighten the traditional worry that emergence entails illegitimately getting</p><p>something from nothing,” which is “uncomfortably like magic.”[*] The</p><p>influential philosopher David Chalmers of New York University weighs in</p><p>as well, considering that the only thing that comes close to qualifying as a</p><p>case of strong emergence is consciousness; likewise with another major</p><p>contributor to this field, Johns Hopkins physicist Sean Carroll, who thinks</p><p>that while consciousness is the only real reason to be interested in strong</p><p>emergence, it’s sure not a case of it.</p><p>With a limited role, if any, for strong emergence (and thus for its being</p><p>the root of free will), we are left with weak emergence, which, in Bedau’s</p><p>words, “is no universal solvent.” You can be out of your mind but not out of</p><p>your brain; no matter how emergently cool, ant colonies are still made of</p><p>ants that are constrained by whatever individual ants can or can’t do, and</p><p>brains are still made of brain cells that function like brain cells.[6]</p><p>Unless you resort to one last trick to pull free will from emergence.</p><p>PROBLEM #3: DEFYING GRAVITY</p><p>The place where a final mistake creeps in is the idea that an emergent state</p><p>can reach down and change the fundamental nature of the bricks comprising</p><p>it.</p><p>We all know that an alteration at the brick level can change the emergent</p><p>end product. If you’re injected with many copies of a molecule that</p><p>activates six of the fourteen subtypes of serotonin receptors,[*] your macro</p><p>level is likely to include perceiving vivid images that other people don’t,</p><p>plus maybe even some religious transcendence. Dramatically drop the</p><p>number of glucose molecules in someone’s bloodstream, and their resulting</p><p>macro level will have trouble remembering whether Grover Cleveland was</p><p>president before or after Benjamin Harrison.[*] Even if consciousness</p><p>qualifies as the closest thing to true strong emergence, induce</p><p>unconsciousness by infusing a molecule like phenobarbital, and you’ll have</p><p>shown that it isn’t remotely free from its building blocks.</p><p>Good, we all agree that altering the little can change the emergent big.</p><p>And the reverse certainly holds true. Sit here and press button A or B, and</p><p>which motor neurons tell your arm muscles to shift this way or that will be</p><p>manipulated by the emergent macrophenomenon called aesthetics, if you’re</p><p>asked which painting you prefer, the one of a Renaissance woman with a</p><p>half smile or the one of Campbell’s soup cans. Or press the button</p><p>indicating which of two people you deem more likely to be destined for</p><p>hell, or whether 1946’s Call Me Mister or 1950’s Call Me Madam is the</p><p>more obscure musical.</p><p>A 2005 study concerning social conformity shows a particularly stark,</p><p>fascinating version of the emergent level manipulating the reductive</p><p>business of individual neurons. 
Sit a subject down and show them three</p><p>parallel lines, one clearly shorter than the other two. Which is shorter?</p><p>Obviously that one. But put them in a group where everyone else (secretly</p><p>working on the experiment) says the longest line is actually the shortest—</p><p>depending on the context, a shocking percentage of people will eventually</p><p>say, yeah, that long line is the shortest one. This conformity comes in two</p><p>types. In the first, go-along-to-get-along public conformity, you know</p><p>which line is shortest but join in with everyone else to be agreeable. In this</p><p>circumstance, there is activation of the amygdala, reflecting the anxiety</p><p>driving you to go along with what you know is the wrong answer. The</p><p>second type is “private conformity,” where you drink the Kool-Aid and</p><p>truly believe that somehow, weirdly, you got it all wrong with those lines</p><p>and everyone else really was correct. And in this case, there is also</p><p>activation of the hippocampus, with its central role in learning and memory</p><p>—conformity trying to rewrite the history of what you saw. But even more</p><p>interesting, there’s activation of the visual cortex—“Hey, you neurons over</p><p>there, the line you foolishly thought was longer at first is actually shorter.</p><p>Can’t you just see the truth now?”[*],[7]</p><p>Think about this. When is a neuron in the visual cortex supposed to</p><p>activate? Just to wallow in minutiae that can be ignored, when a photon of</p><p>light is absorbed by rhodopsin in disc membranes within a retinal</p><p>photoreceptive cell, causing the shape of the protein to change, changing</p><p>transmembrane ion currents, thus decreasing the release of the</p><p>neurotransmitter glutamate, which gets the next neuron in line involved,</p><p>starting a sequence culminating in that visual cortical neuron having an</p><p>action potential. One big micro-level blowout of reductionism.</p><p>And what’s happening instead during private conformity? That same Mr.</p><p>Machine little neuron in the visual cortex activates because of the macro-</p><p>level emergent state that we’d call an urge toward fitting in, a state built out</p><p>of the neurobiological manifestations of the likes of cultural values, a desire</p><p>to seem likable, adolescent acne having left scars of low self-esteem, and so</p><p>on.[*],[8]</p><p>So some emergent states have downward causality, which is to say that</p><p>they can alter reductive function and convince a neuron that long is short</p><p>and war is peace.</p><p>The mistake is the belief that once an ant joins a thousand others in</p><p>figuring out an optimal foraging path, downward causality causes it to</p><p>suddenly gain the ability to speak French. Or that when an amoeba joins a</p><p>slime mold colony that is solving a maze, it becomes a Zoroastrian. And</p><p>that a single neuron, normally being subject to gravity, stops being so once</p><p>it holds hands with all the other neurons producing some emergent</p><p>phenomenon. That the building blocks work differently once they’re part of</p><p>something emergent. It’s like believing that when you put lots of water</p><p>molecules together, the resulting wetness causes each molecule to switch</p><p>from being made of two hydrogens and one oxygen to two oxygens and one</p><p>hydrogen. 
But the whole point of emergence, the basis of its amazingness,</p><p>is that those idiotically simple little building blocks that only know a few</p><p>rules about interacting with their immediate neighbors remain precisely as</p><p>idiotically simple when their building-block collective is outperforming</p><p>urban planners with business cards. Downward causation doesn’t cause</p><p>individual building blocks to acquire complicated skills; instead, it</p><p>determines the contexts in which the blocks are doing their idiotically</p><p>simple things. Individual neurons don’t become causeless causes that defy</p><p>gravity and help generate free will just because they’re interacting with lots</p><p>of other neurons.</p><p>And the core belief among this style of emergent free-willers is that</p><p>emergent states can in fact change how neurons work, and that free will</p><p>depends on it. It is the assumption that emergent systems “have base</p><p>elements that behave in novel ways when they operate as part of the higher-</p><p>order system.” But no matter how unpredicted an emergent property in the</p><p>brain might be, neurons are not freed of their histories once they join</p><p>the</p><p>complexity.[9]</p><p>This is another version of our earlier dichotomy. There’s weak</p><p>downward causality, where something emergent like conformity can make a</p><p>neuron fire the same way as it would in response to photons of light—the</p><p>workings of this component part have not changed. And there’s strong</p><p>downward causality, where it can. The consensus among most philosophers</p><p>and neurobiologists thinking about this is that strong downward causality,</p><p>should it exist, is irrelevant to this book’s focus. In a critique of this</p><p>approach to discovering free will, psychologists Michael Mascolo of</p><p>Merrimack College and Eeva Kallio of the University of Jyväskylä write,</p><p>“While [emergent systems] are irreducible, they are not autonomous in the</p><p>sense of having causal powers that override those of their constituents,” a</p><p>point emphasized as well by Spanish philosopher Jesús Zamora Bonilla in</p><p>his essay “Why Emergent Levels Will Not Save Free Will.” Or stated in</p><p>biological terms by Mascolo and Kallio, “while the capacities for</p><p>experience and meaning are emergent properties of biophysical systems, the</p><p>capacity for behavioral regulation is not. The capacity for self-regulation is</p><p>an already existing capacity of living systems.” There’s still gravity.[10]</p><p>AT LAST, SOME CONCLUSIONS</p><p>Thus, in my view, emergent complexity, while being immeasurably cool, is</p><p>nonetheless not where free will exists, for three reasons:</p><p>a. Because of the lessons of chaoticism—you can’t just follow convention and say that two</p><p>things are the same, when they are different, and in a way that matters, regardless of how</p><p>seemingly minuscule that difference; unpredictable doesn’t mean undetermined.</p><p>b. Even if a system is emergent, that doesn’t mean it can choose to do whatever it wants; it</p><p>is still made up of and constrained by its constituent parts, with all their mortal limits and</p><p>foibles.</p><p>c. Emergent systems can’t make the bricks that built them stop being brick-ish.[*], [11]</p><p>These properties are all intrinsic to a deterministic world, whether</p><p>chaotic, emergent, predictable, or unpredictable. But what if the world isn’t</p><p>really deterministic after all? 
On to the next two chapters.

9

A Primer on Quantum Indeterminacy

I really do not want to write this chapter, or the next one. I've been dreading it, in fact. When friends ask me how the book writing is going, I grimace and say, "Well, okay, but I'm still postponing doing the chapters on indeterminacy." Why the dread? To start, (a) the chapters' subject rests on profoundly bizarre and counterintuitive science (b) that I barely understand and (c) that even the people who you'd think understand it admit that they don't, but with a profound noncomprehension, compared with my piddly cluelessness, and (d) the topic exerts a gravitational pull upon crackpot ideas as surely as does a statue upon defecating pigeons, a pull that constitutes a "What are they talking about?" strange attractor. Nonetheless, here goes.

This chapter examines some foundational domains of the universe in which extremely tiny stuff operates in ways that are not deterministic. Where unpredictability does not reflect the limitations of humans tackling math, or the wait for an even more powerful magnifying glass, but instead reflects ways in which the physical state of the universe does not determine it. And the next chapter is about reining in the free-willers in this playground of indeterminacy.

Were I to chicken out and end this pair of chapters right here, the conclusions would be that, yes, Laplacian determinism really does appear to fall apart down at the subatomic level; however, such eensy-weensy indeterminism is vastly unlikely to influence anything about behavior; even if it did, it's even more unlikely that it would produce something resembling free will; scholarly attempts to find free will in this realm frequently strain credulity.

UNDETERMINED RANDOMNESS

What exactly do we mean by "randomness"? Suppose we have a particle that moves "randomly." To qualify, it would show these properties:

—If at time 0 a particle is in spot X, the most likely place you'd expect to find that randomly moving particle for the rest of time is back at spot X. And if at some point after time 0, the particle happens to be in spot Z, now for the rest of time, spot Z is where it's most likely to be. The best predictor of where a randomly moving particle is likely to be is wherever it is right now.

—Take any unit of time—say, one second. The amount of variability in the particle's movement in the next second will be as much as during one second a million years from now.

—The pattern of movement at time 0 has zero correlation with time 1 or −1.

—If it looks as if the particle has moved in a straight line, get that magnifying glass and look closer and you'll see that it isn't really a straight line.
Instead, the particle zigzags, regardless of the scale of magnification.

—Because of that zigzagging, when magnified infinitely, a particle will have moved an infinitely long distance between any two points.

These are stringent features for a particle to qualify as undetermined.[*] These requirements, especially that spacey Menger-sponge business about something infinitely long fitting into a finite space, show how capital-R Randomness differs from random channel surfing.

So what does a particle being random have to do with your being the agentive captain of your fate?

LOW-RENT RANDOMNESS: BROWNIAN MOTION

We start with the Jane and Joe Lunchbucket version of indeterminism, one that is rarely contemplated at meditation retreats.

Sit in an otherwise dark room that has a shaft of light coming in from a window, and look at what is being illuminated along the way by the shaft (i.e., not the spot on the wall being lit up but the air illuminated between the window and the lit wall). You'll see minuscule dust particles that are in constant motion, vibrating, jerking this way or that. Behaving randomly.

People (e.g., Robert Brown, in 1827) had long noted the phenomenon, but it wasn't until the last century that random (aka "stochastic") movement was identified as occurring among particles suspended in a fluid or gas. Tiny particles oscillate and vibrate as a result of being hit randomly by the molecules of the surrounding fluid or gas, which transfer energy to the particle, producing the vibratory phenomenon of kinetic energy. Which causes particles to bump into each other randomly. Which causes them to bump into other particles. Everything moving randomly, the unpredictability of the three-body problem on steroids.

Mind you, this isn't the unpredictability of cellular automata, where every step is deterministic but not determinable. Instead, the state of a particle in any given instant is not dependent on its state an instant before. Laplace is vibrating disconsolately in his grave. The features of such stochasticity were formalized by Einstein in 1905, his annus mirabilis when he announced to the world that he was not going to be a patent clerk forever. Einstein explored the factors that influence the extent of Brownian motion of suspended particles (note the plural on particles—any given particle is random, and predictability is probabilistic only on the aggregate level of lots of particles). One thing that increases Brownian motion is heat, which increases kinetic energy in particles. In contrast, it's decreased when the surrounding fluid or gas environment is sticky or viscous or when the particle is bigger. Think of this last one this way: The bigger a particle, the bigger the bull's-eye, the more likely it is to be bumped into by lots of other particles, on all its sides. Which increases the odds of all those bumps canceling each other out and the big particle staying put. Thus, the smaller the particle, the more exciting the Brownian motion that it shows—while the Great Pyramid of Giza may be vibrating, it isn't doing it much.[*]

So that's Brownian motion, particles bumping into each other randomly. How does that relate to biology (a first step toward seeing its relevance to behavior)? Lots, as it turns out.
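Those criteria for Randomness are easy to check numerically. Here is a minimal sketch (purely illustrative; a coin-flip random walk in one dimension stands in for a jostled particle):

```python
import numpy as np

rng = np.random.default_rng(0)
n_walks, n_steps = 20_000, 1_000
steps = rng.choice([-1, 1], size=(n_walks, n_steps))  # unbiased +1/-1 jiggles
paths = np.cumsum(steps, axis=1)                      # position over time for each walk

# Best predictor of the future is the present: mean displacement from the start stays near zero
print(round(paths[:, -1].mean(), 2))

# Variability per unit time doesn't change: variance grows in proportion to elapsed steps
print(round(paths[:, 99].var(), 1), round(paths[:, 999].var(), 1))

# No memory: each step is uncorrelated with the step before it
print(round(float(np.corrcoef(steps[:, 0], steps[:, 1])[0, 1]), 3))
```

Swap the coin flips for a dust grain being jostled by its neighbors and the statistical picture is the same. Back to the biology.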
One paper explores how a type of Brownian motion explains the distribution of populations of axon terminals. Another concerns how copies of the receptor for the neurotransmitter acetylcholine randomly aggregate into clusters, something important to their function. Another example concerns abnormality in the brain—some mostly mysterious factors increase the production of a weirdly folded fragment called the beta-amyloid peptide. If one copy of this fragment randomly bumps into another one, they stick together, and this clump of aggregated protein crud grows bigger. These soluble amyloid aggregates are the most likely killers of your neurons in Alzheimer's disease. And Brownian motion helps explain probabilities of fragments bumping into each other.[1]

I like teaching one example of Brownian motion, because it undermines myths of how genes determine everything interesting in living systems. Take a fertilized egg. When it divides in two, there is random Brownian splitting of the stuff floating around inside, such as thousands of those powerhouses-of-the-cell mitochondria—it's never an exact 50:50 split, let alone the same split each time. Meaning those two cells already differ in their power-generating capacity. Same for vast numbers of copies of proteins called transcription factors, which turn genes on or off; the uneven split of transcription factors when the cell divides means the two cells will differ in their gene regulation. And with each subsequent cell division, randomness plays that role in the production of all those cells that eventually constitute you.[*],[2]

Now, time to scale up and see where Brownian-esque randomness plays into behavior. Consider some organism—say, a fish—looking for food. How does it find food most efficiently? If food is plentiful, the fish forages in little forays anchored around this place of easy eating.[*] But if food is diffuse and sparse, the most efficient way to bump into some is to switch to a random foraging pattern called a "Levy walk," in which mostly short forays are interspersed with occasional much longer ones. So if you're the only thing worth eating in the middle of the ocean, the predator that grabs you will probably have gotten there by a Levy walk. And logically, many prey species move randomly and unpredictably in evading predators. The same math describes another type of predator hunting for prey—a white blood cell searching for pathogens to engulf. If the cell is in the middle of a cluster of pathogens, it does the same sort of home-based forays as a killer whale feasting in the middle of a bunch of seals. But when the pathogens are sparse, white blood cells switch to a random Levy-walk hunting strategy, just like a killer whale. Biology is the best.[3]

To summarize, the world is filled with instances of indeterministic Brownian motion, with various biological phenomena having evolved to optimally exploit versions of this randomness. Are we talking free will here?[*] Before addressing this question, time to face the inevitable and tackle the mother of all theories.[4]

QUANTUM INDETERMINACY

Here goes. The classical physical picture of how the universe works, invariably attributed to Newton, tanked in the early twentieth century with the revolution of quantum indeterminacy, and nothing has been the same since.
The subatomic world turns out to be deeply weird and still can’t be</p><p>fully explained. I’ll summarize here the findings that are most pertinent to</p><p>free-will believers.</p><p>WAVE/PARTICLE DUALITY</p><p>The start of the most foundational weirdness was the immeasurably cool,</p><p>landmark double-slit experiment first carried out by Thomas Young in 1801</p><p>(another one of those polymaths who, when he wasn’t busy with physics, or</p><p>outlining the biology of how color vision works, helped translate the</p><p>Rosetta stone). Shoot a beam of light at a barrier that has two vertical slits</p><p>in it. Behind it is a wall that can detect where the light is hitting it. This</p><p>shows that the light travels through the two slits as waves. How is this</p><p>detected? If there was a wave emanating from each slit, the two waves</p><p>would wind up overlapping. And there’s a characteristic signature when a</p><p>pair of waves does this—when the peaks of two waves converge, you get an</p><p>immensely strong signal; when the troughs of the two converge, the</p><p>opposite; when a peak and a trough meet, they cancel each other out.</p><p>Surfers understand this.</p><p>So light travels as a wave—classical knowledge. Shoot a stream of</p><p>electrons at the double-slit barrier, and there’s the same punch line—a wave</p><p>function. Now, shoot one electron at a time, recording where it hits the</p><p>detector wall, and the individual electron, the individual particle, passes</p><p>through as a wave. Yup, the single electron passes through both slits</p><p>simultaneously. It’s in two places at once.</p><p>Turns out that it’s more than just two places. The exact location of the</p><p>electron is indeterministic, distributed probabilistically across a cloud of</p><p>locations at once, something termed superposition.</p><p>Accounts of this now usually say something to the effect of “Now things</p><p>get weird”—as if a single particle being in multiple places at once weren’t</p><p>weird. Now things get weirder. Build a recording device into the double-slit</p><p>wall, to document the passage of each electron. You already know what will</p><p>happen—each individual electron passes through both slits at once, as a</p><p>wave. But no; each electron now passes through one slit or the other,</p><p>randomly. The mere process of measuring, documenting what happens at</p><p>the double-slit wall causes the electrons (and, as it turns out, streams of</p><p>light, made up of photons) to stop acting as waves. The wave function</p><p>“collapses,” and each electron passes through the double-slit wall as a</p><p>singular particle.</p><p>Thus, electrons and photons show particle/wave duality, with the process</p><p>of measurement turning waves into particles. Now measure the properties</p><p>of the electron after it passes through the slits but before it hits the detector</p><p>wall, and as a result, each electron passes through one of the slits as a single</p><p>particle. It “knows” that it is going to be measured in a bit, which collapses</p><p>its wave function. Why the process of measuring collapses wave functions</p><p>—the “measurement problem”—remains mysterious.[5]</p><p>(To jump ahead for a moment, you can guess that things are going to get</p><p>very New Agey if you assume that the macroscopic world—big things like,</p><p>say, you—also works this way. You can be in multiple places at once; you</p><p>are nothing but potential. 
Merely observing something can change it;[*]</p><p>your mind can alter the reality around it. Your mind can determine your</p><p>future. Heck, your mind can change your past. More jabberwocky to come.)</p><p>Particle/wave duality generates a key implication. When an electron is</p><p>moving past a spot as a wave, you can know its momentum, but you</p><p>obviously can’t know its exact location, since it’s indeterministically</p><p>everywhere. And once the wave function collapses, you can measure where</p><p>that particle now is, but you can’t know its momentum, since the process of</p><p>measurement changes everything about it. Yup, it’s Heisenberg’s</p><p>uncertainty principle.[*]</p><p>The inability to know both location and momentum, the fact of</p><p>superposition and things being in multiple places at once, the impossibility</p><p>of knowing which slit an electron will pass through once a wave has</p><p>collapsed into a particle—all introduce a fundamental indeterminism into</p><p>the universe. Einstein, despite upending the reductive, deterministic world</p><p>of Newtonian physics, hated this type of indeterminism, famously</p><p>declaring, “God does not play dice with the universe.” This began a cottage</p><p>industry of physicists trying to slip some form of determinism in the back</p><p>door. Einstein’s version is that the system actually is deterministic, thanks to</p><p>some still-undiscovered factor(s), and things will go back to making sense</p><p>once this “hidden variable” is identified. Another backdoor move is the</p><p>very opaque “many-world” idea, which posits that waves don’t really</p><p>collapse into a singularity;</p><p>instead their wave-ness continues in an infinite</p><p>number of universes, making for a completely deterministic world(s), and it</p><p>just looks singular if you’re looking from only one universe at a time. I</p><p>think. My sense is that the hidden-variable dodge is most doubters’ favorite.</p><p>However, the majority of physicists accept the indeterministic picture of</p><p>quantum mechanics—known as the Copenhagen interpretation, reflecting</p><p>its being championed by the Copenhagen-based Niels Bohr. In his words,</p><p>“Those who are not shocked when they first come across quantum theory</p><p>cannot possibly have understood it.”[*],[6]</p><p>ENTANGLEMENT AND NONLOCALITY</p><p>Next weirdness.[*] Two particles (say, two electrons in different shells of an</p><p>atom) can become “entangled,” where their properties (such as their</p><p>direction of spin) are linked and perfectly correlated. The correlation is</p><p>always negative—if one electron spins in one direction, its coupled partner</p><p>spins the opposite way. Fred Astaire steps forward with his left leg; Ginger</p><p>Rogers steps back with her right.</p><p>But it’s stranger than that. For starters, the two electrons don’t have to be</p><p>in the same atom. They can be a few atoms apart. Okay, sure. Or, it turns</p><p>out, they can be even farther apart. The current record is particles nearly</p><p>nine hundred miles apart, at two ground stations linked by a quantum</p><p>satellite.[*] Moreover, if you alter the property of one particle, the other</p><p>changes as well, implying a causality that isn’t local. There is no theoretical</p><p>limit for how far apart entangled particles can be. An electron in the Crab</p><p>Nebula in the constellation Taurus can be entangled with an electron in the</p><p>piece of broccoli stuck between your incisors. 
And as the strangest feature,</p><p>when the state of one particle is altered, the complementary change in the</p><p>other occurs instantaneously[*]—meaning that the broccoli and the Crab</p><p>Nebula are influencing each other faster than the speed of light.[7]</p><p>Einstein was not amused (and labeled the phenomenon with a sarcastic</p><p>German equivalent of spooky).[*] In 1935, he and two collaborators</p><p>published a paper that challenged the possibility of this instantaneous</p><p>entanglement, again positing hidden variables that explained things without</p><p>invoking faster-than-the-speed-of-light mojo. In the 1960s, the Irish</p><p>physicist John Stewart Bell showed that there was something off in the</p><p>math in that paper of Einstein’s. And in the decades since, extraordinarily</p><p>difficult experiments (like the one with that satellite) have confirmed that</p><p>Bell was right when he said that Einstein was wrong when he said that the</p><p>interpretation of entanglement was wrong. In other words, the phenomenon</p><p>is for real, although it still remains basically unexplained, nonetheless</p><p>generating highly accurate predictions.[8]</p><p>Since then, scientists have explored the potential of using quantum</p><p>entanglement in computing (with people at Apple apparently making</p><p>significant progress), in communication systems, maybe even in</p><p>automatically receiving a widget from Amazon the instant you think that</p><p>you’ll be happier owning one. And the weirdness just won’t stop—</p><p>entanglement over long enough distances can also show nonlocality over</p><p>time. Suppose you have two entangled electrons a light-year apart; alter one</p><p>of them and the other particle is altered at the same instant . . . a year ago.</p><p>Scientists have also shown quantum entanglement in living systems,</p><p>between a photon and the photosynthetic machinery of bacteria.[*] You</p><p>better bet that we’ve got free-will speculations coming that invoke time</p><p>travel, entanglement between neurons in the same brain, and, as long as</p><p>we’re at it, between brains.[9]</p><p>QUANTUM TUNNELING</p><p>This one is a piece of cake conceptually, after all the preceding strangeness.</p><p>Shoot a stream of electrons at a wall. As we know, each travels as a wave,</p><p>superposition dictating that until you measure its location, each electron is</p><p>probabilistically in numerous places at once. Including the really, really</p><p>unlikely but theoretically possible outcome of one of those numerous places</p><p>being on the other side of the wall, because the electron has tunneled</p><p>through it. And, as it turns out, this can happen.</p><p>That’s it for this pitiful tour of quantum mechanics. For our purposes, the</p><p>main points are that in the view of most of the savants, the subatomic</p><p>universe works on a level that is fundamentally indeterministic on both an</p><p>ontic and epistemic level. Particles can be in multiple places at once, can</p><p>communicate with each other over vast distances faster than the speed of</p><p>light, making both space and time fundamentally suspect, and can tunnel</p><p>through solid objects. 
As we’ll now see, that’s plenty enough for people to</p><p>run wild when proclaiming free will.</p><p>10</p><p>Is Your Free Will Random?</p><p>QUANTUM ORGASMIC-NESS: ATTENTION AND</p><p>INTENTION ARE THE MECHANICS OF</p><p>MANIFESTATION</p><p>The previous chapter revealed some truly weird things about the universe</p><p>that introduce a fundamental indeterminism into the proceedings. And from</p><p>virtually the first moment this news got around, some believers in free will</p><p>have attributed all sorts of mystical gibberish to quantum mechanics.[*]</p><p>There are now proponents of quantum metaphysics, quantum philosophy,</p><p>quantum psychology. There’s quantum theology and quantum Christian</p><p>realism; in one tract in that vein, quantum mechanics is cited as proving that</p><p>humans cannot be reduced to predictable machines, making for human</p><p>uniqueness that aligns with the biblical claim that God loves each person in</p><p>a unique manner. For the “I don’t believe in organized religion, but I’m a</p><p>very spiritual person” crowd, there’s quantum spirituality and quantum</p><p>mysticism. Then there’s New Age entrepreneur Deepak Chopra, who, in his</p><p>1989 book Quantum Healing, promises a pathway to curing cancer,</p><p>reversing aging, and, heavens to Betsy, even immortality.[*] There’s</p><p>quantum activism, which, as espoused by a New Age physicist in his</p><p>seminars, “is the idea of changing ourselves and our societies in accordance</p><p>with the principles of quantum physics.” There’s “quantum cognition,”</p><p>“spin-mediated consciousness,” “quantum neurophysics,” and—wait for it</p><p>—a “Nebulous Cartesian system” of oscillations and quantum dynamics,</p><p>explaining our freely choosing brains. And as a branch that particularly gets</p><p>under my skin, there’s quantum psychotherapy, a field where one paper</p><p>proposes that clinical depression is rooted in quantum abnormalities in the</p><p>fatty acids found in the membranes of platelet cells; gain hope from the</p><p>knowledge that there are folks pursuing this angle to help you, should you</p><p>feel suffocatingly sad day after day. Meanwhile, the same journal contains a</p><p>paper aiming to aid the treatment of schizophrenia sufferers, entitled</p><p>“Quantum Logic of the Unconscious and Schizophrenia” (in which</p><p>quantum comprises 9.6 percent of the words in the paper’s abstract). I’m</p><p>not gonna lie—I’m not a big fan of folks touting crap like this concerning</p><p>people in pain.[1]</p><p>The nonsense has some consistent themes. There’s a notion that if</p><p>particles can be entangled and communicate with each other</p><p>instantaneously, there is a unity, a oneness that connects all living things</p><p>together, including all humans (except for people who are mean to dolphins</p><p>or elephants). The time travel spookiness of entanglement can be hijacked</p><p>with the idea that there is no unfortunate event in your past that cannot, in</p><p>theory, be gone back to and fixed. There’s the theme that if you can</p><p>supposedly collapse a quantum wave just by looking at it, you can achieve</p><p>nirvana or go into the boss’s office and get a raise. According to the same</p><p>New Age physicist, “The material world around us is nothing but possible</p><p>movements of consciousness. 
I am choosing moment by moment my</p><p>experience.” There is also the usual trope that whatever quantum physicists</p><p>found out with their high-tech gizmos merely confirms what was already</p><p>known by the Ancients; lotus positions galore. And near-villainous anti-</p><p>grooviness comes from “materialists” with their “classical</p><p>physics”[*]</p><p>—“these elitists who dictate people’s experiences of meaning.” All this</p><p>infinite potential is one big blowout salute to the renowned New Age healer</p><p>Mary Poppins.[*],[2]</p><p>Some problems here are obvious. These papers, which are typically</p><p>unvetted and unread by neuroscientists, are published in journals that</p><p>scientific indexes won’t classify as scientific journals (e.g.,</p><p>NeuroQuantology) and are written by people not professionally trained to</p><p>know how the brain works.[3]</p><p>But now and then, one’s critique of this thinking has to accommodate</p><p>someone who knew how the brain works, bringing us to the challenging</p><p>case of the Australian neurophysiologist John Eccles. He wasn’t just a</p><p>good, or even a great, scientist. He was Sir John, Nobel laureate, who</p><p>pioneered understanding in the 1950s of how synapses work. Thirty years</p><p>later, in his book How the Self Controls Its Brain (Springer-Verlag, 1994),</p><p>Eccles posited that the “mind” produces “psychons” (i.e., fundamental units</p><p>of consciousness, a term previously mostly used in cheesy science fiction),</p><p>which regulate “dendrons” (i.e., functional units of neurons) through</p><p>quantum tunneling. He didn’t merely reject materialism in favor of dualism;</p><p>he declared himself a “trialist,” making room for the category of soul/spirit,</p><p>which freed the human brain from some of the laws of the physical</p><p>universe. In his book Evolution of the Brain: Creation of the Self</p><p>(Routledge, 1989), an unironic amalgam of spirituality and paleontology,</p><p>Eccles tried to pinpoint when this uniqueness first evolved, which hominin</p><p>ancestor gave birth to the first organism with a soul. He also believed in</p><p>ESP and psychokinesis, querying new lab members whether they shared</p><p>these beliefs. By my student days, the mention of Eccles, with his religious</p><p>mysticism and embrace of the paranormal, elicited nothing but eye-rolling.</p><p>As a scathing New York Times review of Evolution of the Brain concluded,</p><p>Eccles’s descent into spirituality invited “Ophelia’s lament for Hamlet, ‘O!</p><p>what a noble mind is here o’erthrown.’ ”[*],[4]</p><p>Obviously, it’s not sufficient for me to reject the idea that quantum</p><p>indeterminacy is an opening for free will merely by citing the paucity of</p><p>neuroscientists thinking this way, or by performing the Dirge for Eccles.</p><p>Time to examine what I see as, collectively, three fatal problems with the</p><p>idea.</p><p>PROBLEM #1: BUBBLING UP</p><p>The starting point here is the idea that quantum effects, down there at the</p><p>level of electrons entangling with each other, will affect “biology.” There is</p><p>precedent for this concerning photosynthesis. In that realm, electrons that</p><p>have been excited by light are impossibly efficient at finding the fastest way</p><p>to move from one part of a plant cell to another, seemingly because each</p><p>electron does this by being in a quantum superposition state, checking out</p><p>all the possible routes at once.[5]</p><p>So that’s plants. 
Trying to pull free will out of electrons in the brain is</p><p>the immediate challenge—can quantal effects bubble upward, amplify in</p><p>their effects, so that they can influence gigantic things, like a single</p><p>molecule, or a single neuron, or a single person’s moral beliefs? Nearly</p><p>everyone thinking about the subject concludes that it cannot happen</p><p>because, as we’ll soon cover, quantal effects get washed out, cancel each</p><p>other out in the noise—the waves of superposition “decohere.” As</p><p>summarized nicely by the title of a book by physicist David Lindley, Where</p><p>Does the Weirdness Go? Why Quantum Mechanics Is Strange, but Not as</p><p>Strange as You Think (Basic Books, 1996).</p><p>Nonetheless, people linking quantum indeterminacy with free will argue</p><p>otherwise. Their challenge is to show how any building block of neuronal</p><p>function is subject to quantum effects. One possibility is explored by Peter</p><p>Tse, who considers the neurotransmitter glutamate, where the workings of</p><p>one of its receptors requires popping a single atom of magnesium out of an</p><p>ion channel that it blocks. In Tse’s view, the location of the magnesium can</p><p>change in the absence of antecedent causes, because of indeterminate</p><p>quantal randomness. And these effects bubble up further: “The brain has in</p><p>fact evolved to amplify quantum domain randomness . . . up to a level of</p><p>neural spike timing randomness” (my emphasis)—i.e., up to the level of</p><p>individual neurons being indeterminate. And the consequences then ripple</p><p>upward further into circuits of neurons and beyond.[6]</p><p>Other advocates have also focused on quantal effects occurring at a</p><p>similar level, as captured in one book’s title—Chance in Neurobiology:</p><p>From Ion Channels to the Question of Free Will.[*] Psychiatrist Jeffrey</p><p>Schwartz of UCLA views the level of single ion channels and ions as fair</p><p>game for quantal effects: “This extreme smallness of the opening in the</p><p>calcium ion channels has profound quantum mechanical implications.”</p><p>Biophysicist Alipasha Vaziri of Rockefeller University examines the role of</p><p>“non-classical” physics in determining which type of ion flows through a</p><p>particular channel.[7]</p><p>In the views of anesthesiologist Stuart Hameroff and physicist Roger</p><p>Penrose, consciousness and free will arise from a different part of neurons,</p><p>namely microtubules. To review, neurons send axonal and dendritic</p><p>projections all over the brain. This requires a transport system within these</p><p>projections to, for example, deliver the building blocks for new copies of</p><p>neurotransmitter or neurotransmitter receptors. This is accomplished with</p><p>bundles of transport tubes—microtubules—inside projections (this was</p><p>briefly touched on in chapter 7). Despite some evidence that they can</p><p>themselves be informational, microtubules are mostly like the pneumatic</p><p>tubes in office buildings circa 1900, where someone in accounting could</p><p>send a note in a cylinder downstairs to the folks in marketing. Hameroff and</p><p>Penrose (with papers with titles such as “How Quantum Biology Can</p><p>Rescue Conscious Free Will”) focus in on microtubules. Why? In their</p><p>view, the tightly packed, fairly stable, parallel microtubules are ideal for</p><p>quantum entanglement effects among them, and it’s on to free will from</p><p>there. 
This strikes me as akin to hypothesizing that the knowledge contained in a library emanates not from the books but from the little carts used to transport books around for reshelving.[8]

Hameroff and Penrose's ideas have gained particular traction among quantum free-willers, no doubt in part because Penrose won the Nobel Prize in Physics for work concerning black holes and also authored the 1989 bestseller The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford University Press). Despite this firepower, neuroscientists, physicists, mathematicians, and philosophers have pilloried these ideas. MIT physicist Max Tegmark showed that the time course of quantum states in microtubules is many, many orders of magnitude shorter-lived than anything biologically meaningful; in terms of the discrepancy in scale, Hameroff and Penrose are suggesting that the movement of a glacier over the course of a century could be significantly influenced by random sneezes among nearby villagers. Others pointed out that the model depends on a key microtubule protein having a conformation that doesn't occur, on types of intercellular connections that don't happen in the adult brain, and on an organelle in neurons being in a place where it isn't.[9]

So, this savaging aside, can quantal effects actually bubble up enough to influence behavior? The indeterminacy that releases magnesium from a single glutamate receptor doesn't enhance excitation across a synapse all that much. And even major excitation of a single synapse is not enough to trigger an action potential in a neuron. And an action potential in one neuron is not enough to make a signal propagate through a network of neurons. Let's put some numbers behind these facts. The dendrite in a single glutamatergic synapse contains approximately 200 glutamate receptors, and remember that we're considering quantal events in a single receptor at a time. A neuron has, conservatively, 10,000–50,000 of those synapses. Just to pick a brain region at random, the hippocampus has approximately 10 million of those neurons. That's 20–100 trillion glutamate receptors (200 × 10,000 × 10,000,000 = 20 trillion, and 200 × 50,000 × 10,000,000 = 100 trillion).[*] It is possible that an event having no prior deterministic cause could alter the functioning of a single glutamate receptor. But how likely is it that quantum events like these just happen to occur at the same time and in the same direction (i.e., increasing or decreasing receptor activation) in enough of those 20–100 trillion receptors to produce an actual neurobiological event that has no prior deterministic cause?[10]

Apply some similar numbers in the hippocampus to those putative consciousness-producing microtubules: Their basic building block, a protein called tubulin, is 445 amino acids long, and amino acids average out to close to 20 atoms each. Thus, around 9,000 atoms in each molecule of tubulin. Each stretch of microtubule is made up of 13 tubulin molecules. Each stretch of axon contains about 100 bundles of microtubules, each axon helping to make the 10,000–50,000 synapses in each of those 10 million neurons.
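To keep all those zeros straight, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies out the approximate figures cited above; no new data, no claim of precision:

```python
# Back-of-the-envelope tallies, using only the approximate figures cited above.
receptors_per_synapse = 200                 # glutamate receptors in one dendritic spine
synapses_low, synapses_high = 10_000, 50_000  # conservative range per neuron
neurons_in_hippocampus = 10_000_000         # roughly 10 million

low = receptors_per_synapse * synapses_low * neurons_in_hippocampus
high = receptors_per_synapse * synapses_high * neurons_in_hippocampus
print(f"Glutamate receptors in the hippocampus: {low:.0e} to {high:.0e}")  # 2e+13 to 1e+14

# Same exercise for tubulin, the microtubule building block.
amino_acids_per_tubulin = 445
atoms_per_amino_acid = 20                   # rough average
atoms_per_tubulin = amino_acids_per_tubulin * atoms_per_amino_acid
print(f"Atoms per tubulin molecule: {atoms_per_tubulin}")         # 8,900, i.e., the "around 9,000" above
print(f"Atoms per 13-tubulin stretch: {atoms_per_tubulin * 13}")  # before multiplying by ~100 bundles,
# 10,000-50,000 synapses per neuron, and 10 million neurons...
```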
Again with the zeros.</p><p>This is the bubbling-up problem in going from quantum indeterminacy</p><p>at the subatomic level up to brains producing behavior—you’d need to have</p><p>a staggeringly large number of such random events occurring at the same</p><p>time, place, and direction. Instead, most experts conclude that the more</p><p>likely scenario is that any given quantum event gets lost in the noise of a</p><p>staggering number of other quantum events occurring at different times and</p><p>directions. People in this business view the brain not only as “noisy” in this</p><p>sense but also as “warm” and “wet,” the messy sort of living environment</p><p>that biases against quantum effects persisting. As summarized by one</p><p>philosopher, “The law of large numbers, combined with the sheer number</p><p>of quantum events occurring in any macro-level object, assure us that the</p><p>effects of random quantum-level fluctuations are entirely predictable at the</p><p>macro level, much the way that the profits of casinos are predictable, even</p><p>though based on millions of ‘purely chance’ events.” The early-twentieth-</p><p>century physicist Paul Ehrenfest, in the theorem bearing his name,</p><p>formalizes how as one considers larger and larger numbers of elements, the</p><p>nonclassical physics of quantum mechanics merges into old-style,</p><p>predictable classical physics.[*] To paraphrase Lindley, this is why the</p><p>weirdness disappears.[11]</p><p>So one glutamate receptor does not a moral philosophy make. The</p><p>response to this by quantum free-willers is that various features of</p><p>nonclassical physics can coordinate quantum events among a lot of</p><p>constituents in the nervous system (and some posit that quantum</p><p>indeterminacy bubbles up to some extent and meets chaoticism there,</p><p>piggybacking all the way up to behavior). For Eccles, quantum tunneling</p><p>across synapses allows for the coupling of networks of neurons in shared</p><p>quantum states (and note that implicit in this idea and those to follow is that</p><p>entanglement occurs not just between two particles, but between whole</p><p>neurons as well). For Schwartz, quantum superposition means that a single</p><p>ion flowing through a channel is not really singular. Instead, it is a</p><p>“quantum cloud of possibilities associated with the [calcium] ion to fan out</p><p>over an increasing area as it moves away from the tiny channel to the target</p><p>region where the ion will be absorbed as a whole, or not absorbed at all.” In</p><p>other words, thanks to particle/wave duality, each ion can have coordinated</p><p>effects far and wide. And, Schwartz continues, this process bubbles upward</p><p>to encompass the whole brain: “In fact, because of uncertainties on timings</p><p>and locations, what is generated by the physical processes in the brain will</p><p>be not a single discrete set of non-overlapping physical possibilities but</p><p>rather a huge smear of classically conceived possibilities” now subject to</p><p>quantum rules. Sultan Tarlaci and Massimo Pregnolato cite similar quantum</p><p>physics in speculating that a single neurotransmitter molecule has a similar</p><p>cloud of superposition possibilities, binding to an array of receptors at once</p><p>and lassoing them into collective action.[*],[12]</p><p>So the notion that random, indeterministic quantum effects can bubble</p><p>all the way up to behavior strikes me as a little dubious. 
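That casino logic is easy to watch in action. Below is a toy simulation, emphatically not a model of any real synapse, just the law of large numbers at work on made-up plus-or-minus-one "fluctuations":

```python
import random

def mean_of_fluctuations(n_events: int) -> float:
    """Average of n independent, symmetric 'coin flip' fluctuations (+1 or -1)."""
    return sum(random.choice((-1, 1)) for _ in range(n_events)) / n_events

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9,} events -> average fluctuation {mean_of_fluctuations(n):+.4f}")
# Individual events stay unpredictable, but the aggregate shrinks toward zero
# (roughly as 1/sqrt(n)): the casino's profits, not the individual bets.
```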
Moreover, nearly</p><p>all the scientists with the appropriate expertise think it is resoundingly</p><p>dubious.</p><p>Somewhere around here it seems useful to approach things on a more</p><p>empirical level. Do synapses ever actually act randomly? How about entire</p><p>neurons? Entire networks of neurons?</p><p>NEURONAL SPONTANEITY</p><p>As a brief reminder: When an action potential occurs in a neuron, it goes</p><p>hurtling down the axon, eventually reaching all of the thousands of that</p><p>neuron’s axon terminals. As a result, packets of neurotransmitter are</p><p>released from each terminal.</p><p>If you were designing things, maybe each axon terminal’s</p><p>neurotransmitters would be contained in a single bucket, a single large</p><p>vesicle, which would then be emptied into the synapse. That has a certain</p><p>logic. Instead, that same amount of neurotransmitter is stored in a bunch of</p><p>much smaller buckets, and all of them are emptied into the synapse in</p><p>response to an action potential. Your average hippocampal neuron that</p><p>releases glutamate as its neurotransmitter has about 2.2 million copies of</p><p>glutamate molecules stored in each of its axon terminals. In theory, each</p><p>terminal could have all of those copies in our single big bucket vesicle;</p><p>instead, as noted before, the terminal contains an average of 270 little</p><p>vesicles, each containing about eight thousand copies of glutamate.</p><p>Why has this organization evolved, instead of the single-bucket</p><p>approach? Probably because it gives you more fine control. For example, it</p><p>turns out that a large percentage of vesicles are usually mothballed at the</p><p>back end of the terminal, kept in storage for when needed. Therefore, an</p><p>action potential doesn’t really cause the release of neurotransmitter from all</p><p>the vesicles in each axon terminal. More correctly, it causes releases from</p><p>all of the vesicles in the “readily releasable pool.” And neurons can regulate</p><p>what percentage of their vesicles are readily releasable versus in storage, a</p><p>way of changing the strength of the signal across the synapse.</p><p>This was the work of Bernard Katz, who got some of his training with</p><p>Eccles and went on to his own knighthood and Nobel Prize. Katz would</p><p>isolate a single neuron and, with the use of a particular drug, make it</p><p>impossible for it to have an action potential. He’d then study what would be</p><p>happening at a given axon terminal. What he saw was that, amid action</p><p>potentials being blocked, every now and then, maybe once a minute,[*] the</p><p>axon terminal would release a tiny hiccup of excitation, something</p><p>eventually called a miniature end-plate potential (MEPP). Showing that</p><p>little bits of neurotransmitter were spontaneously and randomly released.</p><p>Katz noted something interesting. The hiccups were all roughly the same</p><p>size, say, 1.3 smidgens of excitation. Never 1.2 or 1.4. To the limits of</p><p>measurement, always 1.3. And then, after sitting there recording the</p><p>occasional 1.3 smidgen-size blip, Katz noticed that much more rarely than</p><p>that, there’d be a hiccup that was 2.6 smidgens. Whoa. And even more</p><p>rarely, 3.9 smidgens. What was Katz seeing? 
1.3 smidgens was the amount</p><p>of excitation of one single vesicle being spontaneously released; 2.6, the</p><p>much rarer spontaneous release of two vesicles simultaneously, and so on.[*]</p><p>From that came the insight that neurotransmitters were stored in individual</p><p>vesicular packets, and that every now and then, in a purely probabilistic</p><p>fashion, an individual vesicle would dump its neurotransmitters—drumroll</p><p>please—in the absence of an antecedent cause.[*],[13]</p><p>While the field has often viewed the phenomenon as not hugely</p><p>interesting, often referring to it semisarcastically as “leaky synapses,” the</p><p>notion of there being no antecedent causes turned spontaneous vesicular</p><p>release of neurotransmitter into an amusement park in which</p><p>neuroquantologists can gambol. Aha, spontaneous, nondeterministic</p><p>vesicular neurotransmitter release as the building block for the brain as a</p><p>cloud of potentials, for being the captain of your fate. Four reasons to be</p><p>very cautious about this:[14]</p><p>—Not so fast with the no-antecedent-cause part. There’s a whole cascade of molecules</p><p>involved in the process of an action potential causing vesicles to dump their</p><p>neurotransmitter into the synapse—ion channels open or close, ion-sensitive enzymes are</p><p>activated, a matrix of proteins holding a vesicle still in its inactive state has to be</p><p>cleaved, a molecular machete has to cut through more matrix to allow the vesicle to then</p><p>move toward the neuron’s membrane, the vesicle has to now dock to a specific release</p><p>portal in the membrane. The insights of many fruitful careers in science. Okay, you think</p><p>you see where I’m going—yeah, yeah, neurotransmitter doesn’t just get dumped from</p><p>out of nowhere, there’s this whole complex mechanistic cascade explaining intentional</p><p>neurotransmitter release, so we’ll reframe our free will as when this deterministic</p><p>cascade happens to be triggered in the absence of an antecedent cause. But no—it’s not</p><p>just when the usual process is triggered randomly, because it turns out that the</p><p>mechanistic cascade for spontaneous vesicular release is different from the cascade for</p><p>release evoked by an action potential. It’s not a random universe hitting a button that</p><p>normally represents intent. A separate button evolved.[15]</p><p>—Moreover, the process of spontaneous vesicular release is regulated by factors</p><p>extrinsic to the axon terminal—other neurotransmitters, hormones, alcohol, having a</p><p>disease like diabetes, or having a particular visual experience can all alter spontaneous</p><p>release without having a similar effect on evoked neurotransmitter release. Events in</p><p>your big toe can change the likelihood of these hiccups happening in the axon terminal</p><p>of some neuron in the corner of your brain. How would, say, a hormone do this? It sure</p><p>wouldn’t be changing the fundamental nature of quantum mechanics (“Ever since</p><p>puberty and hormones hit, all I get from her is sullenness and quantum entanglement”).</p><p>But a hormone can alter the opportunity for quantum events to occur. For example, many</p><p>hormones change the composition of ion channels, changing how subject they are to</p><p>quantum effects.[16]</p><p>Thus, deterministic neurobiology can make indeterministic randomness more or less</p><p>likely to occur. 
It’s like you’re the director of a show where, at some point, the new king</p><p>emerges, to much acclaim. And as your direction, you tell the twenty people in the</p><p>ensemble, “Okay, when the king appears from stage left, shout out stuff like ‘Hoorah!’</p><p>‘Behold, the king!’ ‘Long life, sire!’ ‘Huzzah!’—just pick one of those.”[*] And you’re</p><p>pretty much guaranteed to get the mélange of responses you were aiming for.</p><p>Determined indeterminacy. This certainly does not count as randomness being an</p><p>uncaused cause.[17]</p><p>—Spontaneous vesicular release of neurotransmitters serves a useful purpose. If a</p><p>synapse has been silent for a while, the likelihood of spontaneous release increases—the</p><p>synapse gets up and stretches a bit. It’s like, during a long period at home, running the</p><p>car occasionally to keep the battery from dying.[*] In addition, spontaneous</p><p>neurotransmitter release plays a large role in the developing brain—it’s a good idea to</p><p>excite a newly wired synapse a bit, make sure everything is working right, before putting</p><p>it in charge of, say, breathing.[18]</p><p>—Finally, there’s still the bubbling-up problem.</p><p>The bubbling issue brings us to our next level. So individual vesicles</p><p>randomly dump their contents now and then, ignoring for the moment the</p><p>issues of its involving unique machinery, being intentionally regulated, and</p><p>being purposeful. Do enough vesicles ever get dumped all at once to make a</p><p>major burst of excitation in a single synapse? Unlikely; an action potential</p><p>evokes about forty times the excitation as does the spontaneous dump of a</p><p>single vesicle.[*] You’d need a lot of those hiccups at once to produce this.</p><p>Scaling up one step higher, do neurons ever just randomly have action</p><p>potentials, dumping vesicles in all ten thousand to fifty thousand axon</p><p>terminals, seemingly in the absence of an antecedent cause?</p><p>Now and then. Have we now leapfrogged up to a more integrated level</p><p>of brain function that could be subject to quantum effects? The same</p><p>caution is called for again. Such action potentials have their own</p><p>mechanistic antecedent causes, are regulated extrinsically, and serve a</p><p>purpose. As an example of the last point, neurons that send their axon</p><p>terminals into muscles, stimulating muscle movement, will have</p><p>spontaneous action potentials. It turns out that when the muscle has been</p><p>quiet for a while, a part of it (called the muscle spindle) can make the</p><p>neurons more likely to have spontaneous action potentials—when you’ve</p><p>been still for a long while, your muscles get twitchy, just so the battery</p><p>doesn’t run down.[*] Another case where a mechanistic, deterministic</p><p>regulatory loop can make indeterministic events more likely. Again, we’ll</p><p>get to what to make of such determined indeterminacy.</p><p>One level higher—do entire networks, circuits of neurons, ever activate</p><p>randomly? People used to think so. Suppose you’re interested in what areas</p><p>of the brain respond to a particular stimulus. 
Stick someone in a brain</p><p>scanner and expose them to that stimulus, and see what brain regions</p><p>activate (for example, the amygdala tends to activate in response to seeing</p><p>pictures of scary faces, implicating that brain region in fear and anxiety).</p><p>And in analyzing the data, you would always have to subtract out the</p><p>background level of noisy activity in each brain region, in order to identify</p><p>what was explicitly activated by the stimulus. Background noise. Interesting</p><p>term. In other words, when you’re just lying there, doing nothing, there’s all</p><p>sorts of random burbling going on throughout the brain, once again begging</p><p>for an indeterminacy interpretation.</p><p>Until some mavericks, principally Marcus Raichle of Washington</p><p>University School of Medicine, decided to study the boring background</p><p>noise. Which, of course, turns out to be anything but that—there’s no such</p><p>thing as the brain doing “nothing”—and is now known as the “default mode</p><p>network.” And, no surprise by now, it has its own underlying mechanisms,</p><p>is subject to all sorts of regulation, serves a purpose. One such purpose is</p><p>really interesting because of its counterintuitive punch line. Ask subjects in</p><p>a brain scanner what they were thinking at a particular moment, and the</p><p>default network is very active when they are daydreaming, aka “mind-</p><p>wandering.” The network is most heavily regulated by the dlPFC. The</p><p>obvious prediction now would be that the uptight dlPFC inhibits the default</p><p>network, gets you back to work when you’re spacing out thinking about</p><p>your next vacation. Instead, if you stimulate someone’s dlPFC, you increase</p><p>activity of the default network. An idle mind isn’t the Devil’s playground.</p><p>It’s a state that the most superego-ish part of your brain asks for now and</p><p>then. Why? Speculation is that it’s to take advantage of the creative problem</p><p>solving that we do when mind-wandering.[19]</p><p>• • •</p><p>W hat is to be made of these instances of neurons acting</p><p>spontaneously? Back, once again, to the show-me scenario—if free</p><p>will exists, show me a neuron(s) that just caused a behavior to occur in the</p><p>complete absence of any influences coming from other neurons, from the</p><p>neuron’s energy state, from hormones, from any environmental events</p><p>stretching back through fetal life, from genes. On and on. And none of the</p><p>versions of ostensibly spontaneous activation of a single vesicle, synapse,</p><p>neuron, or neuronal network constitutes</p><p>an example of this. None are truly</p><p>random events that could be directly rooted in quantum effects; instead,</p><p>they are all circumstances where something very mechanistic in the brain</p><p>has determined that it’s time to be indeterministic. Whatever quantum</p><p>effects there are in the nervous system, none bubble up to the level of</p><p>telling us anything about someone pulling a trigger heartlessly or heroically.</p><p>PROBLEM #2: IS YOUR FREE WILL A SMEAR?</p><p>Which brings us to the second big problem with the idea that quantum</p><p>mechanics means that our macroscopic world cannot actually be</p><p>deterministic and free will is alive and well. Rather than the technicalities of</p><p>leaky synapses, muscle spindles, and quantumly entangled vesicles, this</p><p>problem is simple. 
And, in my opinion, devastating.</p><p>Suppose there were no issues with bubbling—indeterminacy at the</p><p>quantum level was not canceled out in the noise and instead shaped</p><p>macroscopic events dozens of orders of magnitude larger in size. Suppose</p><p>the functioning of every part of your brain as well as your behavior could</p><p>most effectively be understood on the quantum level.</p><p>It’s difficult to imagine what that would look like. Would we each be a</p><p>cloud of superimposition, believing in fifty mutually contradictory moral</p><p>systems at the same time? Would we simultaneously pull the trigger and not</p><p>pull the trigger during the liquor store stickup, and only when the police</p><p>arrive would the macro-wave function collapse and the clerk be either dead</p><p>or not?</p><p>This raises a fundamental problem that screams out, one that every stripe</p><p>of scholar thinking about this topic typically wrestles with. If our behavior</p><p>were rooted in quantum indeterminacy, it would be random. In his</p><p>influential 2001 essay “Free Will as a Problem in Neurobiology,”</p><p>philosopher John Searle wrote, “Quantum indeterminism gives us no help</p><p>with the free will problem because that indeterminism introduces</p><p>randomness into the basic structure of the universe, and the hypothesis that</p><p>some of our acts occur freely is not at all the same as the hypothesis that</p><p>some of our acts occur at random. . . . How do we get from randomness to</p><p>rationality?”[*] Or as often pointed out by Sam Harris, if quantum</p><p>mechanics actually played a role in supposed free will, “every thought and</p><p>action would seem to merit the statement ‘I don’t know what came over</p><p>me.’ ” Except, I’d add, you wouldn’t actually be able to make that</p><p>statement, since you’d just be making gargly sounds because the muscles in</p><p>your tongue would be doing all sorts of random things. As emphasized by</p><p>Michael Shadlen and Adina Roskies, whether you believe that free will is</p><p>compatible with determinism, it isn’t compatible with indeterminism.[*] Or</p><p>in the really elegant words of one philosopher, “Chance is as relentless as</p><p>necessity.”[20]</p><p>When we argue about whether our behavior is the product of our agency,</p><p>we’re not interested in random behavior, why there might have been that</p><p>one time in Stockholm where Mother Teresa pulled a knife on some guy</p><p>and stole his wallet. We’re interested in the consistency of behavior that</p><p>constitutes our moral character. And in the consistent ways in which we try</p><p>to reconcile our multifaceted inconsistencies.[*] We’re trying to understand</p><p>how Martin Luther would stick to his guns and say, “Here I stand, I can do</p><p>no other,” when ordered to renounce his views by ecumenical thugs who</p><p>burned people at the stake as a hobby. We’re trying to understand that lost-</p><p>cause person who is trying to straighten out their life yet makes self-</p><p>destructive, impulsive decisions again and again. It’s why funerals so often</p><p>include a eulogy from that person’s oldest friend, a historical witness to</p><p>consistency: “Even when we were in grade school, she already was the sort</p><p>of person who . . .”</p><p>Even if quantum effects bubbled up enough to make our macro world as</p><p>indeterministic as our micro one is, this would not be a mechanism for free</p><p>will worth wanting. 
That is, unless you figure out a way where we can</p><p>supposedly harness the randomness of quantum indeterminacy to direct the</p><p>consistencies of who we are.</p><p>PROBLEM #3: HARNESSING THE RANDOMNESS OF</p><p>QUANTUM INDETERMINACY TO DIRECT THE</p><p>CONSISTENCIES OF WHO WE ARE</p><p>Which is precisely what is argued by some free-will believers leaning on</p><p>quantum indeterminacy. In the words of Daniel Dennett in describing this</p><p>view, “Whatever you are, you can’t influence the undetermined event—the</p><p>whole point of quantum indeterminacy is that such quantum events are not</p><p>influenced by anything—so you will somehow have to co-opt it or join</p><p>forces with it, putting it to use in some intimate way” (my italics). Or in the</p><p>words of Peter Tse, your brain “would have to be able to harness this</p><p>randomness to fulfill information processing aims.”[21]</p><p>I see two broad ways of thinking about how we might harness, co-opt,</p><p>and join forces with randomness for moral consistency. In a “filtering”</p><p>model, randomness is generated indeterministically, the usual, but the</p><p>agentic “you” installs a filter up top that allows only some of the</p><p>randomness that has bubbled up to pass through and drive behavior. In</p><p>contrast, in a “messing with” model, your agentic self reaches all the way</p><p>down and messes with the quantum indeterminacy itself in a way that</p><p>produces the behavior supposedly chosen.</p><p>Filtering</p><p>Biology provides at least two fantastic examples of this sort of filtering. The</p><p>first is evolution—the random physical chemistry of mutations occurring in</p><p>DNA provides genotypic variety, and natural selection is then the filter</p><p>choosing which mutations get through and become more common in a gene</p><p>pool. The other example concerns the immune system. Suppose you get</p><p>infected with a virus that your body has never seen before; thus, there’s no</p><p>antibody against it in your body’s medicine cabinet. The immune system</p><p>now shuffles some genes to randomly generate an enormous array of</p><p>different antibodies. At which point filtering begins. Each new type of</p><p>antibody is presented with a piece of the virus, to see how well the former</p><p>reacts to the latter. It’s a Hail Mary pass, hoping that some of these</p><p>randomly generated antibodies happen to target the virus. Identify them,</p><p>and then destroy the rest of the antibodies, a process termed positive</p><p>selection. Now check each remaining antibody type and make sure it</p><p>doesn’t happen to do something dangerous as well, namely targeting a piece</p><p>of you that happens to be similar to the viral fragment that was presented.</p><p>Check each candidate antibody against a “self” fragment; find any that</p><p>attack it and get rid of them and the cells that made them—negative</p><p>selection. You now have a handful of antibodies that target the novel virus</p><p>without inadvertently targeting you.[22]</p><p>As such, this is a three-step process. One—the immune system</p><p>determines it’s time to induce some indeterministic randomness. Two—the</p><p>random gene shuffling occurs. Three—your immune system determines</p><p>which random outcomes fit the bill, filtering out the rest. Deterministically</p><p>inducing a randomization process; being random; using predetermined</p><p>criteria for filtering out the unuseful randomness. 
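If it helps to see that three-step logic laid bare, here is a cartoon of generate-then-filter in code. The eight-digit "antibodies," the matching threshold, and the candidate count are all invented for illustration, with no pretense of being real immunology:

```python
import random

VIRUS = [random.randint(0, 9) for _ in range(8)]  # stand-in for a viral fragment
SELF = [random.randint(0, 9) for _ in range(8)]   # stand-in for a bit of "you"

def random_antibody():
    """Step two: the indeterministic part, generating a random candidate."""
    return [random.randint(0, 9) for _ in range(8)]

def matches(antibody, fragment, threshold=5):
    """Toy 'binding': enough positions in common with the fragment."""
    return sum(a == f for a, f in zip(antibody, fragment)) >= threshold

candidates = [random_antibody() for _ in range(200_000)]  # step one decided this was needed
keepers = [ab for ab in candidates
           if matches(ab, VIRUS)        # positive selection
           and not matches(ab, SELF)]   # negative selection
print(f"{len(keepers)} of {len(candidates):,} random candidates survive both filters")
```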
In the jargon of that field,</p><p>this is “harnessing the stochasticity of hypermutation.”</p><p>Which is what supposedly goes on in the filtering version of quantum</p><p>effects generating free will. In Dennett’s words:</p><p>The model of decision making I am proposing has the following</p><p>feature: when we are faced with an important decision, a</p><p>consideration-generator whose output is to some degree</p><p>undetermined, produces a series of considerations, some of</p><p>which may of course be immediately rejected as irrelevant by the</p><p>agent (consciously or unconsciously). Those considerations that</p><p>are selected by the agent as having a more than negligible</p><p>bearing on the decision then figure in a reasoning process, and if</p><p>the agent is in the main reasonable, those considerations</p><p>ultimately serve as predictors and explicators of the agent’s final</p><p>decision.[23]</p><p>As such, determining that you are at a decision-making juncture</p><p>—despite the world being deterministic, things can change. Brains change,</p><p>behaviors change. We change. And that doesn’t counter this being a</p><p>deterministic world without free will. In fact, the science of change</p><p>strengthens the conclusion; this will come in chapter 12.</p><p>With those issues in mind, time to see the version of determinism that</p><p>this book builds on.</p><p>Imagine a university graduation ceremony. Almost always moving,</p><p>despite the platitudes, the boilerplate, the kitsch. The happiness, the pride.</p><p>The families whose sacrifices now all seem worth it. The graduates who</p><p>were the first in their family to finish high school. The ones whose</p><p>immigrant parents sit there glowing, their saris, dashikis, barongs</p><p>broadcasting that their pride in the present isn’t at the cost of pride in their</p><p>past.</p><p>And then you notice someone. Amid the family clusters postceremony,</p><p>the new graduates posing for pictures with Grandma in her wheelchair, the</p><p>bursts of hugs and laughter, you see the person way in the back, the person</p><p>who is part of the grounds crew, collecting the garbage from the cans on the</p><p>perimeter of the event.</p><p>Randomly pick any of the graduates. Do some magic so that this garbage</p><p>collector started life with the graduate’s genes. Likewise for getting the</p><p>womb in which nine months were spent and the lifelong epigenetic</p><p>consequences of that. Get the graduate’s childhood as well—one filled with,</p><p>say, piano lessons and family game nights, instead of, say, threats of going</p><p>to bed hungry, becoming homeless, or being deported for lack of papers.</p><p>Let’s go all the way so that, in addition to the garbage collector having</p><p>gotten all that of the graduate’s past, the graduate would have gotten the</p><p>garbage collector’s past. Trade every factor over which they had no control,</p><p>and you will switch who would be in the graduation robe and who would be</p><p>hauling garbage cans. This is what I mean by determinism.</p><p>AND WHY DOES THIS MATTER?</p><p>Because we all know that the graduate and the garbage collector would</p><p>switch places. And because, nevertheless, we rarely reflect on that sort of</p><p>fact; we congratulate the graduate on all she’s accomplished and move out</p><p>of the way of the garbage guy without glancing at him.</p><p>T</p><p>2</p><p>The Final Three Minutes of a Movie</p><p>wo men stand by a hangar in a small airfield at night. 
One is in a police officer's uniform, the other dressed as a civilian. They talk tensely while, in the background, a small plane is taxiing to the runway. Suddenly, a vehicle pulls up and a man in a military uniform gets out. He and the police officer talk tensely; the military man begins to make a phone call; the civilian shoots him, killing him. A vehicle full of police pulls up abruptly, the police emerging rapidly. The police officer speaks to them as they retrieve the body. They depart as abruptly, with the body but not the shooter. The police officer and the civilian watch the plane take off and then walk off together.

What's going on? A criminal act obviously occurred—from the care with which the civilian aimed, he clearly intended to shoot the man. A terrible act, compounded further by the man's remorseless air—this was cold-blooded murder, depraved indifference. It is puzzling, though, that the police officer made no attempt to apprehend him. Possibilities come to mind, none good. Perhaps the officer has been blackmailed by the civilian to look the other way. Maybe all the police who appeared on the scene are corrupt, in the pocket of some drug cartel. Or perhaps the police officer is actually an impostor. One can't be certain, but it's clear that this was a scene of intent-filled corruption and lawless violence, the police officer and the civilian exemplars of humans at their worst. That's for sure.

Intent features heavily in issues about moral responsibility: Did the person intend to act as she did? When exactly was the intent formed? Did she know that she could have done otherwise? Did she feel a sense of ownership of her intent? These are pivotal issues to philosophers, legal scholars, psychologists, and neurobiologists. In fact, a huge percentage of the research done concerning the free-will debate revolves around intent, often microscopically examining the role of intent in the seconds before a behavior happens. Entire conferences, edited volumes, careers, have been spent on those few seconds, and in many ways, this focus is at the heart of arguments supporting compatibilism; this is because all the careful, nuanced, clever experiments done on the subject collectively fail to falsify free will. After reviewing these findings, the purpose of this chapter is to show how, nevertheless, all this is ultimately irrelevant to deciding that there's no free will. This is because this approach misses 99 percent of the story by not asking the key question: And where did that intent come from in the first place? This is so important because, as we will see, while it sure may seem at times that we are free to do as we intend, we are never free to intend what we intend.
Maintaining belief in free will by failing to ask that question can be heartless and immoral and is as myopic as believing that all you need to know to assess a movie is to watch its final three minutes. Without that larger perspective, understanding the features and consequences of intent doesn't amount to a hill of beans.

THREE HUNDRED MILLISECONDS

Let's start off with William Henry Harrison, ninth president of the United States, remembered only for idiotically insisting on giving a record-long two-hour inauguration speech in the freezing cold in March 1841, without coat or hat; he caught pneumonia and died a month later, the first president to die in office, after the shortest presidential term.[*],[1]

With that in place, think about William Henry Harrison. But first, we're going to stick electrodes all over your scalp for an electroencephalogram (EEG), to observe the waves of neuronal excitation generated by your cortex when you're thinking of Bill.

Now don't think of Harrison—think about anything else—as we continue recording your EEG. Good, well done. Now don't think about Harrison, but plan to think about him whenever you want a little while later, and push this button the instant you do. Oh, also, keep an eye on the second hand on this clock and note when you chose to think about Harrison. We're also going to wire up your hand with recording electrodes to detect precisely when you start the pushing; meanwhile, the EEG will detect when neurons that command those muscles to push the button start to activate. And this is what we find out: those neurons had already activated before you thought you were first freely choosing to start pushing the button.

But the experimental design of this study isn't perfect, because of its nonspecificity—we may have just learned what's happening in your brain when it is generically doing something, as opposed to doing this particular something. Let's switch instead to your choosing between doing A and doing B. William Henry Harrison sits down to some typhoid-riddled burgers and fries, and he asks for ketchup. If you decide he would have pronounced it "ketch-up," immediately push this button with your left hand; if it was "cats-up," push this other button with your right. Don't think about his pronunciation of ketchup right now; just look at the clock and tell us the instant you chose which button to push. And you get the same answer—the neurons responsible for whichever hand pushes the button activate before you consciously formed your choice.

Let's do something fancier now than looking at brain waves, since EEG reflects the activity of hundreds of millions of neurons at a time, making it hard to know what's happening in particular brain regions. Thanks to a grant from the WHH Foundation, we've bought a neuroimaging system and will do functional magnetic resonance imaging (fMRI) of your brain while you do the task—this will tell us about activity in each individual brain region at the same time.
The</p><p>activates an indeterministic generator, and you then reason through which</p><p>consideration is chosen.[*] As noted, Roskies does not equate the random</p><p>noise of nervous systems (rooted in quantum indeterminacy or otherwise)</p><p>with the headwaters of free will; instead, for Roskies, writing with Michael</p><p>Shadlen, free will is what’s happening when you filter out the chaff from</p><p>the wheat: “Noise puts a limit on an agent’s capacities and control, but</p><p>invites the agent to compensate for these limitations by high-level decisions</p><p>or policies[*] that may be (a) consciously accessible; (b) voluntarily</p><p>malleable; and (c) indicative of character.” Filtering, picking, choosing as</p><p>an act of sufficient free will and character that, as they state, this “can</p><p>provide a basis for accountability and responsibility.”[24]</p><p>Such a harnessing scenario has at least three limitations, of increasing</p><p>significance:</p><p>—A child has fallen into an icy river, and your consideration generator produces three</p><p>possibilities to choose among: leap in and save the child; shout for help; pretend you</p><p>didn’t see and scurry away. Choose. But since we’re dealing with quantum</p><p>indeterminacy, what if the first three possibilities are: tango in the absence of a partner;</p><p>confess to cheating on your taxes; make squawking sounds while jumping backward like</p><p>the dolphins at Sea World? Perfectly plausible, if superpositioned electron waves are the</p><p>wellsprings from which your moral decisions flow.</p><p>—To avoid having only tangoing, confessing, and dolphining as options, determine that</p><p>you need to indeterminately generate every random possibility. But now you have to</p><p>spend a lifetime evaluating and comparing each before choosing which is best. You need</p><p>to have an impossibly efficient search algorithm.[*],[25]</p><p>—So, phew, generate enough options so that they aren’t all silly, figure out how to</p><p>efficiently evaluate them all, and then use your criteria to filter out all but the winner. But</p><p>where does that filter, reflecting your values, ethics, and character, come from? It’s</p><p>chapter 3. And where does intent come from? How is it that one person’s filter filters out</p><p>every random possibility other than “Rob the bank,” while another’s goes for “Wish the</p><p>bank teller a good day”? And where do the values and criteria come from in even first</p><p>deciding whether some circumstance merits activating Dennett’s random consideration</p><p>generator? One person might do so when considering whether to commence an act of</p><p>civil disobedience at great personal cost, while another would when making a fashion</p><p>decision. Likewise, where do the differences come from as to which search algorithm is</p><p>used and for how long? Where do all of those come from? From the events, outside the</p><p>person’s control, occurring one second before, one minute before, one hour before, and</p><p>so on. Filtering out nonsense might prevent quantum indeterminacy from generating</p><p>random behavior, but it sure isn’t a manifestation of free will.</p><p>Messing With</p><p>To reiterate, in a messing-with model, you don’t merely pick and choose</p><p>among the random quantum effects generated. Instead, you reach down and</p><p>alter the process. 
As discussed in the last chapter, downward causation is</p><p>perfectly valid; the metaphor often used is that when a wheel is rolling, its</p><p>high-level wheel-ness is causing its constituent parts to do forward rolls.</p><p>And when you choose to pull a trigger, all of your index finger’s cells,</p><p>organelles, molecules, atoms, and quarks move about an inch.</p><p>Thus, supposedly, some high-level “me” reaches down, does some</p><p>downward causation such that subatomic events produce free will. In the</p><p>words of Irish neuroscientist Kevin Mitchell, “indeterminacy creates some</p><p>elbow room. . . . What randomness does, it is posited, is to introduce some</p><p>room, some causal slack in the system, for higher-order factors to exert a</p><p>causal influence” (my emphasis).[26]</p><p>As a first problem, the “controlled randomness” implicit in reaching</p><p>down and messing with quantum events is as much of an oxymoron as</p><p>“determined indeterminacy.” And where do the criteria come from as to</p><p>how you’re going to mess with your electrons? Amid those issues, the</p><p>biggest challenge I have in evaluating this idea is that it is truly difficult to</p><p>understand what exactly is being suggested.</p><p>One picture of downward causation changing the ability of quantum</p><p>events to influence our behavior is offered by libertarian philosopher Robert</p><p>Kane, who, it will be recalled from chapter 4, suggests that at times of life</p><p>when we are at a major crossroads of decision-making, the consistent</p><p>character at play when we choose was formed in the past out of free will</p><p>(i.e., his idea of “Self-Forming Actions”). But how does that self-formed</p><p>self actually bring about that decision? At such consequential crossroads,</p><p>“there is tension and uncertainty in our minds about what to do, I suggest,</p><p>that is reflected in appropriate regions of our brains by movement away</p><p>from thermodynamic equilibrium—in short, a kind of stirring up of chaos in</p><p>the brain that makes it sensitive to microindeterminacies at the neuronal</p><p>level.” In this view, your conscious self uses downward causation to induce</p><p>neuronal chaoticism in a way that allows quantum indeterminacy to bubble</p><p>all the way up in exactly the way you’ve chosen.[27]</p><p>Similar messing-with comes from Peter Tse, who, as quoted earlier,</p><p>argues that “the brain has in fact evolved to amplify quantum domain</p><p>randomness” (and then speculates that animals that had brains that could do</p><p>this “procreate better than those that did not”). For him, the brain reaches</p><p>down and messes with fundamental indeterminacy: “This permits</p><p>information to be downwardly causal regarding which indeterministic</p><p>events at the root-most level will be realized.”[*],[28]</p><p>I am nontrivially unsure how Tse proposes this happens. He wisely</p><p>emphasizes how cause and effect in the nervous system can be</p><p>conceptualized as the flow of “information.” But then a cloud of dualism</p><p>comes in. For him, downwardly causal information is not materially real,</p><p>which runs counter to the fact that in the brain, “information” is comprised</p><p>of real, material things, like neurotransmitter, receptor, and ion channel</p><p>molecules. 
Neurotransmitters bind to particular receptors for particular</p><p>durations; chains of proteins change conformations such that channels open</p><p>or close like the locks in the Panama Canal; ions flow like tsunamis into or</p><p>out of cells. But despite that, “information cannot be anything like an</p><p>energy that imposes forces.” However, such information, which is not</p><p>causal, can allow information that is causal: “Information is not causal as a</p><p>force. Rather, it is causal by allowing those physical causal chains that are</p><p>also informational causal chains . . . to become real.” And while</p><p>informational “patterns” are not material, there are “physically realized</p><p>pattern detectors.” In other words, while information might be made of</p><p>immaterial dust, the brain’s immaterial dust detectors are made of</p><p>reinforced concrete, steel rebar, and, if you’re on the old side, asbestos.</p><p>My problem with Kane’s and Tse’s views, and the similar ones of other</p><p>philosophers, is that, for the life of me, I can’t figure out how such reaching</p><p>down and messing with microscopic indeterminacy in the brain is supposed</p><p>to work. I can’t get past information being both a force and not without</p><p>sensing cake being both had and eaten. When Kane writes, “There is</p><p>tension and uncertainty in our minds about what to do, I suggest, that is</p><p>reflected in appropriate regions of our brains by movement away from</p><p>thermodynamic equilibrium,”[29] I am unclear whether “reflected” is meant</p><p>to be causal or correlative. Moreover, I know of no biology that explains</p><p>how having to make a tough decision causes thermodynamic disequilibrium</p><p>in the brain; how chaoticism can be “stirred up” in synapses; how chaotic</p><p>and nonchaotic determinism differ in their sensitivity to quantum</p><p>indeterminacy occurring at a scale many, many orders of magnitude</p><p>smaller; whether downward causality</p><p>results show clearly, once again, that particular</p><p>regions have “decided” which button to push before you believe you</p><p>consciously and freely chose. Up to ten seconds before, in fact.</p><p>Eh, forget about fMRI and the images it produces, where a single pixel’s</p><p>signal reflects the activity of about half a million neurons. Instead, we’re</p><p>going to drill holes in your head and then stick electrodes into your brain to</p><p>monitor the activity of individual neurons; using this approach, once again,</p><p>we can tell if you’ll go for “ketch-up” or “cats-up” from the activity of</p><p>neurons before you believe you decided.</p><p>These are the basic approaches and findings in a monumental series of</p><p>studies that have produced a monumental shitstorm as to whether they</p><p>demonstrate that free will is a myth. These are the core findings in virtually</p><p>every debate about what neuroscience can tell us on the subject. And I think</p><p>that at the end of the day, these studies are irrelevant.</p><p>It began with Benjamin Libet, a neuroscientist at the University of</p><p>California at San Francisco, in a 1983 study so provocative that at least one</p><p>philosopher refers to it as “infamous,” there are conferences held about it,</p><p>and scientists are described as doing “Libet-style studies.”[*], [2]</p><p>We know the experimental setup. Here’s a button. Push it whenever you</p><p>want. 
Don’t think about it beforehand; look at this fancy clock that makes it</p><p>easy to detect fractions of a second and tell us when you decided to push the</p><p>button, that moment of conscious awareness when you freely made your</p><p>decision.[*] Meanwhile, we’ll be collecting EEG data from you and</p><p>monitoring exactly when your finger starts moving.</p><p>Out of this came the basic findings: people reported that they decided to</p><p>push the button about two hundred milliseconds—two tenths of a second—</p><p>before their finger started moving. There was also a distinctive EEG</p><p>pattern, called a readiness potential, when people prepared to move; this</p><p>emanated from a part of the brain called the SMA (supplementary motor</p><p>area), which sends projections down the spine, stimulating muscle</p><p>movement. But here’s the crazy thing: the readiness potential, the evidence</p><p>that the brain had committed to pushing the button, occurred about three</p><p>hundred milliseconds before people believed they had decided to push the</p><p>button. That sense of freely choosing is just a post hoc illusion, a false sense</p><p>of agency.</p><p>This is the observation that started it all. Read technical papers on</p><p>biology and free will, and in 99.9 percent of them, Libet will appear, usually</p><p>by the second paragraph. Ditto for articles in the lay press—“Scientist</p><p>Proves There Is No Free Will; Your Brain Decides Before You Think You</p><p>Did.”[*] It inspired scads of follow-up research and theorizing; people are</p><p>still doing studies directly inspired by Libet nearly forty years after his 1983</p><p>publication. For example, there’s a 2020 paper entitled “Libet’s Intention</p><p>Reports Are Invalid.”[3] Having your work be important enough that</p><p>decades later, people are still trash-talking it is immortality for a scientist.</p><p>The basic Libet finding that you’re kidding yourself if you think you</p><p>made a decision when it feels like you did has been replicated.</p><p>Neuroscientist Patrick Haggard of University College London had subjects</p><p>choose between two buttons—choosing to do A versus B, rather than</p><p>choosing to do something versus not. This suggested the same conclusion</p><p>that the brain has seemingly decided before you think you did.[4]</p><p>These findings ushered in Libet 2.0, the work of John-Dylan Haynes and</p><p>colleagues at Humboldt University in Germany. It was twenty-five years</p><p>later, with fMRIs available; everything else was the same. Once again,</p><p>people’s sense of conscious choice came about two hundred milliseconds</p><p>before the muscles started moving. Most important, the study replicated the</p><p>conclusion from Libet, fleshing it out further.[*] With fMRI, Haynes was</p><p>able to spot the which-button decision even farther up in the brain’s chain of</p><p>command, in the prefrontal cortex (PFC). This made sense, as the PFC is</p><p>where executive decisions are made. (When the PFC, along with the rest of</p><p>the frontal cortex, is destroyed, à la Gage, one makes terrible, disinhibited</p><p>decisions.) 
To simplify a bit, once having decided, the PFC passes the</p><p>decision on to the rest of the frontal cortex, which passes it to the premotor</p><p>cortex, then to the SMA and, a few steps later, on to your muscles.[*]</p><p>Supporting the view of Haynes having spotted decision-making farther</p><p>upstream, the PFC was making its decision up to ten seconds before</p><p>subjects felt they were consciously deciding.[*], [5]</p><p>Then Libet 3.0 explored free-will-is-an-illusion down to monitoring the</p><p>activity of individual neurons. Neuroscientist Itzhak Fried of UCLA worked</p><p>with patients with intractable epilepsy, unresponsive to antiseizure</p><p>medications. As a last-ditch effort, neurosurgeons remove the part of the</p><p>brain where these seizures initiate; with Fried’s patients, it was the frontal</p><p>cortex. One obviously wants to minimize the amount of tissue removed, and</p><p>in preparation for that, electrodes are implanted in the targeted area prior to</p><p>the surgery, allowing for monitoring activity there. This provides a fine-</p><p>grained map of function, telling you what subparts you should avoid</p><p>removing, if there’s any leeway.</p><p>So Fried would have the subjects do a Libet-style task while electrodes</p><p>in their frontal cortex detected when particular neurons there activated.</p><p>Same punch line: some neurons activated in preparation for a particular</p><p>movement decision seconds before subjects claimed they had consciously</p><p>decided. In fascinating related studies, he has shown that neurons in the</p><p>hippocampus that code for a specific episodic memory activate one to two</p><p>seconds before the person becomes aware of freely recalling that memory.</p><p>[6]</p><p>Thus, three different techniques, monitoring the activity of hundreds of</p><p>millions of neurons down to single neurons, all show that at the moment</p><p>when we believe that we are consciously and freely choosing to do</p><p>something, the neurobiological die has already been cast. That sense of</p><p>conscious intent is an irrelevant afterthought.</p><p>This conclusion is reinforced by studies showing how malleable the</p><p>sense of intent and agency is. Back to the basic Libet paradigm; this time,</p><p>pushing a button caused a bell to ring, and the researchers would vary how</p><p>long of a fraction-of-a-second time delay there’d be between the pushing</p><p>and the ringing. When the bell ringing was delayed, subjects reported their</p><p>intent to push the button coming a bit later than usual—without the</p><p>readiness potential or actual movement changing. Another study showed</p><p>that if you feel happy, you perceive that conscious sense of choice sooner</p><p>than if you’re unhappy, showing how our conscious sense of choosing can</p><p>be fickle and subjective.[7]</p><p>Other studies of people undergoing neurosurgery for intractable epilepsy,</p><p>meanwhile, showed that the sense of intentional movement and actual</p><p>movement can be separated. Stimulate an additional brain region relevant to</p><p>decision-making,[*] and people would claim they had just moved</p><p>voluntarily—without so much as having tensed a muscle. Stimulate the pre-</p><p>SMA instead, and people would move their finger while claiming that they</p><p>hadn’t.[8]</p><p>One neurological disorder reinforces these findings. 
Stroke damage to</p><p>part of the SMA produces “anarchic hand syndrome,” where the hand</p><p>controlled by that side of the SMA[*] acts against the person’s will (e.g.,</p><p>grabbing food from someone else’s plate); sufferers even restrain their</p><p>anarchic hand with their other one.[*] This suggests that the SMA keeps</p><p>volition on task, binding “intention to action,” all before the person believes</p><p>they’ve formed that intention.[9]</p><p>Psychology studies also show how the sense of agency can be illusory.</p><p>In one study, pushing a button would be followed immediately by a light</p><p>going on . . . some of the time. The percentage of time the light would go on</p><p>was varied; subjects were then asked how much</p><p>control they felt they had</p><p>over the light. People consistently overestimate how reliably the light</p><p>occurs, feeling that they control it.[*] In another study, subjects believed</p><p>they were voluntarily choosing which hand to use in pushing a button.</p><p>Unbeknownst to them, hand choice was being controlled by transcranial</p><p>magnetic stimulation[*] of their motor cortex; nonetheless, subjects</p><p>perceived themselves as controlling their decisions. Meanwhile, other</p><p>studies used manipulations straight out of the playbook of magicians and</p><p>mentalists, with subjects claiming agency over events that were actually</p><p>foregone and out of their control.[10]</p><p>If you do X and this is followed by Y, what increases the odds of your</p><p>feeling like you caused Y? Psychologist Daniel Wegner of Harvard, a key</p><p>contributor in this area, identified three logical variables. One is priority—</p><p>the shorter the delay between X and Y, the more readily we have an illusory</p><p>sense of will. There are also consistency and exclusivity—how consistently</p><p>Y happens after you’ve done X, and how often Y happens in the absence of</p><p>X. The more of the former and the less of the latter, the stronger the</p><p>illusion.[11]</p><p>Collectively, what does this Libetian literature, starting with Libet,</p><p>show? That we can have an illusory sense of agency, where our sense of</p><p>freely, consciously choosing to act can be disconnected from reality;[*] we</p><p>can be manipulated as to when we first feel a sense of conscious control;</p><p>most of all, this sense of agency comes after the brain has already</p><p>committed to an action. Free will is a myth.[12]</p><p>Surprise!, people have been screaming at each other about these</p><p>conclusions ever since, incompatibilists perpetually citing Libet and his</p><p>descendants, and compatibilists being scornful shade throwers about the</p><p>entire literature. It didn’t take long to start. Two years after his landmark</p><p>paper, Libet published a review in a peer-commentary journal (where</p><p>someone presents a theoretical paper on a controversial topic, followed by</p><p>short commentaries by the scientist’s friends and enemies); commentators</p><p>beating on Libet accused him of “egregious errors,” overlooking</p><p>“fundamental measurement concepts,” conceptual unsophistication</p><p>(“Pardon, your dualism is showing,” accused one critic), and having an</p><p>unscientific faith in the accuracy of his timing measurements (sarcastically</p><p>proclaiming Libet as practicing “chronotheology”).[13]</p><p>The criticisms of the work of Libet, Haynes, Fried, Wegner, and friends</p><p>continue unabated. 
Some focus on minutiae like the limitations of using</p><p>EEGs, fMRI, and single-neuron recordings, or the pitfalls inherent in</p><p>subjects self-reporting most anything. But most criticisms are more</p><p>conceptual and collectively show that rumors of Libetianism killing free</p><p>will are exaggerated. These are worth detailing.</p><p>YOU GUYS PROCLAIM THE DEATH OF FREE WILL,</p><p>BASED ON SPONTANEOUS FINGER MOVEMENTS?</p><p>The Libetian literature is built around people spontaneously deciding to do</p><p>something. In the view of Manuel Vargas, free will revolves around being</p><p>future oriented, enduring an immediate cost for a long-term goal, and thus</p><p>“Libet’s experiment insisted on a purely immediate, impulsive action—</p><p>which is precisely not what free will is for.”[14]</p><p>Moreover, what was being spontaneously decided was to push a button,</p><p>and this bears little resemblance to whether we have free will concerning</p><p>our beliefs and values or our most consequential actions. In the words of</p><p>psychologist Uri Maoz of Chapman University, this is a contrast between</p><p>“picking” and “choosing”—Libet is about picking which box of Cheerios to</p><p>take off the supermarket shelf, not about choosing something major.</p><p>Dartmouth philosopher Adina Roskies, for example, views Libet-world</p><p>picking as a caricature of real choice, dwarfed even by the complexity of</p><p>deciding between tea and coffee.[*], [15]</p><p>Does the Libet finding apply to something more interesting than button</p><p>pushing? Fried replicated the Libet effect when subjects in a driving</p><p>simulator chose between turning left and turning right. Another study</p><p>merged neuroscience with getting out of the lab on a sunny day, checking</p><p>for the Libet phenomenon in subjects just before they bungee-jumped. Did</p><p>the neuroscientists, clutching their equipment, jump too? No, a wireless</p><p>EEG device was strapped to the jumpers’ heads, making them look like</p><p>Martians persuaded to bungee-jump by frat bros after some beer pong.</p><p>Results? Replication of Libet, where a readiness potential preceded the</p><p>subjects’ believing they had decided to jump.[16]</p><p>To which the compatibilists replied, This is still totally artificial—</p><p>choosing when to leap into an abyss or whether to turn left or right in a</p><p>driving simulator tells us nothing about our free will in choosing between,</p><p>say, becoming a nudist versus a Buddhist, or becoming an algologist versus</p><p>an allergologist. This criticism was backed by a particularly elegant study.</p><p>In the first situation, subjects would be presented with two buttons and told</p><p>that each represented a particular charity; press one of the buttons and that</p><p>charity will be sent a thousand dollars. Second version: two buttons, two</p><p>charities, push whichever button you feel like, each charity is getting five</p><p>hundred dollars. The brain was commanding the same movement in both</p><p>scenarios, but the choice in the first one was highly consequential, while</p><p>that in the second was as arbitrary as the one in the Libet study. The boring,</p><p>arbitrary situation evoked the usual readiness potential before there was a</p><p>sense of conscious decision; the consequential one didn’t. In other words,</p><p>Libet doesn’t tell us anything about free will worth wanting. 
In the</p><p>wonderfully sarcastic words of one leading compatibilist, the take-home</p><p>message of this entire literature is “Don’t play rock paper scissors for</p><p>money [with one of these free will skeptic researchers] if your head is in an</p><p>fMRI machine.”[17]</p><p>But then, the revenge of the free will skeptics. Haynes’s group brain-</p><p>imaged subjects participating in a nonmotoric task, choosing whether to add</p><p>or subtract one number from another; they found a neural signature of</p><p>decision coming before conscious awareness, but coming from a different</p><p>brain region than the SMA (called the posterior cingulate / precuneus</p><p>cortex). So maybe the pick-your-charity scientists were just looking in the</p><p>wrong part of the brain—simple brain regions decide things before you</p><p>think you’ve consciously made a simple decision, more complicated</p><p>regions before you think you’ve made a complicated choice.[18]</p><p>The jury is still out, because the Libetian literature remains almost</p><p>entirely about spontaneous decisions regarding some fairly simple things.</p><p>On to the next broad criticism.</p><p>60 PERCENT? REALLY?</p><p>What does it mean to become aware of a conscious decision? What do</p><p>“deciding” and “intending” really mean? Again with semantics that aren’t</p><p>just semantic. The philosophers run wild here in subtle ways that leave</p><p>many neuroscientists (e.g., me) gasping in defanged awe. How long does it</p><p>take to focus on focusing on the second hand on a clock? In her writing,</p><p>Roskies emphasizes the difference between conscious intention and</p><p>consciousness of intention. Alfred Mele speculates that the readiness</p><p>potential is the time when, in fact, you have legitimately freely chosen, and</p><p>it then takes a bit of time for you to be consciously aware of your freely</p><p>willed choice. Arguing against this, one study showed that at the time of the</p><p>onset of the readiness potential, rather than thinking about when they were</p><p>going to move, many subjects were thinking about things like dinner.[19]</p><p>Can you decide to decide? Are intending and having an intent the same</p><p>thing? Libet instructed subjects to note the time when they first became</p><p>aware of “the subjective experience of ‘wanting’ or intending to act”—but</p><p>are “wanting” and “intending” the same? Is it possible to be spontaneous</p><p>when you’ve been told to be spontaneous?</p><p>As long as we’re at it, what actually is a readiness potential?</p><p>Remarkably,</p><p>nearly forty years after Libet, a paper can still be entitled</p><p>“What Is the Readiness Potential?” Could it be deciding-to-do, actual</p><p>“intention,” while the conscious sense of decision is deciding-to-do-now, an</p><p>“implementation of intention”? Maybe the readiness potential doesn’t mean</p><p>anything—some models suggest that it is just the point where random</p><p>activity in the SMA passes a detectable threshold. Mele forcefully suggests</p><p>that the readiness potential is not a decision but an urge, and physicist Susan</p><p>Pockett and psychologist Suzanne Purdy, both of the University of</p><p>Auckland, have shown that the readiness potential is less consistent and</p><p>shorter when subjects are planning to identify when they made a decision,</p><p>versus when they felt an urge. For others, the readiness potential is the</p><p>process leading to deciding, not the decision itself. One clever experiment</p><p>supports this interpretation. 
In it, subjects were presented four random</p><p>letters and then instructed to choose one in their minds; sometimes they</p><p>were then signaled to press a button corresponding to that letter, sometimes</p><p>not—thus, the same decision-making process occurred in both scenarios,</p><p>but only one actually produced movement. Crucially, a similar readiness</p><p>potential occurred in both cases, suggesting, in the words of compatibilist</p><p>neuroscientist Michael Gazzaniga, that rather than the SMA deciding to</p><p>enact a movement, it’s “warming up for its participation in the dynamic</p><p>events.”[20]</p><p>So are readiness potentials and their precursors decisions or urges? A</p><p>decision is a decision, but an urge is just an increased likelihood of a</p><p>decision. Does a preconscious signal like a readiness potential ever occur</p><p>and despite that, the movement doesn’t then happen? Does a movement</p><p>ever occur without a preconscious signal preceding it? Combining these two</p><p>questions, how accurately do these preconscious signals predict actual</p><p>behavior? Something close to 100 percent accuracy would be a major blow</p><p>to free-will belief. In contrast, the closer accuracy is to chance (i.e., 50</p><p>percent), the less likely it is that the brain “decides” anything before we feel</p><p>a sense of choosing.</p><p>As it turns out, predictability isn’t all that great. The original Libet study</p><p>was done in such a way that it wasn’t possible to generate a number for this.</p><p>However, in the Haynes studies, fMRI images predicted which behavior</p><p>occurred with only about 60 percent accuracy, almost at the chance level.</p><p>For Mele, a “60-percent accuracy rate in predicting which button a</p><p>participant will press next doesn’t seem to be much of a threat to free will.”</p><p>In Roskies’s words, “All it suggests is that there are some physical factors</p><p>that influence decision-making.” The Fried studies recording from</p><p>individual neurons pushed accuracy up into the 80 percent range; while</p><p>certainly better than chance, this sure doesn’t constitute a nail in free will’s</p><p>coffin.[21]</p><p>Now for the next criticisms.</p><p>WHAT IS CONSCIOUSNESS?</p><p>Giving this section this ridiculous heading reflects how unenthused I am</p><p>about having to write this next stretch. I don’t understand what</p><p>consciousness is, can’t define it. I can’t understand philosophers’ writing</p><p>about it. Or neuroscientists’, for that matter, unless it’s “consciousness” in</p><p>the boring neurological sense, like not experiencing consciousness because</p><p>you’re in a coma.[*],[22]</p><p>Nevertheless, consciousness is central to Libet debates, sometimes, in a</p><p>fairly heavy-handed way. For example, take Mele, in a book whose title</p><p>trumpets that he’s not pulling any punches—Free: Why Science Hasn’t</p><p>Disproved Free Will. In its first paragraph, he writes, “There are two main</p><p>scientific arguments today against the existence of free will.” One arises</p><p>from social psychologists showing that behavior can be manipulated by</p><p>factors that we’re not aware of—we’ve seen examples of these. The other is</p><p>neuroscientists whose “basic claim is that all our decisions are made</p><p>unconsciously and therefore not freely” (my italics). In other words, that</p><p>consciousness is just an epiphenomenon, an illusory, reconstructive sense of</p><p>control irrelevant to our actual behavior. 
This strikes me as an overly</p><p>dogmatic way of representing just one of many styles of neuroscientific</p><p>thought on the subject.</p><p>The “ooh, you neuroscientists not only eat your dead but also believe all</p><p>our decisions are unconscious” nyah-nyah matters, because we shouldn’t be</p><p>held morally responsible for our unconscious behaviors (although</p><p>neuroscientist Michael Shadlen of Columbia University, whose excellent</p><p>research has informed free-will debates, makes a spirited argument along</p><p>with Roskies that we should be held morally responsible for even our</p><p>unconscious acts).[23]</p><p>Compatibilists trying to fend off the Libetians often make a last stand</p><p>with consciousness: Okay, okay, suppose that Libet, Haynes, Fried, and so</p><p>on really have shown that the brain decides something before we have a</p><p>sense of having consciously and freely done so. Let’s grant the</p><p>incompatibilists that. But does turning that preconscious decision into</p><p>actual behavior require that conscious sense of agency? Because if it does,</p><p>rather than bypassing consciousness as an irrelevancy, free will can’t be</p><p>ruled out.[*]</p><p>As we saw, knowing what a brain’s preconscious decision was</p><p>moderately predicts whether the behavior will actually occur. But what</p><p>about the relationship between the preconscious brain’s decision and the</p><p>sense of conscious agency—is there ever a readiness potential followed by</p><p>a behavior without a conscious sense of agency coming in between? One</p><p>cool study done by Dartmouth neuroscientist Thalia Wheatley and</p><p>collaborators[*] shows precisely this—subjects were hypnotized and</p><p>implanted with a posthypnotic suggestibility that they make a spontaneous</p><p>Libet-like movement. In this case, when triggered by the cued suggestion,</p><p>there’d be a readiness potential and the subsequent movement, without</p><p>conscious awareness in between. Consciousness is an irrelevant hiccup.[24]</p><p>Sure, retort compatibilists, this doesn’t mean that intentional behavior</p><p>always bypasses consciousness—rejecting free will based on what happens</p><p>in the posthypnotic brain is kind of flimsy. And there is a higher-order level</p><p>to this issue, something emphasized by incompatibilist philosopher Gregg</p><p>Caruso of the State University of New York—you’re playing soccer, you</p><p>have the ball, and you consciously decide that you are going to try to get</p><p>past this defender, rather than pass the ball off. In the process of then trying</p><p>to do this, you make a variety of procedural movements that you’re not</p><p>consciously choosing; what does it mean that you have made the explicit</p><p>choice to let a particular implicit process take over? The debate continues,</p><p>not just over whether the preconscious requires consciousness as a</p><p>mediating factor but also over whether both can simultaneously cause a</p><p>behavior.[25]</p><p>Amid these arcana, it’s hugely important if the preconscious decision</p><p>requires consciousness as a mediator. Why? Because during that moment of</p><p>conscious mediation we should then be expected to be able to veto a</p><p>decision, prevent it from happening. 
And you can hang moral responsibility</p><p>on that.[26]</p><p>FREE WON’T: THE POWER TO VETO</p><p>Even if we don’t have free will, do we have free won’t, the ability to slam</p><p>our foot on the brake between the moment of that conscious sense of freely</p><p>choosing to do something and the behavior itself? This is what Libet</p><p>concluded from his studies. Clearly we have that veto power. Writ small,</p><p>you’re about to reach for more M&M’s but stop an instant before. Writ</p><p>larger, you’re about to say something hugely inappropriate and disinhibited</p><p>but, thank God, you stop yourself as your larynx warms up to doom you.</p><p>The basic Libetian findings gave rise to a variety of studies looking at</p><p>where vetoing actions fits in. Do it or not: once that conscious sense of</p><p>intent occurs, subjects have the option to stop. Do it now or in a bit: once</p><p>that conscious sense of intent occurs, immediately push the button or first</p><p>count</p><p>to ten. Impose an external veto: In a brain-computer interface study,</p><p>researchers used a machine learning algorithm that monitored a subject’s</p><p>readiness potential, predicting in real time when the person was about to</p><p>move; some of the time, the computer would signal the subject to stop the</p><p>movement in time. Of course, people could generally stop themselves up</p><p>until a point of no return, which roughly corresponded to when the neurons</p><p>that send a command directly to muscles were about to fire. As such, a</p><p>readiness potential doesn’t constitute an unstoppable decision, and one</p><p>would generally look the same whether the subject was definitely going to</p><p>push a button or there was the possibility of a veto.[*],[27]</p><p>How does the vetoing work, neurobiologically? Slamming a foot on the</p><p>brake involved activating neurons just upstream of the SMA.[*] Libet may</p><p>have spotted this in a follow-up study examining free won’t. Once subjects</p><p>had that conscious sense of intent, they were supposed to veto the action; at</p><p>that point, the tail end of the readiness potential would lose steam, flatten</p><p>out.[*],[28]</p><p>Meanwhile, other studies explored interesting spin-offs of free won’t–</p><p>ness. What’s the neurobiology of a gambler on a losing streak who manages</p><p>to stop gambling, versus one who doesn’t?[*] What happens to free won’t</p><p>when there’s alcohol on board? How about kids versus adults? It turns out</p><p>H</p><p>that kids need to activate more of their frontal cortex than do adults to get</p><p>the same effectiveness at inhibiting an action.[29]</p><p>So what do all these versions of vetoing a behavior in a fraction of a</p><p>second say about free will? Depends on whom you talk to, naturally.</p><p>Findings like these have supported a two-stage model about how we are</p><p>supposedly the captains of our fate, one espoused by the likes of everyone</p><p>from William James to many contemporary compatibilists. Stage one, the</p><p>“free” part: your brain spontaneously chooses, amid alternative</p><p>possibilities, to generate the proclivity toward some action. Stage two, the</p><p>“will” part, is where you consciously consider this proclivity and either</p><p>green-light it or free-won’t it. 
As one proponent writes, “Freedom arises</p><p>from the creative and indeterministic generation of alternative possibilities,</p><p>which present themselves to the will for evaluation and selection.” Or in</p><p>Mele’s words, “even if urges to press are determined by unconscious brain</p><p>activity, it may be up to the participants whether they act on those urges or</p><p>not.”[30] Thus, “our brains” generate a suggestion, and “we” then judge it;</p><p>this dualism sets our thinking back centuries.</p><p>The alternative conclusion is that free won’t is just as suspect as free</p><p>will, and for the same reasons. Inhibiting a behavior doesn’t have fancier</p><p>neurobiological properties than activating a behavior, and brain circuitry</p><p>even uses their components interchangeably. For example, sometimes</p><p>brains do something by activating neuron X, sometimes by inhibiting the</p><p>neuron that is inhibiting neuron X. Calling the former “free will” and</p><p>calling the latter “free won’t” are equally untenable. This recalls chapter 1’s</p><p>challenge to find a neuron that initiated some act without being influenced</p><p>by any other neuron or by any prior biological event. Now the challenge is</p><p>to find a neuron that was equally autonomous in preventing an act. Neither</p><p>free-will nor free-won’t neurons exist.</p><p>• • •</p><p>aving now reviewed these debates, what can we conclude? For</p><p>Libetians, these studies show that our brains decide to carry out a</p><p>behavior before we think that we’ve freely and consciously done so. But</p><p>given the criticisms that have been raised, I think all that can be concluded</p><p>is that in some fairly artificial circumstances, certain measures of brain</p><p>function are moderately predictive of a subsequent behavior. Free will, I</p><p>believe, survives Libetianism. And yet I think that is irrelevant.</p><p>JUST IN CASE YOU THOUGHT THIS WAS ALL</p><p>ACADEMIC</p><p>The debates over Libet and his descendants can be boiled down to a</p><p>question of intent: When we consciously decide that we intend to do</p><p>something, has the nervous system already started to act upon that intent,</p><p>and what does it mean if it has?</p><p>A related question is screamingly important in one of the areas where</p><p>this free-will hubbub is profoundly consequential—in the courtroom. When</p><p>someone acts in a criminal manner, did they intend to?</p><p>By this I’m not suggesting bewigged judges arguing about some</p><p>lowlife’s readiness potentials. Instead, the questions that define “intent” are</p><p>whether a defendant could foresee, without substantial doubt, what was</p><p>going to happen as a result of their action or inaction, and whether they</p><p>were okay with that outcome. From that perspective, unless there was intent</p><p>in that sense, a person shouldn’t be convicted of a crime.</p><p>Naturally, this generates complex questions. For example, should</p><p>intending to shoot someone but missing count as a lesser crime than</p><p>shooting successfully? Should driving with a blood alcohol level in the</p><p>range that impairs control of a car count as less of a transgression if you</p><p>lucked out and happened not to kill a pedestrian than if you did (an issue</p><p>that Oxford philosopher Neil Levy has explored with the concept of “moral</p><p>luck”)?[31]</p><p>As another wrinkle, the legal field distinguishes between general and</p><p>specific intent. 
The former is about intending to commit a crime, whereas</p><p>the latter is intending to commit a crime as well as intending a specific</p><p>consequence; the charge of the latter is definitely more serious than the</p><p>former.</p><p>Another issue that can come up is deciding whether someone acted</p><p>intentionally out of fear or anger, with fear (especially when reasonable)</p><p>seen as more mitigating; trust me, if the jury consisted of neuroscientists,</p><p>they’d deliberate for eternity trying to decide which emotion was going on.</p><p>How about if someone intended to do something criminal but instead</p><p>unintentionally did something else criminal?</p><p>An issue that we all recognize is how long before a behavior the intent</p><p>was formed. This is the world of premeditation, the difference between, say,</p><p>a crime of passion with a few milliseconds of intent versus an action long</p><p>planned. It is pretty unclear legally exactly how long one needs to meditate</p><p>upon an intended act for it to count as premeditated. As an example of this</p><p>lack of clarity, I once was a teaching witness in a trial where a pivotal issue</p><p>was whether eight seconds (as recorded by a CCTV camera) is enough time</p><p>for someone in a life-threatening circumstance to premeditate a murder.</p><p>(My two cents was that under the circumstances involved, eight seconds not</p><p>only wasn’t enough time for a brain to do premeditated thinking, it wasn’t</p><p>enough time for it to do any thinking, and free won’t–ness was an irrelevant</p><p>concept; the jury heartily disagreed.)</p><p>Then there are questions that can be at the core of war crime trials. What</p><p>kind of threat is needed for someone’s criminality to count as coerced?</p><p>What about agreeing to do something with criminal intent while knowing</p><p>that if you refused, someone else would do it immediately and more</p><p>brutally? Taking things even further, what should be done with someone</p><p>who intentionally chose to commit a crime, not knowing that they would</p><p>have been forced to commit that act if they had tried to do otherwise?[*],[32]</p><p>At this juncture, we appear to have two wildly different realms of</p><p>thinking about agency and responsibility—people arguing about the</p><p>supplementary motor area in neurophilosophy conferences and prosecutors</p><p>and public defenders jousting in courtrooms. Yet they share something that</p><p>potentially strikes a blow against free-will skepticism:</p><p>Suppose it turns out that our sense of conscious decision-making doesn’t actually come</p><p>after things like readiness potentials, that activity in the SMA, the prefrontal cortex, the</p><p>parietal cortex, wherever, is never better than only moderately predicting behavior, and</p><p>only for the likes of pushing buttons. You sure can’t say free will is dead based on that.</p><p>Likewise,</p>