Full text of our readings
Below you will find a complete version of our six assigned readings this semester.
Click the appropriate week (the week of the deadline rather than the week the assignment was given)
below to go to that reading.
Week 02: Ernie Pyle
Response deadline: 9AM Wednesday, Sept. 3
Ernie Pyle’s body lay alone for a long time in a ditch at the side of the road. Men waited at a safe distance, looking for a chance to pull the body away. But the machine gunner, still hidden in the coral ridge, sprayed the area whenever anyone moved. The sun climbed high over the little Pacific island. Finally, after four hours, a combat photographer crawled out along the road, pushing his heavy Speed Graphic camera ahead of him. Reaching the body, he held up the camera and snapped the shutter.
The lens captured a face at rest. The only sign of violence was a thin stream of blood running down the left cheek. Otherwise he might have been sleeping. His appearance was what people in the 1930s and ’40s called “common.” He had often been described as the quintessential “little guy,” but he was not unusually short. In fact, at five feet eight inches, his frame precisely matched the average height of the millions of American soldiers serving in the U.S. Army. It was his build that provoked constant references to his size — a build that once was compared accurately to the shape of a sword. His silver identification bracelet, inscribed “Ernie Pyle, War Correspondent,” could have fit the wrist of a child. The face too was very thin, with skin “the color and texture of sand.” Under the combat helmet, a wrinkled forehead sloped into a long, bald skull fringed by sandy-red hair gone gray. The nose dipped low. The teeth went off at odd angles. Upon meeting Pyle a few months earlier, the playwright Arthur Miller had thought “he might have been the nightwatchman at a deserted track crossing.” In death his hands were crossed at the waist, still holding the cloth fatigue cap he had worn through battles in North Africa, Italy, France, and now here in the far western Pacific, a few hundred miles from Japan. A moment later the regimental chaplain and four non-commissioned officers crawled up with a cloth litter. They pulled the body out of the machine gunner’s line of fire and lifted it into an open truck, then drove the quarter-mile back to the command post on the beach. An Associated Press man was there. He already had sent the first bulletin:
COMMAND POST, IE SHIMA, April 18, (AP) — Ernie Pyle, war correspondent beloved by his co-workers, G.I.’s and generals alike, was killed by a Japanese machine-gun bullet through his left temple this morning.
The bulletin went via radio to a ship nearby, then to the United States and on to Europe. Radio picked it up. Reporters rushed to gather comment. In Germany General Omar Bradley heard the news and could not speak. In Italy General Mark Clark said, “He helped our soldiers to victory.” Bill Mauldin, the young soldier-cartoonist whose war-worn G.I.’s matched the pictures Pyle had drawn with words, said, “The only difference between Ernie’s death and that of any other good guy is that the other guy is mourned by his company. Ernie is mourned by the Army.” At the White House, still in mourning only six days after the death of Franklin Roosevelt, President Harry Truman said, “The nation is quickly saddened again by the death of Ernie Pyle.”
One of Pyle’s editors at the Scripps-Howard newspapers, George Parker, spoke on the radio. “He went into war as a newspaper correspondent among many correspondents,” Parker said. “He came back a figure as great as the greatest—as Eisenhower or MacArthur or Nimitz.” Parker spoke of “that strange and almost inexplainably intimate way” in which Pyle’s readers had known him.4 Indeed, people called newspaper offices all day to be sure Ernie Pyle was really dead. He had seemed so alive to them. Americans in great numbers had shared his life all through the war—his energy and exhaustion; his giddy enjoyments and attacks of nerves; his exhilarations and fears. Through Pyle’s eyes they had watched their “boys” go to distant wars and become soldiers—green and eager at the start, haggard and worn at the end. Through his eyes they had glimpsed great vistas of battle at sea and they had stared into the faces of men in a French field who thought they were about to die. So no one thought it strange for President Truman to equate the deaths of Franklin Roosevelt and a newspaper reporter. For Pyle had become far more than an ordinary reporter, more even than the most popular journalist of his generation. He was America’s eyewitness to the twentieth century’s supreme ordeal.
The job of sorting and shipping Pyle’s personal effects fell to Edwin Waltz, a personable and efficient Navy man who had been working as the correspondent’s personal secretary at Pacific Fleet headquarters at Guam. There wasn’t much to go through—a few clothes and toilet articles; books; receipts; some snapshots and letters. Here was Pyle’s passport, stamped with the names of places he had passed through on his journeys to war—Belfast and London; Casablanca and Algiers; and on the last page, “Pacific Area.” Waltz also found a little pocket notebook filled with cryptic jottings in a curlicue script—notes Pyle had made during his last weeks in France in 1944.
9 killed & 10 wounded out of 33 from D-Day to July 25 …
… drove beyond lines … saw orange flame & smoke—shell hit hood—wrecked jeep — dug hole … with hands — our shells & their firing terrible — being alone was worst….
Blowing holes to bury cows — stench everywhere.
Waltz also found a handwritten draft of a newspaper column. Knowing the war in Europe could end any day, Pyle had collected his thoughts on two sheets of paper, then marked up the sentences with arrows and crossings out and rewordings.
“And so it is over,” the draft began. “The catastrophe on one side of the world has run its course. The day that had so long seemed would never come has come at last.” He was writing this in waters near Japan, he said, “but my heart is still in Europe … For the companionship of two and a half years of death and misery is a spouse that tolerates no divorce.” He hoped Americans would celebrate the victory in Europe with a sense of relief rather than elation, for in the joyousness of high spirits it is easy for us to forget the dead. …there are so many of the living who have burned into their brains forever the unnatural sight of cold dead men scattered over the hillsides and in the ditches along the high rows of hedge throughout the world. Dead men by mass production—in one country after another—month after month and year after year. Dead men in winter and dead men in summer. Dead men in such familiar promiscuity that they become monotonous. Dead men in such monstrous infinity that you come almost to hate them. Those are the things that you at home need not even try to understand. To you at home they are columns of figures, or he is a near one who went away and just didn’t come back. You didn’t see him lying so grotesque and pasty beside the gravel road in France. We saw him. Saw him by the multiple thousands. That’s the difference.5
For unknown reasons Scripps-Howard’s editors chose not to release the column draft, though V-E Day followed Ernie’s death by just three weeks. Perhaps they guessed it would have puzzled his readers, even hurt them. Certainly it was a darker valedictory than they would have expected from him. The war had been a harsh mistress to Ernie. First it had offered him the means of escaping personal despair. Then, while his star rose to public heights he had never imagined, the war had slowly driven him downward again into “flat black depression.” But he kept this mostly to himself. Instead he had offered readers a way of seeing the war that skirted despair and stopped short of horror. His published version of World War II had become the nation’s version. And if Ernie Pyle himself had not won the war, America’s mental picture of the soldiers who had won it was largely Pyle’s creation. He and his grimy G.I.’s, frightened but enduring, had become the heroic symbols of what the soldiers and their children would remember as “the Good War.”
Chapter 1: “I Wanted to Get Out …”
ROOTS AND RISING, 1900-1935
Outside the little town of Dana, the table-flat plain of western Indiana rises ever so slightly to form what natives still call “the mound farm,” a small cluster of white buildings on eighty acres of grain. The thin boy who lived atop this modest elevation in the early 1900s gazed over fields whose peaceful monotony was interrupted only by the little oasis of Dana a mile to the northwest. Only one image would have captured his attention—the tiny silhouette of a wagon or an automobile traversing the horizon on State Highway 36. In other words, the landscape’s most notable feature was the means to escape it. In one way this seems fitting, for the goal of escape possessed the boy from an early age. In another way he never escaped this place.
“That long, sad wind …”
Sadness verging on bitterness always colored Ernie Pyle’s memories of his early years. When he was a traveling newspaper columnist in the 1930s, he once found himself on a remote country road where he felt the dry breeze of his childhood brush his face, awakening a haunting mental picture of small men straining against circumstance and time. “I don’t know whether you know that long, sad wind that blows so steadily across the hundreds of miles of Midwest flat lands in the summertime….” he wrote in his column. “To me [it] is one of the most melancholy things in all life. It comes from so far, and it blows so gently and yet so relentlessly; it rustles the leaves and the branches of the maple trees in a sort of symphony of sadness, and it doesn’t pass on and leave them still. It just keeps coming…. You could—and you do—wear out your lifetime on the dusty plains with that wind of futility blowing in your face. And when you are worn out and gone, the wind, still saying nothing, still so gentle and sad and timeless, is still blowing across the prairies, and will blow in the faces of the little men who follow you, forever.” This was “just one of those small impressions that will form in a child’s mind, and grow and stay with him through a lifetime, even playing its part in his character and his way of thinking, and he can never explain it.”
“Melancholy … worn out … gentle and sad … little men”—this was a description of Pyle’s father, a carpenter at heart, who farmed because he could not make a steady living from his true vocation. “He’s very meek and no trouble,” Will Pyle’s son once told friends. He might have been summing up Will’s life. Ernie depicted him in later writings as a kind but hapless figure, “the man who put oil on his brakes when they were squeaking, then drove to Dana and ran over the curb and through a plate-glass window and right into a dry-goods store.” Will’s face would break into a brilliant, sparkling smile when he was pleased or amused. But he spoke little, even to his family. “He has never said a great deal to me all his life, and yet I feel we have been very good friends,” Ernie once told his readers. “He never gave me much advice, or told me to do this or that, or not to.”
The formidable Maria Taylor Pyle, not Will, filled the role of family protector and leader. Always called Marie, she stood no taller than her husband, but she gave the impression of being much the bigger of the two. She was a woman of ferocious dedication to the practical tasks at hand—raising chickens and produce, caring for her family, serving her neighbors. She “thrived on action,” her son remembered. “She would rather milk than sew; rather plow than bake.” Ernie’s closest boyhood friend recalled her as “a woman of unusual character—she was husky of build, [with] red hair and florid complexion, an unusually hard worker, even for a farm woman, a strict disciplinarian, very considerate of other people.” Devout and abstemious, she liked a joke and laughed easily and heartily. She could doctor a horse and play the violin. When the neighbors’ children were born she always attended their mothers, and those children grew up to obey her as readily as their parents. With adults she could be devastatingly blunt. “Marie Pyle didn’t wait to tell my dad what she thought of him,” recalled Nellie Kuhns Hendrix, who grew up next door and was close to the Pyles for many years. “If he done something she didn’t like, she’d tell him about it.”5 No one doubted that, as Ed Goforth, another neighbor, put it later, “She wore the pants in the family.” Goforth remembered arriving one morning to help Will with some work. “She looked over at Mr. Pyle and said, ‘Will, take Ed and go shear the sheep today.’ Well, Ed and Will sheared the sheep that day.”6
She raised her only child, whom she always called Ernest, with a mixture of toughness and tenderness. One of Pyle’s strongest memories captured the contradiction. On a summer day when the boy was four or five, he was walking behind his father’s plow when he stopped to fetch some wild roses for his mother. Cutting the stems with his father’s penknife, he suddenly saw a long snake approaching swiftly through the grass. He screamed, bringing his father on the run, and Will sent him back to the house a half-mile away. Ernest came to a patch of high weeds rising between himself and the house. Fearing another snake might be lurking there, he called to his mother, who appeared at the door and summoned him to come ahead through the weeds. He froze and began to cry, whereupon Marie came and whipped him for his apparent stubbornness. “That evening,” her son wrote thirty years later, “when my father came in from the fields, she told him about the crazy boy who wouldn’t walk through the weeds and had to be whipped. And then my father told her about the roses … and the snake. It was the roses, I think, that hurt her so. My mother cried for a long time that night after she went to bed.” For the rest of her life she retold the story on herself, as if to expiate a sin.7
The other woman in Ernie’s life possessed a will to match his mother’s. She was Mary Taylor, Marie’s older sister, who lived with the family until she married a neighbor, George Bales, at the age of forty, when Ernest was six. “Tall and straight” with “more energy than a buzz saw,” she dominated Bales as Marie dominated Will Pyle. Uncle George was likable and smart but he was a dreamer, preferring grand, unrealized schemes to the myriad small tasks necessary for success on his farm. So it was Mary Bales who put in the long days of labor, raising enough chickens, hogs and cattle to get by. As a boy, Ernie saw a great deal of her. Later, after Marie Pyle and George Bales died, Aunt Mary and Will Pyle lived on together in the Pyle farmhouse.8
Though not prosperous, the Pyles were respected, hardworking, churchgoing people. To their son they passed on decency and compassion, sensitivity toward others and a capacity for hard work. Yet there was some obscure unhappiness in this small family that planted in Ernie the seeds of a lifelong melancholy. It drove him to flee not only Dana but all spheres of safe, straitened routine, to assay large achievements far beyond Dana’s field of vision. The exact sources of these drives can only be guessed at. But they had something to do with Ernie’s enduring image of his small, silent father—and perhaps his uncle, too—toiling with little pleasure or worldly success in the shadow of the two strong-willed sisters. In Ernie’s mind, his father would always be the “little man” straining against “the wind of futility.” And so, Ernie feared, might he become such a man himself. The image persisted in his life and writings. His low points would always be shadowed by the fear that he was nothing but an ineffectual man striving mightily to no purpose, and governed by the whims of a powerful woman. Yet the endearing character Pyle established for himself as a writer, and the subjects of his legendry in World War II, were common men transcending the grinding circumstances of everyday existence. Will Pyle’s memory cut both ways.
Ernie grew up as a keenly intelligent child in a home and a town where intellect and big dreams were not especially esteemed. Homely, small for his age, and fussed over by a strong-willed mother, he tended toward self-pity in a world of boys who all seemed bigger, more easygoing, and blessed with fathers who cut a wider swath than Will. Being a “farm boy” instead of a “town boy” exacerbated his itchy sense of inferiority. “I was a farm boy,” he wrote nearly thirty years later, “and town kids can make you feel awfully backward when you’re young…. Even today I feel self-conscious when I walk down the street in Dana, imagining the town boys are making fun of me.”9 While the other kids in the schoolyard wrestled and roughhoused, “I always sat under a tree and ate my apple.”10 His closest friend, a boy one year older named Thad Hooker, often urged Ernie to try sports. But Thad would be pushed away with a bitter “Aw, hell, you know I’m no good at games.”11 Because his voice cracked when he spoke loudly or excitedly, he developed a lifelong habit of clearing his throat before speaking, then using a low and even tone to lessen the chance of a humiliating squeak. At some point he grew anxious about his teeth, cleaning them constantly with twine.12 Intelligence and insecurity fought in Ernie’s mind, pushing him to the role of the outsider looking in, unsure whether to test himself against the big boys or feign disinterest and wish them all a short trip to hell.
Certainly a farmer’s life held no appeal for him. When Ernie was nine, Will led him into the fields and showed him how to use the harrow and plow. From that point on, Ernie remembered, “I worked like a horse,” an animal he came to despise. He once estimated he rode five thousand miles to school and back on the Pyles’ nag, and he trudged for many more miles behind horses in the fields. That was more than enough. During his years of constant cross-country travel, he refused to stay at farmhouses that rented rooms to guests, saying simply, “I’ve had enough of farms.” “Horses were too slow for Ernest,” Will remembered later. “He always said the world was too big for him to be doing confining work here on the farm.”13
He cherished his glimpses of that wider world. Whenever a postcard arrived in the Pyle mailbox, he would snatch it and paste it into a scrapbook. He read as much as he could—mostly newspapers and adventure tales. On a trip with his father to Chicago about 1910, he got his first impression of the big-city newspaper trade amid the noisy traffic of autos and street vendors. “I remember as a kid … how impressed I was with the ads I could see on the sides of huge trucks hauling loads of newsprint for the Chicago Herald-Examiner,” he once told a friend, “the pictures and names of the writers, and the colored pictures of the comic-strip heroes.”14
One species of hero just then emerging into public consciousness held a special allure. In Ernie’s early teens, the walls of his bedroom sprouted sketch after sketch of race cars—the boxy, big-wheeled behemoths of racing’s earliest days. His inspiration was the Indianapolis 500, then in its infancy but no less redolent of masculine glamour than it is today. One year his parents allowed Ernie to attend the race. He was enthralled by the giant crowd lining the two-and-a-half-mile brick oval, the spectators’ black Model T’s jamming the grassy infield, reporters rushing in and out of the speedway’s five-story “press pagoda,” the howl of engines and the glimpse through the smoke of drivers in their helmets and goggles. The annual race, which he witnessed several times, excited his imagination for many years. Even in his thirties, he daydreamed of racing at Indianapolis—a clue to the yen for glory that stirred beneath his self-deprecating facade. “I would rather win that 500-mile race than anything in this world,” he confessed in 1936. “To me there could be no greater emotion than to come down that homestretch, roaring at 130 miles an hour, those 500 exhausting, ripping miles behind you, your face black with grease and smoke, the afternoon shadows of the grandstands dark across the track, a hundred thousand people yelling and stomping their excitement, and you holding up your proud right arm high in the Speedway tradition of taking the checkered flag—the winner! I have dreamed of myself in that role a thousand times.”15
Not surprisingly, the boy who longed for speedway heroics also longed to join the Army when, in 1917, President Wilson committed American forces to the Allied cause in World War I. Too young for service by more than a year, Ernie watched in frustration as other Dana boys left for Europe, including Thad Hooker, who was permitted to leave school early in 1918 to join up. At the high school commencement that spring, a flag-draped chair took Thad’s place among the graduating seniors. “I could hardly bear to go to commencement, I was so ashamed that I wasn’t in the Army, too,” Ernie recalled later.16 In October 1918 he enlisted in the Naval Reserve, hoping to see action eventually. But that hope burst only a month later, when the warring powers announced an armistice.
With no war to escape to, Ernie searched for alternatives. After the prospect of battle, college seemed a pale second choice, but at least it promised a route away from the farm. So, in the fall of 1919, he left for Bloomington with a single suitcase and an aimless ambition. “He always had big ideas,” said Nellie Kuhns Hendrix, for whom Ernie, ten years older, was a big brother figure, telling the neighborhood youngsters of faraway places and imagined adventures. “He wanted to do things.”17
“We aspire to become journalists …”
The war’s end brought Indiana University its biggest enrollment to date in the fall of 1919: 2,229 students, more than twice the population of Dana. Among the young veterans flooding the campus was Paige Cavanaugh, a wisecracking iconoclast from the small town of Salem, Indiana, who would become Ernie’s lifelong surrogate brother. The two could be serious or raucous together, and they shared many likes and dislikes, though Ernie never could share Cavanaugh’s contempt for war veterans who paraded their special status. “Ernie had a hero complex,” Cavanaugh said later. “He and I both had a good eye for phonies around the campus, and we used to sit around and mimic them. But nobody who had been overseas could do wrong in Ernie’s eyes, no matter how big a blowhard he was.”18 Cavanaugh later enjoyed claiming credit for launching Ernie’s newspaper career, if only by suggesting they enroll in journalism as sophomores because the course was reputed to be easy. In fact, Ernie had expressed a strong interest in the field as a freshman, but university rules prevented him from taking the introductory course until his second year.
Cavanaugh later told the story this way: on registration day in the fall of 1920 the country-boy team of Cavanaugh and Pyle tiptoed into a silent classroom where a professor in horn-rimmed glasses sat waiting over his enrollment book, appraising his disheveled scholastic suitors without a word. In the stillness Ernie finally cleared his throat and announced: “We aspire to become journalists, sir.”19
Though he majored in economics, journalism occupied most of Ernie’s remaining years at I.U. Classwork was dispatched quickly; “he had such a memory he didn’t need to study much,” a friend said.20 Instead he invested his energies in the Daily Student. Though “he had periods of mental lowness … when he was certain he wasn’t worth a damn,” he won the approval of editors who rewarded his industry with a demanding beat, the university administration.21 He was appointed editor-in-chief of the summer Student in 1921 and served as city editor the following fall. In a comment echoed by many a later editor, a Daily Student superior recalled: “He was a shy boy but worked hard and made friends quickly.”22 One night about this time, as Ernie typed phoned-in dispatches from the Associated Press, the story of a soldier killed in battle brought tears to his eyes. It was the Pulitzer Prize-winning work of AP reporter Kirke Simpson, whose subject was the interment of the Unknown Soldier at Arlington National Cemetery in 1921. Simpson’s style trembled under the weight of patriotic melodrama—“Alone, he lies in the narrow cell of stone that guards his body; but his soul has entered into the spirit that is America”—but it affected the youngster so deeply that he could quote from the story many years later, when he told a reporter that Kirke Simpson had given him a goal to aim at.23
If Ernie had found a vocation in newspapering, his passion was to see as much of the world beyond Indiana as he possibly could. In his first college summer he leaped at the chance to labor in a Kentucky oil field. He toured the Great Lakes on a Naval Reserve cruise. “I wish I was a good ball player so I could get to make some of those trips [with the I.U. team],” he wrote Aunt Mary one spring. “They went all thru the south this spring on their training trip, and went to Ohio the other day and got to go all thru the state penitentiary.”24 He bummed rides into neighboring states, following the football team—“He wasn’t so damn much interested in boosting the morale of the team as he was in seeing the country,” Cavanaugh said—and finally joined the team as manager in order to secure train tickets.25 But his midwestern rambling paled in comparison to the remarkable stunt he pulled in the spring of 1922, when he and three fraternity brothers wangled permission to accompany the I.U. baseball team by ship to Japan. Working as cabin boys, Ernie and his comrades survived a typhoon in the North Pacific only to find their papers prevented them from disembarking in Japan. To Ernie’s delight, they were forced to journey on to China and the Philippines before rejoining the team for the cruise home. “I never felt better in my life,” he wrote his parents from Shanghai.26
By the middle of his senior year, the outside world looked so inviting that Ernie bailed out of college altogether. On the morning of January 28, 1923, Ernie reported for work at the daily Herald in LaPorte, Indiana, a factory town squeezed between Lake Michigan and the Indiana-Michigan border. The newspaper’s editor, an I.U. alumnus, had asked the chairman of the university’s journalism department to recommend a promising youngster to fill a reporting vacancy. The chairman had recommended Ernie enthusiastically, but he made his usual first impression in LaPorte. “Small, frail and sandy-haired … bashful and unimpressive,” the newcomer “didn’t look like a newspaper man” to the city editor. “But he was there, and we needed a man, so … he went to work.”27 And he prospered, covering a variety of assignments effectively and winning friends quickly, though “he had an inferiority complex … and would never let anybody forget that he was a ‘country boy’ and a ‘poor devil.’” Ernie wrote his outstanding story, which demonstrated rare courage, after infiltrating a Ku Klux Klan rally, then defying the thugs who trailed him out of the meeting and warned him not to publish his account.28
For Ernie to leave Bloomington just one semester shy of attaining his degree must have grieved the Pyles. Yet he had spurned their wishes. Precisely what he was thinking is guesswork, as the record is bare on this point. He did solicit the opinion of his faculty mentor, Clarence Edmondson, dean of men at I.U., who advised him to take the job. Still, Ernie’s decision must have served as a harsh declaration of independence from Dana. Certainly he was thumbing his nose at propriety. When even Paige Cavanaugh, who was no stick-in-the-mud, warned Ernie he might amount to little without his diploma, his friend had only laughed and said, “We’ll see.”29
There was another factor involved in his departure: Harriett Davidson, a red-headed native of Bloomington who was “one of the most highly respected girls in school,” as Ernie proudly informed his parents. Throughout college he had dated regularly, but he fell hard only for Harriett. At the time he left for LaPorte the romance apparently remained intact, but soon Ernie began hearing tales of a young doctor who collected Harriett for dates in a sparkling red Buick. When Ernie’s colleagues on the LaPorte Herald heard of his quandary, they teased him and noisily predicted his fraternity pin would soon be returned. Then, sometime in the spring of 1923, “the pin did come back,” one of them remembered, “and all of us felt like first class heels…. Ernie was broken-hearted…. He did not want to stay in Indiana any longer….”30
A piece of lucky timing then bestowed a mercy on the miserable twenty-two-year-old. It arrived in the form of a telegram that invited Ernie to a meeting with Earle Martin, a high-ranking editor with the Scripps-Howard newspaper chain. Searching for talent, Martin had just visited Bloomington, where Ernie’s friend Nelson Poynter (then editor of the Daily Student, later a distinguished editor and publisher) had recommended Ernie. Martin offered Poynter and Pyle $30 a week each to work for the Washington Daily News, a new tabloid he had just taken over for Scripps-Howard.31 Ernie didn’t hesitate. Only four months after his arrival in LaPorte, he and Poynter boarded a train for Washington, where they pulled into Union Station early enough on a Sunday in May 1923 for Ernie to see his first major league baseball game.32 Friends in LaPorte tried to persuade him to stay a while to gather more small-town experience before this quick and painless leap to big-city journalism. But Ernie would hear none of it. His editor wasn’t surprised. From the moment Ernie arrived in LaPorte, Ray Smith remembered, “He … had ‘sand in his shoes.’”33
“A good man, but not much drive …”
The Washington Daily News was founded in 1921, just eighteen months before Ernie’s arrival, in the massive business expansion that made Scripps-Howard one of the nation’s most powerful newspaper chains. Tabloid in format, the News’ assigned mission was to woo the working man and woman away from the four established Washington dailies with short, snappy stories and aggressive local reporting. To accomplish this, Earle Martin recruited a staff of talented youngsters and set about creating a competitive tabloid without a tabloid’s traditional reliance on sensationalism and pictures. He intended to showcase cleverly written stories whose main appeal to the reader would be their brevity and punch.34
For this kind of writing the stripling from LaPorte, with his four months of professional experience, proved amazingly well suited. His superiors immediately noted gifts of efficiency and simplicity. Dispatched to cover an explosion at the Bureau of Standards early in his tenure, he phoned in notes that were a model of clarity, prompting Martin to spout off in the city room about “a damn good story.”35 Soon Ernie was shifted from reporting to the copy desk, where he transformed other reporters’ writing into the spare, readable style Martin wanted. Copyediting suited Ernie. For a writer of his temperament—suspecting he was more gifted than others but fearful of showing it—the task held hidden satisfactions. One was the sheer fun of manipulating words into pleasing shapes and sounds. Another was the pleasure of putting to rights the mangled or dull prose of the reporters; they might outrank a timid copy editor in prestige, but he held the ax over their words. Indeed, some News reporters could barely recognize their handiwork after it emerged from under Ernie’s quick pencil. But that was all to the good in the eyes of his superiors. After he converted one writer’s droning account of a multiple hanging into a few taut paragraphs, Martin tacked the article to the city room bulletin board and pronounced it “the perfect tabloid story.” Editors long remembered Ernie’s technical skills as a beginner. “He was one of the fastest copyreaders I ever saw,” said one, “and one of the cleanest writers …” Another said simply: “He had a very orderly mind.”36
He persisted in presenting himself as the “poor devil,” the aw-shucks kid from the sticks—a convenient refuge for a young man who nursed equal portions of ambition, irresolution and insecurity. Now and then he quietly offered articles to other publications—including, perhaps, the fledgling New Yorker, which he admired—but got nowhere. “A good man, but not much drive,” was the verdict of his peers at the News, who remembered his tenure in the mid-twenties more for his signature belch, inventive profanity and eccentric clothes than his ambition.37 They quickly became aware of a lifelong crotchet: Ernie was perpetually on the verge of an illness, in the middle of an illness or getting over an illness. One News colleague later testified to his “wondrous hypochondria. The standing office gag was to ask Ernie every hour on the hour how he felt. He had only one reply through the years: ‘Terrible!’ And I believe the kid meant it. He always looked it.”38 Always acutely sensitive to cold, he would sometimes wander into the News for his 7:00 A.M. shift in a lumberjack shirt and a long, white stocking cap, which he wore all day. On one such day, Scripps-Howard president Roy Wilson Howard, who always dressed to impress, descended from his Park Avenue offices in New York for an inspection tour of the News. Catching sight of the apparition in the stocking cap, Howard glared at one of his editors and demanded, “What’s that?”39
Throughout Ernie’s life, new acquaintances, men as well as women, felt an urge to take care of him. Surely this was the quality that first appealed to a young woman who met him in Washington in the fall of 1923. Geraldine Siebolds, always called Jerry, had grown up in Hastings, Minnesota, a tranquil town, much less isolated than Dana, that nestled prosperously on the Mississippi River just a twenty-mile streetcar ride from St. Paul. The frame houses of Hastings all had wide, friendly porches—all, that is, but the Sieboldses’ house. It was a strange affair, built deep into the slope of a hill, with one story nearly entirely underground and another story perched atop it, no door in front and no porch. Neighbors said that Jerry’s father, a foreman at the nearby state insane asylum, was afraid of the tornados that swept through the region in summer, so he had built a house to protect himself, his wife and their four children. The odd house fed a vague notion that Siebolds himself was odd. “I got the impression from my own family that he was kind of a weird person,” said Harriet Hendrixson, who grew up several years behind Jerry in school and later became a close friend. Jerry, by contrast, was vibrant and popular, acting in school plays and singing in the Presbyterian choir. She stood out from the crowd partly because she was attractive and vivacious, but also because she excelled in schoolwork and read serious books on her own. There was a hint of the rebel about her. “She used to wear the weirdest clothes,” Mrs. Hendrixson recalled. “She’d tack them up herself and used to pin things here and there.” She had gone to Washington as a Civil Service clerk in 1918. In secret, she and a friend had taken the government service exam, breaking the news to their parents only when plans for their departure were a fait accompli.
In Washington, her rebelliousness flowered into the Greenwich Village-style bohemianism then popular among young urbanites of an intellectual bent. She was petite, with an impish smile that was peculiarly attractive in a manner more than one friend described as “pixyish.” Her clothes were tailored, though she seemed never to buy anything new for herself. She was bright, charming and provocative, displaying a fierce iconoclasm in flashes of wit. A friend remembered her “stubborn, almost … morbid, nonconformism.”40 For a time she was engaged to a dentist, but she soon threw him over as too stuffy. She and Ernie first met at a Halloween party in 1923; a year later they began to date in earnest.41
Ernie was proving unable or unwilling to devote himself to the long, patient haul that was necessary for advancement even in the harum-scarum business of newspapering where change was endemic. When he tired of his routine, he simply would leave. Before his first year at the News was out, he took off for a Caribbean fling, working his way to Puerto Rico and Panama as a seaman. After only two years on the payroll, he pronounced himself worn out at the age of twenty-four and retreated to Dana for two months of rest. During this time he missed Jerry desperately, and when he returned they were married by a justice of the peace just across the Potomac in Virginia. She scoffed at this bow to propriety, giving in only when Ernie insisted that he could not shame his parents by living in sin. For years they shared a private joke by telling friends they weren’t really married. Jerry would neither wear a ring nor observe the anniversary of their wedding, which took place in the summer of 1925.
The following spring, wanderlust struck again. Pooling their savings of about $1,000, Ernie and Jerry bought a Model T and a tent, quit their jobs and fled the capital for points west. In three months they toured the rim of the country, sleeping on the ground and cooking over open fires. They fell in love with New Mexico and Arizona, then crashed Paige Cavanaugh’s bachelor quarters in Hollywood. “They were young, wild, unconventional and neurotic,” Cavanaugh remembered. “They were tearing across the country as if someone was after them.”42 The exodus ended in New York, where they landed, exhausted and broke, at summer’s end. They sold the Ford for $150 to buy food.
They spent sixteen months in New York. Ernie’s copyediting skills brought a paycheck, first at the Evening World, then at the Post. It was not an era he cared to recall. As he summed it up long afterward: “Lived in a basement and never had enough to go to a show, and hated New York.”43 In December 1927, a letter arrived from a friend at the Washington Daily News. This was Lee Graham Miller, a young Harvard man with screen-idol looks who was a rising star in Scripps-Howard circles. In Ernie’s absence he had become managing editor. Now he wanted Pyle for his telegraph editor, in charge of all wire copy. On the day after Christmas, 1927, Ernie was back at his old desk.44 It was better than a New York basement, but still no solution to Ernie’s restlessness. Within weeks he asked Miller to let him write—in off-hours—a regular column on aviation, and Miller agreed.
Ernie’s aviation column first appeared in the News in March 1928, only ten months after crowds of shouting Frenchmen had surged across a Paris airfield to greet a startled air-mail pilot named Charles Lindbergh as his Spirit of St. Louis taxied to a stop after thirty-three hours in the air. Lindbergh’s solo crossing of the Atlantic—now the only well-remembered feat of early aviation besides the first flight of the Wright Brothers—was in fact only the crowning moment in a decade-long frenzy of competitive efforts to court public approval of aviation in general and various corporations in particular. Air races, spectacular crashes, mysterious losses over land and sea, wild stunts, handsome prizes for distance, speed and endurance records—all these drew intense public curiosity year after year, and close attention from the press. Most magnetic of all was the lone, windblown figure of the pilot, a heroic image that resonated powerfully with Americans’ traditional love of the frontiersman and the cowboy.
Ernie conceived the notion that people would enjoy day-to-day coverage of this burgeoning new enterprise, not only in its grand advances but in its technical intricacies and amusing trivia. His column, probably the first in the United States to deal exclusively with aviation, was not unlike the early computer columns that appeared in newspapers and magazines of the 1980s, full of hope and excitement about a field of endeavor that promised (sometimes overpromised) to remake American society. Each afternoon, after an eight-hour shift on the copy desk, he would hop on a streetcar or flag a taxi bound for one or another of Washington’s airfields.45 There he would wander from office to office and hangar to hangar, chatting with anyone he found. Sometimes he would stay up half the night on a floodlit field, trading stories with pilots or mechanics and listening for the drone of distant planes approaching.
There was no lack of material for stories. Washington, lying at the midpoint of the Atlantic seaboard, was then a center of aviation activity, with two of the country’s leading passenger airports, Hoover and Bolling Fields, the Washington Naval Air Station, and a sprinkling of smaller fields nearby. Downtown, congressmen and bureaucrats were shaping the new rules that would govern the new industry and handing out the contracts that would determine winners and losers among hundreds of competing entrepreneurs. As a swiftly growing business, employing 75,000 people by 1929, aviation offered the reporter continuing controversies and developments. Ernie wrote of passenger safety, night flying, engine and airplane design, the founding and expansion of airports and the birth pangs of national airlines. He encountered and befriended any number of pilots, most of them World War veterans now scrambling to earn a living as crop dusters, aerial photographers, Army aviators, passenger and cargo pilots, mail pilots or, most colorful of all, the barnstormers who gypsied from field to field, delighting crowds with wing-walks and offering thrill seekers their first flights for fees of a dollar a minute.
In this crowd of adventurers, hucksters and the occasional genius, Ernie’s face gradually became familiar and welcome. He gained friends and often beat competitors out of stories by seeming to be just another one of the fellows rather than a pushy, question-firing reporter. “Ernie always was the least conspicuous of the lot in manner and appearance,” an acquaintance of that era recalled. “He withdrew behind his cigaret, and instead of talking he for the most part smiled genially at one and all….”46
His column appeared on an inside page of the News under a succession of titles—“D.C. Airports Day by Day,” then “Airways” (with a thumbnail photo of Ernie and an enlarged byline), and finally “Aviation”—with several items of news in each day’s offering. At first he was determinedly newsy, presenting such ho-hum fare as this: “Hiram Bingham Jr., son of the Senator from Connecticut, who is also president of the National Aeronautics Corporation, was a passenger in the Washington-New York Airline’s Ryan this morning on its regular run to New York.”47 But soon Ernie settled into a looser, more descriptive style: “If you follow the movements of the air mail, day after day you will find graphic examples of some of the finest flying in the world. Perhaps you remember what a terrible day yesterday was—heavy black sky, rain pouring down, wind blowing little gales in gusts. Walter Shaffer was up in it—flying.”48
Ernie’s favorites were the anonymous and hard-drinking mail pilots, who were amassing a record of utilitarian service that gradually convinced American business of the advantages of high-speed delivery over long distances.49 In a preview of things to come, Ernie made a specialty out of telling tales of the mail pilots’ feats of bravery and improvisation. They flew under constant pressure to deliver on schedule, yet without radios or detailed flight charts. Instead, they relied on state highway maps and shared lore about the locations of golf courses, polo grounds and dangerously lofty church spires. Fears of freezing cold and bad weather accompanied the mail pilots constantly. Ernie found this stolid, workmanlike flying far more admirable than the record-seeking heroics of Lindbergh and his imitators. A typical Pyle hero was an Ohioan forced to fly double-duty to pick up the slack for a colleague killed in a crash.
[H]e has never been to the North Pole, or the South Pole, or flown across the ocean at midnight with a pig in his lap…. No, all he ever did was fly the night air mail between Cleveland and Cincinnati every night for 34 consecutive nights last winter. Two hundred and thirty-eight hours in the air in a month…. He did it by going to bed the minute he got out of his plane, and resting every second he wasn’t in the air. Even then it almost killed him. There isn’t enough money in the world to make him do it again. All of which goes to show that the boys who break into the papers every morning aren’t necessarily the ones who are doing our greatest flying.50
This sort of tribute made Ernie as popular with the pilots as they were with him, and they returned thanks—and courted further publicity—by calling him first with news. When an East Coast mail pilot had to ditch, it was said that he phoned the Post Office, then Ernie Pyle. As a close observer of Ernie’s career put it, “He found that … he had a gift for becoming a member of a group while retaining his ability to explain it to outsiders.”51 Such membership exacted a price, of course. He could not write negative stories—or not many of them—and retain his membership.
His criticisms were models of caution:
With a bow to my many friends in the Air Corps, and a deeper one to the crew of the Question Mark [a record-setting airplane], some of whom I know and admire greatly, may I venture the dastardly remark that I believe the recent endurance flight was a bit foolish, unimportant and greatly overrated?52
This coziness troubled neither Ernie nor his editors. The watchdog’s role did not dominate newsrooms of the 1920s.
When the pilots called Ernie, or when they told of their exploits in airfield bull sessions, he became skilled at turning their tales into miniature narratives. In twelve inches of column type he could create a compelling little story from the experience of a mail pilot flying blind in a dense fog, then finding his way to safety by the light of a burning barn. He picked up human interest items, too—for example, the boy who wrote a mail pilot asking him to fly higher so as not to hit the boy’s kite. As time went by he included more of these items in the column, realizing many readers preferred them over traditional “hard news.”
He soon understood that his “poor devil” personal style was a useful style for a writer as well. When he needed to explain a technical matter, he developed the trick of disarming the reader with an “aw-shucks” approach, as in, “I hope I can get this straight, altho it’s going to be a little difficult….” His allusions were admirably concrete. Why did pilots prefer private airfields to public? Because, Ernie explained, “at a private field you get the kind of treatment you get at a high-class hotel, and at most municipal fields you get the kind of treatment you receive at the traffic bureau when you go for a driver’s permit.” And he allowed himself (with his editors’ acquiescence) to speak in a voice that was increasingly personal:
[Y]ou will never know what real despair is until you get a job on a newspaper and spend two hours trying to be funny in a column like this, and every time you read it over and revise it it gets worse, and finally you have to tack a paragraph like this on the end to let your readers know you don’t think it’s funny either. Not very funny, anyway.53
In newspaper offices and at airfields, it was clear that Ernie had succeeded. The News soon relieved him of his copy desk duties, freeing him to work full time on the column. Not long afterward he was named aviation editor for all of Scripps-Howard. When a high-ranking editor undertook to introduce Ernie to Amelia Earhart, the renowned aviatrix stopped him. “Not to know Ernie Pyle,” she said, “is to admit that you yourself are unknown in aviation.”54
Friends thought Ernie’s four years on the aviation beat were the happiest of his life. His time was largely his own. He chose his own topics; wrote in a more personal vein than the average reporter; and enjoyed a lay expertise in an interesting field. He even enjoyed a certain prestige. He spoke regularly with senators, cabinet secretaries and congressmen. Important fliers such as James Doolittle and Ira Eaker, later commander of the Eighth Army Air Force in World War II, were personal friends. A diverse collection of acquaintances—pilots, reporters, cops on the beat—dropped in often at the Pyles’ apartment in southwest Washington, filling the humid air with convivial talk of flying, newspapering and where to find bootleg whiskey. To most friends the Pyles’ life appeared carefree and exciting, and their marriage a model of mutual devotion. “They were so concerned about each other’s feelings always,” Harriet Hendrixson remembered. “You just felt it—that they adored one another, were so careful about never doing or saying anything that would displease the other. They were in harmony, let’s face it. It was almost a spiritual thing.”55
But the tranquil surface covered quiet tensions and anxieties. There was a strange, hothouse insularity about the marriage. To Paige Cavanaugh, who once stayed with the Pyles for several weeks, the atmosphere of sensitivity seemed ominous. Each, he realized, was watching the other. “Ernie and Jerry did live for each other,” Cavanaugh wrote later in an unpublished memoir, “but always in a nightmarish tug of war. They were always on the alert for a change of mood.” They lived in a state of apprehension, “trying to forestall any little incident which might affect either of them. Ernie might come home in a mood of black despair caused by a rebuff while attempting to get an interview, or by a slight change in his copy by the desk, and then Jerry would go to work. And with Ernie lying on his back on the sofa, hands behind his head staring at the ceiling, Jerry quietly and patiently would mentally massage him back into a state of well-being.”56 No doubt Ernie was needy in this way. But it seems likely that for Jerry, being needed so desperately filled a gaping hole inside herself as well.
Ernie brooded about what would become of him in a profession that often served the old unkindly. He feared that aviation was becoming the domain of corporate big shots and that public interest was fading. Now in his early thirties, he had been in the newspaper business long enough to see older men, even successful ones, shunted into second-rate jobs as younger men came up through the ranks. Cavanaugh believed Ernie dreaded becoming “old, sour-pussed, [in] ill health, in debt, kicked from one desk job to another—a desk that finally had to have a bottle of cheap gin in the lower right hand drawer so as a man could keep on his feet.”57
In the spring of 1932, the News’ editor-in-chief, Lowell Mellett, asked Ernie to become the paper’s managing editor. Ernie was appalled. The job would put an end to his writing and traveling, the very things he loved about the business. Instead, he would be an inside man again, a news technician who sat down to a desk each day in a ritual that would only remind him how short was the tenure of most editors. Then would come the polite demotion, and he would be on his way down toward the dead end he dreaded.
He said yes.
He liked and respected Lowell Mellett, felt grateful to the News, didn’t want to let down his employers. So he threw himself into the job for three years, working killing hours, planning each day’s edition, overseeing staff and payroll, making policy decisions and troubleshooting. He turned out to be good at the work, but he hated it. “For Christ’s sake don’t ever let ’em make you editor of a paper,” he advised a friend a few years later. “It’s a short-cut to insanity.”58
All around Washington the drama of the early New Deal was being played out, but the News’ business was local coverage; stories of the federal government came from Scripps-Howard and the United Press. Ernie’s mind was not on the great issues of the day, but on crime, snappy features and the latest local tidbits from the streets of the District. He prodded his reporters, demanding that they “jolt” themselves—not to work harder but to “work more keenly.” One of his memos to the staff was a definitive statement of his own approach to reporting and writing:
We have to make people read this paper, by making it so alert and saucy and important that they will be afraid of missing something if they don’t read it…. We are asleep. Dead…. Get alive. Keep your eyes open. There are swell stories floating around your beats every day that you either don’t see or don’t bother to do anything about when you do see them…. You can hardly walk down the street, or chat with a bunch of friends, without running into the germ of something that may turn up an interesting story if you’re on the lookout for it. News doesn’t have to be important, but it has to be interesting. You can’t find interesting things, if you’re not interested…. Always look for the story—for the unexpected human emotion in the story….
Write a story as tho it were a privilege for you to write it…. You don’t have to be smart-alecky or pseudo-funny. Be human. Try to write like people talk.59
As Ernie pushed himself to do a job he disliked, Jerry became more reclusive. Holing up with her books, playing sad songs on the piano and drinking, she resisted his pleas to develop more interests and get out of the apartment. He was her only interest in life, she told him; making him happy would sustain her. Remarks by Ernie to friends years later make it clear that by now he knew his wife was troubled in depths he could not fathom. He even began to worry that she might attempt suicide.60
At some point during this period, Jerry discovered she was pregnant and elected to have an abortion. The timing, the circumstances of the pregnancy, Jerry’s reasons for the termination—all these are matters of conjecture. But when Ernie told friends about the episode later, it was clear he had been deeply disturbed by it. He had wanted the child; she had not.61 Her decision could only have compounded his mounting sense of hopelessness about the future.
“I … have failed to achieve my ambition,” he wrote an old friend. “In fact my life the past few years has gone in such a routine and deadening way that I am not sure any more just what my ambitions are. I think maybe I haven’t any material ambitions—rather my ambition is to be free enough of material and financial worries that I can just sit and read and think. But to do that one has to get rich, and the prospects of me ever being rich are very slim indeed…. I get no chance to do any writing. I think that is where my greatest satisfaction lies—in writing—in expressing my feelings in print, and I don’t get a chance to do it now.”62
A decade earlier, during his first months at the News, Ernie had piped up during a late-night bull session to say: “You know, my idea of a good newspaper job would be just to travel around wherever you’d want to without any assignment except to write a story every day about what you’d seen.”63 Now, a decade later, he got a chance to try the idea. Recuperating from an illness, he followed his doctor’s suggestion to take a long, leisurely trip across the country. He and Jerry poked about the Southwest—for which they conceived a lifelong love—then caught a slow freighter from California back to the East Coast. Aboard ship they passed many hours talking with a pleasant old gentleman who did nothing but travel the world. Ernie said later the traveler was “one of the few old men … who, by mere example, take the horror out of growing old.” It was “the happiest three weeks of my life.”64
They returned to Washington, and that appeared to be that. But just then the syndicated columnist Heywood Broun took a vacation, leaving a hole in the News. Ernie filled it with eleven whimsical articles about his trip, and the stories made an impression around town. One who took particular note was George “Deac” Parker, Scripps-Howard’s editor-in-chief. Pyle’s articles “had a sort of Mark Twain quality,” Parker recalled later, “and they knocked my eyes right out.”65
Encouraged, Ernie confessed his unhappiness with his job to Lowell Mellett and resurrected his old daydream of doing a roving reporter column. Such an assignment not only would satisfy his own longings for travel and self-expression; he also believed it offered Jerry a desperately needed hope of renewal through fresh experience and a dramatic—and perpetual—change of scene. He pestered Mellett and Parker until, as Ernie recalled later, Parker said, “Oh, all right, go on and get out. You can try it a little while as an experiment. We’ll see how it turns out.”66 Lee Miller would be his editor. The News would run the column each day, six days a week. Other Scripps-Howard papers would be allowed to pick and choose.
“I didn’t like the inside work,” Ernie told a reporter later. “I didn’t like to be bossed … I didn’t like to be tied down, roped in. I wanted to get out … get away … keep going.”
Week 03: “The World Is Flat” Chapter 6
Response deadline: 9AM Wednesday, Sept. 10
If the flattening of the world is largely (but not entirely) unstoppable, and holds out the potential to be as beneficial to American society as a whole as past market evolutions have been, how does an individual get the best out of it? What do we tell our kids? There is only one message: You have to constantly upgrade your skills. There will be plenty of good jobs out there in the flat world for people with the knowledge and ideas to seize them.
I am not suggesting this will be simple. It will not be. There will be a lot of other people out there also trying to get smarter. It was never good to be mediocre in your job, but in a world of walls, mediocrity could still earn you a decent wage. In a flatter world, you really do not want to be mediocre. You don't want to find yourself in the shoes of Willy Loman in Death of a Salesman, when his son Biff dispels his idea that the Loman family is special by declaring, “Pop! I’m a dime a dozen, and so are you!” An angry Willy retorts, “I am not a dime a dozen! I am Willy Loman, and you are Biff Loman!”
I don’t care to have that conversation with my girls, so my advice to them in this flat world is very brief and very blunt: “Girls, when I was growing up, my parents used to say to me, ‘Tom, finish your dinner—people in China and India are starving.’ My advice to you is: Girls, finish your homework—people in China and India are starving for your jobs.”
The way I like to think about this for our society as a whole is that every person should figure out how to make himself or herself into an untouchable. That’s right. When the world goes flat, the caste system gets turned upside down. In India untouchables may be the lowest social class, but in a flat world everyone should want to be an untouchable. Untouchables, in my lexicon, are people whose jobs cannot be outsourced.
So who are the untouchables, and how do you or your kids get to be one? Untouchables come in four broad categories: workers who are “special,” workers who are “specialized,” workers who are “anchored,” and workers who are “really adaptable.”
Workers who are special are people like Michael Jordan, Bill Gates, and Barbra Streisand. They have a global market for their goods and services and can command global-sized pay packages. Their jobs can never be outsourced. If you can’t be special — and only a few people can be — you want to be specialized, so that your work cannot be outsourced. This applies to all sorts of knowledge workers, from specialized lawyers, accountants, and brain surgeons, to cutting-edge computer architects and software engineers, to advanced machine tool and robot operators. These are skills that are always in high demand and are not fungible. (“Fungible” is an important word to remember. As Infosys CEO Nandan Nilekani likes to say, in a flat world there is “fungible and nonfungible work.” Work that can be easily digitized and transferred to lower-wage locations is fungible. Work that cannot be digitized or easily substituted is nonfungible. Michael Jordan’s jump shot is nonfungible; a bypass surgeon’s technique is nonfungible; but a television assembly line worker’s job is now fungible, and basic accounting and tax preparation are now fungible.) If you cannot be special or specialized, you want to be anchored. That status applies to most Americans — everyone from my barber, to the waitress at lunch, to the chefs in the kitchen, to the plumbers, to the nurses, to many doctors, many lawyers, entertainers, electricians, and cleaning ladies. Their jobs are simply anchored and always will be, because they must be done in a specific location, involving face-to-face contact with a customer, client, patient, or audience. These jobs generally cannot be digitized and are not fungible, and the market wage is set according to the local market conditions. But be advised: There are fungible parts of even anchored jobs, and they can and will be outsourced — either to India or to the past — for greater efficiency. (Yes, as David Rothkopf notes, more jobs are actually “outsourced to the past,” thanks to new innovations, than are outsourced to India.) For instance, you are not going to go to Bangalore to find an internist or a divorce lawyer, but your divorce lawyer may one day use a legal aide in Bangalore for basic research or to write up vanilla legal documents, and your internist may use a nighthawk radiologist in Bangalore to read your CAT scan.
This is why if you cannot be special or specialized, you don’t want to count on being anchored so you won’t be outsourced. You actually want to become really adaptable. You want constantly to acquire new skills, knowledge, and expertise that enable you constantly to be able to create value — something more than vanilla ice cream. You want to learn how to make the latest chocolate sauce, the whipped cream, or the cherries on top, or to deliver it as a belly dancer. In whatever your field of endeavor, as parts of your work become commoditized and fungible, or turned into vanilla, adaptable people will always learn how to make some other part of the sundae. Being adaptable in a flat world, knowing how to “learn how to learn,” will be one of the most important assets any worker can have, because job churn will come faster, because innovation will happen faster.
Atul Vashistha, CEO of NeoIT, a California consulting firm that specializes in helping U.S. firms do outsourcing, has a good feel for this: “What you can do and how you can adapt and how you can leverage all the experience and knowledge you have when the world goes flat — that is the basic component [for survival]. When you are changing jobs a lot, and when your job environment is changing a lot, being adaptable is the number-one thing. The people who are losing out are those with solid technical skills who have not grown those skills. You have to be skillfully adaptable and socially adaptable.”
The more we push out the boundaries of knowledge and technology, the more complex tasks that machines can do, the more those with specialized education, or the ability to learn how to learn, will be in demand, and for better pay. And the more those without that ability will be less generously compensated. What you don’t want to be is a not very special, not very specialized, not very anchored, or not very adaptable person in a fungible job. If you are in the low-margin, fungible end of the work food chain, where businesses have an incentive to outsource to lower-cost, equally efficient producers, there is a much greater chance that your job will be outsourced or your wages depressed.
“If you are a Web programmer and are still using only HTML and have not expanded your skill set to include newer and creative technologies, such as XML and multimedia, your value to the organization gets diminished every year,” added Vashistha. New technologies get introduced that increase complexity but improve results, and as long as a programmer embraces these and keeps abreast of what clients are looking for, his or her job gets hard to outsource. “While technology advances make last year’s work a commodity,” said Vashistha, “reskilling, continual professional education and client intimacy to develop new relationships keeps him or her ahead of the commodity curve and away from a potential offshore.”
My childhood friend Bill Greer is a good example of a person who faced this challenge and came up with a personal strategy to meet it. Greer is forty-eight years old and has made his living as a freelance artist and graphic designer for twenty-six years. From the late 1970s until right around 2000, the way Bill did his job and served his clients was pretty much the same.
“Clients, like The New York Times, would want a finished piece of artwork,” Bill explained to me. So if he was doing an illustration for a newspaper or a magazine, or proposing a new logo for a product, he would actually create a piece of art — sketch it, color it, mount it on an illustration board, cover it with tissue, put it in a package that was opened with two flaps, and have it delivered by messenger or FedEx. He called it “flap art.” In the industry it was known as camera-ready art, because it needed to be shot, printed on four different layers of color film, or “separations,” and prepared for publication. “It was a finished product, and it had a certain preciousness to it,” said Bill. “It was a real piece of art, and sometimes people would hang them on their walls. In fact, The New York Times would have shows of works that were created by illustrators for its publications.”
But in the last few years “that started to change,” Bill told me, as publications and ad agencies moved to digital preparation, relying on the new software — namely, Quark, Photoshop, and Illustrator, which graphic artists refer to as “the trinity,” which made digital computer design so much easier. Everyone who went through art school got trained on these programs. Indeed, Bill explained, graphic design got so much easier that it became a commodity. It got turned into vanilla ice cream. “In terms of design,” he said, “the technology gave everyone the same tools, so everyone could do straight lines and everyone could do work that was halfway decent. You used to need an eye to see if something was in balance and had the right typeface, but all of a sudden anyone could hammer out something that was acceptable.”
So Greer pushed himself up the knowledge ladder. As publications demanded that all final products be presented as digital files that could be uploaded, and there was no longer any more demand for that precious flap art, he transformed himself into an ideas consultant. “Ideation” was what his clients, including McDonald’s and Unilever, wanted. He stopped using pens and ink and would just do pencil sketches, scan them into his computer, color them by using the computer’s mouse, and then email them to the client, which would have some less skilled artists finish them.
“It was unconscious,” said Greer. “I had to look for work that not everyone else could do, and that young artists couldn’t do with technology for a fraction of what I was being paid. So I started getting offers where people would say to me, ‘Can you do this and just give us the big idea?’ They would give me a concept, and they would just want sketches, ideas, and not a finished piece of art. I still use the basic skill of drawing, but just to convey an idea — quick sketches, not finished artwork. And for these ideas they will still pay pretty good money. It has actually taken me to a different level. It is more like being a consultant rather than a JAFA (Just Another Fucking Artist). There are a lot of JAFAs out there. So now I am an idea man, and I have played off that. My clients just buy concepts.” The JAFAs then do the art in-house or it gets outsourced. “They can take my raw sketches and finish them and illustrate them using computer programs, and it is not like I would do it, but it is good enough,” he said.
But then another thing happened. While the evolving technology turned the lower end of Greer’s business into a commodity, it opened up a whole new market at the upper end: Greer’s magazine clients. One day, one of his regular clients approached him and asked if he could do morphs. Morphs are cartoon strips in which one character evolves into another. So Martha Stewart is in the opening frame and morphs into Courtney Love by the closing frame. Drew Barrymore morphs into Drew Carey. Mariah Carey morphs into Jim Carrey. Cher morphs into Britney Spears. When he was first approached to do these, Greer had no idea where to begin. So he went onto Amazon.com and located some specialized software, bought it, tried it out for a few days, and produced his first morph. Since then he has developed a specialty in the process, and the market for them has expanded to include Maxim magazine, More, and Nickelodeon — one a men’s magazine, one a middle-aged women’s magazine and one a kids’ magazine.
In other words, someone invented a whole new kind of sauce to go on the vanilla, and Greer jumped on it. This is exactly what happens in the global economy as a whole. “I was experienced enough to pick these [morphs] up pretty quickly,” said Greer. “Now I do them on my Mac laptop, anywhere I am, from Santa Barbara to Minneapolis to my apartment in New York. Sometimes clients give me a subject, and sometimes I just come up with them. Morphing used to be one of those really high-end things you saw on TV, and then they came out with this consumer [software] program and people could do it themselves, and I shaped them so magazines could use them. I just upload them as a series of JPEG files. Morphs have been a good business for different magazines. I even get fan mail from kids!”
Greer had never done morphs until the technology evolved and created a new, specialized niche, just when a changing market for his work made him eager to learn new skills. “I wish I could say it was all intentional,” he confessed. “I was just available for work and just lucky they gave me a chance to do these things. I know so many artists who got washed out. One guy who was an illustrator has become a package designer, some have gotten out of the field altogether; one of the best designers I know became a landscape architect. She is still a designer but changed her medium altogether. Visual people can adapt, but I am still nervous about the future.”
I told Greer his story fit well into some of the terms I was using in this book. He began as a chocolate sauce (a classic illustrator), was turned into a vanilla commodity (a classic illustrator in the computer age), upgraded his skills to become a special chocolate sauce again (a design consultant), then learned how to become a cherry on top (a morphs artist) by fulfilling a new demand created by an increasingly specialized market.
Greer contemplated my compliment for a moment and then said, “And here all I was trying to do was survive, and I still am.” As he got up to leave, though, he told me that he was going out to meet a friend “to juggle together.” They have been juggling partners for years, just a little side business they sometimes do on a street corner or for private parties. Greer has very good hand-eye coordination. “But even juggling is being commoditized,” he complained. “It used to be if you could juggle five balls, you were really special. Now juggling five balls is like just anteing up. My partner and I used to perform together, and he was the seven-ball champ when I met him. Now fourteen-year-old kids can juggle seven balls, no problem. Now they have these books, like Juggling for Dummies, and kits that will teach you how to juggle. So they've just upped the standard.”
As goes juggling, so goes the world. These are our real choices: to try to put up walls of protection or to keep marching forward with the confidence that American society still has the right stuff, even in a flatter world. I say march forward. As long as we keep tending to the secrets of our sauce, we will do fine. There are so many things about the American system that are ideally suited for nurturing individuals who can compete and thrive in a flat world.
How so? It starts with America’s research universities, which spin off a steady stream of competitive experiments, innovations, and scientific breakthroughs — from mathematics to biology to physics to chemistry. It is a truism, but the more educated you are, the more options you will have in a flat world. “Our university system is the best,” said Bill Gates. “We fund our universities to do a lot of research and that is an amazing thing. High-IQ people come here, and we allow them to innovate and turn [their innovations] into products. We reward risk-taking. Our university system is competitive and experimental. They can try out different approaches. There are one hundred universities making contributions to robotics. And each one is saying that the other is doing it all wrong, or my piece actually fits together with theirs. It is a chaotic system, but it is a great engine of innovation in the world, and with federal tax money, with some philanthropy on top of that, [it will continue to flourish] ... We will really have to screw things up for our absolute wealth not to increase. If we are smart, we can increase it faster by embracing this stuff.”
The Web browser, magnetic resonance imaging (MRI), superfast computers, global positioning technology, space exploration devices, and fiber optics are just a few of the many inventions that got started through basic university research projects. The BankBoston Economics Department did a study titled “MIT: The Impact of Innovation.” Among its conclusions was that MIT graduates have founded 4,000 companies, creating at least 1.1 million jobs worldwide and generating sales of $232 billion.
What makes America unique is not that it built MIT, or that its grads are generating economic growth and innovation, but that every state in the country has universities trying to do the same. “America has 4,000 colleges and universities,” said Allan E. Goodman, president of the Institute of International Education. “The rest of the world combined has 7,768 institutions of higher education. In the state of California alone, there are about 130 colleges and universities. There are only 14 countries in the world that have more than that number.”
Take a state you normally wouldn’t think of in this regard: Oklahoma. It has its own Oklahoma Center for the Advancement of Science and Technology (OCAST), which, on its Web site, describes its mission as follows: “In order to compete effectively in the new economy, Oklahoma must continue to develop a well-educated population; a collaborative, focused university research and technology base; and a nurturing environment for cutting-edge businesses, from the smallest startup to the largest international headquarters ... [OCAST promotes] University Business technology centers, which may span several schools and businesses, resulting in new businesses being spawned, new products being manufactured, and new manufacturing technologies employed.” No wonder that in 2003, American universities reaped $1.3 billion from patents, according to the Association of University Technology Managers.
Coupled with America’s unique innovation-generating machines — universities, public and private research labs, and retailers — we have the best-regulated and most efficient capital markets in the world for taking new ideas and turning them into products and services. Dick Foster, director of McKinsey & Co. and the author of two books on innovation, remarked to me, “We have an ‘industrial policy’ in the U.S. — it is called the stock exchange, whether it is the NYSE or the Nasdaq.” That is where risk capital is collected and assigned to emerging ideas or growing companies, Foster said, and no capital market in the world does that better and more efficiently than the American one.
What makes capital provision work so well here is the security and regulation of our capital markets, where minority shareholders are protected. Lord knows, there are scams, excesses, and corruption in our capital markets. That always happens when a lot of money is at stake. What distinguishes our capital markets is not that Enrons don’t happen in America; they sure do. It is that when they happen, they usually get exposed, either by the Securities and Exchange Commission or by the business press, and get corrected. What makes America unique is not Enron but Eliot Spitzer, the attorney general of New York State, who has doggedly sought to clean up the securities industry and corporate boardrooms. This sort of capital market has proved very, very difficult to duplicate outside of New York, London, Frankfurt, and Tokyo. Said Foster, “China and India and other Asian countries will not be successful at innovation until they have successful capital markets, and they will not have successful capital markets until they have rule of law which protects minority interests under conditions of risk … We in the U.S. are the lucky beneficiaries of centuries of economic experimentation and we are the experiment that has worked.”
While these are the core secrets of America’s sauce, there are others that need to be preserved and nurtured. Sometimes you have to talk to outsiders to appreciate them, such as Indian-born Vivek Paul of Wipro. “I would add three to your list,” he said to me. “One is the sheer openness of American society.” We Americans often forget what an incredibly open, say-anything-do-anything-start-anything-go-bankrupt-and-start-anything-again society the United States is. There is no place like it in the world, and our openness is a huge asset and attraction to foreigners, many of whom come from countries where the sky is not the limit.
Another, said Paul, is the “quality of American intellectual property protection,” which further enhances and encourages people to come up with new ideas. In a flat world, there is a great incentive to develop a new product or process, because it can achieve global scale in a flash. But if you are the person who comes up with that new idea, you want your intellectual property protected. “No country respects and protects intellectual property better than America,” said Paul, and as a result, a lot of innovators want to come here to work and lodge their intellectual property.
The United States also has among the most flexible labor laws in the world. The easier it is to fire someone in a dying industry, the easier it is to hire someone in a rising industry that no one knew would exist five years earlier. This is a great asset, especially when you compare the situation in the United States to inflexible, rigidly regulated labor markets like Germany’s, full of government restrictions on hiring and firing. Flexibility to quickly deploy labor and capital where the greatest opportunity exists, and the ability to quickly redeploy it if the earlier deployment is no longer profitable, is essential in a flattening world.
Still another secret to America’s sauce is the fact that it has the largest domestic consumer market, with the most first adopters, in the world, which means that if you are introducing a new product, technology, or service, you have to have a presence in America. All this means a steady flow of jobs for Americans.
There is also the little-discussed American attribute of political stability. Yes, China has had a good run for the past 25 years, and it may make the transition from communism to a more pluralistic system without the wheels coming off. But it may not. Who would want all his or her eggs in that basket?
Finally, the United States has become one of the great meeting points in the world, a place where lots of different people bond and learn to trust one another. An Indian student who is educated at the University of Oklahoma and then gets his first job with a software firm in Oklahoma City forges bonds of trust and understanding that are really important for future collaboration, even if he winds up returning to India. Nothing illustrates this point better than Yale University’s outsourcing of research to China. Yale president Richard C. Levin explained to me that Yale has two big research operations running in China today, one at Peking University in Beijing and the other at Fudan University in Shanghai. “Most of these institutional collaborations arise not from top-down directives of university administrators, but rather from longstanding personal relationships among scholars and scientists,” said Levin.
How did the Yale-Fudan collaboration arise? To begin with, said Levin, Yale professor Tian Xu, its director, had a deep affiliation with both institutions. He did his undergraduate work at Fudan and received his Ph.D. from Yale. “Five of Professor Xu’s collaborators, who are now professors at Fudan, were also trained at Yale,” explained Levin. One was Professor Xu’s friend when both were Yale graduate students; another was a visiting scholar in the laboratory of a Yale colleague; one was an exchange student who came to Yale from Fudan and returned to earn his Ph.D. in China; and the other two were postdoctoral fellows in Professor Xu’s Yale lab. A similar story underlies the formation of the Peking-Yale Joint Center for Plant Molecular Genetics and Agrobiotechnology.
Professor Xu is a leading expert on genetics and has won grants from the National Institutes of Health and the Howard Hughes Foundation to study the connection between genetics and cancer and certain neurodegenerative diseases. This kind of research requires the study of large numbers of genetic mutations in lab animals. “When you want to test many genes and trace for a given gene that may be responsible for certain diseases, you need to run a lot of tests. Having a bigger staff is a huge advantage,” explained Levin. So what Yale did was essentially outsource the lab work to Fudan by creating the Fudan-Yale Biomedical Research Center. Each university pays for its own staff and research, so no money changes hands, but the Chinese side does the basic technical work using large numbers of technicians and lab animals, which cost so much less in China, and Yale does the high-end analysis of the data. The Fudan staff, students, and technicians get great exposure to high-end research, and Yale gets a large-scale testing facility that would have been prohibitively expensive if Yale had tried to duplicate it in New Haven. A support lab in America for a project like this one might have 30 technicians, but the one in Fudan has 150.
“The gains are very much two-way,” said Levin. “Our investigators get substantially enhanced productivity, and the Chinese get their graduate students trained, and their young faculty become collaborators with our professors, who are the leaders in their fields. It builds human capital for China and innovation for Yale.” Graduate students from both universities go back and forth, forging relationships that will no doubt produce more collaborations in the future. At the same time, he added, a lot of legal preparation went into this collaboration to make sure that Yale would be able to harvest the intellectual property that is created.
“There is one world of science out there,” said Levin, “and this kind of international division of labor makes a lot of sense.” Yale, he said, also insisted that the working conditions at the Chinese labs be world-class, and, as a result, it has also helped to lift the quality of the Chinese facilities. “The living conditions of the lab animals are right up to U.S. standards,” remarked Levin. “These are not mouse sweatshops.”
Every law of economics tells us that if we connect all the knowledge pools in the world, and promote greater and greater trade and integration, the global pie will grow wider and more complex. And if America, or any other country, nurtures a labor force that is increasingly made up of men and women who are special, specialized, or constantly adapting to higher-value-added jobs, it will grab its slice of that growing pie. But we will have to work at it. Because if current trends prevail, countries like India and China and whole regions like Eastern Europe are certain to narrow the gap with America, just as Korea and Japan and Taiwan did during the Cold War. They will keep upping their standards.
So are we still working at it? Are we tending to the secrets of our sauce? America still looks great on paper, especially if you look backward, or compare it only to India and China of today and not tomorrow. But have we really been investing in our future and preparing our children the way we need to for the race ahead? See the next chapter. But here’s a quick hint:
The answer is no.
Week 04: What Do Job Interviews Really Tell Us?
Response deadline: 9AM Wednesday, Sept. 17
Nolan Myers grew up in Houston, the elder of two boys in a middle-class family. He went to Houston's High School for the Performing and Visual Arts and then Harvard, where he intended to major in history and science. After discovering the joys of writing code, though, he switched to computer science. "Programming is one of those things you get involved in, and you just can't stop until you finish," Myers says. "You get involved in it, and all of a sudden you look at your watch and it's four in the morning! I love the elegance of it." Myers is short and slightly stocky and has pale-blue eyes. He smiles easily, and when he speaks he moves his hands and torso for emphasis. He plays in a klezmer band called the Charvard Chai Notes. He talks to his parents a lot. He gets Bs and B-pluses.
In the last stretch of his senior year, Myers spent a lot of time interviewing for jobs with technology companies. He talked to a company named Trilogy, down in Texas, but he didn't think he would fit in. "One of Trilogy's subsidiaries put ads out in the paper saying that they were looking for the top tech students, and that they'd give them two hundred thousand dollars and a BMW," Myers said, shaking his head in disbelief. In another of his interviews, a recruiter asked him to solve a programming problem, and he made a stupid mistake and the recruiter pushed the answer back across the table to him, saying that his "solution" accomplished nothing. As he remembers the moment, Myers blushes. "I was so nervous. I thought, 'Hmm, that sucks!'" The way he says that, though, makes it hard to believe that he really was nervous, or maybe what Nolan Myers calls nervous the rest of us call a tiny flutter in the stomach. Myers doesn't seem like the sort to get flustered. He's the kind of person you would call the night before the big test in seventh grade when nothing made sense and you had begun to panic.
I like Nolan Myers. He will, I am convinced, be very good at whatever career he chooses. I say those two things even though I have spent no more than ninety minutes in his presence. We met only once, on a sunny afternoon just before his graduation at the Au Bon Pain in Harvard Square. He was wearing sneakers and khakis and a polo shirt in a dark-green pattern. He had a big backpack, which he plopped on the floor beneath the table. I bought him an orange juice. He fished around in his wallet and came up with a dollar to try to repay me, which I refused. We sat by the window. Previously, we had talked for perhaps three minutes on the phone, setting up the interview. Then I e-mailed him, asking him how I would recognize him at Au Bon Pain. He sent me the following message, with what I'm convinced— again, on the basis of almost no evidence — to be typical Myers panache: "22ish, five foot seven, straight brown hair, very good-looking:)." I have never talked to his father, his mother, or his little brother, or any of his professors. I have never seen him ecstatic or angry or depressed. I know nothing of his personal habits, his tastes, or his quirks. I cannot even tell you why I feel the way I do about him. He's good-looking and smart and articulate and funny, but not so good-looking and smart and articulate and funny that there is some obvious explanation for the conclusions I've drawn about him. I just like him, and I'm impressed by him, and if I were an employer looking for bright young college graduates, I'd hire him in a heartbeat.
I heard about Nolan Myers from Hadi Partovi, an executive with Tellme, a highly-touted Silicon Valley startup offering Internet access through the telephone. If you were a computer-science major at MIT, Harvard, Stanford, Caltech, or the University of Waterloo this spring, looking for a job in software, Tellme was probably at the top of your list. Partovi and I talked in the conference room at Tellme's offices, just off the soaring, open floor where all the firm's programmers and marketers and executives sit, some of them with bunk beds built over their desks. (Tellme recently moved into an old printing plant— a low-slung office building with a huge warehouse attached— and, in accordance with new-economy logic, promptly turned the old offices into a warehouse and the old warehouse into offices.) Partovi is a handsome man of twenty-seven, with olive skin and short curly black hair, and throughout our entire interview he sat with his chair tilted precariously at a forty-five-degree angle. At the end of a long riff about how hard it is to find high-quality people, he blurted out one name: Nolan Myers. Then, from memory, he rattled off Myers's telephone number. He very much wanted Myers to come to Tellme.
Partovi had met Myers in January of Myers's senior year, during a recruiting trip to Harvard. "It was a heinous day," Partovi remembers. "I started at seven and went until nine. I'd walk one person out and walk the other in." The first fifteen minutes of every interview he spent talking about Tellme—its strategy, its goals, and its business. Then he gave everyone a short programming puzzle. For the rest of the hour-long meeting, Partovi asked questions. He remembers that Myers did well on the programming test, and after talking to him for thirty to forty minutes he became convinced that Myers had, as he puts it, "the right stuff." Partovi spent even less time with Myers than I did. He didn't talk to Myers's family, or see him ecstatic or angry or depressed, either. He knew that Myers had spent last summer as an intern at Microsoft and was about to graduate from an Ivy League school. But virtually everyone recruited by a place like Tellme has graduated from an elite university, and the Microsoft summer-internship program has more than six hundred people in it. Partovi didn't even know why he liked Myers so much. He just did. "It was very much a gut call," he says.
This wasn't so very different from the experience Nolan Myers had with Steve Ballmer, the CEO of Microsoft. Earlier that year, Myers attended a party for former Microsoft interns called Gradbash. Ballmer gave a speech there, and at the end of his remarks Myers raised his hand. "He was talking a lot about aligning the company in certain directions," Myers told me, "and I asked him about how that influences his ability to make bets on other directions. Are they still going to make small bets?" Afterward, a Microsoft recruiter came up to Myers and said, "Steve wants your e-mail address." Myers gave it to him, and soon he and Ballmer were e-mailing. Ballmer, it seems, badly wanted Myers to come to Microsoft. "He did research on me," Myers says. "He knew which group I was interviewing with, and knew a lot about me personally. He sent me an e-mail saying that he'd love to have me come to Microsoft, and if I had any questions, I should contact him. So I sent him a response, saying thank you. After I visited Tellme, I sent him an e-mail saying I was interested in Tellme, here were the reasons, that I wasn't sure yet, and if he had anything to say I said I'd love to talk to him. I gave him my number. So he called, and after playing phone tag we talked about career trajectory, how Microsoft would influence my career, what he thought of Tellme. I was extremely impressed with him, and he seemed very genuinely interested in me.”
What convinced Ballmer he wanted Myers? A glimpse! He caught a little slice of Nolan Myers in action and—just like that—the CEO of a $400 billion company was calling a college senior in his dorm room. Ballmer somehow knew he liked Myers, the same way Hadi Partovi knew, and the same way I knew after our little chat at Au Bon Pain. But what did we know? What could we know? By any reasonable measure, surely none of us knew Nolan Myers at all.
It is a truism of the new economy that the ultimate success of any enterprise lies with the quality of the people it hires. At many technology companies, employees are asked to all but live at the office, in conditions of intimacy that would have been unthinkable a generation ago. The artifacts of the prototypical Silicon Valley office—the videogames, the espresso bar, the bunk beds, the basketball hoops—are the elements of the rec room, not the workplace. And in the rec room you want to play only with your friends. But how do you find out who your friends are? Today, recruiters canvass the country for resumes. They analyze employment histories and their competitors' staff listings. They call references and then do what I did with Nolan Myers: sit down with a perfect stranger for an hour and a half and attempt to draw conclusions about that stranger's intelligence and personality. The job interview has become one of the central conventions of the modern economy. But what, exactly, can you know about a stranger after sitting down and talking with him for an hour?
2.
Some years ago, an experimental psychologist at Harvard University, Nalini Ambady, together with Robert Rosenthal, set out to examine the nonverbal aspects of good teaching. As the basis of her research, she used videotapes of teaching fellows that had been made during a training program at Harvard. Her plan was to have outside observers look at the tapes with the sound off and rate the effectiveness of the teachers by their expressions and physical cues. Ambady wanted to have at least a minute of film to work with. When she looked at the tapes, though, there was really only about ten seconds when the teachers were shown apart from the students. "I didn't want students in the frame, because obviously it would bias the ratings," Ambady says. "So I went to my adviser, and I said, 'This isn't going to work.'" But it did. The observers, presented with a ten-second silent video clip, had no difficulty rating the teachers on a fifteen-item checklist of personality traits. In fact, when Ambady cut the clips back to five seconds, the ratings were the same. They were the same even when she showed her raters just two seconds of videotape. That sounds unbelievable unless you actually watch Ambady's teacher clips, as I did, and realize that the eight seconds that distinguish the longest clips from the shortest are superfluous: anything beyond the first flash of insight is unnecessary. When we make a snap judgment, it is made in a snap. It's also, very clearly, a judgment: we get a feeling that we have no difficulty articulating.
Ambady's next step led to an even more remarkable conclusion. She compared those snap judgments of teacher effectiveness with evaluations made, after a full semester of classes, by students of the same teachers. The correlation between the two, she found, was astoundingly high. A person watching a two-second silent video clip of a teacher he has never met will reach conclusions about how good that teacher is that are very similar to those of a student who sits in the teacher's class for an entire semester.
Recently, a comparable experiment was conducted by Frank Bernieri, a psychologist at the University of Toledo. Bernieri, working with one of his graduate students, Neha Gada-Jain, selected two people to act as interviewers, and trained them for six weeks in the proper procedures and techniques of giving an effective job interview. The two then interviewed ninety-eight volunteers of various ages and backgrounds. The interviews lasted between fifteen and twenty minutes, and afterward each interviewer filled out a six-page, five-part evaluation of the person he'd just talked to. Originally, the intention of the study was to find out whether applicants who had been coached in certain nonverbal behaviors designed to ingratiate themselves with their interviewers— like mimicking the interviewers' physical gestures or posture— would get better ratings than applicants who behaved normally. As it turns out, they didn't. But then another of Bernieri's students, an undergraduate named Tricia Prickett, decided that she wanted to use the interview videotapes and the evaluations that had been collected to test out the adage that the handshake is everything.
"She took fifteen seconds of videotape showing the applicant as he or she knocks on the door, comes in, shakes the hand of the interviewer, sits down, and the interviewer welcomes the person," Bernieri explained. Then, like Ambady, Prickett got a series of strangers to rate the applicants based on the handshake clip, using the same criteria that the interviewers had used. Once more, against all expectations, the ratings were very similar to those of the interviewers. "On nine out of the eleven traits the applicants were being judged on, the observers significantly predicted the outcome of the interview," Bernieri says. "The strength of the correlations was extraordinary.”
This research takes Ambady's conclusions one step further. In the Toledo experiment, the interviewers were trained in the art of interviewing. They weren't dashing off a teacher evaluation on their way out the door. They were filling out a formal, detailed questionnaire, of the sort designed to give the most thorough and unbiased account of an interview. And still their ratings weren't all that different from those of people off the street who saw just the greeting.
This is why Hadi Partovi, Steve Ballmer, and I all agreed on Nolan Myers. Apparently, human beings don't need to know someone in order to believe that they know someone. Nor does it make that much difference, apparently, that Partovi reached his conclusion after putting Myers through the wringer for an hour, I reached mine after ninety minutes of amiable conversation at Au Bon Pain, and Ballmer reached his after watching and listening as Myers asked a question.
Bernieri and Ambady believe that the power of first impressions suggests that human beings have a particular kind of prerational ability for making searching judgments about others. In Ambady's teacher experiments, when she asked her observers to perform a potentially distracting cognitive task—like memorizing a set of numbers—while watching the tapes, their judgments of teacher effectiveness were unchanged. But when she instructed her observers to think hard about their ratings before they made them, their accuracy suffered substantially. Thinking only gets in the way. "The brain structures that are involved here are very primitive," Ambady speculates. "All of these affective reactions are probably governed by the lower brain structures." What we are picking up in that first instant would seem to be something quite basic about a person's character, because what we conclude after two seconds is pretty much the same as what we conclude after twenty minutes or, indeed, an entire semester. "Maybe you can tell immediately whether someone is extroverted, or gauge the person's ability to communicate," Bernieri says. "Maybe these clues or cues are immediately accessible and apparent." Bernieri and Ambady are talking about the existence of a powerful form of human intuition. In a way, that's comforting, because it suggests that we can meet a perfect stranger and immediately pick up on something important about him. It means that I shouldn't be concerned that I can't explain why I like Nolan Myers, because, if such judgments are made without thinking, then surely they defy explanation.
But there's a troubling suggestion here as well. I believe that Nolan Myers is an accomplished and likable person. But I have no idea from our brief encounter how honest he is, or whether he is self-centered, or whether he works best by himself or in a group, or any number of other fundamental traits. That people who simply see the handshake arrive at the same conclusions as people who conduct a full interview also implies, perhaps, that those initial impressions matter too much— that they color all the other impressions that we gather over time.
For example, I asked Myers if he felt nervous about the prospect of leaving school for the workplace, which seemed like a reasonable question, since I remember how anxious I was before my first job. Would the hours scare him? Oh no, he replied, he was already working between eighty and a hundred hours a week at school. "Are there things that you think you aren't good at that make you worry?" I continued.
His reply was sharp: "Are there things that I'm not good at, or things that I can't learn? I think that's the real question. There are a lot of things I don't know anything about, but I feel comfortable that given the right environment and the right encouragement I can do well at." In my notes, next to that reply, I wrote "Great answer," and I can remember at the time feeling the little thrill you experience as an interviewer when someone's behavior conforms with your expectations. Because I had decided, right off, that I liked him, what I heard in his answer was toughness and confidence. Had I decided early on that I didn't like Nolan Myers, I would have heard in that reply arrogance and bluster. The first impression becomes a self-fulfilling prophecy: we hear what we expect to hear. The interview is hopelessly biased in favor of the nice.
3.
When Ballmer and Partovi and I met Nolan Myers, we made a prediction. We looked at the way he behaved in our presence— at the way he talked and acted and seemed to think— and drew conclusions about how he would behave in other situations. I had decided, remember, that Myers was the kind of person you called the night before the big test in seventh grade. Was I right to make that kind of generalization?
This is a question that social psychologists have looked at closely. In the late 1920s, in a famous study, the psychologist Theodore Newcomb analyzed extroversion among adolescent boys at a summer camp. He found that how talkative a boy was in one setting— say, at lunch— was highly predictive of how talkative that boy would be in the same setting in the future. A boy who was curious at lunch on Monday was likely to be curious at lunch on Tuesday. But his behavior in one setting told you almost nothing about how he would behave in a different setting: from how someone behaved at lunch, you couldn't predict how he would behave during, say, afternoon playtime. In a more recent study, of conscientiousness among students at Carleton College, the researchers Walter Mischel, Neil Lutsky, and Philip K. Peake showed that how neat a student's assignments were or how punctual he was told you almost nothing about how often he attended class or how neat his room or his personal appearance was. How we behave at any one time, evidently, has less to do with some immutable inner compass than with the particulars of our situation.
This conclusion, obviously, is at odds with our intuition. Most of the time, we assume that people display the same character traits in different situations. We habitually underestimate the large role that context plays in people's behavior. In the Newcomb summer-camp experiment, for example, the results showing how little consistency there was from one setting to another in talkativeness, curiosity, and gregariousness were tabulated from observations made and recorded by camp counselors on the spot. But when, at the end of the summer, those same counselors were asked to give their final impressions of the kids, they remembered the children's behavior as being highly consistent.
"The basis of the illusion is that we are somehow confident that we are getting what is there, that we are able to read off a person's disposition," Richard Nisbett, a psychologist at the University of Michigan, says. "When you have an interview with someone and have an hour with them, you don't conceptualize that as taking a sample of a person's behavior, let alone a possibly biased sample, which is what it is. What you think is that you are seeing a hologram, a small and fuzzy image but still the whole person.
Then Nisbett mentioned his frequent collaborator, Lee Ross, who teaches psychology at Stanford. "There was one term when he was teaching statistics and one term when he was teaching a course with a lot of humanistic psychology. He gets his teacher evaluations. The first referred to him as cold, rigid, remote, finicky, and uptight. And the second described this wonderful warmhearted guy who was so deeply concerned with questions of community and getting students to grow. It was Jekyll and Hyde. In both cases, the students thought they were seeing the real Lee Ross.”
Psychologists call this tendency— to fixate on supposedly stable character traits and overlook the influence of context— the Fundamental Attribution Error, and if you combine this error with what we know about snap judgments, the interview becomes an even more problematic encounter. Not only had I let my first impressions color the information I gathered about Myers, but I had also assumed that the way he behaved with me in an interview setting was indicative of the way he would always behave. It isn't that the interview is useless; what I learned about Myers— that he and I get along well— is something I could never have gotten from a resume or by talking to his references. It's just that our conversation turns out to have been less useful, and potentially more misleading, than I had supposed. That most basic of human rituals— the conversation with a stranger— turns out to be a minefield.
4.
Not long after I met with Nolan Myers, I talked with a human-resources consultant from Pasadena named Justin Menkes. Menkes's job is to figure out how to extract meaning from face-to-face encounters, and with that in mind he agreed to spend an hour interviewing me the way he thinks interviewing ought to be done. It felt, going in, not unlike a visit to a shrink, except that instead of having months, if not years, to work things out, Menkes was set upon stripping away my secrets in one session. Consider, he told me, a commonly asked question like "Describe a few situations in which your work was criticized. How did you handle the criticism?" The problem, Menkes said, is that it's much too obvious what the interviewee is supposed to say. "There was a situation where I was working on a project, and I didn't do as well as I could have," he said, adopting a mock-sincere singsong. "My boss gave me some constructive criticism. And I redid the project. It hurt. Yet we worked it out." The same is true of the question "What would your friends say about you?"—to which the correct answer (preferably preceded by a pause, as if to suggest that it had never dawned on you that someone would ask such a question) is "My guess is that they would call me a people person— either that or a hard worker.”
Myers and I had talked about obvious questions, too. "What is your greatest weakness?" I asked him. He answered, "I tried to work on a project my freshman year, a children's festival. I was trying to start a festival as a benefit here in Boston. And I had a number of guys working with me. I started getting concerned with the scope of the project we were working on— how much responsibility we had, getting things done. I really put the brakes on, but in retrospect I really think we could have done it and done a great job.”
Then Myers grinned and said, as an aside, "Do I truly think that is a fault? Honestly, no." And, of course, he's right. All I'd really asked him was whether he could describe a personal strength as if it were a weakness, and in answering as he did, he had merely demonstrated his knowledge of the unwritten rules of the interview.
But, Menkes said, what if those questions were rephrased so that the answers weren't obvious? For example: "At your weekly team meetings, your boss unexpectedly begins aggressively critiquing your performance on a current project. What do you do?"
I felt a twinge of anxiety. What would I do? I remembered a terrible boss I'd had years ago. "I'd probably be upset," I said. "But I doubt I'd say anything. I'd probably just walk away." Menkes gave no indication whether he was concerned or pleased by that answer. He simply pointed out that another person might well have said something like “I’d go and see my boss later in private, and confront him about why he embarrassed me in front of my team." I was saying that I would probably handle criticism— even inappropriate criticism— from a superior with stoicism; in the second case, the applicant was saying he or she would adopt a more confrontational style. Or, at least, we were telling the interviewer that the workplace demands either stoicism or confrontation— and to Menkes these are revealing and pertinent pieces of information.
Menkes moved on to another area— handling stress. A typical question in this area is something like "Tell me about a time when you had to do several things at once. How did you handle the situation? How did you decide what to do first?" Menkes says this is also too easy. "I just had to be very organized," he began again in his mock-sincere singsong. "I had to multitask. I had to prioritize and delegate appropriately. I checked in frequently with my boss." Here's how Menkes rephrased it: "You're in a situation where you have two very important responsibilities that both have a deadline that is impossible to meet. You cannot accomplish both. How do you handle that situation?"
"Well," I said, "I would look at the two and decide what I was best at, and then go to my boss and say, 'It's better that I do one well than both poorly,' and we'd figure out who else could do the other task.”
Menkes immediately seized on a telling detail in my answer. I was interested in what job I would do best. But isn't the key issue what job the company most needed to have done? With that comment, I had revealed something valuable: that in a time of work-related crisis I start from a self-centered consideration. "Perhaps you are a bit of a solo practitioner," Menkes said diplomatically. "That's an essential bit of information.”
Menkes deliberately wasn't drawing any broad conclusions. If we are not people who are shy or talkative or outspoken but people who are shy in some contexts, talkative in other situations, and outspoken in still other areas, then what it means to know someone is to catalog and appreciate all those variations. Menkes was trying to begin that process of cataloging. This interviewing technique is known as structured interviewing, and in studies by industrial psychologists it has been shown to be the only kind of interviewing that has any success at all in predicting performance in the workplace. In the structured interviews, the format is fairly rigid. Each applicant is treated in precisely the same manner. The questions are scripted. The interviewers are carefully trained, and each applicant is rated on a series of predetermined scales.
What is interesting about the structured interview is how narrow its objectives are. When I interviewed Nolan Myers I was groping for some kind of global sense of who he was; Menkes seemed entirely uninterested in arriving at that same general sense of me —he seemed to realize how foolish that expectation was for an hour-long interview. The structured interview works precisely because it isn't really an interview; it isn't about getting to know someone, in a traditional sense. It's as much concerned with rejecting information as it is with collecting it.
Not surprisingly, interview specialists have found it extraordinarily difficult to persuade most employers to adopt the structured interview. It just doesn't feel right. For most of us, hiring someone is essentially a romantic process, in which the job interview functions as a desexualized version of a date. We are looking for someone with whom we have a certain chemistry, even if the coupling that results ends in tears and the pursuer and the pursued turn out to have nothing in common. We want the unlimited promise of a love affair. The structured interview, by contrast, seems to offer only the dry logic and practicality of an arranged marriage.
5.
Nolan Myers agonized over which job to take. He spent half an hour on the phone with Steve Ballmer, and Ballmer was very persuasive. “He gave me very, very good advice,” Myers says of his conversations with the Microsoft CEO. “He felt that I should go to the place that excited me the most and that I thought would be best for my career. He offered to be my mentor.” Myers says he talked to his parents every day about what to do. In February, he flew out to California and spent a Saturday going from one Tellme executive to another, asking and answering questions. “Basically, I had three things I was looking for. One was long-term goals for the company. Where did they see themselves in five years? Second, what position would I be playing in the company?” He stopped and burst out laughing. “And I forget what the third one is.” In March, Myers committed to Tellme.
Will Nolan Myers succeed at Tellme? I think so, although I honestly have no idea. It’s a harder question to answer now than it would have been thirty or forty years ago. If this were 1965, Nolan Myers would have gone to work at IBM and worn a blue suit and sat in a small office and kept his head down, and the particulars of his personality would not have mattered so much. It was not so important that IBM understood who you were before it hired you, because you understood what IBM was. If you walked through the door at Armonk or at a branch office in Illinois, you knew what you had to be and how you were supposed to act. But to walk through the soaring, open offices of Tellme, with the bunk beds over the desks, is to be struck by how much more demanding the culture of Silicon Valley is. Nolan Myers will not be provided with a social script, that blue suit, and organization chart. Tellme, like any technology startup these days, wants its employees to be part of a fluid team, to be flexible and innovative, to work with shifting groups in the absence of hierarchy and bureaucracy, and in that environment, where the workplace doubles as the rec room, the particulars of your personality matter a great deal.
This is part of the new economy's appeal, because Tellme's soaring warehouse is a more productive and enjoyable place to work than the little office boxes of the old IBM. But the danger here is that we will be led astray in judging these newly important particulars of character. If we let personability — some indefinable, prerational intuition, magnified by the Fundamental Attribution Error — bias the hiring process today, then all we will have done is replace the old-boy network, where you hired your nephew, with the new-boy network, where you hire whoever impressed you most when you shook his hand. Social progress, unless we're careful, can merely be the means by which we replace the obviously arbitrary with the not so obviously arbitrary.
Myers has spent much of the past year helping to teach Introduction to Computer Science. He realized, he says, that one of the reasons students were taking the course was that they wanted to get jobs in the software industry. "I decided that, having gone through all this interviewing, I had developed some expertise, and I would like to share that. There is a real skill and art in presenting yourself to potential employers. And so what we did in this class was talk about the kinds of things that employers are looking for—what they are looking for in terms of personality. One of the most important things is that you have to come across as being confident in what you are doing and in who you are. How do you do that? Speak clearly and smile." As he said that, Nolan Myers smiled. "For a lot of people, that's a very hard skill to learn. But for some reason I seem to understand it intuitively."
— May 29, 2000
Week 05: A Short History of Tracking
Response deadline: 9AM Wednesday, Sept. 24
Seven weeks after the terrorist attacks that killed thousands of people and demolished the World Trade Center in New York, one of the nation’s top code breakers walked out of the premier spy agency in the United States for the last time.
It was October 31, 2001. Lower Manhattan was still smoldering. Letters containing anthrax had been sent to members of Congress and media outlets across the nation. Bomb scares were reported seemingly every day. A jittery nation was at war with an unseen enemy.
But Bill Binney, a code breaker who had risen to the level equivalent to a general within the National Security Agency, wasn't joining the fight. He was retiring after more than thirty years at the agency. As he reached the bottom of the steps at the agency headquarters in Fort Meade, Maryland, he said, “Free at last. Free at last.”
Binney had spent years trying to modernize the spy agency’s surveillance methods so that it could monitor Internet communications that bounce all over the world, while still respecting the privacy of U.S. citizens’ communications. But his efforts had been thwarted at every turn.
Now, his colleagues were telling him that the agency was collecting the communications of U.S. citizens without any privacy protections. He wanted no part of it.
As he left the Fort Meade compound, Binney was fleeing what he viewed as the scene of a crime. "I could not stay after the NSA began purposefully violating the Constitution," he later declared in court testimony against his former employer.
We have since learned, of course, that Binney was right. After the 9/11 terrorist attacks, the U.S. government established sweeping, possibly illegal dragnets that captured the phone call and e-mail traffic of nearly every American.
In my quest to understand the history and origins of mass surveillance, I kept returning to the year 2001. Not only was it the year of the devastating terrorist attacks on the United States, but it was also the year that the technology industry was left reeling from the bursting of the dot-com bubble. These two seemingly unrelated events each set in motion a chain of events that created the legal and technical underpinnings of today's dragnets. For the U.S. government, the terrorist attacks showed that its traditional methods of intelligence gathering weren't working. And for Silicon Valley, the crash showed that it needed to find a new way to make money.
Both arrived at the same answer to their disparate problems: collecting and analyzing vast quantities of personal data.
Of course, each had a different purpose. The government was seeking to find and extract terrorists who might be hiding within the population. The tech industry was seeking to lure advertisers with robust dossiers about individuals. But, inevitably, the two became intertwined as the U.S. government used its power to dip into the tech industry's profiles.
Together, the government and the tech industry hatched our Dragnet Nation. This is the story of how it all began.
In the eighteenth century, the British were having a hard time controlling their American colonies. The Americans were rebelling against British attempts to block trade between the colonies and other European countries and against British demands that they pay taxes without receiving representation in Parliament.
Many colonists simply evaded the trade restrictions by smuggling. To combat the smuggling epidemic, the British instituted a new type of surveillance technique: general search warrants, known as writs of assistance, which allowed British officers to conduct what basically amounted to suspicionless house-to-house searches.
Americans were outraged that British officers could storm into any house at any time, even during a wedding or funeral. "It appears to me the worst instrument of arbitrary power," the lawyer James Otis Jr. argued in a famous speech in Boston in 1761.
Outrage over the general warrants helped prompt the American Revolution. And that outrage is the underpinning of the Fourth Amendment to the U.S. Constitution, which states: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
The Fourth Amendment is a bedrock principle for law enforcement officers in the United States. However, technology has enabled the exploitation of loopholes in the interpretation of the Fourth Amendment. Some of the most important loopholes are:
Public space. The Fourth Amendment protects only "persons, houses, papers and effects." The Supreme Court has interpreted this language to mean that individuals have no reasonable expectation of privacy in public. However, technology has reduced the protective confines of private space, enabling surveillance of computer use in one's own home and drone flights over backyards.
Third-Party Doctrine. The Supreme Court has established the "Third-Party Doctrine," which states that individuals do not have a reasonable expectation of privacy in information they give to third parties such as their bank or their phone company. As a result, even sensitive information that is stored with third parties, such as e-mail, can often be obtained without a search warrant.
Metadata. Metadata is data about data. For example, the envelope containing a letter can be considered metadata; the data is the letter itself. The court has traditionally set lower legal standards for searches of metadata than for searches of data. For instance, the post office can take a photograph of the envelope of your letter without a warrant, but it cannot open the letter without a warrant. In the digital era, metadata can reveal a lot, such as all the phone numbers you call, the people you e-mail, and your location.
Border searches. Courts have largely supported a "border search exception" to the Fourth Amendment, which allows the government to conduct searches at the border without obtaining a search warrant. In today's electronic age, that means that agents can, and often do, download the entire contents of an individual's phone or computer at the border. U.S. Customs and Border Protection says that it conducts about fifteen electronic media searches per day. In March 2013 the U.S. Court of Appeals for the Ninth Circuit in California set a new limit on device searches at the border, ruling in United States v. Cotterman that reasonable suspicion of criminal activity was required for a forensic search of a device, such as using software to analyze encrypted or deleted data, as opposed to a more cursory look at documents, photos, or other files.
In the digital age, these loopholes have become large enough to allow for the type of suspicionless searches that outraged the Founding Fathers.
U.S. presidents have long been cautious about overstepping the bounds of the Fourth Amendment. In 1981, when President Ronald Reagan authorized limited domestic spying in order to seek Soviet infiltrators, he ordered the intelligence agencies to use "the least intrusive collection techniques feasible within the United States or directed against United States persons abroad." Over the years, Reagan's directive has been interpreted to mean that domestic spying should be done cautiously, and only in cases where there is reason to suspect a crime.
But after 9/11, the requirement to establish some kind of suspicion before engaging in domestic spying was, for all intents and purposes, tossed aside. Documents revealed by the former NSA contractor Edward Snowden paint a devastating portrait of how a single decision made in the days after the attack opened the floodgates for vast domestic dragnets. According to a leaked draft of a 2009 inspector general's report, the NSA's domestic spying began on September 14, 2001, three days after the attacks, when the agency's director, Michael Hayden, approved warrantless interception of any U.S. phone call to or from specific terrorist-identified phone numbers in Afghanistan. On September 26, Hayden expanded the order to cover all phone numbers in Afghanistan.
But soon Hayden wanted more data. He believed there was an "international gap" between what the NSA was collecting overseas and what the FBI was looking at domestically. No one was monitoring communications to the United States that originated abroad. So Hayden worked with Vice President Dick Cheney, who asked his legal counsel to help draft a legal memo that would aid the NSA in filling the international gap. On October 4, President George W. Bush issued a memorandum titled, "Authorization for specified electronic surveillance activities during a limited period to detect and prevent acts of terrorism within the United States." The memo allowed Hayden to continue to target communications between Afghanistan and the United States without seeking approval from the Foreign Intelligence Surveillance Court, which normally oversees electronic surveillance that involves U.S. residents. The program was authorized for thirty days.
At the time, it seemed like an understandable emergency measure. In an era when terrorists could mask their Internet traffic by bouncing it all over the world, it was sometimes difficult to sort out U.S. from foreign communications. The order gave the NSA a temporary reprieve from sorting out U.S. communications during a time of crisis.
However, Hayden's narrowly crafted, short-term program eventually metastasized into a full-blown domestic spying effort. The thirty-day order was perpetually renewed, and within a year it expanded beyond just U.S.-Afghanistan communications. The NSA used the presidential order to justify obtaining e-mail and phone communications from thousands of targets at a time. It also began obtaining bulk long-distance and international calling records, to conduct "chaining," that is, finding a person who called a person who called a suspected terrorist. And the NSA began collecting Internet traffic (whom you e-mail and which Web pages you visit) from sources where a "preponderance of communications was from foreign sources" and there was a "high probability" of collecting terrorist traffic.
To collect all this data, the NSA sought cooperation from Internet and phone companies. The report states that seven companies (who are not named) were approached. Three declined to participate.
In 2005, the New York Times broke the story of the warrantless wiretapping program, describing it as a major shift in intelligence-gathering practices. The broad sweep of the program became clear a few months later when a retired AT&T technician, Mark Klein, went public with the news that the NSA had installed equipment in a secret room in AT&T's San Francisco office that could tap all the communications that flowed through that portion of the Internet. "This is the infrastructure for an Orwellian police state. It must be shut down!" Klein said in a public statement.
Then in May 2006, USA Today published an article stating that AT&T, Verizon, and BellSouth began providing the NSA with the phone call records of their customers soon after 9/11. "It's the largest database ever assembled in the world," said an unnamed official quoted in the article.
Under pressure, President Bush briefly shut down parts of the program. But in 2008, he signed into law amendments to the Foreign Intelligence Surveillance Act, which reinstated and legalized the wiretapping program and immunized the telecommunications providers against lawsuits for their previous participation in a possibly illegal program.
The FISA amendments established a new class of search warrants that allowed the government to intercept communications without obtaining the name of a target, essentially continuing the broad sweeps that it had conducted under warrantless wiretapping. But this time, a judge had to approve the algorithm being used to target suspects. The PRISM program, disclosed by Snowden, described the Internet companies that were complying with the algorithmic warrants. Yahoo! apparently fought to declare one of the warrants unconstitutional in a secret court hearing, but it lost and was forced to comply with the warrant under a threat of civil contempt.
Amazingly, it turns out that the warrantless wiretapping was one of the more restrained NSA programs, since it captured only U.S.-to-foreign communications. Far more sweeping were the vast amounts of phone and Internet traffic that the NSA began collecting within the United States. Because it was just "metadata," the NSA argued that sweeping up domestic phone calling records and Internet traffic was not violating Americans' privacy.
Snowden revealed a secret court order requiring Verizon to turn over daily calling records to the NSA. Soon after, Senator Dianne Feinstein of California confirmed that the NSA had been collecting domestic and international calling records from all the major telecommunications companies for seven years.
Snowden also revealed a 2007 memo written by Kenneth Wainstein, a Justice Department attorney, in which he pushed for the NSA to be granted legal authority to collect more Internet traffic within the United States. "Through the use of computer algorithms, NSA creates a chain of contacts linking communicants," Wainstein wrote. "NSA's present practice is to 'stop' when a chain hits a telephone number or address believed to be used by a United States person." He then asked the attorney general for permission to conduct "contact chaining" of U.S. residents.
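To make the mechanics of contact chaining concrete, here is a minimal, hypothetical Python sketch, not anything drawn from the NSA's own systems: it treats calling records as a graph, walks outward from a seed number, and, echoing the practice Wainstein described, records but does not extend a chain past numbers flagged as belonging to U.S. persons. Every number and rule in it is invented for illustration.

from collections import deque

def contact_chain(call_graph, seed, us_persons, max_hops=2):
    """Return (number, hops) pairs reachable from seed within max_hops calls.

    call_graph maps a phone number to the set of numbers it called;
    us_persons is the set of numbers treated as U.S. persons, where chains stop.
    """
    seen = {seed}
    chain = []
    queue = deque([(seed, 0)])
    while queue:
        number, hops = queue.popleft()
        if hops == max_hops:
            continue
        for contact in call_graph.get(number, set()):
            if contact in seen:
                continue
            seen.add(contact)
            chain.append((contact, hops + 1))
            # Record the contact, but do not extend the chain past a number
            # believed to be used by a U.S. person.
            if contact not in us_persons:
                queue.append((contact, hops + 1))
    return chain

# Hypothetical example: who is within two hops of a suspect foreign number?
calls = {"+93-700-0001": {"+1-555-0100"}, "+1-555-0100": {"+1-555-0199"}}
print(contact_chain(calls, "+93-700-0001", us_persons={"+1-555-0100"}))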
Apparently, his wish was briefly granted. The Obama administration said that the Internet traffic-monitoring program ended in 2011 and was not restarted. But it remains likely that the NSA is still monitoring domestic Internet traffic under another guise.
Regardless, Snowden's revelations confirmed what many had long suspected: the creation of a tiny thirty-day dragnet covering U.S.-Afghanistan communications had mushroomed into a massive domestic dragnet.
After 9/11, a massive rush of counterterrorism spending fueled dragnet surveillance at the state and local levels as well. Federal intelligence agency budgets ballooned to $75 billion in 2013, up from about $27 billion prior to the attacks. And some of that trickled down to the states in the form of grants.
Consider just the activities of the Department of Homeland Security. Since 9/11, the department has doled out more than $7 billion in grants to help "high-threat, high-density urban areas" prevent and respond to terrorism. More than $50 million of those grants went to state law enforcement agencies to purchase automated license plate readers that allow them to keep tabs on citizens' movements in ways never before possible. The department also helped fund the creation of "fusion centers" in nearly every state that were tasked with crunching data from different agencies, and often from commercial data brokers, to look for clues that could prevent future acts of terrorism. And local police increasingly began tracking people using signals emitted by their cell phones.
At the same time, suspicionless investigations became more common.
In 2008, the attorney general issued new guidelines that allowed the FBI to launch investigations without "any particular factual predication." Under the new rules, the FBI was charged with "obtaining information on individuals, groups, or organizations of possible investigative interest, because they may be involved in criminal or national security-threatening activities or because they may be targeted for attack or victimization by such activities."
And in 2012, the Justice Department authorized the National Counterterrorism Center to copy entire government databases of information about U.S. citizens—flight records, lists of casino employees, the names of Americans hosting foreign-exchange students—and examine the files for suspicious behavior.
Previously, the agency had been barred from storing information about U.S. residents unless the person was a terrorism suspect or was related to an investigation. Suspicionless dragnets had become the new normal.
The terrorist attacks of 2001 also ushered in an era of dragnets in Silicon Valley.
Until the late 1990s, the consumer software industry was a retail business. Software was sold in shrink-wrapped boxes on store shelves. Of course, companies also bought industrial-grade software wholesale. But the popular market—consisting mostly of games and office productivity tools—was a retail business.
The Internet blew up the software business entirely.
The first real piece of Internet software was the Web browser Netscape Navigator, introduced in 1994. The prospect of the first truly mass-market software propelled Netscape to a stratospheric initial public offering. Its stock price shot up in its first day of trading, closing the day at four times its initial offering price. Netscape's cofounder Marc Andreessen, only twenty-four years old, suddenly found himself worth $171 million. The following year, Andreessen was pictured on the cover of Time magazine, barefoot and wearing a crown, next to the caption "The Golden Geeks.”
But the profits never came. Microsoft began including a free Web browser, Internet Explorer, along with its Windows 95 operating system. As a result, Netscape was never able to charge for its software.
In 1998, the Department of Justice and attorneys general from twenty states and the District of Columbia sued Microsoft, alleging that it was acting as a monopoly in bundling Internet Explorer with Windows 95. But by the time Microsoft signed a consent decree in 2002, the damage was done. In 1998, Internet Explorer surpassed Netscape in market share, and by 2008 Netscape's software was officially abandoned.
The first truly mass-market software had been built. But it hadn't made any money. The lesson was clear: the retail software market was dead. But technology requires software. How was it going to be financed?
At first it seemed that advertising might be the answer. In the late 1990s, Silicon Valley was awash in dot-com businesses, many of them based on the premise that advertising would support their efforts. But the bubble burst in 2000. Yahoo!, whose revenue came mostly from online advertising, saw its market capitalization plummet from $113.9 billion in early 2000 to just $7.9 billion a year later.
The conventional wisdom was that online advertising had failed. "Two years ago, nearly all advertisers were saying, 'I have to be on the Internet,'" Pat McGrath, CEO of the Arnold McGrath ad agency, said in November 2001. "Today, they are stepping back and saying, 'Does the Internet make sense as one of the ways to promote this brand?'" McGrath's assessment was echoed across the industry. Wendy Taylor, the editor of Ziff Davis Smart Business, was the most succinct. "Online advertising is dead," she declared.
An industry with the best audience-measurement tools in the history of advertising was accused of having no metrics to prove the effectiveness of its product. Internet companies began to search for even better measuring sticks. A tracking technology called cookies could track Web users from site to site, but it wasn't clear whether that was legal.
In 2000, a federal class action suit was brought against the online advertising company DoubleClick, alleging that its installation of cookies on the computers of website visitors was violating laws that limit wiretapping, hacking, and electronic surveillance. A year later, Judge Naomi Reice Buchwald, in the Southern District of New York, ruled that DoubleClick's actions were not illegal because websites authorized DoubleClick to install cookies on their visitors' computers. "We find that the DoubleClick-affiliated Web sites are 'parties to the communication[s]' from plaintiffs and have given sufficient consent to DoubleClick to intercept them," she wrote. Her ruling amounted to a free pass for corporate Internet surveillance: when a person visits a website, the website is free to invite others to secretly wiretap the visitor.
Finally, Silicon Valley had a business model: tracking.
Of course, private companies have long collected data about their customers and employees. But buying and selling personal data didn't become an industry until the rise of modern computing.
In 1971, Vinod Gupta's boss asked him to get a list of every mobile-home dealer in the country. Gupta, a recent immigrant from India who had completed an MBA at the University of Nebraska, sat down with a bunch of yellow page directories and began creating his own list. He soon realized there must be a better way to create a marketing list. In 1972, he founded a company, American Business Information, which used the yellow page listings to build custom lists for marketers to use. The company, now known as Infogroup, soon branched out to include data from the white pages and began buying data from professional associations and scooping up any kind of public data available, from driver's license records to voter registration cards to court records.
"Just about every list is available," Gupta later said. "If you want left-handed golfers or left-handed fishermen or fly fishermen or dog owners, all those lists are available.”
Across the country, in Conway, Arkansas, another company was tackling the same problem. In 1969, Charles Ward, a local businessman who was active in the Democratic Party, set up a small company called Demographics Inc. to help local candidates run direct mail campaigns. His company helped Dale Bumpers in his run for governor of Arkansas, and Lloyd Bentsen in his unsuccessful presidential bid, before eventually expanding beyond politics. In 1989, the company changed its name to Acxiom.
Acxiom soared in the 1990s, as businesses needed companies with computer expertise to manage their customer data. Between 1993 and 1998, Acxiom's revenue quadrupled to $402 million from $91 million. "The data has always been there," Donald Hinman, an executive at Acxiom, told the Washington Post in 1998. "It's just that now, with the technology, you can access it."
The new data troves fueled new businesses. The credit card companies Capital One and Discover found ways to slice and dice the population into profitable segments that they could target by direct mail. Selling data became a lucrative business for governments at all levels. The state of Florida alone makes about $62 million a year selling driver's license data. The U.S. Postal Service generates $9.5 million in revenue a year allowing companies like Acxiom to access its National Change of Address database.
In the 2000s, as the Internet became pervasive, marketers became interested in "fresher" data about where people were browsing online. The DoubleClick legal decision had spawned an entire industry devoted to following Web users' every click online. In 2007, all the Internet giants jumped into the online tracking business. AOL bought the behavioral targeting firm TACODA for $275 million, Google paid $3.1 billion for DoubleClick, and Microsoft paid $6 billion for the online ad company aQuantive. All those companies were in the business of building profiles of Web users.
The big data brokers reacted quickly. Acxiom, along with others, began working to merge its files with Web-browsing records, allowing advertisers to target online ads as precisely as they targeted their mail. At the same time, Acxiom started selling its data to companies such as Facebook that wanted to enhance their own tracking.
Online tracking also fueled a new industry: data trading. On exchanges similar to the stock market, advertisers bought and sold customer profiles in millisecond trades. It works like this: When you look at a digital camera on eBay, the Web page is embedded with code from a data exchange such as BlueKai. Once BlueKai is alerted that you are on the page, it instantly auctions off your "cookie" to advertisers who want to reach camera buyers. The highest bidder wins the right to show you a digital camera advertisement on subsequent pages that you visit. That's often why online ads appear to follow you around.
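To give a rough sense of how such an auction might look in code, and not as a description of BlueKai's or any other exchange's actual protocol, the short Python sketch below plays out the sequence described above: a page view keyed to a cookie's interest profile triggers bid requests, each advertiser answers with a price, and the highest bidder wins the right to show its ad. The advertisers, prices, and profile fields are all invented.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    price_cpm: float  # dollars per thousand impressions

def run_auction(cookie_profile, bidders):
    """Collect a bid from each advertiser for this cookie; the highest price wins."""
    bids = [bidder(cookie_profile) for bidder in bidders]
    bids = [b for b in bids if b is not None]
    return max(bids, key=lambda b: b.price_cpm) if bids else None

def camera_retailer(profile):
    # Bids aggressively only on users recently seen shopping for cameras.
    if "camera-shopper" in profile["interests"]:
        return Bid("camera_retailer", 4.50)
    return None

def generic_brand(profile):
    # Bids a small amount on everyone, regardless of interests.
    return Bid("generic_brand", 1.25)

# A user who just looked at a digital camera triggers the auction.
profile = {"cookie_id": "abc123", "interests": {"camera-shopper"}}
winner = run_auction(profile, [camera_retailer, generic_brand])
print(winner.advertiser, "wins and serves its camera ad")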
Due in large part to tracking, online advertising is growing fast. Industry revenues rose to $36.6 billion in 2012, up from just $7.3 billion in 2003. Tracking is so crucial to the industry that in 2013 Randall Rothenberg, the president of the Interactive Advertising Bureau, said that if the industry lost its ability to track people, "billions of dollars in Internet advertising and hundreds of thousands of jobs dependent on it would disappear.”
Meglena Kuneva, a member of the European Commission, summed it up best in 2009 when she said: "Personal data is the new oil of the Internet and the new currency of the digital world.”
If you were to build a taxonomy of trackers it would look something like this:
GOVERNMENT:
• Incidental collectors. Agencies that collect data in their normal course of business, such as state motor vehicle registries and the IRS, but are not directly in the data business.
• Investigators. Agencies that collect data about suspects as part of law enforcement investigations, such as the FBI and local police.
• Data analysts. A new class of agencies that scoop up and analyze data from government agencies and commercial data brokers, such as state fusion centers and the National Counterterrorism Center.
• Espionage. Agencies such as the NSA that are supposed to focus on foreign spying, but have turned their attention to domestic spying as well.
COMMERCIAL:
• Incidental collectors. This is basically all businesses that collect personal information in the course of regular business, ranging from the local dry cleaner to banks and telecommunications providers.
• The "Freestylers." These are mostly software companies, such as Google and Facebook, which provide free services and make money from their customers' data-usually by selling access to the data to marketers.
• Marketers. The rise of Internet tracking as a basis for digital advertising business has put marketers primarily in the data business.
• Data brokers. These are companies that buy from incidental government and commercial collectors, analyze the data, and resell it. Some, like Acxiom, sell primarily to businesses. Others, such as Intelius, sell primarily to individuals.
• Data exchanges. Marketers and data brokers increasingly trade information on real-time trading desks that mimic stock exchanges.
INDIVIDUALS:
• Democratized dragnets. Technology has become cheap enough that everyone can do their own tracking, with items such as dashboard cameras, build-it-yourself drones, and Google Glass eyeglasses that contain tiny cameras that can take photos and videos.
The trackers are deeply intertwined. Government data are the lifeblood for commercial data brokers. And government dragnets rely on obtaining information from the private sector.
Consider just one example: voting. To register to vote, citizens must fill out a government form that usually requires their name, address, and, in all but one state, birth date. But few voters realize that those lists are often sold to commercial data brokers. A 2011 study found that a statewide voter list sold for as little as $30 in California and as high as $6,050 in Georgia.
Commercial data brokers combine the voting information with other data to create rich profiles of individuals. For instance, the data broker Aristotle Inc. markets its ability to identify 190 million voters by more than "500 consumer data points" such as their credit rating and size of their mortgage.
And guess who buys Aristotle's enriched data? Politicians, who are sometimes using government money. Aristotle crows that "every U.S. President—Democrat and Republican—from Reagan through Obama, has used Aristotle products and/or services." In fact, a 2012 thesis by an intrepid Harvard undergraduate, Melissa Oppenheim, found that fifty-one members of the U.S. House of Representatives bought data from Aristotle using some of their congressional allowances, allowing them to identify their constituents by the age of their children, whether they subscribe to religious magazines, or whether they have a hunting license. And thus, the data come full circle in what Oppenheim calls the "Dark Data Cycle." The government requires citizens to create data and then sells it to commercial entities, which launder the data and sell it back to the government.
The dark data cycle occurs with nearly every type of data. State motor vehicle records are swept into LexisNexis reports, which are enhanced with other data and sold to the Department of Homeland Security. Foreclosure records are compiled in state courts and then collected by data brokers such as CoreLogic, which sells packages of real estate data to clients including the government.
An even darker data cycle occurs in the secret Foreign Intelligence Surveillance Court, where the government can demand that private industry hand over data about their customers. In those circumstances, giant companies such as Google, Yahoo!, AT&T, Verizon, and Microsoft have been forced to hand over customer data to the NSA.
The reality is that corporate and government dragnets are inextricably linked; neither can exist without the other.
Bill Binney suffered for speaking out against the NSA's dragnets.
While at the NSA, Binney had developed what he believed was a dragnet that respected and protected individual privacy. Called ThinThread, it was a clever program that intercepted tons of Internet and phone data, encrypted it, and analyzed it for patterns. It would be decrypted only if a specific threat was found and a court had approved a search warrant to decrypt the data.
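The paragraph above describes an architecture rather than published code, but the core idea, encrypting records at the moment of collection and decrypting them only once a court-approved warrant releases them, can be illustrated with a toy Python sketch. This is emphatically not ThinThread itself; it is a hypothetical simplification that leans on the third-party cryptography package for the cipher and reduces the warrant check to a single flag.

from cryptography.fernet import Fernet

class EncryptedStore:
    """Toy encrypt-at-collection store; plaintext is never kept."""
    def __init__(self):
        self._key = Fernet.generate_key()   # a real design would escrow this key
        self._cipher = Fernet(self._key)
        self._records = []

    def collect(self, record: str) -> None:
        # Encrypt each record as it arrives; only ciphertext is stored.
        self._records.append(self._cipher.encrypt(record.encode()))

    def decrypt_all(self, warrant_approved: bool):
        # Release plaintext only if a (hypothetical) warrant has been approved.
        if not warrant_approved:
            raise PermissionError("no court-approved warrant; records stay encrypted")
        return [self._cipher.decrypt(token).decode() for token in self._records]

store = EncryptedStore()
store.collect("2001-10-05: 555-0100 called 555-0199 for 3 minutes")
print(store.decrypt_all(warrant_approved=True))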
But he couldn't get the program deployed. After several years of internal battles, during which Binney and his colleagues took their case directly to congressional leaders, the NSA's top leaders declined to support ThinThread. One reason: in the pre-9/11 era, the NSA's lawyers worried that ThinThread would violate Americans' privacy because it might collect domestic communications, even though they were encrypted. Another reason: NSA director Michael Hayden had thrown his support behind a much more expensive program called Trailblazer, built by private contractors, which also aimed to analyze the NSA's oceans of data but didn't use encryption. Trailblazer was eventually abandoned after massive cost overruns and technical failures.
In 2002, Binney's colleague Kirk Wiebe, who had worked on ThinThread, contacted the Department of Defense's inspector general to report what he believed was "waste, fraud and abuse" at the NSA. The inspector general's report, issued in 2005, was heavily redacted, but the few unredacted parts seemed to vindicate ThinThread.
In 2006, the Baltimore Sun published an article about the battles over ThinThread. "NSA Rejected System That Sifted Phone Data Legally," the headline stated.
On July 26, 2007, the FBI raided Binney's home in suburban Maryland. Binney was in the shower. "The guy came in and pointed a gun at me," he recalled. "I just said, 'Do you suppose I could put some clothes on?'"
Wiebe, who had retired from the NSA the same day as Binney in 2001, was also raided that day. Neither Binney nor Wiebe was ever charged with a crime.
On November 28, 2007, the FBI raided the home of another ThinThread supporter, Thomas Drake, an NSA executive who had collaborated anonymously on the inspector general's investigation. Agents seized Drake's papers, computers, and hard drives and alleged that they found classified documents in the basement. Two and a half years later, Drake was indicted and charged with violating the Espionage Act because of his "willful retention" of classified documents.
Drake was financially devastated by the prosecution. He was five and a half years from retirement at the NSA. He lost his pension, which would have been $60,000 a year. He took out a second mortgage on his house and withdrew most of his 401(k) retirement plan to pay for his expenses. He was unemployable in the intelligence community, so he started working at an Apple retail store. After spending $82,000 on legal fees, he was declared indigent by the court and was represented by a public defender.
In 2011, after a wave of publicity about Drake's plight, the government dropped all ten felony counts against Drake in exchange for his pleading guilty to a misdemeanor of "exceeding the authorized use of a government computer." During the sentencing, the U.S. District Court judge Richard D. Bennett called the government's two-and-a-half-year delay between the search and indictment "unconscionable." "It was one of the most fundamental things in the Bill of Rights that this country was not to be exposed to people knocking on the door with government authority and coming into their homes," he wrote. "And when it happens, it should be resolved pretty quickly."
Judge Bennett didn't overtly accuse the government of using its power to harass a whistle-blower. But he gave Drake the lightest sentence possible—one year of probation, during which he was required to do twenty hours of community service a month, and no fine. He closed the sentencing hearing by addressing Drake: "I wish you the best of luck in the rest of your life."
Prior to Drake's prosecution, Binney, Drake, and Wiebe had tried to reform the agency from within. But as Drake's trial approached, they went public. And after Drake's exoneration, they became full-time critics of the NSA, giving scathing interviews to media outlets and warning of the power of an unchecked agency that has information on everyone.
When I first met Binney, the first thing he said to me was that the amount of data being assembled by the NSA was "orders of magnitude" more than the world's most repressive secret police regimes, the Gestapo, the Stasi, and the KGB.
"It's a real danger when a government assembles that much information about a citizen," he told me. "Gathering that much information gives them power over everybody.”
Week 06: Is Connectivity Dooming or Saving Us?
Response deadline: 9AM Wednesday, Oct. 1
The Shallows
It's been a while since the first-person singular was heard in these pages. This seems like a good time for me, your word-processing scribe, to make a brief reappearance. I realize that I've dragged you through a lot of space and time over the last few chapters, and appreciate your fortitude in sticking with me. The journey you've been on is the same one I took in trying to figure out what's been going on inside my head. The deeper I dug into the science of neuroplasticity and the progress of intellectual technology, the clearer it became that the Internet's import and influence can be judged only when viewed in the fuller context of intellectual history. As revolutionary as it may be, the Net is best understood as the latest in a long series of tools that have helped mold the human mind.
Now comes the crucial question: What can science tell us about the actual effects that Internet use is having on the way our minds work? No doubt, this question will be the subject of a great deal of research in the years ahead. Already, though, there is much we know or can surmise. The news is even more disturbing than I had suspected. Dozens of studies by psychologists, neurobiologists, educators, and Web designers point to the same conclusion: when we go online, we enter an environment that promotes cursory reading, hurried and distracted thinking, and superficial learning. It's possible to think deeply while surfing the Net, just as it's possible to think shallowly while reading a book, but that's not the type of thinking the technology encourages and rewards.
One thing is very clear: if, knowing what we know today about the brain's plasticity, you were to set out to invent a medium that would rewire our mental circuits as quickly and thoroughly as possible, you would probably end up designing something that looks and works a lot like the Internet. It's not just that we tend to use the Net regularly, even obsessively. It's that the Net delivers precisely the kind of sensory and cognitive stimuli—repetitive, intensive, interactive, addictive—that have been shown to result in strong and rapid alterations in brain circuits and functions. With the exception of alphabets and number systems, the Net may well be the single most powerful mind-altering technology that has ever come into general use. At the very least, it's the most powerful that has come along since the book.
During the course of a day, most of us with access to the Web spend at least a couple of hours online, sometimes much more, and during that time we tend to repeat the same or similar actions over and over again, usually at a high rate of speed and often in response to cues delivered through a screen or a speaker. Some of the actions are physical ones. We tap the keys on our PC keyboard. We drag a mouse and click its left and right buttons and spin its scroll wheel. We draw the tips of our fingers across a trackpad. We use our thumbs to punch out text on the real or simulated keypads of our BlackBerrys or mobile phones. We rotate our iPhones, iPods, and iPads to shift between "landscape" and "portrait" modes while manipulating the icons on their touch-sensitive screens.
As we go through these motions, the Net delivers a steady stream of inputs to our visual, somatosensory, and auditory cortices. There are the sensations that come through our hands and fingers as we click and scroll, type and touch. There are the many audio signals delivered through our ears, such as the chime that announces the arrival of a new e-mail or instant message and the various ringtones that our mobile phones use to alert us to different events. And, of course, there are the myriad visual cues that flash across our retinas as we navigate the online world: not just the ever-changing arrays of text and pictures and videos but also the hyperlinks distinguished by underlining or colored text, the cursors that change shape depending on their function, the new e-mail subject lines highlighted in bold type, the virtual buttons that call out to be clicked, the icons and other screen elements that beg to be dragged and dropped, the forms that require filling out, the pop-up ads and windows that need to be read or dismissed. The Net engages all of our senses, except, so far, those of smell and taste, and it engages them simultaneously.
The Net also provides a high-speed system for delivering responses and rewards, "positive reinforcements" in psychological terms, which encourage the repetition of both physical and mental actions. When we click a link, we get something new to look at and evaluate. When we Google a keyword, we receive, in the blink of an eye, a list of interesting information to appraise. When we send a text or an instant message or an e-mail, we often get a reply in a matter of seconds or minutes. When we use Facebook, we attract new friends or form closer bonds with old ones. When we send a tweet through Twitter, we gain new followers. When we write a blog post, we get comments from readers or links from other bloggers. The Net's interactivity gives us powerful new tools for finding information, expressing ourselves, and conversing with others. It also turns us into lab rats constantly pressing levers to get tiny pellets of social or intellectual nourishment.
The Net commands our attention with far greater insistency than our television or radio or morning newspaper ever did. Watch a kid texting his friends or a college student looking over the roll of new messages and requests on her Facebook page or a businessman scrolling through his e-mails on his BlackBerry, or consider yourself as you enter keywords into Google's search box and begin following a trail of links. What you see is a mind consumed with a medium. When we're online, we're often oblivious to everything else going on around us. The real world recedes as we process the flood of symbols and stimuli coming through our devices.
The interactivity of the Net amplifies this effect as well. Because we're often using our computers in a social context, to converse with friends or colleagues, to create "profiles" of ourselves, to broadcast our thoughts through blog posts or Facebook updates, our social standing is, in one way or another, always in play, always at risk. The resulting self-consciousness—even, at times, fear—magnifies the intensity of our involvement with the medium. That's true for everyone, but it's particularly true for the young, who tend to be compulsive in using their phones and computers for texting and instant messaging. Today's teenagers typically send or receive a message every few minutes throughout their waking hours. As the psychotherapist Michael Hausauer notes, teens and other young adults have a "terrific interest in knowing what's going on in the lives of their peers, coupled with a terrific anxiety about being out of the loop." If they stop sending messages, they risk becoming invisible. Our use of the Internet involves many paradoxes, but the one that promises to have the greatest long-term influence over how we think is this one: the Net seizes our attention only to scatter it. We focus intensively on the medium itself, on the flickering screen, but we're distracted by the medium's rapid-fire delivery of competing messages and stimuli. Whenever and wherever we log on, the Net presents us with an incredibly seductive blur. Human beings "want more information, more impressions, and more complexity," writes Torkel Klingberg, the Swedish neuroscientist. We tend to "seek out situations that demand concurrent performance or situations in which [we] are overwhelmed with information." If the slow progression of words across printed pages dampened our craving to be inundated by mental stimulation, the Net indulges it. It returns us to our native state of bottom-up distractedness, while presenting us with far more distractions than our ancestors ever had to contend with.
Not all distractions are bad. As most of us know from experience, if we concentrate too intensively on a tough problem, we can get stuck in a mental rut. Our thinking narrows, and we struggle vainly to come up with new ideas. But if we let the problem sit unattended for a time—if we "sleep on it"—we often return to it with a fresh perspective and a burst of creativity. Research by Ap Dijksterhuis, a Dutch psychologist who heads the Unconscious Lab at Radboud University in Nijmegen, indicates that such breaks in our attention give our unconscious mind time to grapple with a problem, bringing to bear information and cognitive processes unavailable to conscious deliberation. We usually make better decisions, his experiments reveal, if we shift our attention away from a difficult mental challenge for a time. But Dijksterhuis's work also shows that our unconscious thought processes don't engage with a problem until we've clearly and consciously defined the problem. If we don't have a particular intellectual goal in mind, Dijksterhuis writes, "unconscious thought does not occur."
The constant distractedness that the Net encourages—the state of being, to borrow another phrase from Eliot's Four Quartets, "distracted from distraction by distraction"—is very different from the kind of temporary, purposeful diversion of our mind that refreshes our thinking when we're weighing a decision. The Net's cacophony of stimuli short-circuits both conscious and unconscious thought, preventing our minds from thinking either deeply or creatively. Our brains turn into simple signal-processing units, quickly shepherding information into consciousness and then back out again.
In a 2005 interview, Michael Merzenich ruminated on the Internet's power to cause not just modest alterations but fundamental changes in our mental makeup. Noting that "our brain is modified on a substantial scale, physically and functionally, each time we learn a new skill or develop a new ability," he described the Net as the latest in a series of "modern cultural specializations" that "contemporary humans can spend millions of 'practice' events at [and that] the average human a thousand years ago had absolutely no exposure to." He concluded that "our brains are massively remodeled by this exposure." He returned to this theme in a post on his blog in 2008, resorting to capital letters to emphasize his points: "When culture drives changes in the ways that we engage our brains, it creates DIFFERENT brains," he wrote, noting that our minds "strengthen specific heavily-exercised processes." While acknowledging that it's now hard to imagine living without the Internet and online tools like the Google search engine, he stressed that "THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES."
What we're not doing when we're online also has neurological consequences. Just as neurons that fire together wire together, neurons that don't fire together don't wire together. As the time we spend scanning Web pages crowds out the time we spend reading books, as the time we spend exchanging bite-sized text messages crowds out the time we spend composing sentences and paragraphs, as the time we spend hopping across links crowds out the time we devote to quiet reflection and contemplation, the circuits that support those old intellectual functions and pursuits weaken and begin to break apart. The brain recycles the disused neurons and synapses for other, more pressing work. We gain new skills and perspectives but lose old ones.
Gary Small, a professor of psychiatry at UCLA and the director of its Memory and Aging Center, has been studying the physiological and neurological effects of the use of digital media, and what he's discovered backs up Merzenich's belief that the Net causes extensive brain changes. "The current explosion of digital technology not only is changing the way we live and communicate but is rapidly and profoundly altering our brains," he says. The daily use of computers, smartphones, search engines, and other such tools "stimulates brain cell alteration and neurotransmitter release, gradually strengthening new neural pathways in our brains while weakening old ones.”
In 2008, Small and two of his colleagues carried out the first experiment that actually showed people's brains changing in response to Internet use. The researchers recruited twenty-four volunteers—a dozen experienced Web surfers and a dozen novices—and scanned their brains as they performed searches on Google. (Since a computer won't fit inside a magnetic resonance imager, the subjects were equipped with goggles onto which were projected images of Web pages, along with a small handheld touchpad to navigate the pages.) The scans revealed that the brain activity of the experienced Googlers was much broader than that of the novices. In particular, "the computer-savvy subjects used a specific network in the left front part of the brain, known as the dorsolateral prefrontal cortex, [while] the Internet-naive subjects showed minimal, if any, activity in this area." As a control for the test, the researchers also had the subjects read straight text in a simulation of book reading; in this case, scans revealed no significant difference in brain activity between the two groups. Clearly, the experienced Net users' distinctive neural pathways had developed through their Internet use.
The most remarkable part of the experiment came when the tests were repeated six days later. In the interim, the researchers had the novices spend an hour a day online, searching the Net. The new scans revealed that the area in their prefrontal cortex that had been largely dormant now showed extensive activity—just like the activity in the brains of the veteran surfers. "After just five days of practice, the exact same neural circuitry in the front part of the brain became active in the Internet-naive subjects," reports Small. "Five hours on the Internet, and the naive subjects had already rewired their brains." He goes on to ask, "If our brains are so sensitive to just an hour a day of computer exposure, what happens when we spend more time [online]?”
One other finding of the study sheds light on the differences between reading Web pages and reading books. The researchers found that when people search the Net they exhibit a very different pattern of brain activity than they do when they read book-like text. Book readers have a lot of activity in regions associated with language, memory, and visual processing, but they don't display much activity in the prefrontal regions associated with decision making and problem solving. Experienced Net users, by contrast, display extensive activity across all those brain regions when they scan and search Web pages. The good news here is that Web surfing, because it engages so many brain functions, may help keep older people's minds sharp. Searching and browsing seem to "exercise" the brain in a way similar to solving crossword puzzles, says Small.
But the extensive activity in the brains of surfers also points to why deep reading and other acts of sustained concentration become so difficult online. The need to evaluate links and make related navigational choices, while also processing a multiplicity of fleeting sensory stimuli, requires constant mental coordination and decision making, distracting the brain from the work of interpreting text or other information. Whenever we, as readers, come upon a link, we have to pause, for at least a split second, to allow our prefrontal cortex to evaluate whether or not we should click on it. The redirection of our mental resources, from reading words to making judgments, may be imperceptible to us—our brains are quick—but it's been shown to impede comprehension and retention, particularly when it's repeated frequently. As the executive functions of the prefrontal cortex kick in, our brains become not only exercised but overtaxed. In a very real way, the Web returns us to the time of scriptura continua, when reading was a cognitively strenuous act. In reading online, Maryanne Wolf says, we sacrifice the facility that makes deep reading possible. We revert to being "mere decoders of information.”
Our ability to make the rich mental connections that form when we read deeply and without distraction remains largely disengaged.
Steven Johnson, in his 2005 book Everything Bad Is Good for You, contrasted the widespread, teeming neural activity seen in the brains of computer users with the much more muted activity evident in the brains of book readers. The comparison led him to suggest that computer use provides more intense mental stimulation than does book reading. The neural evidence could even, he wrote, lead a person to conclude that "reading books chronically understimulates the senses." But while Johnson's diagnosis is correct, his interpretation of the differing patterns of brain activity is misleading. It is the very fact that book reading "understimulates the senses" that makes the activity so intellectually rewarding. By allowing us to filter out distractions, to quiet the problem-solving functions of the frontal lobes, deep reading becomes a form of deep thinking. The mind of the experienced book reader is a calm mind, not a buzzing one. When it comes to the firing of our neurons, it's a mistake to assume that more is better.
John Sweller, an Australian educational psychologist, has spent three decades studying how our minds process information and, in particular, how we learn. His work illuminates how the Net and other media influence the style and the depth of our thinking. Our brains, he explains, incorporate two very different kinds of memory: short-term and long-term. We hold our immediate impressions, sensations, and thoughts as short-term memories, which tend to last only a matter of seconds. All the things we've learned about the world, whether consciously or unconsciously, are stored as long-term memories, which can remain in our brains for a few days, a few years, or even a lifetime. One particular type of short-term memory, called working memory, plays an instrumental role in the transfer of information into long-term memory and hence in the creation of our personal store of knowledge. Working memory forms, in a very real sense, the contents of our consciousness at any given moment. "We are conscious of what is in working memory and not conscious of anything else," says Sweller.
If working memory is the mind's scratch pad, then long-term memory is its filing system. The contents of our long-term memory lie mainly outside of our consciousness. In order for us to think about something we've previously learned or experienced, our brain has to transfer the memory from long-term memory back into working memory. "We are only aware that something was stored in long-term memory when it is brought down into working memory," explains Sweller. It was once assumed that long-term memory served merely as a big warehouse of facts, impressions, and events, that it "played little part in complex cognitive processes such as thinking and problem-solving." But brain scientists have come to realize that long-term memory is actually the seat of understanding. It stores not just facts but complex concepts, or "schemas." By organizing scattered bits of information into patterns of knowledge, schemas give depth and richness to our thinking. "Our intellectual prowess is derived largely from the schemas we have acquired over long periods of time," says Sweller. "We are able to understand concepts in our areas of expertise because we have schemas associated with those concepts.”
The depth of our intelligence hinges on our ability to transfer information from working memory to long-term memory and weave it into conceptual schemas. But the passage from working memory to long-term memory also forms the major bottleneck in our brain. Unlike long-term memory, which has a vast capacity, working memory is able to hold only a very small amount of information. In a renowned 1956 paper, "The Magical Number Seven, Plus or Minus Two," Princeton psychologist George Miller observed that working memory could typically hold just seven pieces, or "elements," of information. Even that is now considered an overstatement. According to Sweller, current evidence suggests that "we can process no more than about two to four elements at any given time with the actual number probably being at the lower [rather] than the higher end of this scale." Those elements that we are able to hold in working memory will, moreover, quickly vanish "unless we are able to refresh them by rehearsal.”
Imagine filling a bathtub with a thimble; that's the challenge involved in transferring information from working memory into long-term memory. By regulating the velocity and intensity of information flow, media exert a strong influence on this process. When we read a book, the information faucet provides a steady drip, which we can control by the pace of our reading. Through our single-minded concentration on the text, we can transfer all or most of the information, thimbleful by thimbleful, into long-term memory and forge the rich associations essential to the creation of schemas. With the Net, we face many information faucets, all going full blast. Our little thimble overflows as we rush from one faucet to the next. We're able to transfer only a small portion of the information to long-term memory, and what we do transfer is a jumble of drops from different faucets, not a continuous, coherent stream from one source.
The information flowing into our working memory at any given moment is called our "cognitive load." When the load exceeds our mind's ability to store and process the information—when the water overflows the thimble—we're unable to retain the information or to draw connections with the information already stored in our long-term memory. We can't translate the new information into schemas. Our ability to learn suffers, and our understanding remains shallow. Because our ability to maintain our attention also depends on our working memory ("we have to remember what it is we are to concentrate on," as Torkel Klingberg says), a high cognitive load amplifies the distractedness we experience. When our brain is overtaxed, we find "distractions more distracting." (Some studies link attention deficit disorder, or ADD, to the overloading of working memory.) Experiments indicate that as we reach the limits of our working memory, it becomes harder to distinguish relevant information from irrelevant information, signal from noise. We become mindless consumers of data.
Difficulties in developing an understanding of a subject or a concept appear to be "heavily determined by working memory load," writes Sweller, and the more complex the material we're trying to learn, the greater the penalty exacted by an overloaded mind. There are many possible sources of cognitive overload, but two of the most important, according to Sweller, are "extraneous problem-solving" and "divided attention." Those also happen to be two of the central features of the Net as an informational medium. Using the Net may, as Gary Small suggests, exercise the brain the way solving crossword puzzles does. But such intensive exercise, when it becomes our primary mode of thought, can impede deep learning and thinking. Try reading a book while doing a crossword puzzle; that's the intellectual environment of the Internet.
Back in the 1980s, when schools began investing heavily in computers, there was much enthusiasm about the apparent advantages of digital documents over paper ones. Many educators were convinced that introducing hyperlinks into text displayed on computer screens would be a boon to learning. Hypertext would, they argued, strengthen students' critical thinking by enabling them to switch easily between different viewpoints. Freed from the lockstep reading demanded by printed pages, readers would make all sorts of new intellectual connections among diverse texts. The academic enthusiasm for hypertext was further kindled by the belief, in line with the fashionable postmodern theories of the day, that hypertext would overthrow the patriarchal authority of the author and shift power to the reader. It would be a technology of liberation. Hypertext, wrote the literary theorists George Landow and Paul Delany, can "provide a revelation" by freeing readers from the "stubborn materiality" of printed text. By "moving away from the constrictions of page-bound technology," it "provides a better model for the mind's ability to reorder the elements of experience by changing the links of association or determination between them.”
By the end of the decade, the enthusiasm had begun to subside. Research was painting a fuller, and very different, picture of the cognitive effects of hypertext. Evaluating links and navigating a path through them, it turned out, involves mentally demanding problem-solving tasks that are extraneous to the act of reading itself. Deciphering hypertext substantially increases readers' cognitive load and hence weakens their ability to comprehend and retain what they're reading. A 1989 study showed that readers of hypertext often ended up clicking distractedly "through pages instead of reading them carefully." A 1990 experiment revealed that hypertext readers often "could not remember what they had and had not read." In another study that same year, researchers had two groups of people answer a series of questions by searching through a set of documents. One group searched through electronic hypertext documents, while the other searched through traditional paper documents. The group that used the paper documents outperformed the hypertext group in completing the assignment. In reviewing the results of these and other experiments, the editors of a 1996 book on hypertext and cognition wrote that, since hypertext "imposes a higher cognitive load on the reader," it's no surprise "that empirical comparisons between paper presentation (a familiar situation) and hypertext (a new, cognitively demanding situation) do not always favor hypertext." But they predicted that, as readers gained greater "hypertext literacy," the cognition problems would likely diminish.
That hasn't happened. Even though the World Wide Web has made hypertext commonplace, indeed ubiquitous, research continues to show that people who read linear text comprehend more, remember more, and learn more than those who read text peppered with links. In a 2001 study, two Canadian scholars asked seventy people to read "The Demon Lover," a short story by the modernist writer Elizabeth Bowen. One group read the story in a traditional linear-text format; a second group read a version with links, as you'd find on a Web page. The hypertext readers took longer to read the story, yet in subsequent interviews they also reported more confusion and uncertainty about what they had read. Three-quarters of them said that they had difficulty following the text, while only one in ten of the linear-text readers reported such problems. One hypertext reader complained, "The story was very jumpy. I don't know if that was caused by the hypertext, but I made choices and all of a sudden it wasn't flowing properly, it just kind of jumped to a new idea I didn't really follow.”
A second test by the same researchers, using a shorter and more simply written story, Sean O'Faolain's "The Trout," produced the same results. Hypertext readers again reported greater confusion following the text, and their comments about the story's plot and imagery were less detailed and less precise than those of the linear-text readers. With hypertext, the researchers concluded, "the absorbed and personal mode of reading seems to be discouraged." The readers' attention "was directed toward the machinery of the hypertext and its functions rather than to the experience offered by the story." The medium used to present the words obscured the meaning of the words.
In another experiment, researchers had people sit at computers and review two online articles describing opposing theories of learning. One article laid out an argument that "knowledge is objective"; the other made the case that "knowledge is relative." Each article was set up in the same way, with similar headings, and each had links to the other article, allowing a reader to jump quickly between the two to compare the theories. The researchers hypothesized that people who used the links would gain a richer understanding of the two theories and their differences than would people who read the pages sequentially, completing one before going on to the other. They were wrong. The test subjects who read the pages linearly actually scored considerably higher on a subsequent comprehension test than those who clicked back and forth between the pages. The links got in the way of learning, the researchers concluded.
Another researcher, Erping Zhu, conducted a different kind of experiment that was also aimed at discerning the influence of hypertext on comprehension. She had groups of people read the same piece of online writing, but she varied the number of links included in the passage. She then tested the readers' comprehension by asking them to write a summary of what they had read and complete a multiple-choice test. She found that comprehension declined as the number of links increased. Readers were forced to devote more and more of their attention and brain power to evaluating the links and deciding whether to click on them. That left less attention and fewer cognitive resources to devote to understanding what they were reading. The experiment suggested a strong correlation "between the number of links and disorientation or cognitive overload," wrote Zhu. "Reading and comprehension require establishing relationships between concepts, drawing inferences, activating prior knowledge, and synthesizing main ideas. Disorientation or cognitive overload may thus interfere with cognitive activities of reading and comprehension."
In 2005, Diana DeStefano and Jo-Anne LeFevre, psychologists with the Centre for Applied Cognitive Research at Canada's Carleton University, undertook a comprehensive review of thirty-eight past experiments involving the reading of hypertext. Although not all the studies showed that hypertext diminished comprehension, they found "very little support" for the once-popular theory "that hypertext will lead to an enriched experience of the text." To the contrary, the preponderance of evidence indicated that "the increased demands of decision-making and visual processing in hypertext impaired reading performance," particularly when compared to "traditional linear presentation." They concluded that "many features of hypertext resulted in increased cognitive load and thus may have required working memory capacity that exceeded readers' capabilities."
Cognitive Surplus
In the 1720s, London was busy getting drunk. Really drunk. The city was in the grip of a gin-drinking binge, largely driven by new arrivals from the countryside in search of work. Gin's characteristics made it attractive: fermented from grain that could be bought locally, packing a kick greater than beer's, and considerably less expensive than imported wine, it became a kind of anesthetic for the burgeoning population enduring the profound new stresses of urban life. These stresses generated new behaviors, including what came to be called the Gin Craze.
Gin pushcarts plied the streets of London; if you couldn't afford a whole glass, you could buy a gin-soaked rag, and flophouses did brisk business renting straw pallets by the hour if you needed to sleep off the effects. It was a kind of social lubricant for people suddenly tipped into an unfamiliar and often unforgiving life, keeping them from completely falling apart. Gin offered its consumers the ability to fall apart a little bit at a time. It was a collective bender at civic scale.
The Gin Craze was a real event: gin consumption rose dramatically in the early 1700s, even as consumption of beer and wine remained flat. It was also a change in perception. England's wealthy and titled were increasingly alarmed by what they saw in the streets of London. The population was growing at a historically unprecedented rate, with predictable effects on living conditions and public health, and crime of all sorts was on the rise. Especially upsetting was that the women of London had taken to drinking gin, often gathering in mixed-sex gin halls, proof positive of its corrosive effects on social norms.
It isn't hard to figure out why people were drinking gin. It is palatable and intoxicating, a winning combination, especially when a chaotic world can make sobriety seem overrated. Gin drinking provided a coping mechanism for people suddenly thrown together in the early decades of the industrial age, making it an urban phenomenon, especially concentrated in London. London was the site of the biggest influx of population as a result of industrialization. From the mid-1600s to the mid-1700s, the population of London grew two and a half times as fast as the overall population of England; by 1750, one English citizen in ten lived there, up from one in twenty-five a century earlier.
Industrialization didn't just create new ways of working, it created new ways of living, because the relocation of the population destroyed ancient habits common to country living, while drawing so many people together that the new density of the population broke the older urban models as well. In an attempt to restore London's preindustrial norms, Parliament seized on gin. Starting in the late 1720s, and continuing over the next three decades, it passed law after law prohibiting various aspects of gin's production, consumption, or sale. This strategy was ineffective, to put it mildly. The result was instead a thirty-year cat-and-mouse game of legislation to prevent gin consumption, followed by the rapid invention of ways to defeat those laws. Parliament outlawed "flavored spirits"; so distillers stopped adding juniper berries to the liquor. Selling gin was made illegal; women sold from bottles hidden beneath their skirts, and some entrepreneurial types created the “puss and mew,” a cabinet set on the streets where a customer could approach and, if they knew the password, hand their money to the vendor hidden inside and receive a dram of gin in return.
What made the craze subside wasn't any set of laws. Gin consumption was treated as the problem to be solved, when in fact it was a reaction to the real problem—dramatic social change and the inability of older civic models to adapt. What helped the Gin Craze subside was the restructuring of society around the new urban realities created by London's incredible social density, a restructuring that turned London into what we'd recognize as a modern city, one of the first. Many of the institutions we mean when we talk about "the industrialized world" actually arose in response to the social climate created by industrialization, rather than to industrialization itself. Mutual aid societies provided shared management of risk outside the traditional ties of kin and church. The spread of coffeehouses and later restaurants was spurred by concentrated populations. Political parties began to recruit the urban poor and to field candidates more responsive to them. These changes came about only when civic density stopped being treated as a crisis and started being treated as a simple fact, even an opportunity. Gin consumption, driven upward in part by people anesthetizing themselves against the horrors of city life, started falling, in part because the new social structures mitigated these horrors. The increase in both population and aggregate wealth made it possible to invent new kinds of institutions; instead of madding crowds, the architects of the new society saw a civic surplus, created as a side effect of industrialization.
And what of us? What of our historical generation? That section of the global population we still sometimes refer to as "the industrialized world" has actually been transitioning to a post-industrial form for some time. The postwar trends of emptying rural populations, urban growth, and increased suburban density, accompanied by rising educational attainment across almost all demographic groups, have marked a huge increase in the number of people paid to think or talk, rather than to produce or transport objects. During this transition, what has been our gin, the critical lubricant that eased our transition from one kind of society to another?
The sitcom. Watching sitcoms—and soap operas, dramas, and the host of other amusements offered by TV—has absorbed the lion's share of the free time available to the citizens of the developed world.
Since the Second World War, increases in GDP, educational attainment, and life span have forced the industrialized world to grapple with something we'd never had to deal with on a national scale: free time. The amount of unstructured time cumulatively available to the educated population ballooned, both because the educated population itself ballooned, and because that population was living longer while working less. (Segments of the population experienced an upsurge of education and free time before the 1940s, but they tended to be in urban enclaves, and the Great Depression reversed many of the existing trends for both schooling and time off from work.) This change was accompanied by a weakening of traditional uses of that free time as a result of suburbanization (moving out of cities and living far from neighbors) and of periodic relocation as people moved for jobs. The cumulative free time in the postwar United States began to add up to billions of collective hours per year, even as picnics and bowling leagues faded into the past. So what did we do with all that time? Mostly, we watched TV.
We watched I Love Lucy. We watched Gilligan's Island. We watched Malcolm in the Middle. We watched Desperate Housewives. We had so much free time to burn and so few other appealing ways to burn it that every citizen in the developed world took to watching television as if it were a duty. TV quickly took up the largest chunk of our free time: an average of over twenty hours a week, worldwide. In the history of media, only radio has been as omnipresent, and much radio listening accompanies other activities, like work or travel. For most people most of the time, watching TV is the activity. (Because TV goes in through the eyes as well as the ears, it immobilizes even moderately attentive users, freezing them on chairs and couches, as a prerequisite for consumption.)
The sitcom has been our gin, an infinitely expandable response to the crisis of social transformation, and as with drinking gin, it isn't hard to explain why people watch individual television programs—some of them are quite good. What's hard to explain is how, in the space of a generation, watching television became a part-time job for every citizen in the developed world. Toxicologists like to say, "The dose makes the poison"; both alcohol and caffeine are fine in moderation but fatal in excess. Similarly, the question of TV isn't about the content of individual shows but about their volume: the effect on individuals, and on the culture as a whole, comes from the dose. We didn't just watch good TV or bad TV; we watched everything: sitcoms, soap operas, infomercials, the Home Shopping Network. The decision to watch TV often preceded any concern about what might be on at any given moment. It isn't what we watch, but how much of it, hour after hour, day after day, year in and year out, over our lifetimes. Someone born in 1960 has watched something like fifty thousand hours of TV already, and may watch another thirty thousand hours before she dies.
This isn't just an American phenomenon. Since the 1950s, any country with rising GDP has invariably seen a reordering of human affairs; in the whole of the developed world, the three most common activities are now work, sleep, and watching TV. All this is despite considerable evidence that watching that much television is an actual source of unhappiness. In an evocatively titled 2007 study from the Journal of Economic Psychology—"Does Watching TV Make Us Happy?"—the behavioral economists Bruno Frey, Christine Benesch, and Alois Stutzer conclude that not only do unhappy people watch considerably more TV than happy people, but TV watching also pushes aside other activities that are less immediately engaging but can produce longer-term satisfaction. Spending many hours watching TV, on the other hand, is linked to higher material aspirations and to raised anxiety.
The thought that watching all that TV may not be good for us has hardly been unspoken. For the last half century, media critics have been wringing their hands until their palms chafed over the effects of television on society, from Newton Minow's famous description of TV as a "vast wasteland" to epithets like "idiot box" and "boob tube" to Roald Dahl's wicked characterization of the television-obsessed Mike Teavee in Charlie and the Chocolate Factory. Despite their vitriol, these complaints have been utterly ineffective: in every year of the last fifty, television watching per capita has grown. We've known about the effects of TV on happiness, first anecdotally and later through psychological research, for decades, but that hasn't curtailed its growth as the dominant use of our free time. Why?
For the same reason that the disapproval of Parliament didn't reduce the consumption of gin: the dramatic growth in TV viewing wasn't the problem, it was the reaction to the problem. Humans are social creatures, but the explosion of our surplus of free time coincided with a steady reduction in social capital, our stock of relationships with people we trust and rely on. One clue about the astonishing rise of TV-watching time comes from its displacement of other activities, especially social activities. As Jib Fowles notes in Why Viewers Watch, "Television viewing has come to displace principally (a) other diversions, (b) socializing, and (c) sleep." One source of television's negative effects has been the reduction in the amount of human contact, an idea called the social surrogacy hypothesis.
Social surrogacy has two parts. Fowles expresses the first—we have historically watched so much TV that it displaces all other uses of free time, including time with friends and family. The other is that the people we see on television constitute a set of imaginary friends. The psychologists Jaye Derrick and Shira Gabriel of the University at Buffalo and Kurt Hugenberg of Miami University of Ohio concluded that people turn to favored programs when they are feeling lonely, and that they feel less lonely when they are viewing those programs. This shift helps explain how TV became our most embraced optional activity, even at a dose that both correlates with and can cause unhappiness: whatever its disadvantages, it's better than feeling like you're alone, even if you actually are. Because watching TV is something you can do alone, while it assuages the feelings of loneliness, it had the right characteristics to become popular as society spread out from dense cities and tightly knit rural communities to the relative disconnection of commuter work patterns and frequent relocations. Once a home has a TV, there is no added cost to watching an additional hour.
Watching TV thus creates something of a treadmill. As Luigino Bruni and Luca Stanca note in "Watching Alone," a recent paper in the Journal of Economic Behavior and Organization, television viewing plays a key role in crowding-out social activities with solitary ones. Marco Gui and Luca Stanca take on the same phenomenon in their 2009 working paper "Television Viewing, Satisfaction and Happiness": “Television can play a significant role in raising people's materialism and material aspirations, thus leading individuals to underestimate the relative importance of interpersonal relations for their life satisfaction and, as a consequence, to overinvest in income-producing activities and under-invest in relational activities." Translated from the dry language of economics, underinvesting in relational activities means spending less time with friends and family, precisely because watching a lot of TV leads us to shift more energy to material satisfaction and less to social satisfaction.
Our cumulative decision to commit the largest chunk of our free time to consuming a single medium really hit home for me in 2008, after the publication of Here Comes Everybody, a book I'd written about social media. A TV producer who was trying to decide whether I should come on her show to discuss the book asked, “What interesting uses of social media are you seeing now?”
I told her about Wikipedia, the collaboratively created encyclopedia, and about the Wikipedia article on Pluto. Back in 2006, Pluto was getting kicked out of the planet club: astronomers had concluded that it wasn't enough like the other planets to make the cut, so they proposed redefining planet in such a way as to exclude it. As a result, Wikipedia's Pluto page saw a sudden spike in activity. People furiously edited the article to take account of the proposed change in Pluto's status, and the most committed group of editors disagreed with one another about how best to characterize the change. During this conversation, they updated the article, contesting sections, sentences, and even word choice throughout, transforming the essence of the article from "Pluto is the ninth planet" to "Pluto is an odd-shaped rock with an odd-shaped orbit at the edge of the solar system."
I assumed that the producer and I would jump into a conversation about social construction of knowledge, the nature of authority, or any of the other topics that Wikipedia often generates. She didn't ask any of those questions, though. Instead, she sighed and said, "Where do people find the time?" Hearing this, I snapped, and said, "No one who works in TV gets to ask that question. You know where the time comes from." She knew, because she worked in the industry that had been burning off the lion's share of our free time for the last fifty years.
Imagine treating the free time of the world's educated citizenry as an aggregate, a kind of cognitive surplus. How big would that surplus be? To figure it out, we need a unit of measurement, so let's start with Wikipedia. Suppose we consider the total amount of time people have spent on it as a kind of unit: every edit made to every article, and every argument about those edits, for every language that Wikipedia exists in. That would represent something like one hundred million hours of human thought, back when I was talking to the TV producer. (Martin Wattenberg, an IBM researcher who has spent time studying Wikipedia, helped me arrive at that figure. It's a back-of-the-envelope calculation, but it's the right order of magnitude.) One hundred million hours of cumulative thought is obviously a lot. How much is it, though, compared to the amount of time we spend watching television?
Americans watch roughly two hundred billion hours of TV every year. That represents about two thousand Wikipedia projects' worth of free time annually. Even tiny subsets of this time are enormous: we spend roughly a hundred million hours every weekend just watching commercials. This is a pretty big surplus. People who ask, "Where do they find the time?" about those who work on Wikipedia don't understand how tiny that entire project is, relative to the aggregate free time we all possess. One thing that makes the current age remarkable is that we can now treat free time as a general social asset that can be harnessed for large, communally created projects, rather than as a set of individual minutes to be whiled away one person at a time.
Society never really knows what to do with any surplus at first. (That's what makes it a surplus.) For most of the time when we've had a truly large-scale surplus in free time—billions and then trillions of hours a year—we've spent it consuming television, because we judged that use of time to be better than the available alternatives. Sure, we could have played outdoors or read books or made music with our friends, but we mostly didn't, because the thresholds to those activities were too high, compared to just sitting and watching. Life in the developed world includes a lot of passive participation: at work we're office drones, at home we're couch potatoes. The pattern is easy enough to explain by assuming we've wanted to be passive participants more than we wanted other things. This story has been, in the last several decades, pretty plausible; a lot of evidence certainly supported this view, and not a lot contradicted it.
But now, for the first time in the history of television, some cohorts of young people are watching TV less than their elders. Several population studies (of high school students, broadband users, YouTube users) have noticed the change, and their basic observation is always the same: young populations with access to fast, interactive media are shifting their behavior away from media that presupposes pure consumption. Even when they watch video online, seemingly a pure analog of TV watching, they have opportunities to comment on the material — to share it with their friends, to label, rate, or rank it, and of course, to discuss it with other viewers around the world. As Dan Hill noted in a much-cited online essay, "Why Lost Is Genuinely New Media," the viewers of that show weren't just viewers—they collaboratively created a compendium of material related to that show called (what else?) Lostpedia. Even when they are engaged in watching TV, in other words, many members of the networked population are engaged with one another, and this engagement correlates with behaviors other than passive consumption.
The choices leading to reduced TV consumption are at once tiny and enormous. The tiny choices are individual; someone simply decides to spend the next hour talking to friends or playing a game or creating something instead of just watching. The enormous choices are collective ones, an accumulation of those tiny choices by the millions; the cumulative shift toward participation across a whole population enables the creation of a Wikipedia. The television industry has been shocked to see alternative uses of free time, especially among young people, because the idea that watching TV was the best use of free time, as ratified by the viewers, has been such a stable feature of society for so long. (Charlie Leadbeater, the U.K. scholar of collaborative work, reports that a TV executive recently told him that participatory behavior among the young will go away when they grow up, because work will so exhaust them that they won't be able to do anything with their free time but "slump in front of the TV.") Believing that the past stability of this behavior meant it would be a stable behavior in the future as well turned out to be a mistake—and not just any mistake, but a particular kind of mistake.
MILKSHAKE MISTAKES
When McDonald's wanted to improve sales of its milkshakes, it hired researchers to figure out what characteristics its customers cared about. Should the shakes be thicker? Sweeter? Colder? Almost all of the researchers focused on the product. But one of them, Gerald Berstell, chose to ignore the shakes themselves and study the customers instead. He sat in a McDonald's for eighteen hours one day, observing who bought milkshakes and at what time. One surprising discovery was that many milkshakes were purchased early in the day, which was odd, as consuming a shake at eight A.M. plainly doesn't fit the bacon-and-eggs model of breakfast. Berstell also garnered three other behavioral clues from the morning milkshake crowd: the buyers were always alone, they rarely bought anything besides a shake, and they never consumed the shakes in the store. The breakfast-shake drinkers were clearly commuters, intending to drink them while driving to work. This behavior was readily apparent, but the other researchers had missed it because it didn't fit the normal way of thinking about either milkshakes or breakfast. As Berstell and his colleagues noted in "Finding the Right Job for Your Product," their essay in the Harvard Business Review, the key to understanding what was going on was to stop viewing the product in isolation and to give up traditional notions of the morning meal. Berstell instead focused on a single, simple question: "What job is a customer hiring that milkshake to do at eight A.M.?"
If you want to eat while you are driving, you need something you can eat with one hand. It shouldn't be too hot, too messy, or too greasy. It should also be moderately tasty, and take a while to finish. Not one conventional breakfast item fits that bill, and so without regard for the sacred traditions of the morning meal, those customers were hiring the milkshake to do the job they needed done.
All the researchers except Berstell missed this fact, because they made two kinds of mistakes, things we might call "milkshake mistakes." The first was to concentrate mainly on the product and assume that everything important about it was somehow implicit in its attributes, without regard to what role the customers wanted it to play—the job they were hiring the milkshake for. The second mistake was to adopt a narrow view of the type of food people have always eaten in the morning, as if all habits were deeply rooted traditions instead of accumulated accidents. Neither the shake itself nor the history of breakfast mattered as much as customers needing food to do a nontraditional job—serve as sustenance and amusement for their morning commute—for which they hired the milkshake.
We have the same problems thinking about media. When we talk about the effects of the web or text messages, it's easy to make a milkshake mistake and focus on the tools themselves. (I speak from personal experience—much of the work I did in the 1990s focused obsessively on the capabilities of computers and the Internet, with too little regard for the way human desires shaped them.) The social uses of our new media tools have been a big surprise, in part because the possibility of these uses wasn't implicit in the tools themselves. A whole generation had grown up with personal technology, from the portable radio through the PC, so it was natural to expect them to put the new media tools to personal use as well. But the use of a social technology is much less determined by the tool itself; when we use a network, the most important asset we get is access to one another. We want to be connected to one another, a desire that the social surrogate of television deflects, but one that our use of social media actually engages.
It's also easy to assume that the world as it currently exists represents some sort of ideal expression of society, and that all deviations from this sacred tradition are both shocking and bad. Although the internet is already forty years old, and the web half that age, some people are still astonished that individual members of society, previously happy to spend most of their free time consuming, would start voluntarily making and sharing things. This making-and-sharing is certainly a surprise compared to the previous behavior. But pure consumption of media was never a sacred tradition; it was just a set of accumulated accidents, accidents that are being undone as people start hiring new communications tools to do jobs older media simply can't do.
To pick one example: a service called Ushahidi was developed to help citizens track outbreaks of ethnic violence in Kenya. In December 2007 a disputed election pitted supporters and opponents of President Mwai Kibaki against one another. Ory Okolloh, a Kenyan political activist, blogged about the violence when the Kenyan government banned the mainstream media from reporting on it. She then asked her readers to e-mail her or post comments on her blog about the violence they were witnessing. The method proved so popular that her blog, Kenyan Pundit, became a critical source of first-person reporting. The observations kept flooding in, and within a couple of days Okolloh could no longer keep up with them. She imagined a service, which she dubbed Ushahidi (Swahili for "witness" or "testimony"), that would automatically aggregate citizen reporting (she had been doing it by hand), with the added value of locating the reported attacks on a map in near-real time. She floated the idea on her blog, which attracted the attention of the programmers Erik Hersman and David Kobia. The three of them got on a conference call and hashed out how such a service might work, and within three days, the first version of Ushahidi went live.
People normally find out about the kind of violence that took place after the Kenyan election only if it happens nearby. There is no public source where people can go to locate trouble spots, either to understand what's going on or to offer help. We've typically relied on governments or professional media to inform us about collective violence, but in Kenya in early 2008 the professionals weren't covering it, out of partisan fervor or censorship, and the government had no incentive to report anything.
Ushahidi was developed to aggregate this available but dispersed knowledge, to collectively weave together all the piecemeal awareness among individual witnesses into a nationwide picture. Even if the information the public wanted existed someplace in the government, Ushahidi was animated by the idea that rebuilding it from scratch, with citizen input, was easier than trying to get it from the authorities. The project started as a website, but the Ushahidi developers quickly added the ability to submit information via text message from mobile phones, and that's when the reports really poured in. Several months after Ushahidi launched, Harvard's Kennedy School of Government did an analysis that compared the site's data to that of the mainstream media and concluded that Ushahidi had been better at reporting acts of violence as they started, as opposed to after the fact, better at reporting acts of nonfatal violence, which are often a precursor to deaths, and better at reporting over a wide geographical area, including rural districts.
All of this information was useful—governments the world over act less violently toward their citizens when they are being observed, and Kenyan NGOs used the data to target humanitarian responses. But that was just the beginning. Realizing the site's potential, the founders decided to turn Ushahidi into a platform so that anyone could set up their own service for collecting and mapping information reported via text message. The idea of making it easy to tap various kinds of collective knowledge has spread from the original Kenyan context. Since its debut in early 2008, Ushahidi has been used to track similar acts of violence in the Democratic Republic of Congo, to monitor polling places to prevent voter fraud in India and Mexico, to record supplies of vital medicines in several East African countries, and to locate the injured after the Haitian and Chilean earthquakes.
A handful of people, working with cheap tools and little time or money to spare, managed to carve out enough collective goodwill from the community to create a resource that no one could have imagined even five years ago. Like all good stories, the story of Ushahidi holds several different lessons: People want to do something to make the world a better place. They will help when they are invited to. Access to cheap, flexible tools removes many of the barriers to trying new things. You don't need fancy computers to harness cognitive surplus; simple phones are enough. But one of the most important lessons is this: once you've figured out how to tap the surplus in a way that people care about, others can replicate your technique, over and over around the world.
Ushahidi.com, designed to help a distressed population in a difficult time, is remarkable, but not all new communications tools are so civically engaged; in fact, most aren't. For every remarkable project like Ushahidi or Wikipedia, there are countless pieces of throwaway work, created with little effort, and targeting no positive effect greater than crude humor. The canonical example at present is the lolcat, a cute picture of a cat that is made even cuter by the addition of a cute caption, the ideal effect of "cat plus caption" being to make the viewer laugh out loud (thus putting the lol in lolcat). The largest collection of such images is a website called ICanHasCheezburger.com, named after its inaugural image: a gray cat, mouth open, staring maniacally, bearing the caption "I Can Has Cheezburger?" (Lolcats are notoriously poor spellers.) ICanHasCheezburger.com has more than three thousand lolcat images—"i have bad day," "im steelin som ur foodz k thx bai," "BANDIT CAT JUST ATED UR BURRITOZ"—each of which garners dozens or hundreds of comments, also written in lolspeak. We are far from Ushahidi now.
Let's nominate the process of making a lolcat as the stupidest possible creative act. (There are other candidates, of course, but lolcats will do as a general case.) Formed quickly and with a minimum of craft, the average lolcat image has the social value of a whoopee cushion and the cultural life span of a mayfly. Yet anyone seeing a lolcat gets a second, related message: You can play this game too. Precisely because lolcats are so transparently created, anyone can add a dopey caption to an image of a cute cat (or dog, or hamster, or walrus—Cheezburger is an equal-opportunity time waster) and then share that creation with the world. Lolcat images, dumb as they are, have internally consistent rules, everything from "Captions should be spelled phonetically" to "The lettering should use a sans-serif font." Even at the stipulated depths of stupidity, in other words, there are ways to do a lolcat wrong, which means there are ways to do it right, which means there is some metric of quality, even if limited. However little the world needs the next lolcat, the message You can play this game too is a change from what we're used to in the media landscape. The stupidest possible creative act is still a creative act.
Much of the objection to lolcats focuses on how stupid they are; even a funny lolcat doesn't amount to much. On the spectrum of creative work, the difference between the mediocre and the good is vast. Mediocrity is, however, still on the spectrum; you can move from mediocre to good in increments. The real gap is between doing nothing and doing something, and someone making lolcats has bridged that gap.
As long as the assumed purpose of media is to allow ordinary people to consume professionally created material, the proliferation of amateur-created stuff will seem incomprehensible. What amateurs do is so, well, unprofessional—lolcats as a kind of low-grade substitute for the Cartoon Network. But what if, all this time, providing professional content isn't the only job we've been hiring media to do? What if we've also been hiring it to make us feel connected, engaged, or just less lonely? What if we've always wanted to produce as well as consume, but no one offered us that opportunity? The pleasure in You can play this game too isn't just in the making, it's also in the sharing. The phrase "user-generated content," the current label for creative acts by amateurs, really describes not just personal but also social acts. Lolcats aren't just user-generated, they are user-shared. The sharing, in fact, is what makes the making fun—no one would create a lolcat to keep for themselves.
The atomization of social life in the twentieth century left us so far removed from participatory culture that when it came back, we needed the phrase "participatory culture" to describe it. Before the twentieth century, we didn't really have a phrase for participatory culture; in fact, it would have been something of a tautology. A significant chunk of culture was participatory (local gatherings, events, and performances) because where else could culture come from? The simple act of creating something with others in mind and then sharing it with them represents, at the very least, an echo of that older model of culture, now in technological raiment. Once you accept the idea that we actually like making and sharing things, however dopey in content or poor in execution, and that making one another laugh is a different kind of activity from being made to laugh by people paid to make us laugh, then in some ways the Cartoon Network is a low-grade substitute for lolcats.
Week 07: Reasons for optimism
Response deadline: 9AM Wednesday, Oct. 8
What ‘Death of the Newspaper’ stories leave out
You may have recently learned — from a newspaper, perhaps — that McClatchy, one of the largest newspaper publishers in the country, filed for bankruptcy protection. For the sky-is-falling industry observers, it’s the latest sign that newspapers are joining the horse and buggy in the broom closet of nostalgia and obscurity. There is no room for the printed news, they would tell you. Not in a world where splitting a Grubhub order is a hot date.
To be fair, the naysayers have real evidence they can point to. It’s true that there are fewer newspapers around the country; it’s also true that there are fewer full-time reporter positions. Circulation is declining nationwide, and the double-punch of evaporating pre-print revenue (the money newspapers make when your local supermarket puts their specials in the paper) and falling classified revenue has made the economics of the business tougher.
“Death of the newspaper” narratives focus mainly on two tiers in the business: the mega-chains, which are increasingly owned by investment funds or are publicly traded, and the tiny mom-and-pop outlets, based in small towns, that are slowly disappearing. But such narratives ignore an entire segment of the industry that isn’t just surviving, but thriving: medium-sized, independent and family-owned newspaper chains that remain fully committed to producing newspapers in the communities they serve. We are not going away, we are not selling out, and we believe that journalism’s best days are ahead of us.
There is a morbid joke in this business: Every time we print an obituary, we lose another subscriber. As an entire generation of newspaper readers slowly leave us, they aren’t being replaced. The industry spent the better part of a decade trying to fight that trend, and lost. New readers will consume the news in a different format, on different platforms than their parents or grandparents. But Netflix wasn’t in the business of sending around red envelopes; they were in the entertainment business. Likewise, we are not in the business of making newspapers. We are in the business of making the news.
The basic value proposition still holds: people need to know what’s going on in their communities. They require credible, accurate and objective information that can inform their daily lives, and they’re willing to pay for it.
Two years ago, we pulled the Anchorage Daily News out of bankruptcy. We managed the operation to profitability within 11 months. Today, about one-third of our paid readership is online, and that number is growing faster than our print circulation is declining. For the first time in decades, our paid audience is growing; so is our top-line revenue. We’re adding new lines of business to diversify our revenue, and we’re interacting with and listening to our audience more than ever.
Wick Communications partnered with the Daily News to provide more efficient printing at our Wasilla facility, helping the bottom line for both companies. Wick has a generational commitment to newspapers going back nearly a hundred years. Francis Wick, the company’s CEO, collaborates with and promotes other like-minded, privately held organizations. Wick has invested heavily in smaller communities around the country that are working toward a multichannel reader-engagement experience. These investments are focused on long-term sustainability; they require time and commitment for change.
We’re not alone. In Arkansas, Walter Hussman, the 73-year-old founder of WEHCO Media, is asking readers to trade in their printed newspaper for an iPad; Hussman will then come to your house to show you how to use it. He’s converted a shocking number of readers—about three quarters—and provided the rest of us with an ambitious example of what’s possible.
Our industry is communicating and working together like never before. Nationwide efforts by the American Press Institute, the Knight Foundation, and the Lenfest Institute have helped newsrooms share best practices and promoted experimentation. Though each community and newsroom is different, we often ask ourselves the same questions. Together, we’ve been able to plow new ground by sharing what we’re learning.
Honesty requires us to acknowledge that, in the long term, newspapers will exist online. Along the way, we will slowly ramp down production of the physical paper; with it, we’ll strip out the costs of production and delivery. But that will be a slow and incremental process. In the meantime, we are proving that we can also deliver and monetize the news online. Our newsrooms today might not look like you’d expect: Our contributors include millennial developers as well as seasoned editors, UX specialists and social-media ninjas working alongside reporters with deep roots in their beats. We are building distributed and agile organizations that can rapidly test novel ideas for engagement that are filtered through a century of journalistic traditions and values. And, as the online landscape changes and fractures, we will continue to meet our readers on the platform-du-jour and continue to lean on that basic value proposition. Newspapers once functioned as spotlights in a world of darkness. As we transition to the digital landscape and its overabundance of information, newspapers will remain the bearers of truth—even without the paper itself.
If you hear that hedge funds are strangling the news business and papers are dying every day, remember that there is a third group of us who clearly see a bright future for newspapers. We are investing in that future, learning and growing, and we continue to produce news for people who depend on it. We are moving forward unconstrained by legacy debt or legacy thinking. So when you hear from people that the newspaper business is dead, ask them where they got their information. The answer might be online, or on Facebook, or on their phone—but, chances are, that news was still produced by a newspaper.
By Ryan Binkley and Francis Wick
March 11, 2020
Will A.I. Save the News?
Artificial intelligence could hollow out the media business—but it also has the power to enhance journalism.
I am a forty-five-year-old journalist who, for many years, didn’t read the news. In high school, I knew about events like the O. J. Simpson trial and the Oklahoma City bombing, but not much else. In college, I was friends with geeky economics majors who read The Economist, but I’m pretty sure I never actually turned on CNN or bought a paper at the newsstand. I read novels, and magazines like Wired and Spin. If I went online, it wasn’t to check the front page of the Times but to browse record reviews from College Music Journal. Somehow, during this time, I thought of myself as well informed. I had all sorts of views about the world. Based on what, I now wonder? Chuck Klosterman, in his cultural history “The Nineties,” describes that decade as the last one during which it was both possible and permissible to have absolutely no idea what was going on. So maybe the bar was low.
The 9/11 attacks, which happened during my senior year, were a turning point. Afterward, as a twentysomething, I subscribed to the Times and The Economist and, eventually, The New Yorker and The New York Review of Books. My increasing immersion in the news felt like a transition into adult consciousness. Still, it’s startling to recall how shallow, and how fundamentally optional, my engagement with the news was then. Today, I’m surrounded by the news at seemingly every moment; checking on current events has become almost a default activity, like snacking or daydreaming. I have to take active steps to push the news away. This doesn’t feel right—shouldn’t I want to be informed?—but it’s necessary if I want to be present in my life.
It also doesn’t feel right to complain that the news is bad. There are many crises in the world; many people are suffering in different ways. But studies of news reporting over time have found that it’s been growing steadily more negative for decades. It’s clearly not the case that everything has been getting worse, incrementally, for the past eighty years. Something is happening not in reality but in the news industry. And since our view of the world beyond our direct experience is so dramatically shaped by the news, its growing negativity is consequential. It renders us angry, desperate, panicked, and fractious.
The more closely you look at the profession of journalism, the stranger it seems. According to the Bureau of Labor Statistics, fewer than fifty thousand people were employed as journalists in 2023, which is less than the number of people who deliver for DoorDash in New York City—and this small group is charged with the impossible job of generating, on a daily basis, an authoritative and interesting account of a bewildering world. Journalists serve the public good by uncovering disturbing truths, and this work contributes to the improvement of society, but the more these disturbing truths are uncovered, the worse things seem. Readers bridle at the negativity of news stories, yet they click on scary or upsetting headlines in greater numbers—and so news organizations, even the ones that strive for accuracy and objectivity, have an incentive to alarm their own audiences. (Readers also complain about the politicization of news, but they click on headlines that seem to agree with their political views.) It’s no wonder that people trust journalists less and less. Gone are the days when cable was newfangled, and you could feel informed if you read the front page and watched a half-hour newscast while waiting for “The Tonight Show” to start. But this is also a bright spot when it comes to the news: it can change.
Certainly, change is coming. Artificial intelligence is already disrupting the ways we create, disseminate, and experience the news, on both the demand and the supply sides. A.I. summarizes news so that you can read less of it; it can also be used to produce news content. Today, for instance, Google decides when it will show you an “A.I. overview” that pulls information from news stories, along with links to the source material. On the science-and-tech podcast “Discovery Daily,” a stand-alone news product published by the A.I.-search firm Perplexity, A.I. voices read a computer-generated script.
It’s not so easy to parse the implications of these developments, in part because a lot of news is already a kind of summary. Many broadcasts and columns essentially catch you up on known facts and weave in analysis. Will A.I. news summaries be better? Ideally, columns like these are more surprising, more particular, and more interesting than what an A.I. can provide. Then there are interviews, scoops, and other kinds of highly specific reporting; a reporter might labor for months to unearth new information, only for A.I. to hoover it up and fold it into some bland summary. But if you’re interested in details, you probably won’t be happy with an overview, anyway. From this perspective, the simplest human-generated summaries—sports recaps, weather reports, push alerts, listicles, clickbait, and the like—are most at risk of being replaced by A.I. (Condé Nast, the owner of The New Yorker, has licensed its content to OpenAI, the maker of ChatGPT; it has also joined a lawsuit against Cohere, an A.I. company accused of using copyrighted materials in its products. Cohere denies any wrongdoing.)
And yet there’s a broader sense in which “the news,” as a whole, is vulnerable to summary. There’s inherently a lot of redundancy in reporting, because many outlets cover the same momentous happenings, and seek to do so from multiple angles. (Consider how many broadly similar stories about the Trump Administration’s tariffs have been published in different publications recently.) There’s value in that redundancy, as journalists compete with one another in their search for facts, and news junkies value the subtle differences among competing accounts of the same events. But vast quantities of parallel coverage also enable a reader to ask a service like Perplexity, “What’s happening in the news today?,” and get a pretty well-rounded and specific answer. She can explore subjects of interest, see things from many sides, and ask questions without ever visiting the website of a human-driven news organization.
The continued spread of summarization could make human writers—with their own personalities, experiences, contexts, and insights—more valuable, both as a contrast to and a part of the A.I. ecosystem. (Ask ChatGPT what a widely published writer might think about any given subject—even subjects they haven’t written about—and their writing can seem useful in a new way.) It could also be that, within newsrooms, A.I. will open up new possibilities. “I really believe that the biggest opportunity when it comes to A.I. for journalism, at least in the short term, is investigations and research,” Zach Seward, the editorial director of A.I. initiatives at the Times, told me. “A.I. is actually opening up a whole new category of reporting that we weren’t even able to contemplate taking on previously—I’m talking about investigations that involve tens of thousands of pages of unorganized documents, or hundreds of hours of video, or every federal court filing.” Because reporters would be in the driver’s seat, Seward went on, they could use it to further the “genuine reporting of new information” without compromising “the fundamental obligation of a news organization—to be a reliable source of truth.” (“Our principle is we never want to shift the burden of verification to the reader,” Seward said at a forum on A.I. and journalism this past fall.)
But there’s no getting around the money problem. Even if readers value human journalists and the results they produce, will they still value the news organizations—the behind-the-scenes editors, producers, artists, and businesspeople—on which A.I. depends? It’s quite possible that, as A.I. rises, individual voices will survive while organizations die. In that case, the news could be hollowed out. We could be left with A.I.-summarized wire reports, Substacks, and not much else.
News travels through social media, which is also being affected by A.I. It’s easy to see how text-centric platforms, such as X and Facebook, will be transformed by A.I.-generated posts; as generative video improves, the same will be true for video-based platforms, such as YouTube, TikTok, and Twitch. It may become genuinely difficult to tell the difference between real people and fake ones—which sounds bad. But here, too, the implications are uncertain. A.I.-based content could find an enthusiastic social-media audience.
To understand why, you have to stop and think about what A.I. makes possible. This is a technology that separates form from content. A large language model can soak up information in one form, grasp its meaning to a great extent, and then pour the same information into a different mold. In the past, only a human being could take ideas from an article, a book, or a lecture, and explain them to another human being, often through the analog process we call “conversation.” But this can now be automated. It’s as though information has been liquefied so that it can more easily flow. (Errors can creep in during this process, unfortunately.)
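To make the idea of separating form from content a bit more concrete, here is a minimal sketch, in Python, of how a developer might ask a large language model to pour an article into a different mold, in this case a short spoken-style briefing. It uses the OpenAI chat API; the model name, prompts, and file name are illustrative assumptions rather than a description of any news organization's actual pipeline, and, as noted above, errors can creep in, so the output would still need human verification.

```python
# A minimal sketch (not any outlet's real pipeline): use a large language
# model to re-present the same article in a different shape. The model name
# and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def reformulate(article_text: str, target_form: str) -> str:
    """Pour the information in `article_text` into a new mold (`target_form`),
    e.g. 'a 150-word spoken briefing' or 'five plain-language bullet points'."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": "Re-present the user's article in the requested form. "
                           "Do not add facts that are not in the article.",
            },
            {
                "role": "user",
                "content": f"Form: {target_form}\n\nArticle:\n{article_text}",
            },
        ],
    )
    return response.choices[0].message.content


# Example: the same reporting, re-poured as a commute-length audio script.
# The file name here is hypothetical.
# briefing = reformulate(open("tariffs_story.txt").read(),
#                        "a two-minute spoken news briefing")
```

The point of the sketch is only that the reshaping step is cheap and repeatable: the same content can be poured into many molds on demand, which is what makes the liquefaction metaphor apt.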
It’s tempting to say that the A.I. is only re-presenting information that already exists. Still, the power of reformulation—of being able to tell an A.I., “Do it again, a little differently”—shouldn’t be underestimated. A single article or video could be re-created and shared in many formats and flavors, allowing readers (or their algorithms) to decide which ones suit them best. Today, if you want to fix something around the house, you can be pretty sure that someone, somewhere, has made a YouTube video about how to do it; the same principle might soon apply to the news. If you want to know how the new tariffs might affect you—as a Christian mother of three, say, with a sub-six-figure income living in Hackensack, New Jersey—A.I. may be able to offer you an appropriate article that you can share with your similar friends.
At the same time, however, the fluidity of A.I. could work against social platforms. Personalization might allow you to skip the process of searching, discovering, and sharing altogether; in the near future, if you want to listen to a podcast covering the news stories you care about most, an A.I. may be able to generate one. If you like a particular human-made podcast—“Radiolab,” say, or “Pod Save America”—an A.I. may be able to edit it for you, nipping and tucking until it fits into your twenty-four-minute commute.
Right now, the variable quality and uncertain accuracy of A.I. news protects sophisticated news organizations. “As the rest of the internet fills up with A.I.-generated slop, and it’s harder to tell the provenance of what you’re reading, then the value of being able to say, ‘This was reported and written by the reporters whose faces you see on the byline’ only goes up and up,” Seward said. As time passes and A.I. improves, however, different kinds of readers may find ways of embracing it. Those who enjoy social media may discover A.I. news content through it. (Some people are already doing this, on TikTok and elsewhere.) Those who don’t frequent social platforms may go directly to chatbots or other A.I. sources, or may settle on news products that are explicitly marketed as combining human journalists with A.I. Others may continue to prefer the old approach, in which discrete units of carefully vetted, thoroughly fact-checked journalism are produced by people and published individually.
Is it possible to imagine a future in which the script is flipped? As I wrote last week, many people who work in A.I. believe that the technology is improving far faster than is widely understood. If they’re right—if we cross the milestone of “artificial general intelligence,” or A.G.I., by 2030 or sooner—then we may come to associate A.I. “bylines” with balance, comprehensiveness, and a usefully nonhuman perspective. That might not mean the end of human reporters—but it would mean the advent of artificial ones.
One way to glimpse the possible future of news, right now, is to use A.I. tools for yourself. Earlier this year, on social media, I came across the Substack “Letters from an American,” by the historian Heather Cox Richardson, who publishes nearly every day on the ongoing Trump emergency. I find her pieces illuminating, but I often fall behind; I’ve discovered that ChatGPT, with the right encouragement, can give me a reasonably good summary of what she’s written about. Sometimes I stick with the summary, but often I read a post. Using A.I. to catch up can be great. Imagine asking the Times what happened in Ukraine while you were on vacation, or instructing The New Yorker to recap the first half of that long article you started last week.
For a while, I’ve been integrating A.I. into my news-reading process. I peruse the paper but keep my phone nearby, asking one of the A.I.s that I use (Claude, ChatGPT, Grok, Perplexity) questions as I go. “Tell me more about that prison in El Salvador,” I might say aloud. “What do firsthand accounts of life inside reveal?” Sometimes I’ve followed stories mainly through Perplexity, which is like a combination of ChatGPT and Google: you can search for information and then ask questions about it. “What’s going on with the Supreme Court?” I might ask. Then, beneath a bulleted list of developments, the A.I. will suggest follow-up questions. (“What are the implications of the Supreme Court’s decision on teacher-training grants?”) It’s possible to move seamlessly from a news update into a wide-ranging Q. & A. about whatever’s at stake. Articles are replaced by a conversation.
The news, for the most part, follows events forward in time. Each day—or every few hours—newly published stories track what’s happened. The problem with this approach is presentism. In reporting on the dismantling of the federal agency U.S.A.I.D., for instance, news organizations weren’t able to dedicate much space to discussing the agency’s history. But A.I. systems are biased toward the past—they are smart only because they’ve learned from what’s already been written—and they move easily among related ideas. Since I followed the U.S.A.I.D. story partly using A.I., it was easy for me to learn about the agency’s origins, and about the debates that have unfolded for decades about its purpose and value: Was it mainly a humanitarian organization, or an instrument of American soft power, or both? (A.I.s can be harder to politicize than you might think: even Grok, the system built by Elon Musk’s company xAI, partly with the intent of being non-woke, provided nuanced and evenhanded answers to my questions.) It was easy, therefore, to follow the story backward in time—even, in some sense, sideways, into subjects like global health and the mounting influence of China and India. I could’ve done this in what is now the usual fashion—Googling, tapping, scrolling. But working in a single text chat was more efficient, fun, and intellectually stimulating.
I could also ask stupid questions—questions that I might not have found answered in the articles I was reading, or that I was too embarrassed to ask of the smart people I know. One morning, while I cleaned the house, I spent three-quarters of an hour talking to ChatGPT about tariffs. Just how protectionist is the United States relative to other countries? Does the E.U. have tariffs? What about all those regulations on cheese and wine, which put up barriers to trade—how are they different? What separates countries that might find tariffs useful from countries that will find them counterproductive? Now that I’m a middle-aged news obsessive, I’ve become a critical reader of ordinary news; I know to look out for bias, conceptual errors, unchecked facts. I keep the same precautions in mind when conversing with A.I.s—and, also, I make a habit of pushing back against the people-pleasing agreeableness that they’ve been trained to deploy. (“That’s an insightful question!” an A.I. might say.)
Talking this way with the newest version of ChatGPT, I got good and useful answers—especially when I told it to disagree with itself, or with me. I asked, and asked, and asked. Then, in the traditional way, I read, and read, and read. It was a helpful combination, I thought. It seemed to me that A.I. could improve the news—if it doesn’t destroy it in the process.
By Joshua Rothman
April 8, 2025