If you worked in advertising or digital media in 2015, then you likely recall—with at least a faint sense of dread—the infamous “pivot to video.” If you were lucky enough not to live through this, let me walk you through it.
Journalists, already having made peace with a life of writing clickbait and listicles, and copywriters, themselves now committed to a life of writing promoted tweets, were told that the real money was in video, not text. And so, people who had never been in front of a camera were suddenly creating quirky, budget-friendly recipe videos for Facebook or lurid true-crime clips for Instagram and YouTube. The promised windfall from video never materialized, though, and there were a lot of layoffs. And thus, the mid-2010s “pivot to video” earned its own Wikipedia entry, another footnote in the history of millennial young urban creatives (or, as they were briefly called, the “yuccie”).
By 2020, it became clear that those content strategists urging everyone to “pivot to video” weren’t wrong, exactly. They were just early. In the intervening years between 2015 and 2020, video content continued to grow steadily across platforms, but it wasn’t until the pandemic that conditions became perfect for explosive growth. Not only were there technical changes—unlimited, high-bandwidth data was much cheaper, TikTok perfected the video feed, and by this time, everyone had a smartphone—there were also significant social ones. If we had previously been prone to being glued to our smartphones and laptops, our surroundings now forced us fully into it. We were finally ready for the torrent of short videos that ad agencies and digital media had been anticipating for nearly half a decade.
So TikTok—often considered the genesis of this shift—only added fuel to a fire that was already burning.
TikTok, initially popular mostly among teens, internet culture reporters of indeterminate age, and digital marketers, exploded in popularity during lockdowns. Tech companies rightly saw the writing on the wall. Soon enough, there was a renewed focus on video—especially short-form video. Instagram launched Reels. Facebook introduced Reels within its app by September 2021, recognizing that getting its predominantly older demographic onto Instagram would be challenging and slow. YouTube introduced Shorts in late 2020. Even text-focused platforms, like Substack, followed suit: Substack integrated native video in 2022 and introduced a TikTok-style video feed by 2025. Recently, X launched its own dedicated short-form video tab, and the algorithm privileges users who share video natively. Today, every popular social media platform includes a dedicated video feed.
But our appetite for short-form video has deeper roots in the Internet itself, before the 2010s. While it’s easy—though not at all misguided—to think TikTok-style content dominates because we’re more addicted to our devices and this is a ploy to keep all of us on our phones, the reason behind it is a little more complicated than that. The short-form video craze has a lot to do with how the Internet has reshaped our cognition.
Media ecologist Andrey Mir, author of the 2023 book Digital Future in the Rearview Mirror, describes how the Internet has gradually reshaped our communication into what he calls “digital orality,” blending the immediacy of spoken interactions with the permanence of written media. We see this in how people communicate online today—think of the casual, conversational tone of tweets, the rapid-fire exchanges in comment sections, or how TikTok creators address viewers directly as if speaking to friends. These digital interactions mimic spoken conversation while still existing as archivable, and at least, ostensibly “permanent” text or video.
Mir’s work builds on the earlier concept of “secondary orality,” introduced in the 1980s by the Jesuit priest, philosopher, and English literature professor Walter Ong. Ong believed that electronic media—then radio and television—would (or rather, did) revive characteristics of traditional oral cultures, where communities share information through direct conversation rather than the written word. What Mir recognized was that the Internet has taken this transformation even further, creating a hybrid form of communication that combines elements of both oral culture and print culture but functions according to its own unique logic.
But how exactly did we arrive at digital orality and, critically, the short-form video craze? The evolution was gradual but persistent, beginning long before smartphones existed.
Before we even had Facebook statuses and Twitter threads, early Internet communities like Usenet newsgroups, multi-user dungeons (text-based roleplaying games popular in the late ’80s through the ’90s), chat rooms, and instant messaging had already begun to shift the way we communicated with one another. Unlike email (though email isn’t exempt here, either; email is hardly letter-writing), these platforms prioritized conversational, informal exchanges that resembled spoken conversation more closely than formal writing.
If you think Zoomers have weird slang, people were already complaining about Internet-native acronyms in the ’90s, including “RTFA,” or “read the f—ing article,” and “AFK,” or “away from keyboard.” These shorthand expressions reflected how digital communication was already beginning to function more like speech than traditional writing—immediate, reactive, and often emotionally charged. The development of emoticons and emojis further emphasized the need to inject emotional cues—typically conveyed through tone of voice or facial expressions in face-to-face conversations—into digital conversations.
By the time MySpace, Facebook, Twitter, and Tumblr popularized statuses and microblogging in the 2000s, Internet culture had already cultivated a preference for quick emotional responses and fragmented communication. The Internet, as it turned out, was not a place for the sustained reflection that books theoretically encouraged. Instead, we saw a new communication style, one that prioritized immediacy, emotional connection, and tribal belonging over rationality, logical structure, and critical distance.
Communicating through writing, as a result, only superficially looked like literacy.
This evolution continued steadily through the 2010s, with each platform iteration further embracing these oral-like qualities. Now, with all the large platforms giving major real estate to video, users are already accustomed to this style of communication. The shift to short-form video wasn’t a radical departure; rather, it was the natural culmination of decades of digital communication evolution.
Today, it seems as though we’re evolving even past digital orality, or at least we’re entering a new version of it. We speak aloud to Siri, Alexa, and ChatGPT rather than typing detailed requests. Millennials and Zoomers increasingly choose voice memos over texts or instant messages.
Short-form videos thrive because they are quick, entertaining, and easily digestible—often using storytelling as a hook. TikTok feels like hanging out with friends, sitting around a campfire, or watching a vaudeville show. Occasionally, it might feel like watching TV. It never feels like a stream of dense information, though. It never feels like something that requires deep analysis.
This is one of the biggest cognitive shifts we’re experiencing.
So what do all these changes mean in practice?
We’re closer to an oral culture, not exactly in the sense that we’re incapable of reading (though that is, unfortunately, showing up too, in falling literacy rates), but in the way our writing has begun to mimic our speech. Online, though we are often communicating in text, that text isn’t designed to be reread or archived (though it often is, with great loss of context), but to be absorbed instantly and replied to in real time.
In this way, the move to video and voice memos makes sense—if the way we’ve been using text is, essentially, the way we use voice and video, why not cut out the middleman?
If this sounds confusing, allow me to offer an example. Tweets make the most sense in the context of what’s happening on the whole X timeline. Unlike a book, which stands alone, posts on social media are in dialogue with all other posts on the platform. Grammar becomes optional, punctuation expressive, and meaning often hinges on tone, timing, or context as much as on actual wording.
In contrast, a successful book builds its own world and guides the reader through a specific moment. A good book should hold up no matter when or where it’s read. The conversation around it can be relevant—and can enhance understanding—but it isn’t strictly necessary, or at least, shouldn’t be. The reader may need particular subject matter expertise, but that’s not the same as environmental or contextual knowledge. Social media, on the other hand, is far more situational. A post rarely makes sense in isolation, just as an inside joke or colloquialism loses meaning outside of its original context. We interpret posts based on what’s trending, who’s speaking, and what’s already been said. Even the user interface of the platform at any given moment can shape our understanding, whereas the form of books has been, for the most part, uniform since the dawn of the printing press. Like speech, social media is more dependent on the environment than print.
If digital orality mimics speech through text, what we have now is a further progression. We’re not only mimicking speech through text anymore—we’ve come full circle back to the spoken word.
In this way, it makes sense that we’re becoming inured to video, and to voice (again, think of the popularity of voice memos, audiobooks, and podcasts). Our culture is dominated by the physicality that defined oral cultures: it’s just, paradoxically, mediated through a screen. We operate with the logic of oral cultures—not even, strictly, Mir’s digital orality—yet we’re still bound by the paradigm of mediation. A cute dog video, a campfire-style “story time,” or a “Get Ready With Me” video are all oral performances, but staged within the endless scroll rather than shared physical space.
This shift also affects our memory.
We now archive everything: tweets, posts, websites. But we rarely return to those archives. When was the last time you revisited your bookmarks, saved posts, or liked tweets? What about screenshots? How many of us are carrying around thousands of rarely looked-at screenshots on our phones at any given moment? Information doesn’t stick when it’s stored; it sticks when it circulates. It is repetition, not preservation, that keeps ideas alive in the Internet’s “perpetual now.” This is why the meme has become a dominant form. But this also means that deeper, more complex knowledge—information that demands longer-term engagement, like literature or the sciences—is less likely to be engaged with and retained. We remember social dynamics, jokes, and gossip far more vividly because that kind of knowledge circulates more actively.
And so this brings us to the Truth: the great change of the Internet Age. In the print era, truth was often mediated by institutions. Newspapers, publishers, and academic organizations acted as gatekeepers, deeply influencing what counted as credible knowledge.
Historical narratives were debated, but there was an assumption that truth could be built slowly, layer by layer, through evidence and expertise. It was the age of rational thought. Online, that model has fractured. It’s a model that can’t exist outside of print.
Truth is increasingly determined by collective perception; by what resonates, spreads, or “feels right” to a given community. Authority stems from personality rather than credentials. Followers, influence, and visibility carry more weight than institutions. It’s possible this can be useful for certain types of social knowledge, but it’s not especially useful for more complex types of information.
The Internet isn’t rotting our brains as much as it’s reprogramming them. It’s easy to chalk up these changes to laziness or “decadence,” but the cultural shift is much more complex than the rhetoric around “kids these days,” or “it’s the phones” suggests.