What comes after social media?
For quite some time, social media has been portrayed as the antagonist of an endless Netflix-like series, much like the endless scroll we navigate with our fingers. It is a sinister character in the sense Freud called uncanny: something as familiar as it is terrifying.
There’s nothing more familiar than logging into Instagram, X, or TikTok. We do it without thinking, as part of our routine, like brushing our teeth but without fighting plaque. Once inside, we encounter posts from friends interspersed with retouched photos of Kate Middleton, the staged (and false) poses of Taylor Swift, ads for things we’ve already searched for and usually bought, or that grab our attention with their urgency (“call now”), the outbursts, the silly jokes, the delicious food, and the clever, funny comments with links we save for later but never read.
The next episode of this saga has already begun, and it’s called artificial intelligence. It’s not just about fake content; it’s about content, period. Millions of pieces of content that could end up inundating us. A recent job posting from a media company was seeking an “AI editor” who could produce “between 200 and 250 articles per week.” What will happen when everything we receive is multiplied by a thousand, and we don’t even know if there’s a human behind each post?
A couple of weeks ago, YouTube changed the form for uploading videos to the platform. Under the heading “altered content” there are now three questions: “Does your content make a real person appear to say or do something they didn’t do? Does it alter recordings of a real event or place? Does it create a scene that looks real even though it never happened?”
YouTubers can answer yes or no. It’s like those immigration forms that ask whether you’re a human organ trafficker: has anyone ever answered yes?
In his open letter from almost a year ago, Yuval Harari warned of our difficulty in regulating AI and used social media as an example: we already lost that battle, he said, so why think we’ll win the next one?
Why did social media win? Or, put another way, why do we still use it despite everything? The answer probably lies in the business model. Social networks are profitable as long as they can capture our attention and sell it to advertisers, just like old-fashioned TV. Except that in the digital environment, the ways of capturing us (giving our brains quick rewards, offering distraction, telling us exactly what we want to hear) are much easier, and therefore more effective.
So perhaps social networks will change when someone finds an alternative business model. There are several attempts underway. One example is Mastodon, a decentralized social network where each group can set its own moderation rules.
It’s promising but complex to use, and for now restricted to a niche. Another example is Post, an app based on web3 technology, whose users receive news from the media outlets they choose, pay for it with tokens, and curate their own content. And then there’s ActivityPub, a long-standing internet protocol that lets each user own their content and followers and take them to any platform. Last year it received a boost when the company that owns WordPress, the world’s leading blog editor, acquired a plugin that brings the protocol to its platform.
Half a decade ago, an article titled “Status as a Service” popularized an explanation of the social network phenomenon. Its main argument is that we are so desperate to be recognized that we will give anything for exposure and potential likes. New technologies and new business models may change, but the problem is that humans will remain the same.