With AI and remixing culture, we no longer care about the origins of our greatest thoughts
A few months ago at a Lux team offsite, we discussed different shifts we expect to see in society in the coming years. AI and its implications were obviously the grounding for much of the conversation.
I suggested that plagiarism as a form of cheating will be drastically reconsidered. Today, plagiarism (alongside research fraud) is the cardinal sin of academics and artists alike. Claudine Gay, the former president of Harvard who faced immense scrutiny over her administrative decisions related to diversity and antisemitism, ultimately resigned when a slew of plagiarized passages was discovered in her published research.
This standard of integrity is hardly the historical norm. The invention of the printing press led to the widespread pirating of books and the ideas contained within them, and early-modern states were unable to enforce any form of private copyright for centuries. Complicating matters, many controversial authors wrote pseudonymously to avoid censure or worse, making attribution difficult. Only eventually did new norms around proper ownership, quotation, citation and originality take primacy in the academy.
These norms remain in place today, as Gay’s resignation shows, but I expect they will change rapidly in the years ahead. One cause is what Jeff Jarvis has termed the “Gutenberg Parenthesis.” He argues that our print-centric culture appears to be a historical oddity, and that we are returning to an oral and visual culture mediated by devices. In such a culture, tracing the provenance of a thought becomes significantly more difficult. Another cause is that social media platforms like Instagram and TikTok have aggressively pushed a remixing culture that dramatically lowers the value placed on authorship of an original work. Users view content in endless scrollable streams with little context, and the platforms want to divorce engagement from individual creators as much as possible.
Yet, the bigger assault on the norm against plagiarism is coming from artificial intelligence itself. There’s of course the fact that AI models are trained on the comprehensive output of human knowledge, but it goes much further than that. I wrote last week about how autonomous AI will increasingly become central to art, but one of the interesting side effects of this new medium is that AI models will inevitably reuse the work of others in generating those experiences. Without full-blown AGI and the critical and original thinking we’d expect from it, all AI models are probabilistic generators. They ultimately just cheat — for both themselves and their human users.
Indeed, the effects of this cheating are already endemic from grade school through university. Take Troy Jollimore’s essay earlier this month on teaching college students today and what they are losing in the age of AI:
I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters. It’s not just the sheer volume of assignments that appear to be entirely generated by AI—papers that show no sign the student has listened to a lecture, done any of the assigned reading, or even briefly entertained a single concept from the course.
It’s other things too. It’s the students who say: I did write the paper, but I just used AI for a little editing and polishing. Or: I just used it to help with the research.
As someone who always avoided office hours out of worry that I would get some sort of hint that would undermine my own intellectual maturation in statistics, I of course find this shocking but not surprising. Neither is it to Jollimore:
That moment, when you start to understand the power of clear thinking, is crucial. The trouble with generative AI is that it short-circuits that process entirely. One begins to suspect that a great many students wanted this all along: to make it through college unaltered, unscathed. To be precisely the same person at graduation, and after, as they were on the first day they arrived on campus. As if the whole experience had never really happened at all.
We talked about this exact dynamic with Nicholas Rush Smith on the Riskgaming podcast, but as any writer — including me — will say, structuring complex ideas into workable text requires enormous facility with language, critical thinking and originality. Many brilliant ideas burn out as soon as they have to be put to paper, since no cogent argument can underwrite them. Writing and thinking are incredibly useful, even when a subject has been debated forever by specialists. Obviously, many leading LLMs can write a paper on Plato; the point isn’t the final output, but rather rigorously deliberating the argument’s logic. Unsurprisingly, notes Jollimore:
My students have been shaped by a culture that has long doubted the value of being able to think and write for oneself—and that is increasingly convinced of the power of a machine to do both for us. As a result, when it comes to writing their own papers, they simply disregard it. They look at instructors who levy such prohibitions as irritating anachronisms, relics of a bygone, pre-ChatGPT age.
Literacy and numeracy scores on international assessments are dropping rapidly, not just for children but for people of all ages. We are outsourcing more and more of our thinking to intelligent agents, and increasingly substituting glib tweet-length rejoinders for complex reasoning. Scientific fraud is pervasive, as I discussed back in “Fabricated Knowledge.” Everyone apparently needs to cheat a bit now just to hold their lives together — are we really surprised that tools like AI that make this cheating even easier are so widespread and increasingly popular?
Jollimore summarizes the key quandary: “…AI won’t help you figure out just which questions need to be asked and answered. And in real life, this is often the most difficult part.” When I talk about the difference between policy memos and Riskgaming, I always say that we don’t tell you the answer but instead offer better questions to frame challenging problems.
That may be so, but AI is already helping users figure out the questions too, outsourcing our thinking even further. That’s the dream of deep research models and autonomous lab agents, which can intelligently pose experimental questions, test them, and rigorously analyze the results in parallel, dramatically speeding up science. We don’t care if it’s remixing a bunch of ideas and plagiarizing other scientists, so long as it gives humanity the results we need. AI further disintegrates the idea of intellectual ownership, to the point where plagiarism becomes very much the norm.
The pirating of work during the Renaissance and early Enlightenment was critical to those periods’ intellectual vibrancy. One hopes that with AI sloughing off the norms around plagiarism, we might similarly accelerate science and discovery. Once again though, we find that knowledge and the people who create it are being forcibly separated. We might get more Nobel Prize-quality work, but we won’t know who to award the prize to anymore.
“You can cause a lot of havoc with a cell phone and a cheap DJI drone”

This week, Laurence Pevsner and I talk to Colin P. Clarke, the director of research at The Soufan Group and formerly a long-time terrorism analyst at RAND. We discuss how new technologies are changing the threat landscape of terrorism, as well as how the financing of terrorism is evolving. Here’s a condensed and edited extract from our conversation; then listen to the full episode.
🔊 Listen to “‘You can cause a lot of havoc with a cell phone and a cheap DJI drone’”
Danny Crichton: Technology is rapidly changing the world, not just in terms of technology for detection but also for dissemination of information. How has that changed terrorism?
Colin Clarke: Technology is tremendously changing the way we look at terrorism. And I think we’re just at the tip of the iceberg. There's often a lag effect to these things, and in general, terrorists tend to be early adopters.
So if you are a small insurgent group based in the Sahel in West Africa, and you're now tinkering around with generative AI that you can set and forget in terms of propaganda, you've now freed up a significant amount of manpower hours. You can use those to go do what terrorists do, which is plan, plot, and conduct attacks.
I’d say that, so far, ISIS-K is one of the leaders in this space in terms of using AI-generated moderators to announce the news and pre-program and set propaganda in multiple languages across multiple platforms.
There are a number of other ways technology is changing the game, too. Think about unmanned aerial systems or drones. Those are a big issue for counterterrorism experts. Encrypted communications, virtual currencies, 3D printed weapons. I mean, we can go on and on.
And I think as more non-terrorists—more people in the general population—adopt and start using these technologies, we'll see greater adoption by terrorists because there's more cover. If, in two years, we're getting our Amazon packages delivered by drones, that sets the stage for someone with nefarious intentions to join the fray.
Laurence Pevsner: We saw this, of course, with the New Orleans attack, where my understanding is that the individual used Meta Ray-Bans to blend in, right? So it used to be if you were wearing some kind of AR-tracking goggles, you would stand out like a sore thumb.
Now, if it looks like you're just wearing Ray-Bans, then you completely blend in. Are these companies — whether it's Meta, whether it's OpenAI, or other folks on the frontiers of new technologies — working with counterterrorism experts to try to stop this type of usage? What kinds of safeguards can they build?
Colin Clarke: They are not working with terrorism experts nearly as much as they should be. I think they're doing just enough to keep people off their backs, including the federal government. And it's only going to be, sadly, when something terrible happens that we have a big commission. We look into it and say, "Actually, social media companies can and should be doing more." But they're only going to do as much as they need to do because their main business isn't counterterrorism. It's making technologies their consumers want to use.
I always think about it as the dark sordid underbelly of globalization. Anytime you show me a technology and tell me how great it is for the world, I'm going to start red teaming. I'm thinking about ways people will use it to kill others.
The OB: Alex Soojung-Kim Pang on REST & SHORTER
Over on The Orthogonal Bet, our scientist-in-residence Sam Arbesman talks with Alex Soojung-Kim Pang, who recently penned the new book, Rest: Why You Get More Done When You Work Less. They talk about the tradeoffs between downtime and productivity.
🔊 Listen to “Alex Soojung-Kim Pang on REST & SHORTER”
Lux Recommends
- As a former gatekeeper (aka editor), I enjoyed this piece recommended by our gatekeeping editor Katie Salam on “The New Control Society” in The New Atlantis. “As we approach the moment when all information everywhere from all time is available to everyone at once, what we find is not new artistic energy, not explosive diversity, but stifling sameness. Everything is converging — and it’s happening even as the power of the old monopolies and centralized tastemakers is broken up.”
- Sam recommends Étienne Fortier-Dubois’s essay on “King of fruits” in Works in Progress on the once-luxurious pineapple and what its democratization has meant for the world. “Mirroring this competition with Nature itself, there was also a competition among the gardening-inclined rich. It soon became a must for an English gentleman to build, at great expense, a ‘pineapple pit’ on his estate. For the professional gardener, to succeed at maturing a pineapple was a top sign of competence. In France, the pineapple became a court favorite at Versailles after the first homegrown pineapples were presented to Louis XV in 1733.”
- A Riskgaming reader recommends Patrick Bury and David Murphy’s look at defense in “No Time To Spare: Irish Defense And Security In 2025.” “First, Ireland needs an informed national discussion and a process of education on the issue of neutrality. As Conor Gallagher has examined, like a geopolitical teddy bear, this issue is clung to by both politicians and the general public, without any real understanding of what it means or the obligations that come with being militarily neutral.”
- Katie recommends Matthew L. Wald’s look at “A Place where Tariffs Would Actually Help.” “Russia does not mine a lot of uranium, but it does enrich it. In Soviet times, it took uranium from what is today the independent country of Kazakhstan, in central Asia, enriched it and shipped it out through the port then called Leningrad, which is today Saint Petersburg. Post-Soviet Russia can still be a major player in uranium, and is not quite a market-based operator. It will price the product because of a government need for foreign currency, or because it wants to maintain political influence, rather than the traditional economic calculation of whether revenue will exceed costs.”
- Finally, Sam recommends a fascinating article by Niko McCarty on “What Limits a Cell’s Size?” “A molecule’s diffusion rate hinges on several factors. For instance, the cytoplasm is extremely crowded, and so molecules spend lots of time ricocheting off obstacles, delaying their arrival at a distant location. Every protein in a cell collides with about 10 billion water molecules per second, on average. These frequent collisions mean that the vast majority of proteins in a bacterium only diffuse between 5 and 10 µm per second.”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.