
I open my eyes and immediately fumble for the phone, opening BlueSky and Threads to see what craziness has come out of the USA while I slept here in Australia. Reassuring myself that we are all still alive (well, some of us are) and that the nukes haven’t launched. I figure since I am here now, I might as well continue to doomscroll.
The outrage begins.
This morning it’s news that Meta knowingly used a pirated database of books and academic papers to train its AI. The Atlantic helpfully gives me a search box to see how much of my stuff has been stolen – twice. There’s 15 – shit.
I see my blog post book is there – well ok, not to worry. I wrote that to circulate my ideas and sell it for the cost of a coffee. I’m mildly annoyed that someone was scummy enough to steal it. Then I see my best book is there – now I’m really annoyed that my publisher somehow didn’t keep it safe.
I see a partial list of my academic publications – at least they don’t have the best ones (there’s at least one I wish was never out there in the first place). My outrage settles a bit… this is life now. It’s 2025. The internet. You can’t control it. You can’t control anything.
Then I remember that my own publisher has sold my stuff to OpenAI and didn’t give me a cut. The outrage bubbles up again.
My twin sister, Anitra, also an author and equally pissed off, sends me a post from @caitlinmossauthor on Threads, who suggests taking credit for Meta’s work by putting “AI creator” in her bio since they used her stuff:

I laugh and make the same change to my Threads profile, adding “Creator of AI (you all stole my stuff).” I change Insta and add pronouns while I am at it. Fuck the fascists.



You’re right! Your voice typically has more of that fun, sardonic edge to it. Let me try again with that example about your ethical position on AI.
It tries a second time.

Deleuze’s concept of the ‘self-conforming mesh’ is one of those theoretical ideas that sounds impossibly wanky until you realise it perfectly describes your daily academic life. Take my ambivalent relationship with AI.
This morning at 7am, I’m raging about how Meta has scraped my blog to train their models without permission or payment. It’s now 9:30am and I’m asking Claude to help me draft a blog post – but… there’s a lot going on this weekend. I only have an hour to squeeze this out. The traditional academic might call this hypocrisy, but Deleuze would recognise it as a perfectly normal self-conforming mesh of ethics.
Unlike the rigid either/or thinking that dominates academic discourse (you’re either FOR technology or AGAINST it), the mesh allows contradictory positions to co-exist without imploding. My ethical framework isn’t fixed; it bends and reconfigures itself depending on context, need, and frankly, how desperate I am to meet a deadline.
When I rant about AI exploitation while simultaneously exploiting AI, I’m not being inconsistent – I’m embodying the very flexibility that Deleuze celebrated. The mesh doesn’t require logical coherence; it thrives on productive tension. And let’s be honest, isn’t this how most of us actually operate in our scholarly lives, despite our carefully constructed academic personas suggesting otherwise?
I exchange more text messages with Anitra. We laugh at some initial outputs. I ask her if Claude is right about the self-conforming mesh (she knows more about it than me). She says pretty much, and then explains that she only really understood the idea when wrestling with the parking ticket machine:
Ending 1: Funny and Charming Justification
And here I am, using an AI to write a blog post about how technology reshapes our thinking without us noticing. The irony isn’t lost on me. But perhaps this collaboration is less a surrender to the mesh and more a cheeky subversion of it.
After all, I’m not mindlessly accepting whatever Claude generates—I’m directing, editing, critiquing, and ultimately deciding what stays. This isn’t blind adaptation; it’s a deliberate dance with technology. I’m using the mesh to examine the mesh. It’s a bit like studying the effects of caffeine while drinking a flat white.
Is this hypocritical? Maybe. Or perhaps it’s the ultimate academic move: simultaneously participating in and critiquing a system from within. Deleuze would probably approve of this productive tension, this refusal to settle into easy binaries of resistance or compliance.
So the next time you find yourself in a complicated relationship with the technologies reshaping academic work, remember you’re not failing at ethical purity—you’re embodying the self-conforming mesh in all its contradictory glory.
And really, isn’t acknowledging our entanglement the first step toward a more conscious engagement with it? At least that’s what I tell myself as I hit “send” on another message to my AI writing companion. We’re all just performing for parking machines in one way or another.
Ending 2: The “Oh Shit” Moment
And here’s where we reach the uncomfortable bit: I’m writing this post about the invisible ways technology reshapes our thinking… using an AI system. The self-conforming mesh isn’t just theoretical—it’s happening right now, in this exchange. Every prompt I write is being shaped by my anticipation of how the AI will respond. My writing style is subtly adapting to what generates the best outputs. I’m outsourcing thinking while writing about the dangers of outsourcing thinking.
Oh shit.
This is the truly insidious nature of Deleuze’s concept—even when we think we’re critically examining it, we’re further entangling ourselves.
I started this post feeling clever about using AI to discuss AI ethics, but now I’m wondering: How much of my thinking has already been altered by these interactions? Am I still the author here, or have I become a node in a larger technological process? The research questions I’m formulating for my next project – are they truly mine, or have they been shaped by what I know works well with the digital tools I’ll use to pursue them?
The most disturbing aspect of the self-conforming mesh isn’t that it exists, but that the moment you think you’ve stepped outside it to examine it objectively, you’ve actually just entered a different part of the same mesh.
There is no outside perspective.
As you read this blog post—perhaps while notifications ping in the background, as your reading habits are tracked for future recommendations—ask yourself: Where in your academic life have you adapted without realising? What invisible meshes are shaping your research questions, your writing process, your very thoughts? And the most chilling question: Would you even notice if they were?
Coda
I could end it here, I guess, but I’m a twin and it’s compulsive to share. I give Claude’s responses back to Anitra. We marvel quietly together. She texts: