Blame the boss, not the bot.

It’s easy to adopt a blanket anti-AI attitude when attention-grabbing arguments like “it’s stealing from us” and “it’s destroying the planet” have made their way to the front of the room. But in reality, most people are somewhere in the middle, stuck between concern and convenience, falling into an uncommitted “maybe it’s bad, but I’ll use it anyway” judgment. And sometimes this feels like the most disconcerting stance of all, even more so than the opposing extreme where one straight up stans oligarchy and denies climate change. For this grey stance shows just how easy it is to quiet our concerns when we’re tethered to prompts and able to turn a blind eye.
Generative AI chatbot tools like ChatGPT scrape data without consent, tear through precious natural resources, and draw us into new dependencies that feel increasingly forced.
Nevertheless, my stance is that of the grey area. Am I one of those slippery types that I mentioned, taking the blue pill so I can chase the dopamine rush of indulging in a thoughtless conversation with a chatbot without having to feel too guilty about it later? That, of course, is not the way I look at it, but the polarity of the arguments makes it feel like my only choices are to swear off AI completely, or be a mindless consumer who prioritizes self-gratification over community or consequence. One or the other, no in between. And in the era of short-form content, our attention spans rarely tolerate nuance. But the truth is, my own relationship with AI simply isn’t an all-or-nothing position.
If you think about it, the root of most tech skepticism is actually Big Tech skepticism. The corporations rolling out the most prominent AI tools are also the ones profiting off unethically harvested data, undermining privacy protections, experimenting on us in covert social engineering initiatives, and God knows what else! Their consolidated power has only given them more agency to pollute the planet, squeeze workers across global supply chains, and build ecosystems that force us to accept surveillance as the norm. The business models are extractive by design, so the products feel exploitative.
When the most powerful AI platforms are controlled by a handful of large tech oligarchs, taking a firm stance against these companies feels inseparable from rejecting AI technology altogether. But is AI inherently bad?
Technology didn’t always feel so monolithic. There was a time when innovation was driven by small communities building tools for open research and public benefit. The anti-war counterculture of the 1960s was a driving force behind embracing technology as a tool for liberation and change. And some may recall the early internet as a decentralized playground of experimentation, collaboration, and possibility, far removed from today’s hyper-commercialized, dystopian space.
It is true that many grassroots innovations have been absorbed, repackaged, and monetized by Big Tech. The speed at which such projects have been turned into proprietary, exclusive, and increasingly extractive products will leave many of us doubting that an alternative world ever actually existed. Yet pockets of community-driven tech work still persist. There are still people building tech for justice, education, care, and community. Too often, these efforts are overlooked or dismissed in conversations that equate technology solely with Big Tech’s offerings.
I believe that tools are not inherently good or evil; it’s how we use them that matters. Choosing to reject AI tools because of their problematic uses or corporate ties can feel principled, but it also risks leaving innovation and influence entirely in the hands of Big Tech.
Consider how other technologies have evolved. Telephones, automobiles, electricity – all shaped by monopolies, extractive industries, and environmentally harmful practices – have nonetheless become essential to social movements, communication, and daily life. Smartphones, despite complex ethical concerns, have empowered grassroots organizing and expanded access in ways once unimaginable. If we had written off these tools entirely, we might have ceded them to the very forces we were trying to resist. Instead, history shows us that when people engage with technology, they’re more equipped to understand its mechanics and biases, advocate for better policies, and build community-driven alternatives.
In the same way, AI tools present both challenges and possibilities. What we do know for sure is that they are now part of the world we navigate, and they aren’t going anywhere. Refusing to engage will not halt the influence of AI; it will only limit the range of voices shaping its future.
It’s important to hold powerful actors accountable for abuses of power. But it’s also worth reflecting on why certain technologies are put under the microscope while others aren’t. The carbon footprint of doomscrolling social media, streaming our favorite shows, or endlessly FaceTiming loved ones rarely sparks the same level of public outrage, even though these activities also harvest our data, surveil us, and rely on similarly energy-intensive infrastructure. What troubles me the most about blanket anti-AI stances is that they seem to focus more on signaling moral purity than pursuing structural change. While we argue over whether using these tools makes us complicit in harm, the tools themselves are evolving under the control of a handful of powerful actors.
Ultimately, the question becomes: how can we participate in shaping AI’s role? How can we leverage its potential while staying critical of its origins and impacts? Perhaps that grey stance of “maybe it’s bad, but I’ll use it anyway” doesn’t have to be a position of apathy and complicity but one of awareness. It can reflect a willingness to engage with the tools while staying alert to the systems behind them. If opting out entirely means falling behind, and blind adoption means giving in, maybe this uncomfortable middle is the most honest and strategic place to be.