Category: Big Tech

  • The return of the webmaster

    Thank god one builder is enough again…

    In today’s tech landscape, entire disciplines exist just to manage the sheer bloat of oversized teams. Anyone unlucky enough to have held the role of ‘Scrum Master’ before knows the horror of a job where the core function is to herd developers through layers and layers of administrative hurdles, meetings, and formalities ending in reports and graphs that ultimately nobody really looks at.

    Why is this a thing? Because what used to be a simple, focused build has blown out into multiple frontend teams, backend teams, UX designers, UI designers, copywriters, content strategists, and a thick, suffocating layer of process on top of the actual work. If you’re in this field, then you know it’s perfectly commonplace for a single landing page build with minimal functionality to drag on for a year or more.

    It wasn’t always like this.

    In the 1990s, when companies needed a website, they hired just one person – a webmaster – who designed, coded, and managed the whole thing, often on top of their regular admin or IT duties. Call me old fashioned, but I’m still wondering how the heck we got to a point where a six-figure budget and a multi-million-dollar agency with 200+ staffers are the norm when it comes to shipping sites.

    In the good ol’ days, the webmaster was that full-service end-to-end digital product team. They mapped the page layouts, user flows, and navigation, and then hand-coded everything in raw HTML (later adding CSS, JavaScript, even Flash) to bring a site to life. On top of that, they’d design logos or edit graphics when needed.

    They also set up and maintained the web server, pushed deployments through FTP, and kept the content fresh by adding, updating, and formatting pages. Webmasters handled early SEO too, by tracking hit counters, submitting sites to search engines, and implementing clever techniques to boost search rankings.

    And they even doubled as customer support. Remember those “Contact the Webmaster” links at the bottom of old sites? That was the catch-all inbox for broken links, typos, and cross-browser quirks, and the webmaster was the only one who knew how to answer them.

    That role is a lost part of history. Search for “webmaster” jobs today and, if you find the antiquated title at all, you’ll see it in the form of caretakers of legacy systems – usually government or education institutions still maintaining sprawling sites stuck on old frameworks – or you’ll find it’s branched off into more content- or DevOps-specific roles. All of these are only slices of what the webmaster used to be.

    Web 2.0 brought dynamic, database-driven sites with enhanced interactivity and more moving parts, and that leap in complexity also expanded and fragmented the work. Not to mention that tech companies became stuffed with venture capital, so they could afford to carve jobs into silos. Specialization became a way to show legitimacy. Suddenly, you “needed” backend engineers for databases and servers, frontend engineers for rich interfaces, UX designers to map flows, sysadmins for uptime and scaling, QA testers to catch bugs, and project managers to herd it all. It’s only gotten bigger and more bloated as time has gone by. And the true generalist webmaster – the one who could conceive, design, build, and ship a whole platform end-to-end – was killed off by Web 2.0.

    But guess what? We’re back, baby! With today’s vibe coding and AI-assisted dev tools, much of that “necessary” specialization could be collapsing again. Because a single person can now spin up a full-featured MVP in hours.

    Of course, not everyone in the industry is rushing to embrace this concept. Apart from general skepticism around AI tools, there is the element of job security – if one person can ship what used to take a team of twenty, what happens to the other nineteen people? For many, vibe coding feels like another way for corporations to slash payrolls and scale profits. That fear is justified, too. Since gen AI tools went mainstream, we’ve been knee-deep in the biggest surge of tech layoffs since the dotcom bust – whether causation or correlation, this is still our current reality.

    Corporations may have dollar signs in their eyes for AI tools, but not everything in the world has to be for them. The rise of AI tools also means that small teams and underfunded orgs can finally build what they’ve been priced out of. The same cost-cutting tools that make Big Tech salivate can also put real power back in the hands of the people who’ve been locked out of innovation.

    AI tools are actually handing us something that Big Tech, in spite of all its profit-chasing measures, has completely lost sight of: efficiency without bloat. That’s the rebellion. We can wield vibe coding to help individuals, grassroots orgs, and small teams finally compete with institutions that waste millions just to stuff their own pockets, and maybe even render their services obsolete.

    And thank the fucking lord. After years of climbing through layers of administrative fluff to get products shipped, we can finally do the work in two weeks that everyone else drags out for two years.

  • A rideshare app that can’t navigate: why Uber’s GPS sucks

    Uber gaslit me with its ETA and all I got was gridlock and a missed train.

    I don’t use Uber often, but when I do, it’s usually for a time-sensitive trip: a train station, an airport, somewhere I can’t afford to be late…which I recently learned is probably not the best use case…

    Last week, I missed my train after an Uber ride took double the time estimated. I live about six miles from the station, a straight shot down the main road. It’s normally a twenty-minute drive. But that day, Google Maps, Apple Maps, and Waze all showed that main road as deep red due to construction, congestion, and the sticky sluggishness of summer. Luckily, the maps also showed ample alternate routes – a brief detour on the interstate, for example, could keep the trip to its usual twenty minutes.

    When I booked my Uber, the app confidently asserted that I’d be at the station in exactly twenty minutes, a comfy fifteen minutes before my train’s departure time.

    Perfect, I thought. Uber’s *elite* GPS knows the best way to route around the traffic.

    Except! It doesn’t. The driver’s in-app GPS sent us straight into the most congested road, the exact one the other apps had warned against. Yet Uber’s ETA remained unchanged.

    As I sat at a standstill, I clung to the idea that maybe Uber had cracked something the other navigation giants hadn’t. This was a company with the resources to build its own routing engine – surely their billion-dollar infrastructure had spotted a coming break in traffic the other apps hadn’t caught yet?

    Nope. As my ETA came and went, the app began revising my arrival time in painful minute-by-minute increments, all the way until I finally arrived at my destination with the train long departed from the platform.

    What the heck? Why did Uber’s navigation system do that?

    I felt compelled to report this issue to Uber, my naive mind imagining a product team eager to troubleshoot a potential flaw in their prized proprietary navigation system. After all, in a 2015 blog post, Uber’s engineering team did brag about the accuracy of their technology. “Ultra-efficient route planning and highly accurate ETAs are critical,” they said.

    In spite of this, every customer service channel shut down my attempt to escalate, offering nothing beyond the canned line: “We are working towards improving ETA accuracy.” A few responses reiterated Uber’s five-minute grace period to cancel a ride for free, as if that had anything to do with the problem at hand. It began to seem like this was an issue that Uber’s support team was trained to deflect and contain.

    After digging deeper, I found that Uber’s subpar GPS was in fact a known issue, widely reported by drivers and riders alike, with complaints going as far back as 2014, when Uber first rolled it out with the promise of getting riders to their destination “via the best possible route,” as stated in a press release at the time.

    “The [new] Uber navigation is quite poor. It often suggests a bad route,” an early user reported in 2014.

    Over a decade later, it seems like not much has changed. In a recent post on r/uber titled “Why is the Uber GPS so bad?”, the OP said: “The GPS always gives my drivers the worst routes.” On RideGuru, a driver shared that Uber’s GPS sent them “in circles”, but switching to Waze found the destination “right away”. A Redditor on r/uberdrivers explained: “If you’re using Uber GPS … they’ll make you take the shortest route even if it takes much longer, then pocket the difference on their flat-rate trip.” Another Redditor added bluntly: “The algorithm was set to lowball drivers as much as possible.”

    Before rolling out their own system, Uber relied on Google Maps for routing (which Lyft still uses today). But Google’s API fees, combined with a well-publicized feud between the companies, pushed Uber to invest in its own navigation system. Today, from what I can gather, Uber’s routing engine is a hybrid stack of global street data, coupled with open map databases to fill coverage gaps, as well as third-party imagery and geocoding services to enhance accuracy.

    What’s missing? Leading navigation apps all ingest traffic data from millions of devices and live incident reports, updating routes in near real time. Uber’s data comes primarily from its own insulated database of trips, a smaller and less continuous dataset. Traffic updates may only come periodically and from fewer data sources, making Uber’s routing service slower to reroute around unseen congestion.
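    To make the difference concrete, here’s a toy sketch (my own illustration, not Uber’s actual system – the route names, times, and congestion numbers are all invented): a router that sees live traffic picks the detour, while one stuck on free-flow (historical) travel times keeps pointing at the clogged main road.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    base_minutes: float       # free-flow (historical) travel time
    congestion_factor: float  # live multiplier (1.0 = no traffic)

def best_route(routes, use_live_traffic):
    # With live data, rank routes by current conditions; without it,
    # fall back to free-flow times and never notice the congestion.
    def cost(r):
        return r.base_minutes * (r.congestion_factor if use_live_traffic else 1.0)
    return min(routes, key=cost)

routes = [
    Route("main road", base_minutes=20, congestion_factor=2.5),        # deep red
    Route("interstate detour", base_minutes=22, congestion_factor=1.0) # clear
]

print(best_route(routes, use_live_traffic=True).name)   # interstate detour
print(best_route(routes, use_live_traffic=False).name)  # main road
```

    Same roads, same algorithm – the only variable is how fresh the traffic signal is, which is exactly the gap between the big navigation apps and an insulated, periodically updated dataset.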

    Ahh, I think back to an airport trip a few years ago when Uber routed my driver straight into a construction site… where a street used to be. Makes sense now.

    So, why, after 10+ years of complaints from riders and drivers alike, is Uber’s strategy to smother the issue as quickly as possible? Do we even get a strategic PR response?

    Probably not.

    There isn’t really a compelling business case to change it.

    Uber’s routing technology was probably never built to prioritize accuracy. Its algorithms likely balance multiple incentives like trip acceptance rates, driver pay structures, and overall platform efficiency. For example, if an inaccurate but optimistic ETA nudges riders to book faster or drivers to accept more trips, then that could be seen as a success from a business standpoint, one that outweighs any case for recalibrating toward more realistic arrival times.

    Because the system leans heavily on its own proprietary data, it probably suffers from something known as algorithmic inertia. Instead of drawing on the vast, real-time traffic datasets that power leading navigation apps, Uber’s navigation learns from the routes its own drivers have historically taken. If a particular path was used a lot in the past, the model likely treats that route as “high-confidence” simply because it has the most trip history behind it. Over time, that reinforces the same routing decisions, good or bad, locking the system into its own feedback loop.
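    That feedback loop is easy to simulate. The sketch below is pure speculation on my part (invented route names and trip counts, nothing from Uber’s code): a router that scores routes by its own trip history keeps choosing whichever route it already favors, and every choice widens the gap, no matter what actual travel times look like.

```python
# Past trip counts (the model's only signal) vs. today's reality.
history = {"main road": 500, "interstate detour": 5}
actual_minutes = {"main road": 45, "interstate detour": 22}  # never consulted

def pick_route():
    # "High-confidence" here means most historical trips, not fastest today.
    return max(history, key=history.get)

for _ in range(100):
    chosen = pick_route()
    history[chosen] += 1  # each choice feeds straight back into the history

print(pick_route())          # still "main road"
print(history["main road"])  # 600: the confidence gap only widened
```

    The slower route ends up more “trusted” after the loop than before it, which is algorithmic inertia in miniature: the system’s own output becomes its training signal.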

    Despite years of complaints, the issue of mapping accuracy doesn’t seem like it’s gained enough traction to sway Uber’s drivers or customers toward competitors. And Uber spends only about 1% of its revenue on platform R&D, which seems rather low for a tech giant. But the message is clear. Uber is perfectly happy with its scotch tape of a navigation system.

    Anyway, when the routing feels slow, most riders blame the driver, not the platform. It’s the perfect setup for the tech-centered “share economy”: the platform keeps its margins, the worker takes the heat. But that’s a topic for another post.

  • I posted affiliate links before it was cool

    How a DIY space turned into capitalist clickbait hell…

    Affiliate marketing began in the 1990s as a way for early, scrappy web publishers to earn a percentage of e-commerce sales they helped generate. One of the earliest examples of this was CDNow’s BuyWeb program, which launched in 1994. The program was based on a simple idea: website owners who reviewed music releases could place tokenized product links for CDs their readers might be interested in purchasing. The website owners could then earn commissions from any resulting sales.

    In those early years, affiliate content focused on writing product reviews, posting about deals, and creating niche content. It was largely the domain of independent publishers, who ran forums, wrote long-form blog posts, and maintained product recommendation sites based on experience and genuine interest. It felt trustworthy, and many of these platforms gained loyal followings and strong SEO rankings early in the game, eventually becoming surprisingly profitable.

    As affiliate marketing turned DIY publishers into sustainable businesses in the 2010s, large corporations saw how lucrative this space had become, and their desire to get a piece of the pie led to edging out or swallowing up the independents.

    Media conglomerates began to acquire or launch their own affiliate commerce verticals. But their focus on prioritizing affiliate commissions over journalistic integrity meant that quality content quickly diminished into an algorithm-driven cash grab. When The New York Times acquired product review guide The Wirecutter, readers began to question whether products were still thoroughly tested or merely cursorily reviewed in pursuit of clicks.

    Today, large publishers churn out SEO-optimized listicles like “Top 25 Amazon Items You Didn’t Know You Needed” (spoiler: because you don’t) – frequently featuring low-quality, impulse-buy products selected only because price, trendiness, or other algorithmic signals predict strong rankings and conversions. I would assume it’s pretty obvious to most readers that content is now reverse-engineered from what converts rather than what informs. However, the conversions are still there, with Condé Nast reporting that as much as 20% of its revenue comes from affiliate partnerships. In any case, the landscape has shifted – affiliate content drives big sales not because the content is trustworthy or the products are quality, but because the content is everywhere, easy to skim, and exploits impulse behavior.

    Large finance and tech companies have also moved aggressively into affiliate marketing, with many investing in proprietary affiliate technology that embeds directly into their existing products. For example, Capital One uses its cardholder dashboards to promote its browser extension, Capital One Shopping (formerly Wikibuy), which promotes cash-back offers across online retailers. Powered behind the scenes by affiliate tracking, the plugin drops cookies, monitors activity, and applies Capital One’s own affiliate codes as users shop online. Capital One is uniquely positioned to maximize consumer reach with its affiliate technology, because it already has access to cross-promotion platforms, user segmentation and behavioral data, brand trust, and technological capital.

    With these advantages, large institutions are positioned to effectively absorb the affiliate model and sideline independent creators. In 2025, for example, a class-action lawsuit alleged that Capital One Shopping was overriding existing affiliate links, diverting commissions away from the smaller creators. Similar allegations have surfaced against other major players like Microsoft and PayPal. In turn, this has muddled advertiser trust, as these technologies can skew numbers to show referrals that the publishers didn’t originally capture. However, many advertisers continue to use these services, because they still reach millions of users, and, at the end of the day, advertisers are looking for results, not truth.

    Revenue-generating pathways for DIY publishers do still exist, but the playing field has shifted. A new kind of affiliate marketer has emerged in the form of short-form content creators. Due to the rise of TikTok and Instagram influencers, affiliate revenue has reshaped into something far more dependent on platform logic than personal trust. And unlike their blog-era predecessors, these new creators don’t own their distribution channels. They’re posting on platforms designed to prioritize engagement, meaning their reach depends on constantly shifting algorithms, and their success hinges on how well their content aligns with the goals of their platform. TikTok, for example, naturally boosts posts promoting products it already has inventory and deals for in TikTok Shop. Creators succeed when their content matches the platform’s sales priorities. It’s no longer based on products they actually use or trust, but reverse-engineered from what’s likely to convert.

    At the same time, we are seeing the affiliate networks themselves move beyond behind-the-scenes tracking and reporting – they’re launching their own publishing platforms and storefronts to capture more value across the affiliate marketing ecosystem. By running their own consumer-facing products, networks generate their own share of affiliate commissions rather than relying on the reach of third-party publishers. For example, Rakuten now operates its own cashback and deals site after acquiring Ebates, a successful publisher from within Rakuten’s own affiliate network.

    Although affiliate marketing as a side hustle remains popular and still enables small creators to generate a sizable income, the landscape has dramatically shifted. What began as an open, accessible system with a low barrier to entry has been overtaken by large institutions that once dismissed the space yet now want to control it. These institutions have layered themselves between creator and consumer, and, in doing so, they have eroded the core values that made affiliate marketing effective in the first place: trust, authenticity, and genuine relationships between creators and their audiences. In its present state, content is only a vehicle for sales, not substance. It’s an afterthought.

    Instead of rewarding trust and insight, the system in its current state favors those who can scale traffic or intercept it. To restore balance and rebuild trust, we need new models that resist the logic of this extractive system, like cooperatively-owned affiliate networks where publishers of all sizes can help shape attribution standards and uphold guidelines for transparency and legitimacy. Such networks would empower all publishers to participate equitably and have a voice in shaping the ecosystem’s rules. And establishing such standards can benefit publishers, creators, advertisers, and consumers alike.

    Here’s how it could work:

    1. Creator-Led Governance
      Publisher members would vote on attribution policies, fee structures, and ethical standards. For example, most major affiliate networks use last-click attribution, meaning 100% of the commission goes to the final affiliate link clicked before a sale. While this model is easier to track, it disproportionately benefits large publishers that focus on intercepting purchase-ready users, through tactics like search engine capture, browser plugins, or retargeted ads. These methods often override the efforts of smaller or niche creators who may have introduced the product earlier in a consumer’s buying journey. Over time, networks become more aligned with large publishers, since they appear to generate the most revenue. A cooperative model shifts power back to creators, ensuring that platform policies reflect the diverse needs of the community, not just corporate-scale traffic machines.
    2. Transparent Attribution Technology
      In most major affiliate networks, publishers can usually only see if they got credit, but not why they didn’t. They can’t trace if a coupon plugin or retargeted ad may have swooped in and gotten that last click. In an equitable, cooperative network, members may opt for full transparency into how referrals are credited. Clear, open-source, auditable logs can show the full referral path, including who introduced the product, when, and through which channel. Members can verify, audit, adapt, or even vote to modify how attribution is handled, reducing disputes and preventing common abuses.
    3. Shared Revenue Pooling
      With transparency in the attribution flow, networks could move beyond awarding 100% commission to a single publisher. Instead, the commission pool could be distributed fairly based on actual, verified impact, ensuring that everyone who contributes along the referral path receives their share.
    4. Ethical Advertiser Vetting
      Because advertisers are the ones who pay the bills in affiliate networks, affiliate networks are pressured to cater to their business goals. This sometimes results in practices like revoking commissions with little transparency or recourse, enforcing strict rules on publisher content, or implementing program changes without advance notice. In a cooperative system, both advertisers and publishers would have equal representation and voice, ensuring that policies and decisions reflect the needs and fairness of the entire community rather than the business goals of advertisers.
    5. Discovery That Favors Quality
      If an affiliate network hosts its own consumer-facing platform, this platform could focus on recommendation and search algorithms that reward thoughtful, high-engagement content over what’s trending or keyword-packed, returning power to smaller publishers who focus on quality and insight.
    6. No Platform Middleman
      Unlike affiliate networks that use their consumer-facing platforms to capture commissions directly, a cooperative network would aim to remain neutral as a utility layer that supports the relationship between creator, consumer, and brand. If the cooperative were to operate a content platform, the network itself would not compete with its publisher network by creating its own storefront or consumer brand. It only facilitates those connections.

    A cooperative model may not match the scale of the existing players overnight. But what it lacks in mass, it makes up for in mission. By emphasizing shared governance, it aims to restore the foundational value of affiliate marketing – rewarding “mom-and-pop” publishers for their trustworthiness, unique taste, and commitment to transparency. More than that, it’s a step toward reclaiming the internet’s roots as a DIY space for free, honest information exchange and pushing back against the dystopian flood of ads and clickbait that dominate today.

  • The revolution will not be in a SaaS dashboard

    Who’s profiting from the outrage?

    In 2012, a massive internet protest brought Congress to a halt. Millions mobilized online to stop SOPA and PIPA, two anti-piracy bills that threatened free expression and the free flow of information online. More than 7 million people signed petitions, and 4.5 million contacted their representatives. It was a catalyst moment that revealed how digital platforms could command widespread attention from traditional institutions.

    That moment helped spark a market boom in online petitioning tools, CRMs, and campaign platforms designed to scale digital activism. More than a decade later, many of these tools have grown rapidly, and it’s time to take a closer look. Who owns them now? Are they controlled by private equity? What are their data practices? Are they properly amplifying grassroots power? As these platforms become essential infrastructure for modern movements, we have to ask: who really benefits from these platforms, and at what cost?

    Here are four things to keep in mind about what’s happening behind the screen:

    Most major advocacy platforms, like NationBuilder and Action Network, are proprietary and closed-source, meaning they don’t share their code base or data practices in the interest of transparency. While it’s up to client organizations to dictate how to use the data they collect, the platforms still control the underlying infrastructure, retaining broad control over how that data is stored, accessed, and potentially leveraged.

    Over the last five years, Apax Partners, a UK-based private equity firm, has slowly acquired much of the progressive digital advocacy ecosystem, including EveryAction, ActionKit, Salsa Labs, and NGP VAN, consolidating them under a single, profit-seeking enterprise known as Bonterra Tech. As the data processor, Bonterra Tech now controls the digital infrastructure housing vast amounts of user data collected by its subsidiaries. While data processors often uphold policies against selling or sharing user data, there’s often little clarity around how they may use the data internally. For example, Apax Partners also holds interests in defense contractors, including a company blacklisted by the UN for operating in illegal settlements in Palestine. With this in mind, it’s fair to wonder: Could insights from social justice campaigns quietly feed strategies for profit in entirely different industries?

    With price tags that are themselves a barrier, dominant advocacy platforms are often built to promote hierarchical organizing, where decisions are made by a well-resourced central org or campaign leadership without sustained collaboration with impacted communities. The platforms routinely gather personal data such as emails, zip codes, donation histories, and even personal stories, but neglect to offer functionality that could drive the continued input or consent of the very people the campaigns aim to uplift, such as visibility into campaign strategies, insight into how their data is analyzed, or participatory tools for shaping campaigns.

    Instead, these insights are available only to campaign managers to help guide top-down decisions on messaging, targeting, and fundraising, often at the expense of local groups. For marginalized communities, these platforms reinforce a dynamic where they are mobilized only during urgent moments but excluded from ongoing strategic planning and decision-making.

    Nonprofits, like corporations, are also driven by return on investment. While the goal isn’t profit, success is still measured by quantifiable outcomes – usually the number of people who click, sign, or donate. Digital advocacy tools cater to this pressure by making it easy to generate high-volume engagement through quick actions. But this efficiency means focusing on surface-level interactions, favoring volume over substance. These tools can build off momentum and energy but aren’t designed to support long-term organizing or deeper community-building.

    The advocacy platform 5 Calls took off this year, recording as many as 700,000 engagements in one week. As a nonprofit with an open-source product, they value transparency and community involvement. But is that enough to counteract the broader trends shaping digital advocacy today? 5 Calls’ reliance on prewritten scripts and curated issues can reduce collective energy to a top-down transaction. Without strong feedback loops, they risk losing meaningful participant input and sustained engagement. As advocacy campaigns chase bigger numbers, tools like this get treated as substitutes for organizing, rather than one small part of a broader ecosystem.

    When people sign a petition, donate, or join a campaign, advocacy platforms collect not just their basic info, but also behavioral data, like what they click, when they engage, and how often they take action. This data is used to create pathways for fundraising and marketing automation, creating detailed profiles that can be leveraged for targeted outreach. These targeting tools are often a key selling point for platforms, allowing them to charge clients more based on improved engagement and donation results.

    Platforms also benefit monetarily from the visibility and credibility of hosting high-profile campaigns. That attention attracts investors and institutional partners, concentrating power within the tech infrastructure. Grassroots movements, meanwhile, provide the energy but may not receive the resources. Take Change.org as a key example – while it’s branded as a public platform for citizen action, it’s actually operated by a for-profit company. The platform monetizes petitions by encouraging targeted viewers to chip in financially. However, those donations typically go to Change.org itself, not to the organizers of the petitions or the communities impacted. This is not an uncommon revenue model for online petitioning tools – the emotional urgency of social causes becomes a revenue stream, while the people doing the actual work remain unsupported.

    What’s the Alternative?

    To build truly liberatory digital infrastructure, we need advocacy tools that involve impacted individuals and communities in shaping how technology is designed and deployed. Open-source platforms offer a promising path, allowing movements to adapt and audit tools to serve their needs rather than corporate incentives. Beyond the tech stack, data sovereignty must also be prioritized with clear, enforceable guidelines about who controls information and how it’s used. And rather than tools that funnel people into campaigns for short-term gains, infrastructure should support long-term relationship-building and collective care.

    The future of movement tech shouldn’t look like a sales dashboard – it should look like a transparent, participatory space where people co-create, share responsibility, and benefit collectively.