Category: AI

  • SWDETROIT.info

    swdetroit.info

    Role

    Product Design
    Data Collection

    Status

    Launched

    Partners

    Graft Living Studios
    nodu

    Find Resources. Share Resources.

    SWDetroit.info is a community-driven platform that connects residents with local resources, events, and services in Southwest Detroit. It provides real-time information about community initiatives, volunteer opportunities, and neighborhood news, with the goal of strengthening community engagement and improving access to essential needs in the area.

    The app was launched as a rapid response project with the goal of facilitating the flow of resources bidirectionally in the local community to provide more efficient relief to the flooding crisis that impacted the Southwest Detroit neighborhood in February 2025.

  • Services

    Services

    Affordable Website Security Audit for Small Orgs

    We provide a lightweight external security audit – no backend credentials required – to help you spot the most common risks that leave small organizations vulnerable.

    What you get: a report with screenshots, evidence, and clear explanations, plus a prioritized list of recommended fixes and quick wins your web admin or developer can apply immediately.

    Ideal for WordPress, Squarespace, and small custom sites

    Starting at

    $150

    We check for:

    • exposed accounts and API keys
    • weak endpoints
    • surface-level vulnerabilities linked to malware injection, phishing, brute-force attacks, and DoS risks
    • data tied to known leaks
    • exposure to common web exploits including SQL injection and cross-site scripting
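    To give a flavor of the first item on that list, here is a minimal sketch of scanning a page's public source for strings that look like leaked credentials. The pattern list and function name are hypothetical illustrations, not our actual audit tooling, and a real scan uses a far larger rule set.

```python
import re

# Illustrative patterns for a few well-known credential formats
# (a hypothetical subset; real scanners ship hundreds of rules).
KEY_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Google API key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "Generic secret": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_for_exposed_keys(page_source: str) -> list[tuple[str, str]]:
    """Return (label, matched string) pairs for anything that looks
    like a credential accidentally shipped in public page source."""
    findings = []
    for label, pattern in KEY_PATTERNS.items():
        for match in pattern.findall(page_source):
            findings.append((label, match))
    return findings
```

    A scanner like this only flags candidates: every hit still needs human review, since matching a key's format doesn't prove the credential is live.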

    Human-Centered Website Accessibility Audit

    Get an affordable, human-centered accessibility audit designed for small organizations. Our audits focus on identifying common issues that impact people with disabilities and expose your organization to legal and reputational risk.

    What you get: a report with examples and evidence, plus a prioritized roadmap for implementation.

    Ideal for WordPress, Squarespace, and small custom sites

    Starting at

    $250

    We check for:

    • descriptive tags and text for screen readers
    • logical content structures
    • alt text coverage
    • hands-free navigation support
    • color contrast compliance
    • other common WCAG 2.1 and ADA risk areas
    • insights to improve the overall user experience
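    As an illustration of what two of these checks involve, here is a minimal sketch, using only the Python standard library, of an alt-text coverage scan and the WCAG 2.1 contrast-ratio calculation. The class and function names are illustrative, not our actual audit tooling.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute.
    Note: an intentionally empty alt="" can mark a decorative image,
    so flagged images still get reviewed by hand in a real audit."""

    def __init__(self):
        super().__init__()
        self.total_images = 0
        self.missing_alt = []  # src values of images with no usable alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_images += 1
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            if not alt or not alt.strip():
                self.missing_alt.append(attr_map.get("src", "<no src>"))

def contrast_ratio(fg, bg):
    """WCAG 2.1 contrast ratio between two colors given as (r, g, b) in 0-255."""
    def luminance(rgb):
        def channel(c):
            c /= 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

    For reference, WCAG 2.1 AA requires a contrast ratio of at least 4.5:1 for normal text (3:1 for large text); black on white scores the maximum, 21:1.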

    Site Care for Small Teams

    Keep your site content fresh without hiring a full-time web team. We provide content management services with fast turnaround tailored for nonprofits, small businesses, and grassroots orgs.

    We handle: setting up content hierarchy and governance, creating new landing pages, updating existing pages, writing and publishing blog posts and news updates, providing graphics and media support, copywriting, aligning basic SEO and accessibility, and troubleshooting layout or publishing issues.

    What you get: reliable content updates with quick turnaround, direct access to a content manager, end-to-end support for web copy and graphics, and actionable recommendations for site optimization and upgrades.

    Cost: $200+/month


    Product Design & Roadmapping

    Sometimes you need a whole new digital tool. Whether it’s a resource hub, an interactive map, or a data-driven app, we help small organizations design products that fit their mission, budget, and users.

    What we do: product discovery & research, user-centered design, technical planning, roadmapping

    What you get: high-fidelity wireframes, user flows, and prototypes; recommended platforms and integrations for lightweight builds; a roadmap with prioritized features and clear milestones; an assessment of total cost of ownership and long-term sustainability; plus connections to lean, mission-aligned development teams

    Cost: Project-based – reach out to discuss your needs!

  • The return of the webmaster

    Thank god one builder is enough again…

    In today’s tech landscape, entire disciplines exist just to manage the sheer bloat of oversized teams. Anyone unlucky enough to have held the role of ‘Scrum Master’ knows the horror of a job whose core function is to herd developers through layer after layer of administrative hurdles, meetings, and formalities, all ending in reports and graphs that nobody really looks at.

    Why is this a thing? Because what used to be a simple, focused build has blown out into multiple frontend teams, backend teams, UX designers, UI designers, copywriters, content strategists, and a thick, suffocating layer of process on top of the actual work. If you’re in this field, then you know it’s perfectly commonplace for a single landing page build with minimal functionality to drag on for a year or more.

    It wasn’t always like this.

    In the 1990s, when companies needed a website, they hired just one person – a webmaster. One person designed, coded, and managed the whole thing, often on top of their regular admin or IT roles. Call me old-fashioned, but I’m still wondering how the heck we got to a point where six-figure budgets and multi-million-dollar agencies with 200+ staffers are the norm for shipping sites.

    In the good ol’ days, the webmaster was that full-service end-to-end digital product team. They mapped the page layouts, user flows, and navigation, and then hand-coded everything in raw HTML (later adding CSS, JavaScript, even Flash) to bring a site to life. On top of that, they’d design logos or edit graphics when needed.

    They also set up and maintained the web server, pushed deployments through FTP, and kept the content fresh by adding, updating, and formatting pages. Webmasters handled early SEO too, tracking hit counters, submitting sites to search engines, and implementing clever techniques to boost search rankings.

    And they even doubled as customer support. Remember those “Contact the Webmaster” links at the bottom of old sites? That was the catch-all inbox for broken links, typos, and cross-browser quirks, and the webmaster was the only one who knew how to answer them.

    That role is a lost piece of history. Search for “webmaster” jobs today and, if you find the antiquated title at all, you’ll see it used for caretakers of legacy systems – usually government or education institutions still maintaining sprawling sites stuck on old frameworks – or you’ll find it has branched off into more content- or devops-specific roles. All of these are only slices of what the webmaster used to be.

    Web 2.0 brought dynamic, database-driven sites with enhanced interactivity and more moving parts, and that leap in complexity also expanded and fragmented the work. Not to mention that tech companies became stuffed with venture capital, so they could afford to carve jobs into silos. Specialization became a way to show legitimacy. Suddenly, you “needed” backend engineers for databases and servers, frontend engineers for rich interfaces, UX designers to map flows, sysadmins for scaling and uptime, QA testers to catch bugs, and project managers to herd it all. It’s only gotten bigger and more bloated as time has gone by. And the true generalist webmaster – the one who could conceive, design, build, and ship a whole platform end-to-end – was killed off in the process.

    But guess what? We’re back, baby! With today’s vibe coding and AI-assisted dev tools, much of that “necessary” specialization could be collapsing again, because a single person can now spin up a full-featured MVP in hours.

    Of course, not everyone in the industry is rushing to embrace this concept. Apart from general skepticism around AI tools, there is the element of job security – if one person can ship what used to take a team of twenty, what happens to the other nineteen people? For many, vibe coding feels like another way for corporations to slash payrolls and scale profits. That fear is justified, too. Since gen AI tools went mainstream, we’ve been knee-deep in the biggest surge of tech layoffs since the dot-com bubble – whether causation or correlation, this is still our current reality.

    Corporations may have dollar signs in their eyes for AI tools, but not everything in the world has to be for them. The rise of AI tools also means that small teams and underfunded orgs can finally build what they’ve been priced out of. The same cost-cutting tools that make Big Tech salivate can also put real power back in the hands of the people who’ve been locked out of innovation.

    AI tools are actually handing us something that Big Tech, in spite of all its profit-chasing, has completely lost sight of: efficiency without bloat. That’s the rebellion. We can wield vibe coding to help individuals, grassroots orgs, and small teams finally compete with institutions that waste millions just to stuff their own pockets, and maybe even render their services obsolete.

    And thank the fucking lord. After years of climbing through layers of administrative fluff to get products shipped, we can finally do the work in two weeks that everyone else drags out for two years.

  • FOR IMMEDIATE RELEASE: PLEA Selected as 2024 Innovation Fund Fellow by The Workers Lab

    We’re thrilled to announce that PLEA—our AI-powered advocacy platform focused on increasing transparency and collective action around prison labor—has been selected as a 2024 Innovation Fund Fellow by The Workers Lab!

    Alongside eight other visionary projects, PLEA is part of a powerful cohort of worker-centered innovations tackling challenges at the intersection of climate justice, artificial intelligence, and labor rights. With The Workers Lab’s support, we’re building tools that empower communities, surface critical data, and drive systems-level change from the ground up. We’re honored to be part of this network working toward a future where all workers are safe, healthy, secure, and have power.

  • On blanket anti-AI attitudes

    Blame the boss, not the bot.

    It’s easy to adopt a blanket anti-AI attitude when attention-grabbing arguments like “it’s stealing from us” and “it’s destroying the planet” have made their way to the front of the room. But in reality, most people are somewhere in the middle, stuck between concern and convenience, falling into an uncommitted “maybe it’s bad, but I’ll use it anyway” judgment. And sometimes this feels like the most disconcerting stance of all, even more so than the opposing extreme where one straight up stans oligarchy and denies climate change. For this grey stance shows just how easy it is to quiet our concerns when we’re tethered to prompts and able to turn a blind eye.

    AI tools – specifically generative chatbots like ChatGPT – scrape data without consent, tear through precious natural resources, and draw us into new dependencies that feel increasingly forced.

    Nevertheless, my stance is that of the grey area. Am I one of those slippery types that I mentioned, taking the blue pill so I can chase the dopamine rush of indulging in a thoughtless conversation with a chatbot without having to feel too guilty about it later? That, of course, is not the way I look at it, but the polarity of the arguments makes it feel like my only choices are to swear off AI completely, or be a mindless consumer who prioritizes self-gratification over community or consequence. One or the other, no in between. And in the era of short-form content, our attention spans rarely tolerate nuance. But the truth is, my own relationship with AI simply isn’t an all-or-nothing position.

    If you think about it, the root of most tech skepticism is actually Big Tech skepticism. The corporations rolling out the most prominent AI tools are also the ones profiting off unethically harvested data, undermining privacy protections, experimenting on us in covert social engineering initiatives, and God knows what else! Their consolidated power has only given them more agency to pollute the planet, squeeze workers across global supply chains, and build ecosystems that force us to accept surveillance as the norm. The business models are extractive by design, so the products feel exploitative.

    When the most powerful AI platforms are controlled by a handful of large tech oligarchs, taking a firm stance against these companies feels inseparable from rejecting AI technology altogether. But is AI inherently bad?

    Technology didn’t always feel so monolithic. There was a time when innovation was driven by small communities building tools for open research and public benefit. The anti-war counterculture of the 1960s was a driving force behind embracing technology as a tool for liberation and change. And some may recall the early internet as a decentralized playground of experimentation, collaboration, and possibility, far removed from today’s hyper-commercialized, dystopian space.

    It is true that many grassroots innovations have been absorbed, repackaged, and monetized by Big Tech. The speed at which such projects have turned into the proprietary, exclusive, and increasingly extractive products of Big Tech will leave many of us doubting that an alternative world ever actually existed. Yet pockets of community-driven tech work still persist. There are still people building tech for justice, education, care, and community. Too often, these efforts are overlooked or dismissed in conversations that equate technology solely with Big Tech’s offerings.

    I believe that tools are not inherently good or evil; it’s how we use them that matters. Choosing to reject AI tools because of their problematic uses or corporate ties can feel principled, but it also risks leaving innovation and influence entirely in the hands of Big Tech.

    Consider how other technologies have evolved. Telephones, automobiles, electricity – all shaped by monopolies, extractive industries, and environmentally harmful practices – have nonetheless become essential to social movements, communication, and daily life. Smartphones, despite complex ethical concerns, have empowered grassroots organizing and expanded access in ways once unimaginable. If we had written off these tools entirely, we might have ceded them to the very forces we were trying to resist. Instead, history shows us that when people engage with technology, they’re more equipped to understand its mechanics and biases, advocate for better policies, and build community-driven alternatives.

    In the same way, AI tools present both challenges and possibilities. What we do know for sure is that they are now part of the world we navigate, and they aren’t going anywhere. Refusing to engage will not halt the influence of AI; it will only limit the range of voices shaping its future.

    It’s important to hold powerful actors accountable for abuse of power. But it’s also worth reflecting on why certain things are put under the microscope while others aren’t. The carbon footprint of doomscrolling social media, streaming our favorite shows, or endlessly FaceTiming loved ones rarely sparks the same level of public outrage, even though these activities also harvest our data, surveil us, and rely on similarly energy-intensive infrastructure. What troubles me the most about blanket anti-AI stances is that they seem to focus more on signaling moral purity than pursuing structural change. While we argue over whether using these tools makes us complicit in harm, the tools themselves are evolving under the control of a handful of powerful actors.

    Ultimately, the question becomes: how can we participate in shaping AI’s role? How can we leverage its potential while staying critical of its origins and impacts? Perhaps that grey stance of “maybe it’s bad, but I’ll use it anyway” doesn’t have to be a position of apathy and complicity but one of awareness. It can reflect a willingness to engage with the tools while staying alert to the systems behind them. If opting out entirely means falling behind, and blind adoption means giving in, maybe this uncomfortable middle is the most honest and strategic place to be.

  • PLEA

    PLEA

    Role

    Product Design
    Data Collection
    Development

    Partners

    The Workers Lab
    Raft Foundation

    Status

    In Development
    GitHub

    PLEA: Exposing the Hidden Economy of Prison Labor

    PLEA (Prison Labor Exploitation Awareness) is a community-led, AI-assisted organizing infrastructure and action toolkit that leverages machine learning and generative AI to collect, organize, and analyze publicly available and user-generated data on prison labor, transforming insights into opportunities for individual and collective action.

    PLEA’s goal is to create new pathways for transparency, dignity, and economic justice within one of the country’s least visible labor systems.

    Key Features

    Want updates or to join early testing?