Women in AI: The Dilemma Nobody Talks About

Last Friday, I spoke at AI Collective Singapore for International Women's Day, on the topic of "Women in AI". The thing I kept coming back to while prepping for it wasn't a celebration. It was a constant tension.
As a woman in tech who builds with AI every single day, I feel pulled in two directions. I want to champion AI as a tool that empowers women to do more with less. At the same time, I can't ignore what AI costs: the artists whose work was scraped without consent, the biases baked into systems that affect women disproportionately, the jobs being quietly displaced. How do you promote a tool you also have serious concerns about?
That's the dilemma. I don't think enough people in the AI space talk about it honestly.
The Gap Is Real, and It's Not What You Think
In the United States, 50% of men use popular AI tools compared to just 37% of women. That gap holds even within the same occupations. Women are 16 percentage points less likely to incorporate AI into their work tasks.
Image from Artiba
This is often framed as a problem to fix. Women aren't adopting AI fast enough. Women are being left behind.
But read the detail. Women cite privacy concerns as a primary deterrent. They worry more about AI hallucinations, inherent biases, and job displacement. AI chatbots have been documented recommending lower salaries for women than for men with identical profiles, perpetuating wage gaps that already exist in the real world.
Women aren't slow. We're informed. And that caution makes us better practitioners of AI, when we do choose to use it.
My Approach: Intentional AI
I don't avoid AI. I use it constantly, across my work for ragTech, for content, for code, for automations. But every single use is a deliberate choice, not a reflex. I call this intentional AI: weighing the scale of impact each use has against the known costs, and saying no when the math doesn't hold.
Here's what that looks like in practice.
I have never used image or video generation.
Image from Undetectable.ai
Except for the early days, when I didn't realize the AI filters on TikTok were, well, AI, I have never used image or video generation. Even as someone who posts visual content every single day across social media, a podcast, and a tech brand. Instead, I find lightweight alternatives: SVG and HTML code to generate graphics, code-generated illustrations. Our children's digital literacy initiative, FutureNet, uses entirely code-generated, doodle-like illustrations. This sidesteps the ethical problem of image generation drawing on artists' work without consent or royalties, and it's more resource-efficient too.
To put numbers on that: here is the SVG code for a simple cartoon frog, the kind of graphic I'd generate instead of prompting an image model.
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 200">
<!-- body and head -->
<ellipse cx="100" cy="130" rx="60" ry="45" fill="#4CAF50"/>
<circle cx="100" cy="85" r="40" fill="#4CAF50"/>
<!-- eyes: white sclera, dark pupils -->
<circle cx="82" cy="70" r="12" fill="#fff"/>
<circle cx="118" cy="70" r="12" fill="#fff"/>
<circle cx="84" cy="70" r="7" fill="#222"/>
<circle cx="120" cy="70" r="7" fill="#222"/>
<!-- smile -->
<path d="M82 100 Q100 115 118 100" stroke="#222" stroke-width="3" fill="none"/>
<!-- legs -->
<path d="M50 140 Q30 160 25 175" stroke="#388E3C" stroke-width="8" fill="none" stroke-linecap="round"/>
<path d="M150 140 Q170 160 175 175" stroke="#388E3C" stroke-width="8" fill="none" stroke-linecap="round"/>
<path d="M60 165 Q35 180 20 185" stroke="#388E3C" stroke-width="8" fill="none" stroke-linecap="round"/>
<path d="M140 165 Q165 180 180 185" stroke="#388E3C" stroke-width="8" fill="none" stroke-linecap="round"/>
<!-- nostrils -->
<circle cx="93" cy="88" r="3" fill="#2E7D32"/>
<circle cx="107" cy="88" r="3" fill="#2E7D32"/>
</svg>
And here's the frog graphic that was generated from that code:
Granted, the frog still needs some tweaking to look more like a frog. But this was a first attempt at generating a frog with SVG code. And given the computational cost comparison, I could prompt my AI assistant to tweak it 17 more times before the compute and energy usage matched generating a single AI image of a frog.
That code is about a kilobyte. It renders at any resolution without quality loss, can be edited as plain text, and is version-controllable. Measured with OpenAI's cl100k_base tokenizer, generating it costs roughly 300 tokens of LLM output.
Now compare: asking DALL-E 3 to generate "a simple cartoon frog" produces a PNG that typically lands at 200KB to 500KB, roughly 200 to 500 times larger, at a fixed resolution. The model runs approximately 50 denoising steps through a multi-billion parameter network to produce it.
The difference shows up in electricity. A 300-token SVG generation uses roughly 0.0003 kWh (based on Patterson et al. 2022's inference estimates of ~0.001 kWh per 1,000 tokens). A single AI image generation uses approximately 0.003 to 0.01 kWh depending on the model, around 10 to 33 times more. The IEA's 2024 energy report puts a representative cloud image generation at ~0.005 kWh, which is about 17 times the cost of the SVG equivalent.
Water compounds the gap. Li et al.'s 2023 paper "Making AI Less Thirsty" measured data center cooling at roughly 1 liter of freshwater per kWh on average. By that measure: the SVG costs around 0.3mL of water. The AI image costs around 5mL, about 17 times more. At a thousand graphics a year, that's 300mL (a glass of water) versus 5 liters. Individually trivial. DALL-E was processing millions of requests per day at peak. The aggregate is not trivial.
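To make the arithmetic above easy to check, here is the same comparison as a small Python script. The constants are the cited estimates (Patterson et al.'s ~0.001 kWh per 1,000 tokens of inference, the IEA's ~0.005 kWh per cloud image generation, Li et al.'s ~1 liter of cooling water per kWh); they are rough averages, not measurements of any particular model.

```python
# Back-of-envelope comparison: SVG-via-LLM vs. diffusion image generation.
# All constants are the rough published estimates cited above.

KWH_PER_1K_TOKENS = 0.001    # LLM inference energy (Patterson et al.)
KWH_PER_IMAGE = 0.005        # representative cloud image generation (IEA)
LITERS_WATER_PER_KWH = 1.0   # data-center cooling average (Li et al.)

svg_tokens = 300
svg_kwh = svg_tokens / 1000 * KWH_PER_1K_TOKENS          # ~0.0003 kWh

energy_ratio = KWH_PER_IMAGE / svg_kwh                   # ~17x
svg_water_ml = svg_kwh * LITERS_WATER_PER_KWH * 1000     # ~0.3 mL
image_water_ml = KWH_PER_IMAGE * LITERS_WATER_PER_KWH * 1000  # ~5 mL

graphics_per_year = 1000
print(f"energy ratio: {energy_ratio:.0f}x")
print(f"yearly water, SVG: {svg_water_ml * graphics_per_year / 1000:.1f} L")
print(f"yearly water, images: {image_water_ml * graphics_per_year / 1000:.1f} L")
```

The point of writing it out is that every number in the comparison is a simple product of three published estimates; swap in your own constants and the ratios update.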
Everything I do, I do in code.
A blog post written in markdown code. Image from Developer Bacon
I write my posts in code, and my workflows and plans live in code repositories. I call this "business as code". The upside goes beyond ethics: context is never wasted, I never spend AI compute translating between formats, and everything is version-controlled. And because I default to code, I lean on AI where it genuinely excels: systematic, explicit, deterministic tasks, not open-ended creative generation.
I never generate the same thing twice.
"Don't Repeat Yourself (DRY)" is a Programming Principle. Image from Symflower
If I find myself asking AI to do the same task repeatedly, I take that as a signal to build a system for it instead. I co-wrote a script with AI that automatically converts our blog posts into newsletter format. Now I run it once per post and never think about it again. Repeated AI tasks are engineering problems waiting to be solved.
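To make that concrete, here is a minimal sketch of what such a conversion script can look like: a few regex rules that turn a markdown post into plain newsletter text. The function name and rules here are hypothetical, illustrative stand-ins for the actual ragTech script.

```python
import re

def post_to_newsletter(markdown_text: str) -> str:
    """Convert a markdown blog post into plain newsletter text (sketch)."""
    text = markdown_text
    # Drop image embeds; a newsletter gets links, not inline images.
    text = re.sub(r"!\[[^\]]*\]\([^)]*\)", "", text)
    # Turn markdown links [label](url) into "label (url)".
    text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"\1 (\2)", text)
    # Strip heading markers but keep the heading text.
    text = re.sub(r"^#+\s*", "", text, flags=re.MULTILINE)
    return text.strip()

print(post_to_newsletter("# Hello\nRead [this](https://example.com)!"))
```

Ten lines, written once with AI's help, and the repeated prompt disappears: every future post goes through the same deterministic rules instead of a fresh generation.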
I don't use agents beyond coding agents.
Meta Security Researcher's AI Agent Accidentally Deleted Her Emails. Image from PCMag
Because my workflows are codified, I can write specific automations directly in my codebase: email automation, content generation, content repurposing. I prefer defining the rules through code so I never lose control or face the problem of an agent interpreting my intent incorrectly. There was a case recently where an AI agent deleted emails from a senior researcher's inbox because it misread the instructions. That's a real cost.
All my thinking is my own.
When I write, every point, every argument, every opinion started in my head. I do my own research using traditional search engines, read through articles myself, and verify information based on my own judgment. AI cleans up my language and sharpens my prose. The line I draw: if it's my work, AI can help me do it better. It doesn't do it for me.
Code is the least unethical use case.
Code models are trained largely on open-source material that developers already intended for public use. Code is explicit and systematic; there's no room for AI to make artistic or tonal choices on my behalf. That's why I use code as a native format for almost everything: diagrams in PlantUML or Mermaid, reports and presentations as HTML, pitch decks as websites rather than PDFs. Less data transfer, less AI compute, and the output is mine.
Women Are Already Leading Ethical AI
Globally, women make up about 22% of AI professionals, according to UNESCO. But look at who is doing the ethical work, and we're everywhere.
Fei-Fei Li. Image from Stanford University
Fei-Fei Li founded AI4ALL to make AI education inclusive and accessible for underrepresented groups. She has said: "I believe in human-centered AI to benefit people in positive and benevolent ways. It is deeply against my principles to work on any project that I believe weaponizes AI."
Joy Buolamwini. Image from Angel Agyapong's Medium article
Joy Buolamwini founded the Algorithmic Justice League after her 2017 Gender Shades project at MIT revealed intersectional biases in facial recognition systems, biases that hit women of color hardest.
Timnit Gebru. Image from Newtral
Timnit Gebru co-led Google's Ethical AI team until she was fired for co-authoring a paper on the risks of large language models. She co-founded Black in AI and continues her research through the DAIR Institute.
Frances Haugen. Image from CAA Speakers
Frances Haugen left Facebook and, at great personal risk, became the whistleblower behind "The Facebook Files", exposing how Meta consistently prioritized profit over public safety.
Kate Crawford. Image from Kate Crawford's website
Kate Crawford, co-founder of the AI Now Institute at NYU, wrote "Atlas of AI", an award-winning examination of the hidden labor, environmental, and political costs of AI systems. She has spent her career demanding transparency and accountability from the industry.
Karen Hao. Image from Time
Karen Hao is an AI expert and investigative journalist, and the author of "Empire of AI", an account of the history of OpenAI and its culture of secrecy and devotion to the promise of artificial general intelligence.
These women didn't just raise concerns in private. They built institutions, published research, gave up stable jobs, and took personal risks to make AI more accountable. Much of the most important work happening in the field is theirs.
Women Are Holding the Line in Big Tech
Within major tech companies, women are disproportionately in the roles that push for responsibility. At Google, Marian Croak leads Responsible AI Research, Kate Brandt serves as Chief Sustainability Officer, and the company's first-ever Chief Decision Scientist was Cassie Kozyrkov. Daniela Amodei is President and co-founder of Anthropic, one of the leading AI safety companies in the world. At Amazon, Apple, Dell, Salesforce, Tesla, and Verizon, women hold or have recently held the chief sustainability and chief impact roles.
Women currently hold 63% of executive sustainability roles in the corporate world. We are not absent from power. We are disproportionately in the rooms where the hard conversations are happening.
Women Build AI for People, Not Just Profit
Rana el Kaliouby co-founded Affectiva to build emotion AI for mental health applications, with explicit ethical guidelines around consent and privacy. Daniela Rus leads MIT's Computer Science and AI Laboratory, advancing soft robotics for disaster response, and creates environments where technical skills and ethical frameworks are taught together. Daphne Koller co-founded Coursera, giving over 100 million learners access to education, and later founded Insitro to use AI for drug discovery.
The throughline across all of these: building AI that solves real problems for real people, not just problems that are profitable to solve.
Where This Leaves Me
Research suggests women are naturally community-centered: when we adopt tools and practices, we tend to think about how they lift the people around us, not just ourselves. That orientation is exactly what AI needs more of right now.
The answer isn't for women to avoid AI out of caution. The answer is for more of us to enter the space and bring that community lens with us. To use AI deliberately, to hold it accountable, and to build things with it that actually make people's lives better.
The dilemma I opened with, championing AI while confronting its costs, doesn't have to be a contradiction. Women have been navigating exactly that tension at the highest levels of the field for years. We just don't always get credit for it.
I want AI to be a tool that genuinely empowers women without taking from others to do it. Getting there requires people who are willing to sit in the uncomfortable space where both of those things are true at once.
If you were at the talk, thank you. If you weren't, I hope this gives you a sense of what we covered. I'd love to hear where you land on this.
Watch our podcast episode on Women Leadership in Tech!
On Natasha
Image of Natasha Speaking at the AI Collective from AI Collective's LinkedIn
Natasha Ann is a software engineer and one of three co-hosts at ragTech, a Singapore-based tech podcast and media brand on a mission to simplify technology and make it accessible, fun, and engaging for everyone. Through the podcast, YouTube, Instagram, and TikTok, Natasha and her co-hosts Saloni Kaur and Victoria Lo cover AI, software, startups, and real life in tech with honesty and without the jargon. ragTech also runs FutureNet, a research initiative exploring the digital landscape for children and adolescents, with a focus on building safe and meaningful digital spaces for the next generation.
Beyond the podcast, Natasha serves as Partnerships Lead at Women Devs SG, a community supporting women in software development in Singapore. She is an active speaker on the local and regional tech conference circuit, with past talks spanning AI, ethics, and sustainable tech. She spoke at Green IO Singapore, the country's first tech sustainability conference, and will be returning as emcee for the April 2026 edition. Across her social media platforms, she creates content that makes technology approachable and engaging for both technical and non-technical audiences.
ragTech is a podcast by Natasha Ann Lum, Saloni Kaur, and Victoria Lo where real people talk about real life in tech. Our mission is to simplify technology and make it accessible to everyone. We believe that tech shouldn't be intimidating; it should be fun, engaging, and easy to understand!
✨ragTech Spotify: https://open.spotify.com/show/1KfM9JTWsDQ5QoMYEh489d
✨ragTech YouTube: https://www.youtube.com/@ragTechDev
✨Instagram: https://instagram.com/ragtechdev
✨Other Links: https://linktr.ee/ragtechdev
