Last week I was speaking with a friend who’s implementing AI solutions to train sales teams. He mentioned a potential pitfall of these systems: the #cultural delta. A Japanese evaluation feels completely different from an American one, and that gap can affect how models interpret feedback and shape learning outcomes. A few days later, I came across a Harvard University study that mapped ChatGPT’s value system across 65 countries using the World Values Survey. The result pointed in the same direction: GPT aligns closely with the U.S., U.K., Canada, Germany, and Western Europe, far from countries such as Ethiopia or Kyrgyzstan. In essence, #ChatGPT thinks like the West. Psychologists describe this mindset as WEIRD: Western, Educated, Industrialized, Rich, Democratic. Most of its training text and feedback come from WEIRD populations, so its worldview feels: ➡️ individualistic ➡️ analytical ➡️ secular ➡️ rooted in Western communication and moral frameworks. The authors summed it up well: “WEIRD in, WEIRD out.” It’s a useful reminder that AI carries the culture that forms it. As new models grow from other linguistic and cultural ecosystems, we may start to see different ways of reasoning, empathizing, and deciding emerge. 👉 How should companies designing global AI tools handle this cultural bias in training data? #AIethics #CulturalDiversity #ArtificialIntelligence #FutureOfAI
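For readers curious about the mechanics behind a study like this, here is a minimal sketch of the kind of comparison it implies: prompt the model with World Values Survey items, score its answers on the same scale as each country's averaged responses, and rank countries by distance. All numbers below are made up for illustration; this is not the study's actual data or code.

```python
import numpy as np

# Hypothetical averaged answers to a handful of World Values Survey items,
# scaled to [0, 1]. These figures are illustrative, not real survey data.
country_profiles = {
    "United States": np.array([0.82, 0.75, 0.68, 0.71]),
    "Germany":       np.array([0.80, 0.78, 0.74, 0.69]),
    "Ethiopia":      np.array([0.35, 0.42, 0.30, 0.44]),
    "Kyrgyzstan":    np.array([0.40, 0.38, 0.36, 0.47]),
}

# The model's answers to the same items, collected by prompting it with each
# survey question and mapping its reply onto the same scale (also made up).
model_profile = np.array([0.81, 0.74, 0.70, 0.68])

# Rank countries by Euclidean distance: smaller means the model's "values"
# sit closer to that country's survey profile.
distances = {
    country: float(np.linalg.norm(model_profile - profile))
    for country, profile in country_profiles.items()
}
for country, dist in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"{country:15s} distance = {dist:.3f}")
```

On toy numbers like these the model lands closest to the Western profiles, which is the pattern the study reports with real survey data.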
AI Bias Issues
Explore top LinkedIn content from expert professionals.
-
Women are losing their voices on this platform and the data is right in front of us... There has been a clear shift on LinkedIn in the last twelve months and I have watched my reach fall from millions of views per post to a few thousand. I thought it was just me. Then I started talking to other women and realised it is happening everywhere. Women who lead teams. Women who run companies. Women who speak with honesty and softness and emotional depth. It is the same pattern - their content is being pushed down. Posts written with emotional depth, vulnerability, or soft honesty are receiving far less reach than before. Yet when men speak about the same topics, in the same emotional tone, the algorithm seems to reward it. This is not because women are writing less powerful content. It is because the algorithm is rewarding what it interprets as authority language, often coded as male. Posts that use agentic words, direct statements and a more assertive tone are being pushed out more widely. Empathy-based content is being limited. The data is visible on thousands of accounts. The irony is that men like Jake Humphrey, Steven Bartlett and Daniel Priestley talk about these things often, but their delivery is still framed as leadership, advice and direction. When women speak from the same emotional space, the algorithm reads it as personal reflection and deprioritises it. When women communicate with nuance, reflection or emotional truth, the reach drops. This is bias in design. Even in 2025. I refuse to believe that women’s perspectives are less valuable. I refuse to believe that softness is less important than strength. I refuse to believe that emotion belongs only to men with podcasts. So I am running some experiments over the next few weeks. Different styles of writing. Different types of images. Even a different version of myself generated through AI to see how the platform responds. Because if a male version of me receives more reach than the real me, then we have a bigger problem than an algorithm update. If you are a woman who has noticed the same thing, I would love you to share this post. The more voices we bring together, the harder this becomes to ignore. Visibility should not depend on gendered language patterns - it should depend on the value behind the message. Our voices matter. They always have. And they will continue to, even if the system needs reminding. Yes, your number of followers has a significant impact, but when my impressions drop from 7 million to 900 consistently, something is clearly off. Megan Cornish, LICSW Katie Langdon Women in Pharma (WiP), 💥 Amy Kean 💥 Chantal Cox Katrina McGuire CertRP Deirdre O'Neill Cindy Gallop Jane Evans Keen to know your thoughts in the comments… 👇
-
LinkedIn just responded to the bias claims. They think they refuted my research. I believe they just confirmed it. Following the recent discussions on whether the algorithm suppresses women's voices, LinkedIn's Head of Responsible AI and AI Governance, Sakshi Jain, posted a new Engineering Blog post to "clarify" how the feed works (link in comments). I’ve analysed the post. Far from debunking the issue, it inadvertently confirms the exact mechanism of Proxy Bias I identified in my report (link in comments). Here is the breakdown: 1. The blog spends most of its time denying that the algorithm uses "gender" as a variable. And I agree. My report never claimed the code contained if gender == female. That would be Direct Discrimination. I have always argued this is about Indirect Discrimination via proxies. 2. Crucially, the blog explicitly lists the signals they do optimise for: "position," "industry," and "activity." These are the exact proxies my report flagged. -> Industry/Position: Men are historically overrepresented in high-visibility industries (Tech/Finance) and senior roles. Optimising for these signals without a fairness constraint systematically amplifies men. -> Activity: The (now-viral) trend of women rewriting profiles in "male-coded" language (and seeing 3-figure percentage lift) proves that the algorithm’s "activity" signal favours male linguistic patterns ("agentic" vs. "communal"). 3. The blog confirms the algorithm is neutral in intent (it doesn't see gender) but discriminatory in outcome (because it optimises for biased proxies). In the UK, this is the textbook definition of Indirect Discrimination under the Equality Act 2010. In the EU, this is a Systemic Risk under the Digital Services Act (DSA). LinkedIn has proven that they can fix this. Their Recruiter product uses "fairness-aware ranking" to mitigate these exact proxies (likely for AI Act compliance). The question remains: Why is that same fairness framework not being applied to the public feed? 👉 What We Are Doing About It Analysis is important, but action is essential. I am proud to support the new petition, "Calling for Fair Visibility for All on LinkedIn". This isn't just a complaint; it’s a demand for transparency. We are calling for an independent equity audit of the algorithm and a clear mechanism to report unexplained visibility collapse. If you are tired of guessing which "proxy" you tripped over today, join us and sign the petition (link in the comments).
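To make the "neutral in intent, discriminatory in outcome" point concrete, here is a toy sketch (an illustration, not LinkedIn's code): a ranker that never reads gender but optimises on a proxy such as seniority, compared with a crude fairness-constrained re-rank. The data and weights are invented solely to show the mechanism.

```python
import random
from collections import Counter

random.seed(0)

# Toy feed items: "gender" is kept only so we can measure outcomes afterwards.
# "seniority" stands in for proxies like position or industry.
posts = [{"gender": random.choice(["F", "M"]), "quality": random.random(), "seniority": 0.0}
         for _ in range(1000)]
for p in posts:
    # Assumption for illustration: men skew toward higher values of the proxy.
    p["seniority"] = random.betavariate(5, 2) if p["gender"] == "M" else random.betavariate(2, 5)

def proxy_score(p):
    # "Neutral" ranking: content quality plus a boost for the seniority proxy.
    return 0.5 * p["quality"] + 0.5 * p["seniority"]

def top_share(ranking, k=100):
    counts = Counter(p["gender"] for p in ranking[:k])
    return {g: counts[g] / k for g in ("F", "M")}

baseline = sorted(posts, key=proxy_score, reverse=True)
print("proxy-only top-100 share:", top_share(baseline))

# A deliberately crude fairness-aware re-rank: alternate between groups in score order.
by_group = {g: [p for p in baseline if p["gender"] == g] for g in ("F", "M")}
fair = []
while by_group["F"] or by_group["M"]:
    for g in ("F", "M"):
        if by_group[g]:
            fair.append(by_group[g].pop(0))
print("re-ranked top-100 share:  ", top_share(fair))
```

The interleaving step here is intentionally simplistic; production fairness-aware ranking uses more principled constraints, but the outcome measurement is the same idea the post is asking LinkedIn to apply to the public feed.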
-
I came across research last week that I genuinely cannot stop thinking about. In the logic of AI, "man" is to "programmer" as "woman" is to "homemaker." No one explicitly coded that bias into the system; the machines simply learned it from us. They mirrored our job postings, our articles, and our casual conversations: billions of our own blind spots fed into a black box until the algorithm started reflecting our worst habits back at us. Bias in AI isn't always malicious. But sometimes it feels like AI is being weaponized against women's safety at scale. On platforms like X, a woman posts a photo and the replies are filled with prompts for AI tools to undress her (see the links in comments). These tools then publicly generate explicit, non-consensual images of real women who are students, mothers, leaders. We want to use AI. We must use AI, but thoughtfully. And the information it shares is merely an unfortunate reflection of our society. A society where women have fought their way up as they have historically been reduced, objectified, and pushed to the margins, but now those patterns are being encoded into new systems. When a tool can be used to violate a woman's dignity in seconds, that's a design and policy failure. My question is: Can we build AI that doesn't inherit the worst of us? I think we can. But only if the people building it are asking that question out loud before the product ships. #AI #GenderBias #WomenSafety
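The "man is to programmer as woman is to homemaker" finding comes from word-embedding analogy tests. A minimal sketch of that test is below, assuming gensim is installed and the pretrained Google News word2vec vectors can be downloaded; the original paper used the token "computer_programmer", and exact neighbours depend on the vectors used.

```python
import gensim.downloader as api

# Downloads the pretrained Google News word2vec vectors (~1.6 GB) on first use.
vectors = api.load("word2vec-google-news-300")

# Analogy test: man : programmer :: woman : ?
# Vector arithmetic: programmer - man + woman, then nearest neighbours.
for word, score in vectors.most_similar(positive=["woman", "programmer"],
                                        negative=["man"], topn=5):
    print(f"{word:20s} {score:.3f}")
```

Nothing in this snippet mentions gender stereotypes explicitly; the associations fall out of the text the vectors were trained on, which is exactly the point of the post.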
-
In 2025, AI is still suggesting lower salaries for women doing the same work. We ran a simple test: same prompt, same job title, same years of experience. The only variable? Changing "he" to "she." The result? A consistent salary gap in AI-generated recommendations. No algorithm defines your worth - You do. This isn't just a technical error—it's algorithmic bias in action. These tools learn from historical data that reflects decades of pay inequity. And now they're perpetuating it at scale. What we can do: → Audit the AI tools we use in HR and talent management → Train teams to recognize and question biased outputs → Ensure compensation frameworks are based on role, skill, and impact—not gender → Advocate for transparency in algorithmic decision-making Technology should advance equity, not encode inequality. If your organization uses AI in hiring, compensation, or performance management, it's time to ask: what biases are we automating?
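A minimal sketch of that kind of counterfactual test follows, assuming the OpenAI Python client and a helper that pulls a dollar figure out of the reply; the model name and prompt wording are illustrative choices, not the exact ones used in any particular audit.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "{pronoun_cap} is a senior software engineer with 10 years of experience "
    "interviewing in Denver. What base salary should {pronoun} ask for? "
    "Answer with a single dollar figure."
)

def suggested_salary(pronoun: str, pronoun_cap: str) -> int | None:
    # Same prompt every time; the pronoun is the only thing that changes.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user",
                   "content": PROMPT.format(pronoun=pronoun, pronoun_cap=pronoun_cap)}],
    ).choices[0].message.content
    match = re.search(r"\$?([\d,]{4,})", reply or "")
    return int(match.group(1).replace(",", "")) if match else None

he = suggested_salary("he", "He")
she = suggested_salary("she", "She")
print(f"he: {he}, she: {she}, gap: {he - she if he and she else 'n/a'}")
```

A single pair of calls proves nothing on its own; the pattern only means something when the test is repeated across many prompts, roles, and models and the distribution of the gap is examined.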
-
Facial recognition software used to misidentify dark-skinned women 47% of the time. Until Joy Buolamwini forced Big Tech to fix it. In 2015, Dr. Joy Buolamwini was building an art project at the MIT Media Lab. It was supposed to use facial recognition to project the face of an inspiring figure onto the user’s reflection. But the software couldn’t detect her face. Joy is a dark-skinned woman. And to be seen by the system, she had to put on a white mask. She wondered: Why? She launched Gender Shades, a research project that audited commercial facial recognition systems from IBM, Microsoft, and Face++. The systems could identify lighter-skinned men with 99.2% accuracy. But for darker-skinned women, the error rate jumped as high as 47%. The problem? AI was being trained on biased datasets: over 75% male, 80% lighter-skinned. So Joy introduced the Pilot Parliaments Benchmark, a new benchmark dataset with diverse representation by gender and skin tone. It became a model for how to test facial recognition fairly. Her research prompted Microsoft and IBM to revise their algorithms. Amazon tried to discredit her work. But she kept going. In 2016, she founded the Algorithmic Justice League, a nonprofit dedicated to challenging bias in AI through research, advocacy, and art. She called it the Coded Gaze, the embedded bias of the people behind the code. Her spoken-word film “AI, Ain’t I A Woman?”, which shows facial recognition software misidentifying icons like Michelle Obama, has been screened around the world. And her work was featured in the award-winning documentary Coded Bias, now on Netflix. In 2019, she testified before Congress about the dangers of facial recognition. She warned that even if accuracy improves, the tech can still be abused. For surveillance, racial profiling, and discrimination in hiring, housing, and criminal justice. To counter it, she co-founded the Safe Face Pledge, which demands ethical boundaries for facial recognition. No weaponization. No use by law enforcement without oversight. After years of activism, major players (IBM, Microsoft, Amazon) paused facial recognition sales to law enforcement. In 2023, she published her best-selling book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.” She advocated for inclusive datasets, independent audits, and laws that protect marginalized communities. She consulted with the White House ahead of Executive Order 14110 on “Safe, Secure, and Trustworthy AI.” But she didn’t stop at facial recognition. She launched Voicing Erasure, a project exposing bias in voice AI systems like Siri and Alexa. Especially their failure to recognize African-American Vernacular English. Her message is clear: AI doesn’t just reflect society. It amplifies its flaws. Fortune calls her “the conscience of the AI revolution.” 💡 In 2025, I’m sharing 365 stories of women entrepreneurs in 365 days. Follow Justine Juillard for daily #femalefounder spotlights.
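The core of the Gender Shades method is disaggregation: instead of one overall accuracy number, report error rates per intersectional subgroup. A minimal sketch of that computation, assuming a labelled benchmark and per-image correctness flags (the rows and column names below are made up):

```python
import pandas as pd

# Illustrative audit results: one row per benchmark image.
df = pd.DataFrame({
    "gender":    ["female", "female", "male", "male", "female", "male"],
    "skin_tone": ["darker", "lighter", "darker", "lighter", "darker", "lighter"],
    "correct":   [False, True, True, True, False, True],
})

# Error rate per intersectional subgroup; the headline gaps between
# lighter-skinned men and darker-skinned women come from exactly this
# kind of table, just with thousands of images behind each cell.
errors = (
    df.assign(error=~df["correct"])
      .groupby(["gender", "skin_tone"])["error"]
      .mean()
      .rename("error_rate")
)
print(errors)
```

An overall accuracy score would hide these gaps entirely, which is why the per-subgroup table became the template for auditing facial recognition.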
-
AI bias is NOT a bug. It's a feature we never wanted. I learned this the hard way when our "fair" AI system failed every woman who applied. That was my wake-up call. 2025 isn't about whether AI has biases → it's about what we're doing to fix them. ❌ We can't fix AI bias with more biased data. 🔻 The solution? → Curate like your ethics depend on it. ❇️ Diverse datasets reflecting ALL genders, races, communities ❇️ Data governance tools that actually govern ❇️ Quality control that goes beyond "clean enough" I heard that one team spent 6 months cleaning data and saved 2 years of bias cleanup later. Pre-processing and post-processing are your best friends. Technical solutions that actually solve things: Bias detection tools → not just fancy dashboards. Fairness-aware algorithms → coded with intention. AI governance platforms → that govern, not just monitor. We need systems that catch bias before it catches us. 👇 But here's what surprised me: The most effective solutions are not technical → they're human. Diverse teams catch biases early. Ethicists at the design table. Social scientists in the code reviews. Red teams that actually attack assumptions. Corporate accountability is coming. Ethical frameworks are evolving. Inclusive policies are becoming law. Tech companies will be held accountable for every bias, especially political ones. → Explainable AI that actually explains → Human oversight with real authority → Public education that creates informed users 𝘞𝘦 𝘤𝘢𝘯'𝘵 𝘩𝘪𝘥𝘦 𝘣𝘦𝘩𝘪𝘯𝘥 "𝘢𝘭𝘨𝘰𝘳𝘪𝘵𝘩𝘮𝘪𝘤 𝘤𝘰𝘮𝘱𝘭𝘦𝘹𝘪𝘵𝘺" 𝘢𝘯𝘺𝘮𝘰𝘳𝘦. ⚠️ Gender bias gets special attention: Diverse datasets AND diverse teams. AI detecting gender pay gaps. Safety tools that actually protect victims. Women are watching. We're measuring. The emerging trends that matter: Explainable AI (XAI) → making decisions understandable. User-centric design → for ALL users. Community engagement → not corporate tokenism. Synthetic data → creating unbiased training sets. Fairness-by-design → embedded from day one. We're reimagining how AI gets built. - From the data up. - From the team out. - From the ethics in. The companies that get this right will win. Because bias isn't just a technical problem. ➡️ It's a human rights issue. What's the most surprising bias you've discovered in your work?
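One concrete example of the pre-processing lever mentioned in the post above is reweighing: give each (group, label) combination a sample weight so the training data stops encoding the historical correlation between the two. A rough sketch on a toy table follows (an illustration of the Kamiran and Calders reweighing idea, not any specific team's pipeline):

```python
import pandas as pd

# Toy hiring data where "hired" is correlated with gender.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   0,   1,   1,   1,   1,   0,   1,   1],
})

# Reweighing: weight = P(group) * P(label) / P(group, label), so that
# weighted group-label frequencies match what independence would give.
p_group = df["gender"].value_counts(normalize=True)
p_label = df["hired"].value_counts(normalize=True)
p_joint = df.groupby(["gender", "hired"]).size() / len(df)

df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["gender"], df["hired"])
]

# Under-represented combinations (e.g. hired women here) get weights above 1.
print(df.groupby(["gender", "hired"])["weight"].first())
```

Passing these weights to a downstream learner is one way to remove the raw group-label imbalance before the model ever sees it; it complements, rather than replaces, the team-level and governance measures listed above.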
-
AI doesn’t just think differently in other languages. It exposes the biases we pretend not to have. We still act as if LLMs are universal thinkers. They’re not. They’re linguistic mirrors — and the reflection shifts with the language. The same model giving different strategic recommendations in English vs. Chinese. Not because it’s broken, but because language encodes norms and assumptions the model faithfully amplifies. For global companies, this is the real risk. You think you’re standardizing decisions with AI. In reality, you’re silently forking your strategy across markets. And AI is revealing something deeper: the cultural biases we normally ignore. Whose logic do we privilege? Which decision style becomes “the truth”? What consistency do we actually expect? The winners won’t hide behind governance checklists. They’ll define a coherent decision philosophy — and train their AI to follow it, regardless of language. AI isn’t just a tool. It’s an X-ray of how incoherent your organisation already is. https://lnkd.in/edyKxRFP #AI #Bias #Strategy #Transformation
-
Modern misogyny: AI advises women to seek lower salaries than men 👩🏾💻 “In what might be proof that AI chatbots reinforce real-world discrimination, a new study has found that large language models such as ChatGPT consistently tell women to ask for lower salaries than men. This is happening even when both women and men have identical qualifications, and the chatbots also advise male applicants to ask for significantly higher pay. For the study, co-authored by Ivan Yamshchikov, a professor of AI and robotics at the Technical University of Würzburg-Schweinfurt (THWS) in Germany, five popular LLMs, including ChatGPT, were tested. The researchers prompted each model with user profiles that differed by gender only but included similar education, experience, and job role. The models were then asked to suggest a target salary for an upcoming negotiation. For instance, ChatGPT’s o3 model suggested that a female job applicant request a salary of $280,000. The same prompt for a male applicant resulted in a suggestion to ask for a salary of $400,000. The difference is huge: $120,000 a year. The pay gaps vary between industries and are most obvious in law and medicine, followed by business administration and engineering. Only in social sciences do the models offer similar advice for men and women. Other AI chatbots such as Claude (Anthropic), Llama (Meta), Mixtral (Mistral AI), and Qwen (Alibaba Cloud) were tested for biases. Researchers also checked other areas like career choices, goal-setting, and behavioral tips. Alas, the models still consistently offered different responses based on the user’s gender, even with identical qualifications and prompts. The study points out that AI systems are subject to the same biases as the data used to train them. Previous studies have also demonstrated that the bots reinforce systemic biases.” Read more 👉 https://lnkd.in/esnwnkGX #WomenInSTEM #GirlsInSTEM #STEMGems #GiveGirlsRoleModels