Helping to shape the content of AI, rather than advocating for limits on its use, is a poor sort of activism, and certainly not one that is "pro-human." Activism is needed in ensuring that AI-created content is labeled as such and that there are protections for artists whose work is consumed and regurgitated by AI.
Compare AI to GMO seed. No, activists did not (and likely will not) succeed in stopping the "GMO revolution." However, opponents also did not give up immediately and settle for contributing to a discussion about which pesticides GMO crops would be resistant to. They asked for, and to some degree got, labeling. It hasn't been a perfect success, but some labeling requirements do exist, and companies that do not use GMOs have responded to the preference of a vocal segment of the public by selling clearly labeled non-GMO products.
We could do something similar for non-AI content, and I hate to see FAIR capitulating so easily. If the task I've laid out feels too daunting, then, at minimum, FAIR could set standards within its own organization limiting the use of AI, and devote the bulk of its time to its other, longstanding projects.
I think that, analogously to non-GMO, "not produced with AI" will become a thing. How is another question entirely. But AI will also roll relentlessly forward, generally speaking; what will limit it is the cost in energy.
Frankly, your article reads like a well-intentioned pep talk, but it betrays a fundamental naivety about the actual state of AI development and governance. You talk about a "powerful opportunity to shape how AI develops throughout our society," but you gloss over the fact that the core protections and guardrails needed to make this possible—especially data transparency, open data sources, and robust auditing—are almost entirely missing from the current AI landscape.
Let’s be clear: AI systems today are mostly black boxes. The public and even many policymakers have no meaningful access to the data these models are trained on, nor any way to audit their internal logic or outputs. Calls for “transparency and accountability” sound great, but where are the enforceable standards? Where are the open datasets, the independent audits, the regulatory teeth? Without these, your optimism is just wishful thinking.
You claim AI can transcend identity-based divisions, but you ignore the reality that most AI systems inherit and even amplify the biases present in their training data—biases that remain invisible without radical transparency. And while you champion “individual merit,” you sidestep the fact that, absent open and auditable systems, we have no way of knowing whether AI is actually making fair, unbiased decisions at all.
If you want to lead on AI ethics, start by demanding the basics: open data, transparent algorithms, mandatory audits, and real accountability for harm. Until then, articles like this are just feel-good rhetoric—detached from the urgent, practical work that’s actually needed to protect society from the very risks you so vaguely acknowledge.
If FAIR wants to be a credible voice in the AI era, it must pair its values-driven leadership with real technical expertise. That means either hiring or deeply collaborating with AI practitioners, data scientists, and policy experts who can turn principles into practice.
Having said that, I love you all at FAIR, so I hope I don't sound like a grumpy grump (although very likely I am; it's just the age I am).
Thank you for your very reasoned and thoughtful analysis, NC. I don’t disagree that AI poses the threats and challenges you identified and which I acknowledged in my article. I also agree that today’s AI systems are mostly black boxes with little to no transparency.
However, I wasn’t focusing on what AI currently is, but what it can be. It has tremendous potential to democratize, but harnessing that potential certainly isn’t a given, and it won’t be easy.
Your concerns about open data and transparent algorithms are well-taken, and I don’t disagree. There is much work to be done in this area to ensure that AI is implemented responsibly and ethically. The concerns I raised, however, are focused on the area that implicates FAIR’s mission: embedding DEI practices and protocols into AI systems. While these existing systems are mostly black boxes, we can and should challenge models that knowingly incorporate identity-based practices that have proven to be so damaging and divisive. It’s difficult to fight threats we can’t yet identify, but we’re remiss if we don’t strongly resist those that clearly present themselves.
As I mentioned, we pledge to work with technology companies and policymakers to ensure AI systems don’t make decisions based on group identity. Please keep in mind that we have only just begun to turn our attention to this very important issue. We don’t claim to have all the answers or solutions yet, but we are aware of a looming problem and look forward to working with specialists who can help advance FAIR’s mission in this space.
P.S. I really appreciate all the great work your team does—you’re genuinely fantastic. Please take my comments in the spirit of constructive feedback; I’m just venting a little bit. It’s a shame you’re not just around the corner. You’ve built such an interesting and thoughtful community :)
Monica, thank you for your thoughtful and principled response. I genuinely appreciate your commitment to fairness, individual dignity, and the hope that AI can help us transcend identity-based divisions. Your optimism and belief in shared humanity are admirable, and I can see how FAIR’s mission is rooted in these ideals.
However, I have to be direct: I think your approach, while well-intentioned, is still too idealistic for the actual landscape we’re facing—both technically and geopolitically.
You emphasize what AI could be, but the reality is that the infrastructure for transparency, open data, and accountability simply doesn’t exist right now. And in the global context, we’re not operating in a vacuum. Authoritarian countries are moving fast, unconstrained by the values and safeguards we hold dear. Democracies don’t have the luxury of doing everything perfectly if we want to have any influence over how AI shapes the future. Sometimes, preserving the possibility of human rights and open societies means making difficult, even uncomfortable, trade-offs in the short term. If we fall behind, the rules will be written by regimes that don’t care about rights or debate at all.
I respect your desire to work with experts and build coalitions, but wanting the best for society isn’t enough. Technical and geopolitical realities demand that we move quickly and, at times, pragmatically. Idealism alone is not a survival strategy in this race.
In short, I value your passion and your principles, but I don’t think they’re enough to meet the urgency and complexity of the moment. We need to be clear-eyed about the world as it is—not just as we wish it to be—if we want to have any hope of shaping the future for the better.
Thank you again for the exchange. I hope FAIR will continue to broaden its perspective to include not just ideals, but the hard realities of technology and global competition.
"Technical and geopolitical realities demand that we move quickly..." I agree. I also believe alignment with the human condition is vital. It is a one-two punch. Stay ahead on the one hand and yet slow up enough for firm alignment. What do you think? Kudos to Monica Harris and the fine folks at FAIR for this essay.
The idea of a “one-two punch”—balancing speed and alignment—just isn’t realistic. Balancing the two is hard enough on its own, and when others break the rules to move fast, we must either compromise our values or fall behind.
Authoritarian regimes use technology to erode freedoms and suppress dissent, relying on fear and manufactured consent. That reliance is a weakness that can be exploited through dissent and cross-border solidarity.
Democracies face internal crises: division and manipulation erode the motivation to defend pluralism. As Steven Weinberg noted, “Good people can do good and bad people can do evil, but for good people to do evil, that takes politics or religion” [or, today, ideology]. Emotional reactions to politics make people vulnerable to propaganda and manipulation—what Chomsky called “manufacturing consent.” Authoritarians exploit these fractures, and as commitment to universal rights wanes, freedom unravels. Movements like DEI, rooted in Marxist frameworks, have deepened these divisions and weakened society (as designed to do).
Authoritarian regimes’ reliance on suppression creates exploitable cracks. Yet, individualistic societies must confront internal threats—identity politics and factionalism undermine universal rights and pluralism.
To preserve freedom, societies must recommit to pluralism and shared sacrifice. Post-1948 Western liberty was fragile and exceptional, and it is now threatened by AI and surveillance, which disrupt traditional forms of resistance. Failing to uphold pluralism risks a permanent loss of freedom.
Freedom rests on tolerance, critical thinking, and unity. Overcoming division is critical; the balance is hard, but failure ...
You sense the danger on the horizon, which I appreciate. Balancing speed with alignment may not be realistic, but was placing a man on the moon, as pledged in 1961, realistic? The average man on the street would have considered the idea science fiction, but we had to win the space race with the Soviet Union. In our rush to beat the Soviets to the lunar surface, we lost three good astronauts -- Virgil "Gus" Grissom, Ed White, and Roger Chaffee -- because safety alignment took a back seat to winning the race. See the Apollo 1 fire, January 27, 1967.
Have you read the AI Agent Report 2027? I am not an AI expert, but the writer, a former OpenAI researcher, argues we could win the race, lose alignment, and suffer the end of our human story. I am well "aligned" with your sentiments, but I'm convinced that alignment is how we avoid the Apollo 1 dilemma. We only get one shot at alignment. A great and thoughtful comment on your part. We are in harmony, although we disagree on the urgency of the balancing act.
Thanks for your thoughtful comment. I just see Apollo 1 and the current AI race as fundamentally different situations.
The Apollo 1 disaster happened before the U.S. reached the moon, when the technology was unproven and the outcome uncertain; all they could do was press ahead. In the end, the U.S. did get to the moon, but it was a race full of risks, setbacks, and hope that things would hold together under pressure.
Similarly, in the AI race—especially against illiberal countries—we’re moving fast, facing significant challenges, and can’t expect a perfect solution.
The technology won’t be built exactly as the engineers or, in the Apollo case, the pilots would have preferred. Back then, engineers wanted more time and better safety features, and the astronauts knew there were things inside their spacecraft that weren’t ideal or fully safe (famously, a frustrated Gus Grissom hung a lemon on the spacecraft); they had to work around those limitations. It’s the same with AI: the engineers, and in a sense all of us as the “pilots,” know there will be aspects that aren’t perfect or fully aligned with our ideals. We have to move forward, making the best choices we can, even when conditions aren’t perfect.
But with careful effort and dedication, we can navigate these complexities. Like the Apollo program, we’re pushing ahead quickly, driven by necessity, and while the path is difficult, we must stay focused and resilient to succeed in a race where the stakes have never been higher.
The major difference is that the Apollo program did not threaten the everyday rights or freedoms of ordinary people. The technology developed for the moon landing wasn’t capable of undermining civil liberties or enabling blanket global supremacy.
With AI—especially in the hands of illiberal regimes—the stakes are much higher. I hope we can win, but it’s going to require tough, sensible trade-offs; there’s no perfect scenario here.
Ultimately, we’ll have to rely on brave, smart, and creative people who are deeply dedicated to freedom. Even if we end up having to build things in ways we might not prefer, as long as those people are embedded in the process and we stay true to our core values, we’ll have the best chance to protect what matters most. It won’t be easy or perfect, but that’s how we ensure the best possible outcome. (Western societies are going to need to relearn how to work together for the greater good, with pluralism and democracy at the center.)
No, I hadn’t read the article from the OpenAI researcher, but I agree with them that this is a one-shot deal. If we don’t get it right, the consequences could be severe. As history shows, those with technological supremacy have often ended up ruling the world.
[Team A: the liberal democracies]
Consider two teams building a boat. Team A pays workers fairly, gives them rest and health care, and—crucially—lets everyone vote on management and rules. This democratic process upholds rights but slows things down. Team B, by contrast, forces a hundred people to work nonstop, denies them rights, and imposes top-down decisions.
[At the same time, some entities are developing advanced AI technologies designed for export that can be used to conduct sophisticated cyberattacks, automate hacking processes, and enable pervasive surveillance. Welcome to Team B.]
On Team B, the boat is built with an AI that monitors every crew member, listens to every conversation and social media post, and ensures no one can refuse orders, vote, or challenge management. The AI guarantees the boat-building continues, no matter what anyone wants or how unsafe or flawed the process becomes.
The scary truth is this: It’s not just about losing the tech race; it’s about losing what makes freedom possible.
P.S. There’s a second thought I should have mentioned at the beginning, and it’s too important to leave unsaid:
This is exactly why the stakes are so high right now. We’re in a global race to develop AI, and on one side are countries that don’t care about individual rights or public debate—they’re moving fast and setting their own rules. On the other side are democracies, which value transparency and safeguards, but those very values slow us down. If we insist on doing everything perfectly, we risk letting others decide the future for us—and those “others” may have very different ideas about freedom and fairness.
The uncomfortable reality is that we’re facing a choice: either accept some compromises on our values in order to stay in the game, or risk living in a world shaped by values we fundamentally reject. That’s why I keep coming back to this point—because it’s not just about technical progress, but about who gets to decide what kind of world we all live in.
This should have come under my previous comment, but I wanted to make sure it was clear: the trade-offs we’re talking about aren’t just theoretical—they’re happening right now, and they matter for everyone.
There is a dangerous slide in FAIR's commentary on equal opportunity. How do we know we are giving everyone a fair shot regardless of their immutable characteristics if we don't measure outcomes? Here is what FAIR finds objectionable: "New York City requires employers using AI hiring tools conduct “bias audits” to assess whether employment decisions have disproportionately negative outcomes for candidates based on race/ethnicity and/or sex/gender. Advocacy groups are encouraging organizations to develop “diversity practices that mitigate social biases from creeping into [their] AI” systems." A bias audit is just MEASURING disparate outcomes, not advocating a particular way to redress them. Is FAIR now against measuring outcomes? Do we not want to know if, say, 80% of employees in an organization are women but 100% of the management is men? Is the idea that we'll all pretend racism and sexism no longer exist?
Thank you very much for your comment. I certainly appreciate your concerns regarding racism and sexism, and I am by no means suggesting that these aren’t factors that continue to enable inequalities in our society.
But it’s important to examine the causes of disparate outcomes rather than to simply focus on their impact. For example, women may be disproportionately impacted by hiring practices at a fire department, but that impact may be wholly unrelated to sexism. It could, for example, relate to disparities in physical fitness that biologically favor men in this field, or it could be sexism. We shouldn’t presume the latter.
It’s also important to understand the nature of these bias audits and why they are problematic. The information collected isn’t used to assess the cause of disparate impact. Rather, they evaluate AI algorithms “to detect and correct any discriminatory patterns, aligning AI systems with DEI goals.”
https://www.shrm-atlanta.org/2024/10/bridging-the-gap-how-ai-can-support-diversity-equity-and-inclusion-in-recruitment/#:~:text=Bias%20auditing%20evaluates%20AI%20algorithms,ethical%20practice%20to%20reduce%20discrimination.
Put simply, the purpose of these audits is to ensure that algorithms advance the very identity-based policies that FAIR has long challenged.
We have never pretended that racism and sexism no longer exist. But we believe that aligning hiring, promotion and other employment decisions with identity-based goals is, in itself, discriminatory, divisive, and undermines meritocracy.
I'm far from an expert, but I think you misunderstand what a bias audit is. The article you linked to distinguishes between "pre-, in-, and post-processing techniques." Maybe the post-processing technique does somehow lead to equality of outcome, though the article doesn't say that. The article does say that the DEI framework is supposed to ensure that the decision is "fair and unbiased."
The bias audits I've read about dig into how the AI system is "rewarding" certain attributes. The researcher in this show, for example, talks about an AI system that rewarded having the word "Thomas" on an applicant's resume. The system was trained on company data and obviously that data included a lot of successful Thomases. https://www.wbur.org/onpoint/2025/05/23/ai-job-marketplace-hiring-technology
I think we can all agree that the word "Thomas" should not be rewarded by an AI system for a variety of reasons, one of which is that the word is unlikely to appear evenly across male and female applications.
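To make concrete what such a probe might look like, here is a toy sketch in Python. The scoring function is a made-up stand-in for whatever model an employer actually uses (nothing here comes from the show I linked); the point is only that an audit can ask whether an irrelevant token shifts the score.

```python
# Toy counterfactual probe: does an irrelevant token shift a model's score?
# `score_resume` is a hypothetical stand-in, not any real hiring model.

def score_resume(text: str) -> float:
    # Pretend model that "learned" to reward the token "Thomas" from biased training data.
    base = 0.5 + 0.1 * text.lower().count("python")
    return base + (0.2 if "thomas" in text.lower() else 0.0)

resume = "Experienced Python developer, five years building data pipelines."
with_name = score_resume("Thomas. " + resume)
without_name = score_resume(resume)

print(f"score with 'Thomas':    {with_name:.2f}")
print(f"score without 'Thomas': {without_name:.2f}")
print(f"shift attributable to the name: {with_name - without_name:+.2f}")
```

If adding or removing a name moves the score, the audit has revealed a bias; it hasn't prescribed how to fix it.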
More generally, I joined FAIR when the national consensus was in favor of affirmative action and DEI practices that promoted equality of outcome, practices that were offensive to me. That's not at all where our country is now! Now, I feel we've abandoned any commitment to trying to ensure equality of *opportunity*. We lump all efforts to ensure equal treatment or promote diversity into one "DEI" bucket and reject it. Humans are biased. Bias audits have repeatedly shown that the humanly created biased data is now training our AI systems. I strongly support bias audits, and I agree with you that AI should not then be trained to artificially achieve an equality of outcome.
I am in complete agreement that bias in *any* way, shape or form should be eliminated from employment decisions.
However, there is clearly an effort underway to embed DEI frameworks into AI systems. What’s also clear is that DEI has not morphed into diversity, *equality* and inclusion; it remains diversity, *equity*, and inclusion.
I firmly believe that any framework that incorporates DEI principles — regardless of its professed goals — will always prioritize equality of outcome over equality of opportunity. I don’t believe it’s reasonable to assume this goal would change simply because the framework is embedded into algorithms.
Lastly, I share your belief that bias is inherent in human nature, and I believe AI has the potential to eliminate it. But we can’t do this by contaminating these systems with frameworks that have claimed to ensure “fair and unbiased” decisions, yet all too often have merely flipped bias in a different direction.
I take to heart your example about a corrupted AI system that “rewards” Thomases. But advocating for the removal of DEI frameworks in AI systems does *not* and should not mean permitting other types of biases to penetrate them.
The goal should be to implement systems that favor no individual or group based on their immutable characteristics. Full stop.
You wrote, "I firmly believe that any framework that incorporates DEI principles — regardless of its professed goals — will always prioritize equality of outcome over equality of opportunity." Why assume that? Why assume anything in a blog posting? Instead, do the research and then link to the results, if your fears are proven true. All I'm asking is that you educate yourself about how, exactly, the DEI framework is being incorporated into bias audits. And then educate us!
Thanks for this fruitful exchange. I feel heard, and I appreciate the time you've taken to respond to my many posts.
Thank you, as well, for the fruitful exchange!
“You wrote, ‘I firmly believe that any framework that incorporates DEI principles — regardless of its professed goals — will always prioritize equality of outcome over equality of opportunity.’ Why assume that?”
Just so I’m clear, is it your position that equity (for the purposes of DEI) does not prioritize equality of outcome over equality of opportunity?
Equity often prioritizes equality of outcome. I don't know that it always does. When a lot of people use one term, its exact meaning tends to slide around. I don't think that everyone using the term "equity" is as attuned as you and I are to the difference between "outcome" and "opportunity." But my real point is: why make an assumption about a very complex process--determining if there is bias in AI processes--without verifying your assumption? It seems pretty important to look closely at what the audit is doing to figure out if you're against it. In my own research on this topic, I have discovered only examples of revealing bias in the AI evaluation, like the many examples in the story I linked to below. I haven't come across any examples of an audit being used to **change** the outcomes. The word "audit" itself suggests that it's just a careful evaluation of what's happening. Perhaps the real question is how the AI system is retrained *after* the audit.
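For what it's worth, the measurement step of an audit can be as simple as comparing selection rates across groups. Here is a minimal sketch with made-up numbers; the 0.80 threshold is the widely cited "four-fifths" rule of thumb from U.S. employment-selection guidance, not anything specific to the NYC law or to any particular audit.

```python
# Toy "bias audit" measurement step: compare selection rates across groups.
# All numbers are hypothetical; 0.80 is the common "four-fifths" rule of thumb.

applicants = {"women": 200, "men": 300}
hired = {"women": 20, "men": 60}

rates = {g: hired[g] / applicants[g] for g in applicants}  # selection rate per group
impact_ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate = {rate:.1%}")
print(f"impact ratio = {impact_ratio:.2f} (often flagged for review if below 0.80)")
```

Nothing in this step dictates a remedy; the disagreement in this thread is about what happens after numbers like these come in.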
@Monica Harris (@monicaunplugged)
I have trouble processing AI in my line of work; I provide peace of mind to paying clients through compassionate care of their kept critters.
How is AI gonna help me fuss over the client's horse, cat, dog, etc?
How is AI gonna help me brush the fur or skin of these critters?
How is AI gonna help me clean up after the critters?
How is AI gonna help me feed these critters? or give them treats or medicines?
How is AI gonna help me evaluate the life status of these critters?
Those questions focus on the interactions between kept critters and peace-of-mind providers. I have yet to see how AI can help me provide peace of mind to my clients.
How is AI gonna provide tonsorial modifications to clients? fingertip, toetip, and facial aesthetic enhancements? How is AI gonna help designers develop style for their clients?
How is AI gonna help modify motorcycles to their riders' preferences?
I get that you pointed out that AI per se is just a tool for leveraging users' intelligence during their work.
If one of your critters is acting weird, you can describe that to AI and AI can give you some ideas of what's causing the weird behavior. Because it's just math, what the AI tells you could be completely useless. OTOH, and this has been my experience several times, its response could get you thinking, usefully. (I don't take care of animals, but my life is full of things that AI can't do, but that it can give useful suggestions about.)
@mulhern
Thank you for your informative comment. Using AI to research a kept critter's weird behaviour and stimulate finding options is a great idea. Thanks.
I think that bureaucracies always get excited about regulating things. Because AI is moving so fast, any regulations they dream up will end up being completely unrelated to reality. But the impulse to regulate things that should not be regulated is also a bad one, and common with bureaucracies. There need to be fewer knee-jerk ideas about regulation and more constructive ideas altogether.

There has always been a problem with people believing the computer, because it is a machine. You would think they would be over it by now, but they are not. It will get worse with AI, in two ways. First, for the people who aren't machine-believers, there will still be a learning cycle. I've already been faked out by an AI once; it gave me a plausible answer to a question, and then the evidence that I gathered supported that answer for a mind that was already prepped to believe. Then I realized it was wrong and I got wary. Eventually, relaxation will ensue and I'll get faked out again. And again. And again. Hopefully my recovery will get faster as I get more practiced. That's the cycle. But others will just believe non-stop, because it is easier to believe the machine's answers than to do anything else at all. Regulation is beside the point.

Makes me think of an old David Bowie song: President Joe once had a dream//the world held his hand//gave its pledge//so they sold him the scheme//the savior machine.//They called it the prayer//its answer was law...