Discussion about this post

Tracye

Helping to shape the content of AI, rather than advocating for limits on its use, is a poor sort of activism, and certainly not one that is "pro-human." Activism is needed to ensure that AI-created content is labeled as such and that there are protections for artists whose work is consumed and regurgitated by AI.

Compare AI to GMO seed. No, activists did not (and likely will not) succeed in stopping the "GMO revolution." However, opponents also did not give up immediately and settle for contributing to a discussion about which pesticides GMO crops would be resistant to. They asked for, and to some degree got, labeling. It hasn't been a perfect success, but some labeling requirements do exist, and companies that do not use GMOs have responded to the preference of a vocal segment of the public by selling clearly labeled non-GMO products.

We could do something similar for non-AI content, and I hate to see FAIR capitulating so easily. If the task I've laid out feels too daunting, then, at minimum, FAIR could set standards within its own organization limiting the use of AI, and devote the bulk of its time to its other, longstanding projects.

NC
May 30 (edited)

Frankly, your article reads like a well-intentioned pep talk, but it betrays a fundamental naivety about the actual state of AI development and governance. You talk about a "powerful opportunity to shape how AI develops throughout our society," but you gloss over the fact that the core protections and guardrails needed to make this possible—especially data transparency, open data sources, and robust auditing—are almost entirely missing from the current AI landscape.

Let’s be clear: AI systems today are mostly black boxes. The public and even many policymakers have no meaningful access to the data these models are trained on, nor any way to audit their internal logic or outputs. Calls for “transparency and accountability” sound great, but where are the enforceable standards? Where are the open datasets, the independent audits, the regulatory teeth? Without these, your optimism is just wishful thinking.

You claim AI can transcend identity-based divisions, but you ignore the reality that most AI systems inherit and even amplify the biases present in their training data—biases that remain invisible without radical transparency. And while you champion “individual merit,” you sidestep the fact that, absent open and auditable systems, we have no way of knowing whether AI is actually making fair, unbiased decisions at all.

If you want to lead on AI ethics, start by demanding the basics: open data, transparent algorithms, mandatory audits, and real accountability for harm. Until then, articles like this are just feel-good rhetoric—detached from the urgent, practical work that’s actually needed to protect society from the very risks you so vaguely acknowledge.

If FAIR wants to be a credible voice in the AI era, it must pair its values-driven leadership with real technical expertise. That means either hiring or deeply collaborating with AI practitioners, data scientists, and policy experts who can turn principles into practice.

Having said that, I love you all at FAIR, so I hope I don't sound like a grumpy grump. (Although very likely I am; it's just the age I am.)

