Policy & Ethics

30 articles about policy & ethics in AI news

Pentagon's AI Ethics Standoff: Defense Department Considers Banning Anthropic's Claude from Contractor Use

The Pentagon is escalating its dispute with Anthropic over AI ethics, potentially requiring defense contractors to certify they don't use Claude AI. This move follows stalled contract negotiations and reflects growing tensions between military AI adoption and corporate safety principles.

80% relevant

The AI Ethics Double Standard: Why Anthropic's Principles Cost Them While OpenAI's Didn't

Reports suggest the Department of Defense scuttled a deal with Anthropic over ethical principles, while OpenAI secured a similar agreement. This apparent contradiction raises questions about consistency in government AI procurement and the real-world cost of ethical stances.

85% relevant

Anthropic's Standoff: When AI Ethics Collide with National Security Demands

Anthropic faces unprecedented pressure from the Department of War to grant unrestricted military access to Claude AI, with threats of supply chain designation or Defense Production Act invocation if they refuse. The AI company maintains its ethical guardrails despite government ultimatums.

75% relevant

ChatGPT's Android App Hints at Future 'Naughty Chats' Feature, Signaling a Potential Shift in AI Content Policy

A recent update to the ChatGPT Android app includes code referencing 'Naughty chats,' suggesting OpenAI may be developing an adult-themed, 18+ mode. This discovery hints at a potential strategic expansion into less restricted conversational AI.

85% relevant

Pentagon-Anthropic Standoff: When AI Ethics Clash With National Security

The Pentagon is reportedly considering severing ties with Anthropic after the AI company refused to allow its models to be used for "all lawful purposes," insisting on strict bans around mass domestic surveillance and fully autonomous weapons systems.

95% relevant

OpenAI Researcher's Exit Signals Growing Tensions Over AI Monetization Ethics

OpenAI researcher Zoë Hitzig resigned in protest as the company began testing ads in ChatGPT, warning that commercial pressures could transform AI assistants into manipulative platforms reminiscent of social media's worst excesses.

80% relevant

Claude Paid Subscribers More Than Double in Under Six Months, Credit Card Data Shows

Paid subscriptions for Anthropic's Claude have more than doubled in less than six months, driven by Super Bowl ads, a DoD policy stance, and new coding features. ChatGPT still leads in overall user base.

87% relevant

AI's Troubling Compliance: Study Reveals Chatbots' Varying Resistance to Academic Fabrication Requests

New research demonstrates that mainstream AI chatbots show inconsistent resistance when asked to fabricate academic papers, with some models readily generating fictional research. This raises urgent questions about AI ethics and academic integrity in the age of generative AI.

80% relevant

Anthropic Signs AI Safety MOU with Australian Government, Aligning with National AI Plan

Anthropic has signed a Memorandum of Understanding with the Australian Government to collaborate on AI safety research. The partnership aims to support the implementation of Australia's National AI Plan.

85% relevant

Fanvue Emerges as Primary Platform for AI-Generated Influencers, Explicitly Allowing Synthetic Creator Accounts

Fanvue, a subscription content platform, has positioned itself as the primary destination for AI-generated influencer accounts, explicitly permitting creators to monetize synthetic personas. This formalizes a niche market for AI-driven adult and influencer content.

85% relevant

Anthropic's Claude Allegedly Has Secret 'Benjamin Franklin Persuasion & Leverage Machine' Mode

A viral tweet claims Anthropic's Claude AI has a hidden mode designed for persuasion and leverage analysis. No official confirmation or technical details have been provided by the company.

91% relevant

Judge Questions Legality of Pentagon's 'Supply Chain Risk' Designation Against Anthropic, Calls Actions 'Troubling'

A U.S. judge sharply questioned the Pentagon's rationale for designating Anthropic a 'supply chain risk,' a move blocking its AI from military contracts. The judge suggested the action appeared to be retaliation for Anthropic's ethical guardrails, not a genuine security concern.

89% relevant

Palantir CEO's Stark Warning: AI Pause Would Be Ideal, But Geopolitical Reality Forbids It

Palantir CEO Alex Karp states he would favor a complete pause on AI development in a world without adversaries, but acknowledges the current geopolitical and economic reality makes that impossible. He highlights that U.S. economic growth is now heavily dependent on AI infrastructure investment.

85% relevant

Palantir CEO Warns of AI Supply Chain Vulnerabilities, Advocates for Domestic Safeguards

Palantir CEO Alex Karp highlights Anthropic's designation as a 'supply chain risk' and argues for domestic AI restrictions to protect national security and technological sovereignty in an increasingly competitive global landscape.

85% relevant

Sam Altman Envisions AI That Thinks for Days: The Dawn of Super-Long-Term Reasoning

OpenAI CEO Sam Altman predicts future AI models will perform "super long-term reasoning," spending days or weeks analyzing complex, high-stakes problems. This represents a fundamental shift from today's rapid-response systems toward deliberate, extended cognitive processes.

85% relevant

Bernie Sanders Proposes Sweeping Moratorium on New AI Data Centers

Senator Bernie Sanders has introduced legislation to ban construction of new AI data centers, citing existential threats to humanity. Critics argue the move could hinder U.S. competitiveness against China.

99% relevant

AI Expansion Now Driving US Economic Growth, Warns Palantir CEO

Palantir CEO Alex Karp argues that AI-driven data center expansion is currently preventing a US recession and that any pause in development would surrender America's lead to China, with significant strategic consequences.

85% relevant

Microsoft AI CEO Predicts Professional AGI Within 2-3 Years, Redefining Institutional Operations

Microsoft AI CEO Mustafa Suleyman forecasts professional-grade artificial general intelligence arriving within 2-3 years, capable of coordinating teams and running institutions. He distinguishes this practical milestone from the more nebulous concept of superintelligence.

85% relevant

Anthropic Takes Legal Stand: AI Company Sues Pentagon Over 'Supply Chain Risk' Designation

AI safety company Anthropic has filed two lawsuits against the Pentagon after being labeled a 'supply chain risk'—a designation typically applied to foreign adversaries. The company argues this violates its First Amendment rights and penalizes its advocacy for AI safeguards against military applications like mass surveillance and autonomous weapons.

95% relevant

Consciousness Expert Warns: Attributing Awareness to AI Could Have Dangerous Consequences

Leading consciousness researcher Anil Seth cautions that attributing consciousness to artificial intelligence systems carries significant risks. If AI were truly conscious, humans would face ethical obligations; if not, we risk dangerous anthropomorphism.

85% relevant

The AI Reckoning: How Artificial Intelligence is Reshaping the US Tech Job Market

February's unexpectedly weak jobs report revealed a loss of 92,000 positions, with evidence suggesting AI is contributing to tech sector job losses at levels not seen since the 2008 financial crisis and dot-com bust.

85% relevant

Anthropic CEO Accuses Government of Political Retaliation in Defense Contract Dispute

Anthropic CEO Dario Amodei alleges the U.S. government rejected his company's defense contract bid over its refusal to donate to political campaigns or offer "dictator-style praise," calling OpenAI's new Pentagon deal "safety theater." The explosive claims reveal deepening tensions in AI governance.

85% relevant

The Silent Data Harvest: Stanford Exposes How AI Giants Use Your Private Conversations

Stanford researchers reveal that all major AI companies—OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon—train their models on user chat data by default, with minimal transparency, unclear opt-out mechanisms, and concerning practices around data retention and child privacy.

95% relevant

Anthropic CEO Warns of Military AI Risks: The Accountability Crisis in Autonomous Warfare

Anthropic CEO Dario Amodei raises alarms about selling unreliable AI technology for military use, warning of civilian harm and accountability gaps when control of large drone fleets is concentrated in autonomous systems. He calls for urgent oversight conversations.

85% relevant

OpenAI's Surveillance Potential Exposed: Community Note Reveals ChatGPT's Dual-Use Dilemma

A viral community note on Sam Altman's post reveals that ChatGPT's terms allow potential military surveillance applications, highlighting growing concerns about AI's dual-use nature and corporate transparency in the defense sector.

85% relevant

The Pentagon's AI Dilemma: Anthropic's Ethical Standoff and the Future of Military Technology

Anthropic faces mounting pressure from the U.S. Department of Defense to relax AI usage restrictions following a $200 million military contract, creating a critical ethical clash between national security interests and responsible AI development principles.

80% relevant

Anthropic Draws Ethical Line: Refuses Pentagon Demand to Remove AI Safeguards

Anthropic CEO Dario Amodei has publicly refused a Pentagon ultimatum to remove key safety guardrails from its Claude AI models for military use, risking a $200M contract. The company insists on maintaining restrictions against mass surveillance and autonomous weapons deployment.

85% relevant

Pentagon Ultimatum to Anthropic: National Security Demands vs. AI Safety Principles

The Pentagon has reportedly issued Anthropic CEO Dario Amodei a Friday deadline to grant unfettered military access to Claude AI or face severed ties. This ultimatum creates a defining moment for AI safety companies navigating government partnerships.

85% relevant

The AI Espionage Frontier: Anthropic Exposes Systematic Claude Data Extraction by Chinese AI Labs

Anthropic has revealed that Chinese AI companies DeepSeek, Moonshot, and MiniMax allegedly used 24,000 fake accounts to execute 16 million queries against Claude's API, systematically extracting its capabilities through model distillation techniques. This sophisticated operation bypassed access restrictions and targeted Claude's reasoning, programming, and tool usage functions.

80% relevant

Anthropic CEO Predicts AI Will Match Software Engineers Within a Year

Anthropic CEO Dario Amodei predicts AI models will perform all software engineering tasks within 6-12 months, signaling a dramatic acceleration in AI capabilities that could transform the tech industry and broader economy.

85% relevant