Corporate Ethics
30 articles about corporate ethics in AI news
Pentagon's AI Ethics Standoff: Defense Department Considers Banning Anthropic's Claude from Contractor Use
The Pentagon is escalating its dispute with Anthropic over AI ethics, potentially requiring defense contractors to certify they don't use Claude AI. This move follows stalled contract negotiations and reflects growing tensions between military AI adoption and corporate safety principles.
AI Ethics Crisis Erupts as Trump Bans Anthropic, OpenAI Steps Into Pentagon Void
President Trump has ordered federal agencies to stop using Anthropic's AI services after the company refused to lift safeguards against mass surveillance and autonomous weapons. OpenAI has now secured a Pentagon contract to fill the gap, creating a major industry divide over military AI ethics.
AI-Driven Workforce Transformation: The Coming Corporate Downsizing Wave
Industry experts predict massive workforce reductions across public companies as AI adoption accelerates, with projections suggesting 30%+ staff cuts within 18 months. This transformation reflects AI's growing capability to automate complex business functions previously requiring human expertise.
Anthropic's Standoff: When AI Ethics Collide with National Security Demands
Anthropic faces unprecedented pressure from the Department of War to grant unrestricted military access to Claude AI, with threats of supply chain designation or Defense Production Act invocation if they refuse. The AI company maintains its ethical guardrails despite government ultimatums.
Pentagon-Anthropic Standoff: When AI Ethics Clash With National Security
The Pentagon is reportedly considering severing ties with Anthropic after the AI company refused to allow its models to be used for "all lawful purposes," insisting on strict bans around mass domestic surveillance and fully autonomous weapons systems.
OpenAI Researcher's Exit Signals Growing Tensions Over AI Monetization Ethics
OpenAI researcher Zoë Hitzig resigned in protest as the company began testing ads in ChatGPT, warning that commercial pressures could transform AI assistants into manipulative platforms reminiscent of social media's worst excesses.
OpenAI's Surveillance Potential Exposed: Community Note Reveals ChatGPT's Dual-Use Dilemma
A viral community note on Sam Altman's post reveals that ChatGPT's terms allow potential military surveillance applications, highlighting growing concerns about AI's dual-use nature and corporate transparency in the defense sector.
Inside Claude's Constitution: How Anthropic's AI Principles Shape Next-Generation Chatbots
Anthropic's Claude Constitution reveals the ethical framework governing its AI assistant, sparking debate about transparency, corporate values, and the future of responsible AI development. This public-facing document outlines core principles that guide Claude's behavior during training and operation.
OpenAI Drops AGI Clause with Microsoft Ahead of IPO
OpenAI has removed the AGI clause from its Microsoft partnership, ending restrictions that limited Microsoft's access to future AGI systems. The move, reported ahead of OpenAI's anticipated IPO, suggests OpenAI may be preparing to announce AGI milestones.
CS3: A New Framework to Boost Two-Tower Recommenders Without Slowing Them Down
Researchers propose CS3, a plug-and-play framework that strengthens the ubiquitous two-tower recommendation architecture. It uses three novel mechanisms to improve model alignment and knowledge transfer, delivering significant revenue gains in a live ad system while maintaining millisecond latency.
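The summary does not describe CS3's three mechanisms, but the baseline it builds on is worth making concrete: a two-tower recommender encodes users and items into a shared embedding space with separate networks, so item embeddings can be precomputed offline and serving reduces to a dot product plus top-k. The sketch below is a minimal generic two-tower, not CS3 itself; the feature dimensions and linear towers are illustrative assumptions.

```python
# Minimal two-tower retrieval sketch (generic architecture; CS3's three
# mechanisms are not described in the summary and are omitted here).
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 16  # shared embedding dimension (illustrative)

def user_tower(user_features: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Project user features into the shared space; L2-normalize for cosine scoring."""
    z = user_features @ W
    return z / np.linalg.norm(z)

def item_tower(item_features: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Project item features into the same space; each row is one item embedding."""
    z = item_features @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Hypothetical feature matrices and (linear) tower weights.
W_user = rng.normal(size=(8, EMB_DIM))
W_item = rng.normal(size=(12, EMB_DIM))
user = rng.normal(size=8)
items = rng.normal(size=(1000, 12))

# Item embeddings are computed offline; online serving is one dot product
# plus a top-k, which is how two-tower systems keep millisecond latency.
item_emb = item_tower(items, W_item)
u = user_tower(user, W_user)
scores = item_emb @ u
top_k = np.argsort(-scores)[:10]
```

Because the two towers only interact through the final dot product, any "plug-and-play" improvement like CS3 has to strengthen each tower or their alignment without adding cross-tower computation at serving time.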
Gallup: 50% of US Workers Now Use AI on the Job, More Than Double 2023's Share
A Gallup survey of nearly 24,000 US workers in Q1 2026 shows 50% now use AI at work, up from just 21% in 2023. This marks a critical mass for enterprise AI tools and signals a shift from experimentation to operational integration.
Palantir's Alex Karp Weaponizes Critical Theory to Sell AI Ontology
A critique argues Palantir CEO Alex Karp deliberately misapplies Frankfurt School critical theory to market his company's AI platforms to governments, turning philosophical critique into a sales tool for surveillance technology.
Sabicap Develops Brain Wearable to Decode Imagined Speech into Text
Sabicap is developing a brain wearable with tens of thousands of sensors to decode imagined speech into text. The company, backed by Vinod Khosla, aims to create a system that works across users with minimal calibration for broad adoption.
Google Negotiates Pentagon AI Deal with OpenAI's 'All Lawful Uses' Terms
Google is in talks with the Pentagon to deploy Gemini under terms mirroring OpenAI's 'all lawful uses' contract, a reversal from its 2018 Project Maven withdrawal. Anthropic remains excluded for refusing to drop safeguards against autonomous weapons.
Google DeepMind Hires Philosopher Henry Shevlin for AI Consciousness Research
Google DeepMind has hired philosopher Henry Shevlin to treat machine consciousness as a live research problem, focusing on AI inner states, human-AI relations, and governance. This marks a strategic pivot toward understanding what advanced AI systems might become, not just what they can do.
Fortune Survey: 29% of Workers Admit to Sabotaging Company AI Plans
A Fortune survey finds 29% of workers admit to sabotaging company AI initiatives, a figure that rises to 44% among Gen Z. This exposes a critical human-factor challenge in enterprise AI adoption beyond technical hurdles.
Mo Gawdat: AI Will Take Many Jobs in Under 5 Years
Mo Gawdat, former Chief Business Officer at Google X, said AI will take many jobs within five years but will never replicate human connection. He called the resulting economic displacement the real danger.
Sam Altman Advocates for 32-Hour Work Week in AI-Driven Policy Paper
Sam Altman has proposed a 4-day, 32-hour work week as part of a new social contract, reflecting a growing trend among executives to advocate for reduced working hours in the age of AI.
Massive Video Reasoning Dataset Released, Reportedly 1000x Larger Than Predecessors
An unverified report claims the release of a video reasoning dataset roughly 1000x larger than existing benchmarks. If true, it would be a significant resource for training next-generation video understanding models.
Netflix Study Quantifies the True Value of Personalized Recommendations
A new study using Netflix data finds its personalized recommender system drives 4-12% more engagement than simpler algorithms. The research reveals that effective targeting, not just exposure, is key, with mid-popularity titles benefiting most.
Columbia's 'Truss Links' Robots Self-Assemble and Cannibalize Each Other for Parts, Achieving 66.5% Mobility Gain
Columbia University researchers demonstrated 'Truss Links' robots that autonomously self-assemble using magnetic connectors, then selectively disassemble other robots to harvest parts for repair or growth. The system achieved a 66.5% mobility improvement through this zero-waste physical adaptation.
Judge Questions Legality of Pentagon's 'Supply Chain Risk' Designation Against Anthropic, Calls Actions 'Troubling'
A U.S. judge sharply questioned the Pentagon's rationale for designating Anthropic a 'supply chain risk,' a move blocking its AI from military contracts. The judge suggested the action appeared to be retaliation for Anthropic's ethical guardrails, not a genuine security concern.
Rezolve Ai and Microsoft to Spotlight 'Agentic Commerce' at 2026 Fireside Chat
Rezolve Ai announces a fireside chat with Microsoft to discuss 'Agentic Commerce'—AI agents that autonomously shop for consumers. This signals a strategic push to make AI a core transactional layer in retail.
Algorithmic Trust and Compliance: A New Framework for Visibility in Generative AI Search
A new arXiv study introduces Generative Engine Optimization (GEO), a framework for optimizing content for AI search engines. It finds AI exhibits a strong bias towards authoritative, third-party sources, making compliance and trust signals critical for visibility in regulated sectors.
Palantir CEO's Stark Warning: AI Pause Would Be Ideal, But Geopolitical Reality Forbids It
Palantir CEO Alex Karp states he would favor a complete pause on AI development in a world without adversaries, but acknowledges the current geopolitical and economic reality makes that impossible. He highlights that U.S. economic growth is now heavily dependent on AI infrastructure investment.
Palantir CEO Warns of AI Supply Chain Vulnerabilities, Advocates for Domestic Safeguards
Palantir CEO Alex Karp highlights Anthropic's designation as a 'supply chain risk' and argues for domestic AI restrictions to protect national security and technological sovereignty in an increasingly competitive global landscape.
Sam Altman Envisions AI That Thinks for Days: The Dawn of Super-Long-Term Reasoning
OpenAI CEO Sam Altman predicts future AI models will perform "super long-term reasoning," spending days or weeks analyzing complex, high-stakes problems. This represents a fundamental shift from today's rapid-response systems toward deliberate, extended cognitive processes.
Bernie Sanders Proposes Sweeping Moratorium on New AI Data Centers
Senator Bernie Sanders has introduced legislation to ban construction of new AI data centers, citing existential threats to humanity. Critics argue the move could hinder U.S. competitiveness against China.
Anthropic Takes Legal Stand: AI Company Sues Pentagon Over 'Supply Chain Risk' Designation
AI safety company Anthropic has filed two lawsuits against the Pentagon after being labeled a 'supply chain risk'—a designation typically applied to foreign adversaries. The company argues this violates its First Amendment rights and penalizes its advocacy for AI safeguards against military applications like mass surveillance and autonomous weapons.
Crawlee: The Open-Source Web Scraping Library That Evades Modern Bot Detection
Crawlee, a 100% open-source Python library, enables developers to build web scrapers that bypass modern anti-bot systems with features like proxy rotation, headless browser support, and automatic retries.
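Of the features listed, automatic retries are the easiest to illustrate without the library itself. The sketch below shows the underlying pattern (exponential backoff with jitter) in plain Python; it is not Crawlee's API, and the function names and the simulated fetcher are hypothetical.

```python
# Generic sketch of the "automatic retries" pattern scrapers rely on:
# exponential backoff with jitter. NOT Crawlee's API; its crawler classes
# handle this internally.
import random
import time

def fetch_with_retries(fetch, url, max_retries=4, base_delay=0.5):
    """Call fetch(url), retrying transient failures with backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fetch(url)
        except ConnectionError:
            if attempt == max_retries:
                raise  # give up after the final attempt
            # Exponential backoff with jitter spreads out retry storms.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            time.sleep(delay)

# Demo: a hypothetical flaky fetcher that fails twice before succeeding.
calls = {"n": 0}
def flaky_fetch(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated transient failure")
    return f"200 OK for {url}"

result = fetch_with_retries(flaky_fetch, "https://example.com", base_delay=0.01)
```

Proxy rotation and headless-browser support follow the same spirit: each request is wrapped in policy (which proxy, which browser context, how many retries) so the scraping logic itself stays simple.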