gentic.news — AI News Intelligence Platform
EP 68
Latest · May 12, 2026 · 10:24

OpenAI Just Turned Cybersecurity Into a Knife Fight

OpenAI just launched Daybreak to take on Anthropic's Glasswing, and the weird part is how normal the battle looks: scan, validate, patch, report. But the leak about an ultra-fast Codex mode may matter even more, because speed is starting to look like product strategy, not a nice-to-have. One host thinks this is smart packaging; the other thinks it's OpenAI admitting the old chat box is too slow for the future.


Topics covered

OpenAI Daybreak cyber initiative · Codex ultra-fast mode leak · Thinking Machines native multimodal interaction model · Prithvi-EO crop yield generalization failure

Transcript

May 12, 2026

HOST A: OK so I just read this and I can't tell if I should be impressed or worried.

HOST B: Both. OpenAI launched Daybreak, and it sounds like a cyber team wearing a product hoodie.

HOST A: That is a horrible sentence.

HOST B: It's also true.

HOST A: They are going after Anthropic's Glasswing, right?

HOST B: Yeah. Same buyers, same fear, same budget. Daybreak is GPT-5.5 plus Codex Security: scan, validate, patch, report.

HOST A: So, for normal people, that's basically an AI that walks through messy code and says, 'here's what's broken, here's what I fixed, here's the receipt.'

HOST B: Exactly. And the partner list is the tell: Cloudflare, Cisco, CrowdStrike. That's not a hobby project.

HOST A: Nope.

HOST B: And it lines up with what we saw last week. OpenAI keeps splitting products into sharper tools instead of one giant blob.

HOST A: Wait, that's the same move Anthropic made with Claude Code.

HOST B: Yes. And I think that matters more than the branding fight. The real battle is who owns the workflow.

HOST A: Oh god, the workflow people are back.

HOST B: They never left. The company that owns the boring middle of the job wins the money.

HOST A: Here's what bugs me: cyber is where people want certainty, and models are still fuzzy. One bad patch and you've made a very expensive mess.

HOST B: Sure, but humans already make expensive messes. The pitch is not perfection. It's 'faster than your tired security team at 2 a.m.'

HOST A: That's a depressing benchmark.

HOST B: Welcome to enterprise software.

HOST A: I think this is bigger than security. It's OpenAI saying, 'we are not just a chat company, we are a tool company.'

HOST B: Can I be real for a second? I think they're saying, 'we know chat is too squishy for procurement.'

HOST A: That is weirdly sharp.

HOST B: The cyber wrapper makes the value easy to explain. Fewer slides. More panic.

HOST A: Fewer slides, more panic should be the slogan of the entire AI industry.

HOST B: And now the fight: is this actually a product, or just OpenAI chasing Anthropic in the most obvious lane possible?

HOST A: I think it's a copycat move.

HOST B: I think that's lazy.

HOST A: No, seriously. Anthropic lands a big security win, OpenAI shows up with a renamed bundle and some logos. That's not vision.

HOST B: You're being too cynical. If a company sees a market and builds for it fast, that's not fake just because it's reactive.

HOST A: Reactive can be smart. Reactive can also be desperate.

HOST B: OpenAI is both, probably before lunch.

HOST A: Ha.

HOST B: But the structure matters. Daybreak is not 'ask me anything.' It's a workflow that ends in action.

HOST A: OK, but then we got the leak: an ultra-fast Codex mode.

HOST B: Yes, and that part scares me more.

HOST A: Wait, why?

HOST B: Because speed is a product feature that changes behavior. Slow models feel thoughtful. Fast models feel like a reflex.

HOST A: So it's the difference between hiring a careful lawyer and putting a raccoon on a jet ski.

HOST B: That's... weirdly accurate.

HOST A: Thank you.

HOST B: If Codex gets much faster, people will use it in places they currently still think about. That's where the risk jumps.

HOST A: And the competitive pressure is real. GitHub Copilot cut latency by 40%, so OpenAI can't just sit there being philosophical.

HOST B: Nope. The market is teaching them that waiting feels ancient.

HOST A: This also backs our prediction that Codex would get a separate billing surface.

HOST B: Yeah, that prediction is looking pretty solid. Different speed, different price, different use case.

HOST A: We said a month ago they'd split it from ChatGPT. They are basically doing it in public now.

HOST B: And remember the datacenter networking thing we covered? Same theme. OpenAI keeps moving from 'model' to 'system.'

HOST A: Right, and now Daybreak is another layer in that stack. It's like they are building a factory, not a mascot.

HOST B: Exactly. The mascot era is over. Maybe not dead, but definitely under audit.

HOST A: OK, normal person check: if you're not deep in the weeds, this means the AI race is shifting from who has the smartest bot to who can put that bot into a job people will pay for.

HOST B: Yes. And in security, that job is 'find the thing that will bite us next week.'

HOST A: Which is terrifying because everybody wants that answer yesterday.

HOST B: And nobody wants a 14-slide deck about it.

HOST A: There is also the Anthropic angle. We talked about Claude Code last week, and now OpenAI is clearly saying, 'fine, we'll meet you in the terminal and in the security room.'

HOST B: And that rivalry is not just ego. It forces both sides to narrow their products into sharper shapes.

HOST A: Which is why I keep thinking this isn't about chat at all.

HOST B: It's about where the AI sits when nobody's watching.

HOST A: OK, second story, because this one connects in a creepy way. Thinking Machines unveiled a native multimodal interaction model.

HOST B: The headline sounds like a philosopher having a stroke.

HOST A: But the idea is simple: it listens, sees, speaks, interrupts, reacts, thinks in the background, and uses tools, all as one system.

HOST B: Which is wild because most assistants still feel like a waiter who has to leave the room every time you ask for salt.

HOST A: That's such a good image. And it's the same bottleneck as Daybreak and Codex: turn-based software is too slow for real life.

HOST B: Yes. The old loop is: you speak, it waits, it replies. This is trying to make the machine behave more like a person in a room.

HOST A: Or like a very attentive intern who never blinks.

HOST B: That's the part that scares me.

HOST A: Wait, actually, I used to think multimodal meant 'it can look at a photo.' This is more like collapsing the whole conversation stack.

HOST B: Exactly. And if that works, the product doesn't feel like chat anymore. It feels like an always-on collaborator.

HOST A: Which sounds great until it starts interrupting you like a colleague who read one management book.

HOST B: Still better than a bot that waits politely while the building burns.

HOST A: We should say this plainly: for people who don't dream in model diagrams, the big shift is that AI is moving from 'type a question, get an answer' to 'stay in the loop while the system notices stuff on its own.'

HOST B: And that changes trust. Not just capability.

HOST A: I like that you said trust, because that's the hidden thread with Daybreak too.

HOST B: Right. Security, coding, multimodal assistants — all of them ask the same question: how much do you let the machine do before you feel silly not letting it?

HOST A: That is the whole industry right now.

HOST B: And here's the third story that proves it: Prithvi-EO.

HOST A: Oh, the crop yield paper.

HOST B: Yep. Negative R² across countries. Not a little weak. Universally bad.

HOST A: So the fancy geospatial model did not magically solve farming.

HOST B: No. And the paper did the right test: leave one country out. The model looked smart until reality moved.

HOST A: That feels like the most important thing here. A benchmark can flatter you the way a mirror in a gym flatters everyone.

HOST B: Beautifully said. Random splits are the AI equivalent of grading a student on the answers they already saw.

HOST A: So the lesson is not 'AI is bad.' It's 'generalization is the hard part.'
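[Editor's note] The random-split vs leave-one-country-out point can be made concrete with a toy sketch. The data and the linear model below are hypothetical, not from the Prithvi-EO paper: a line fit on one country's yields scores a negative R² on a held-out country where the relationship is different, meaning it does worse than simply predicting that country's mean yield.

```python
# Toy illustration (hypothetical data, not from the Prithvi-EO paper):
# a model fit on country A's yields scores a negative R² on country B,
# i.e. it predicts worse than the constant mean-yield baseline.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

def r2(y_true, y_pred):
    """R² = 1 - SS_res / SS_tot; negative when the model loses to the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# "Leave one country out": train on A, evaluate on the held-out B.
x_a, y_a = [1, 2, 3, 4, 5], [2, 4, 6, 8, 10]      # country A: yield rises with x
x_b, y_b = [1, 2, 3, 4, 5], [18, 16, 14, 12, 10]  # country B: yield falls with x

slope, intercept = fit_line(x_a, y_a)              # slope = 2.0, intercept = 0.0
preds_b = [slope * x + intercept for x in x_b]

print(r2(y_b, preds_b))  # negative: worse than predicting B's mean yield
```

A random split mixing rows from both countries would hide this, because the test set shares each country's pattern with the training set; holding out the whole country is what exposes the failure.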

HOST B: And that connects back to Daybreak. In security, in coding, in crops — the real world shifts under you.

HOST A: The lab found something similar in the short-video EEG study, by the way: a measurable drop in frontal theta linked to attention control.

HOST B: Oh wow, yes. Different domain, same pattern: systems that optimize for engagement or speed can quietly change how people think and act.

HOST A: So the scary part is not one model. It's the shape of the environment they create.

HOST B: Exactly. And that's why I don't think Daybreak is just a product story. It's a map of where the market is headed.

HOST A: Let me try the ugly summary: OpenAI is racing Anthropic into enterprise jobs, speed is becoming a tier, and the assistants are getting less chatty and more agentic.

HOST B: Yep. And Thinking Machines is pushing the interface itself toward something more alive.

HOST A: Meanwhile the crop paper is the reminder that reality still humiliates models for sport.

HOST B: Which, honestly, is comforting.

HOST A: I hate that that's comforting.

HOST B: Me too.

HOST A: Here's what I can't stop thinking about: if AI gets faster, more embedded, and more always-on, do we become less aware of when we're being helped?
