A recent update to the ChatGPT mobile application contains code strings that explicitly mention an "image announcement," according to findings shared by developer M1Astra. The discovery, spotted in the app's resource files, is the first technical indicator that OpenAI may be preparing to publicly announce a new image-related capability for its flagship AI product.
What Happened
Developer M1Astra, who regularly analyzes app updates for unreleased features, identified new text strings within the latest version of the official ChatGPT mobile app. The strings directly reference an "image announcement," suggesting the app's interface is being prepared to notify users about a forthcoming image feature. The strings themselves do not reveal specific details about the feature's functionality, release date, or whether it relates to image generation, analysis, or multimodal input.
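As an illustration of how this kind of discovery is typically made, analysts decompress an app package and scan its text-based resource files for new or suggestive strings. The sketch below shows one minimal way to do that; the directory layout, file extensions, and search keyword are illustrative assumptions, not details from M1Astra's actual workflow.

```python
import re
from pathlib import Path


def find_feature_strings(resource_dir: str, keyword: str) -> list[tuple[str, str]]:
    """Scan text-based resource files under resource_dir for lines that
    contain keyword (case-insensitive). Returns (filename, line) pairs."""
    pattern = re.compile(keyword, re.IGNORECASE)
    hits = []
    for path in Path(resource_dir).rglob("*"):
        # Only look at common string-resource formats.
        if path.suffix not in {".strings", ".xml", ".json"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable entries (e.g. oddly named directories)
        for line in text.splitlines():
            if pattern.search(line):
                hits.append((path.name, line.strip()))
    return hits
```

Against an extracted app bundle, a call like `find_feature_strings("Payload/ChatGPT.app", "image announcement")` would surface any matching resource lines along with the files they came from.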
Context
OpenAI has steadily expanded ChatGPT's capabilities beyond text. The model already supports image uploads for analysis via GPT-4V (Vision) in certain tiers, and the company operates the separate DALL-E image generation service. However, a more integrated or prominent image feature within the main ChatGPT interface has been a long-anticipated development. Code strings are often added to applications weeks or even months before a feature's public launch, serving as placeholders for future user interface elements and notifications.
This discovery follows a pattern of feature teases found within app code. In late 2025, strings referencing "voice mode" improvements were found before an official announcement. For a company like OpenAI, which typically controls its announcement narrative tightly, such code leaks provide rare early signals of product direction.
What This Means in Practice
For users and developers, this code hint suggests the ChatGPT product team is in the final stages of preparing a significant image-related update. Practitioners should monitor OpenAI's official channels (blog, social media) for an announcement in the coming weeks. The integration could lower the barrier for creating or analyzing images directly within a conversational AI workflow.
gentic.news Analysis
This code leak aligns with OpenAI's established product integration strategy and the broader industry trend of modality fusion. Historically, OpenAI has developed capabilities in separate silos—DALL-E for image generation, Whisper for audio, GPT-4V for vision understanding—before gradually integrating them into the ChatGPT umbrella product. This potential "image announcement" likely represents the next step in that consolidation, promising a more seamless experience than the current multi-step process of using DALL-E via the API or a separate interface.
The timing is also noteworthy. With competitors like Google's Gemini and Anthropic's Claude offering increasingly robust native multimodal features, OpenAI faces pressure to make its best image technology more accessible within its primary consumer-facing product. A dedicated announcement would help counter the narrative that competitors are moving faster on integrated AI experiences. Furthermore, as we covered in our analysis of the "GPT-4.5 Rumors" last quarter, OpenAI's release cadence has become more incremental, focusing on product polish and integration rather than purely raw model capability leaps. This image feature fits that pattern perfectly.
However, the vagueness of the term "image announcement" leaves several strategic questions open. Is this a new model (e.g., DALL-E 4), a deeper integration of existing DALL-E 3 or GPT-4V, or an entirely new capability like consistent character generation for stories? The code strings offer no clarity, making this a signal to watch rather than a definitive roadmap.
Frequently Asked Questions
What does "image announcement" in the code mean?
It means the ChatGPT app's developers have added text labels (strings) that will be displayed to users when a new feature related to images is launched. This is a standard software development practice to prepare an app's user interface for a future update. It signals that the team is actively working on an image-related feature, though it does not guarantee when, or whether, an announcement will follow.
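For concreteness, such placeholder labels typically live in key-value resource files. A purely hypothetical sketch of what an entry of this kind might look like in an iOS Localizable.strings file (these keys and values are invented for illustration, not the strings M1Astra found):

```
/* Hypothetical placeholder entries for an unreleased feature */
"image_announcement_title" = "Introducing images in ChatGPT";
"image_announcement_body" = "Learn what's new with images.";
```

Because entries like these ship inside the app bundle before the feature is switched on, they are visible to anyone who inspects the package, which is why they serve as early signals of upcoming launches.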
Is this about DALL-E integration into ChatGPT?
While not confirmed, it is the most likely scenario. OpenAI already offers DALL-E image generation via API and has experimented with limited access within ChatGPT. A full public rollout of image generation directly in the chat interface would be a logical next step, competing with similar features in Microsoft Copilot and Midjourney's chat-based interface.
When will this feature be announced?
Code strings can be added long before a launch. Based on past OpenAI patterns—where code hints for "voice mode" appeared roughly a month before announcement—a reveal could happen within the next 4-8 weeks. However, this is speculative; the company could accelerate or delay the timeline based on internal testing and competitive factors.
Will this be a free or paid feature?
If it is a significant new capability like DALL-E integration, it will almost certainly be limited to paying ChatGPT Plus, Team, or Enterprise subscribers initially, following OpenAI's strategy of offering advanced features to its subscription tiers. Free tier users might get limited access or wait longer, as seen with GPT-4o and advanced data analysis features.