OpenAI’s new web browser, Atlas, has been available for less than two weeks and — so far — only on Apple computers. Still, the product is drawing a lot of attention because it combines a traditional browser with ChatGPT capabilities, creating a new kind of browsing experience that could reshape how people search and interact with the web.
OpenAI positions Atlas as more than a browser. CEO Sam Altman said the company sees AI as an opportunity to rethink what a browser can do. Alongside normal navigation, Atlas includes an “agentic mode” that can act on users’ behalf: shop for items, make reservations, or purchase plane tickets. In a demo, an agent read a recipe, calculated ingredient amounts for a party, and ordered groceries online.
That convenience, however, comes with trade-offs. The generative AI systems that power ChatGPT need large amounts of data to improve. And because Atlas tightly integrates the chatbot with browsing, it can collect types of data a regular browser typically does not. The browser can access email and documents, store so-called “browser memories” about visited sites, and operate with permissions that let an agent perform tasks requiring personal details — payment methods, calendars, contacts, and potentially passwords.
Critics worry that making the browser itself an AI agent amplifies data collection. Tech entrepreneur Anil Dash says OpenAI has largely exhausted what it can get from publicly available web content and is now positioned to gather more user data through tools like Atlas. He cautions that users may be providing OpenAI with more information than they realize and that the company could receive more data than what users see returned to them.
Digital-rights experts echo those concerns. Lena Cohen, a technologist at the Electronic Frontier Foundation, says agentic features introduce higher stakes for privacy and control. Once personal data reaches a company’s servers, she notes, it becomes difficult for users to know how that data will be used or to maintain control over it.
OpenAI’s public statements say that, by default, information pulled up in Atlas will not be used to train its models, though people can opt in to allow training. The company has shared demo videos and online posts describing Atlas’s capabilities and its approach to data, but NPR’s queries about specific security and privacy measures were referred to those materials rather than new detail.
Another technical risk experts highlight is “prompt injection.” These are deceptive instructions embedded in web pages that can manipulate AI agents that visit them. In practice, a malicious page could try to steer an agent’s behavior — for example, nudging it to buy a particular product or to reveal payment information. OpenAI acknowledges prompt injection is an unsolved problem and says it’s working on training models to resist such manipulations.
The privacy trade-offs can be concrete. If you let an AI agent handle shopping or bookings, it will often need a payment method and might require access to calendars or contacts to choose dates and recipients. That access increases the potential exposure of sensitive data and heightens the consequences if an agent is tricked by a prompt injection or if the company’s policies change.
Researchers say the rapid commercialization of AI has outpaced governance. Chirag Shah, a professor at the University of Washington’s Information School, warns that the industry’s “move fast” mentality has real-world impacts: when problems appear, it’s not only code that breaks but people who can suffer privacy harms or financial loss.
For now, Atlas is a bold experiment in blending conversational AI with everyday web tasks. Its features point to what browsing might become when agents can act for users, but they also highlight new vulnerabilities and unclear boundaries around data collection and control. Users considering Atlas should weigh the convenience of agentic assistants against the increased access to personal information, and watch for updates from OpenAI on safeguards, opt-in controls, and defenses against manipulative content.