This sounds exactly like what Google used to say about search results. Just a few ads, clearly separated from organic results, never detracting from the core mission of providing the most effective access to all the world’s information. (And certainly not driven by a secret profile of you based on pervasive surveillance of your internet activity.)
It often seems that beginning advertising is not the first step on a slippery slope. Not having a plan to avoid advertising is the first slippery step.
This is because we have so many examples showing that not having a plan against advertising is the first step toward having advertising, that the advertising will be optimized for profit, and that it will frustrate users.
I think the problem is that advertising is one of the few areas where you can scale revenue without the user’s permission. Once you start depending on it, there’s always pressure to beat last quarter’s numbers and it’s easy to tell yourself that users don’t care, and the heat if any arrives years later.
Indeed. Let's look at Google's launch of AdWords in October 2000:
> Google’s quick-loading AdWords text ads appear to the right of the Google search results and are highlighted as sponsored links, clearly separate from the search results.
I think the word doesn't have a good analogue, so I support it. I wish it sounded a bit less sophomoric, but the concept is sound: the intentional worsening of a product to extract more revenue, not just by charging more, but by making it worse for its intended purpose.
I don't think "degradation" or "decay" capture this: those are more associated with a process in nature, or the laws of physics, and especially with something unintentional (like "bit rot").
I like Cory Doctorow, so I might be a bit biased here. Would be interested in alternatives that capture the intentional aspect.
Are you defining "evil" as "has an ad-supported product"?
In every tech generation, for good or bad, mainstream consumers choose the ad-based product over the paid product.
So every company that wants scale in the long-term ends up adopting an ad-based free tier to avoid becoming niche, it seems. Even the majority of HN users now appear to use gmail despite paid email hosts being incredibly cheap.
Edit: Not sure why the downvotes. Would you prefer that OpenAI leaves Google (who is ad-supported) to win the general public? I'm saying the above as someone who does purchase the ad-free plan when available, and uses paid email.
I think I was writing something about Gmail and my train of thought just kept going until I hit a jackpot (I think), and I'd like to share it. It took me an hour of just talking to myself.
I tried to evaluate Gmail alternatives (Mxroute, Cranemail) and some VPS costs: something most people might use for their use cases and actually "own." (Sometimes with nearly as much autonomy as Google itself has, since I'm sure Google partners with datacenters too, which is technically similar to colocation.) These providers are usually autonomous and give you far, far more freedom than the arbitrary terms and conditions set by, say, Google for Gmail.
If we do some cost analysis, I feel like these are going to be cheap, whether for a frugal person like me who will cut extreme corners while still evaluating everything, or for a more average person who might join a particular forum during Black Friday, pick one of the best-regarded providers or running deals, or just stick with a single provider. On average, the cost shouldn't exceed $25-30 per month for mail, domain, and a VPS to host open source on. In essence, this is the price of their privacy.
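For what it's worth, the rough math can be sketched. The price points below are assumptions for illustration only, not quotes from Mxroute, Cranemail, or any actual provider:

```python
# Back-of-the-envelope self-hosting budget.
# All figures are hypothetical monthly prices in USD, for illustration only.
costs_usd = {
    "mailbox (shared email host)": 1.25,
    "domain (annual fee, amortized monthly)": 1.00,
    "small VPS (self-hosted open-source apps)": 5.00,
}

total = sum(costs_usd.values())
print(f"estimated monthly total: ${total:.2f}")  # well under the $25-30 ceiling
```

Even with generous padding for backups or a beefier VPS, the total stays comfortably below the $25-30/month figure mentioned above.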
For people in countries with a strong currency, this is a great deal; they benefit greatly from something like this, and they already spend much more on far less impactful things than their own privacy.
The problem isn't the pricing model. The problem feels like something deeper to me.
It feels psychological. I've observed that people buy Twitter Blue checkmarks, Discord Nitro, etc., which probably cost the same as, if not more than, running one's own Matrix/XMPP server and Mastodon instance, which would provide unlimited freedom of modification instead.
The problem, to me, is that people pay in this context not because of the real value but because of the apparent value.
For them, the value of buying a checkmark and being part of, say, 1 million or 100,000 members out of 100,000,000 (think Twitter) feels better than being 1 out of 25,000-50,000 (running Mastodon).
Why is that the case? I think they aren't reasoning in percentages but in absolute numbers: they "beat" 90,000,000 people, rather than being one member of a unique but small community (the Mastodon example again, where one might feel less satisfied recognizing they are 100 out of 50,000 or so). That holds unless privacy is a goal they value more than that apparent psychological payoff.
So, coming back to the Twitter example: people are willing to pay money to own nothing on a platform where the deal should suck in real value and just about everything else combined, but purely because of this numbers/psychology effect, the deal can make sense. (There is also the influence factor: these websites create artificial scarcity of something unlimited and then fulfill it, so the people who get it feel rarer and gain more influence. That's how people feel on Discord, at the very least.)
Another issue with this system: since it relies on massive numbers of people, and on people willing to pay under a weird deal only after the masses arrive, companies have to offset costs until then. Meanwhile their scope of influence grows, which attracts the type of people notorious in the VC industry. That link to VC, I feel, pushes them to focus on growth and then maximally rent out profit, almost like a landlord, something even Adam Smith wouldn't really appreciate. But that's a point for another day.
My point is that evil becomes an emergent property of such a system, even when better options arise, because better options still come with some friction at the start. The pattern is predictable, and "evil" here means starting out smooth and ending up rough (take Google or Reddit as examples). This is "enshittification."
So people are more likely to support evil if other people support it as well. The definition of evil is based on common morals, and our morals simply haven't caught up with these technological advancements. Most people aren't aware of the extent of the damage and privacy breaches, and since these companies keep gaining influence and power, lobbying and keeping people uninformed gets easier, because the companies themselves are becoming the landlords of information.
So is this path inevitable? No, not really. Earlier I mentioned the $30 figure, but companies like Proton offer deals where you still get privacy without the tech know-how, which might be good for the average person. And people will push back, but only if they know everything I said above (in their own way) and the perceived value of privacy starts to rise.
I definitely feel there is a psychological, follow-the-masses effect here, and I am sure these companies employ psychologists too. In a way, our brains still run on primordial hardware, acting as if we're hunting in the jungle and might die tomorrow without food, when we now have to think 10-20 years ahead.
So, as much as we on Hacker News like to think we're smart, I feel the amount of psychological research put into these algorithms is precisely why even we, of all people, still use Gmail.
I don't believe the answer is that it's a superior product, but rather the psychological and other reasons I mentioned. This is also precisely why the small computing or indie computing movement (where individuals like you and me create computing businesses and services that you and me can be part of) struggles compared to the large tech behemoths.
Honestly, thinking about it: as we say, fight fire with fire. Should we fight psychology with psychology? Effectively creating a movement that can go "viral," using these social media platforms as its hosts to spread a positive idea instead of a negative one, which could limit the influence of the algorithm itself.
In fact, the anger against this system is so strong that even a well-intentioned idea like "Clippy" became a movement that amassed at least millions in a similar fashion.
So I guess we need more Clippy-like movements, and we need psychologists to help us develop them, so we can pour our collective energy into one place instead of diversifying it and going nowhere.
Pardon me if this feels a little off-topic. I haven't re-read the post; I just went with the flow, writing whatever came into my head after talking to myself about it. The idea of an indie tech movement is something I think deeply about from time to time.
Google is googley. Very different from any other company ever. You can trust us. With search results. Your private emails. Your private documents. Remember our motto: don't be evil. We will never change.
To be fair, they open with a big lie about how useful agents and AI in general are, which helps set the tone for what comes next. Part of me wonders if it's intentional: a way to weed out the non-marks before getting to the punchline that they're rolling out the most predictable monetization attempt ever.
I mean, Google ads are still clearly separated and labeled as such (there's even a "hide sponsored results" button). Not sure why people even click on the ads when the actual result is right below, but that's usually not me.
This is not how most users perceive it. To us techies, sure. But whenever I watch a regular person using Google, they invariably click whatever the top result is (usually sponsored) and don't see any distinction.
Sure, but then the advertising model is working, at least for Google and the companies that pay them. If people won't read a big heading literally called "Sponsored results" [0], I don't know what to tell them. Or they just don't care, because they're not paying anything to click.
Good screenshot! Ads take up the majority of the space on that page and are styled to look almost identical to search results. That's a problem for people like me who expect a search engine to primarily deliver search results, not ads.
While true, it's still a user-hostile move. You kinda have to meet your customers where they are. If people are clicking ads without knowing it, that's a serious design problem. Yes, people should learn to read, but the risk of placing too much burden on users is that all it takes is one ambitious product manager to push an A/B test that generates huge revenue wins while enshittifying the product for everyone else.
I'm not sure it is a problem. It's Google's page; they can do whatever they want with it, and of course they'll take the profit-maximizing action. Who is anyone to say it's a serious design problem?
That's why it makes a cool 100 billion in profit every year. It's one of the best money printers ever conceived, because it controls the distribution. We'll see how OpenAI does.
People are reacting negatively to the ads, but there's a bigger point. This is bearish as heck for AGI. If OpenAI were recursively improving their general-computer-using agent, who was going to be superhuman at every job, they wouldn't need to be messing around with things like this.
ChatGPT is a useful product, which they're monetising in a well-travelled internet company way. The bad news is you're going to have ads in your ChatGPT in 2030. The good news is you're still going to have a job in 2030.
They don't need AGI to fire you, or to not create jobs you would have taken. All these pro-AI devs on here talking about 10x productivity gains in their own work, as if management isn't looking at those claims and thinking about a 10x reduction in headcount.
Increased productivity increases the value of work and the number of areas it is useful to apply it. Yes, if you are working for a non-growth firm with basically fixed sales, a productivity increase translates to a headcount decrease in that firm, but across the industry it means more jobs at higher pay, as shown by the whole history of productivity improvements in software development.
Sure, but any competitor is looking at their competition maintaining level productivity with 10x headcount reductions and thinking: "If I use AI with the staff I have, without firing anyone, I can ship 10x the product of that idiot cutting off their own nose over there."
More product more problems. Can you get 10x the sales? If you can't then the headcount reduction looks pretty compelling. If you can get 10x the sales, why aren't you already scaling labor?
Yes, it means they don't expect fast takeoff in the next year, but we already knew that.
Revenue from their free users might just be a way to make things more sustainable, and/or to make fundraising from investors easier (which has an immediate benefit).
Seeing the message "you've reached your limit..." makes free users switch to other AI providers, and ads are a way to fund higher limits. Their prime competitor, Google, has ad income from its users, so it has an advantage.
A $20/month product with ads? You'd have to be an exec to think that's a good idea. Can they even push enough ads to ever turn a profit? The best-case scenario for OpenAI at this point is to declare bankruptcy.
That's a Netflix + Hulu subscription - with ads in both. Before streaming people regularly paid $50/mo (not adjusted for inflation) for cable TV with ads.
While it's easy to bemoan Google pushing ads into every corner of our digital lives, I think they arguably offered an unprecedented level of services relative to the number of ads, and we all got used to that.
Now whether OpenAI could ever push enough ads to make a profit: I have no idea! It's very interesting to see this race actually start.
Maybe it is more successful elsewhere, but over here the type of ads and repetition make me think more money is spent on ad infrastructure than is gained in revenue (eg. three ads in a show, all identical, all advertising the platform you are watching). I'm left with the impression that the actual reason is not to sell ads, but to annoy customers into paying for higher tiers. It is not that we have gotten used to ads, but our dislike is being weaponized.
Sometimes when I see my parents or other non-tech people using their phones I'm just aghast at what they put up with. We truly never left the Bonzi Buddy era of the 90s. Simple candy crush clones with banner ads on the top and bottom + interstitial ads every few minutes. Maybe throw in some gambling...
...or visit any given US newspaper or local TV station site without an ad blocker. Fans will spin, scrolling will stutter, and what little content there is will barely be visible through the videos about how chugging olive oil like jesus will give you abs like judas.
The combination of technical prowess and relative wealth of the average HN commenter means I bet we see 1/100th the ads of the average consumer. It's wild out there.
It's almost like LLMs represent a fairly useful but modest step forward, instead of a complete and utter paradigm shift that will upend society and put everyone out of a job.
> You need to know that your data and conversations are protected and never sold to advertisers.
> we plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.
There is a severe disconnect between these two statements: the advertiser now knows what your conversation was about! This gives ad campaigns a lot of leverage to craft targeting criteria specifically designed to identify the exact behavioral and interest segments they want.
The advertiser doesn't know what the conversation is about; it just knows that its product was relevant to it. I don't think this is a big deal. It's like saying that if a user downloads a gacha game, the studio learns the user is likely interested in gacha games. Learning that a user was talking about gacha games with ChatGPT doesn't really give any additional information.
Approach it from the other angle: in what scenario would it be bad? It's not hard to see very real short-term possibilities where it matters. A 16-year-old looks up on ChatGPT how she can discreetly check whether she is pregnant and what her options are. The advertiser could literally be anyone targeting pregnancies, including governments or action groups, who now have some information about that user's conversations.
The industry would quickly develop data exfiltration and reporting on users to merge this information and improve inferences, with confidence values on target populations and individuals.
Hard disagree. Advertisers (or people with worse motives) will be very creative with the targeting parameters ChatGPT ads offer, and suddenly they can make educated guesses about groups or even individuals. I remember a story posted a couple of years ago about someone who circumvented Facebook's rules to display ads to just one person, their roommate, and used that to freak them out.
Well, Abraham Lincoln's favourite game is Raid: Shadow Legends. This is well documented in Lincoln and the Fight for Peace (John Avlon, 2023) and Abraham Lincoln: A Life (Michael Burlingame, 2008).
(At which point will malignant/benevolent AI agents take over from us mere mortals poisoning the well and make it all useless?)
> We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.
Are they mincing words here? By "selling your data" they mean they'll never package up the raw chats and send them to whoever is buying ads. OK, neither does Google. But they'll clearly build detailed profiles on every preference or product you mention, your age, your location, etc., so they know which ads to show you? "See, this is not your data, it's just preference bits."
> But they'll clearly build detailed profiles on every preference or product you mention, your age, your location, etc. so they know what ads to show you?
I'd guess an advertiser can ask OpenAI to "show this ad to people aged 18-34," and then anyone who clicks through and buys is certainly known to be 18-34, since they came from the ad. But there's no way for advertisers to directly buy a list of people who are 18-34 but don't buy something from their website.
That's how it often works and seems in the spirit of the sentence you quoted.
The difference is that they don't want to be the cheap user data peddler #2942. They want to do what Facebook and Google do and use their user data in their own ecosystem to squeeze all the value out of it.
> You can turn off personalization, and you can clear the data used for ads at any time
So yes, it sounds like they'll do exactly what you say. And they will probably have much better user data than Google gets from search, because people divulge so much in chats. I wonder how creepily relevant these ads will get...
It does seem like that is a pretty fundamental difference. They aren’t giving anything to advertisers, just letting them target ads to users who fit in certain categories or whatever.
> In the coming weeks, we’re also planning to start testing ads in the U.S. for the free and Go tiers, so more people can benefit from our tools with fewer usage limits or without having to pay.
This single sentence probably took so many man-hours. I completely understand why they’re trying to integrate ads but this feels like a generational run for a company founded with the purpose of safely researching superintelligence.
You can tell the article is written to calm the major concerns without actually bringing those concerns up.
"We won't share your chats and you can turn off personalization!" Hmm yeah there's a missing piece of info here...
Interesting that OpenAI is trying to hammer the point that they won't sell user data to advertisers.
That's how all the major ad platforms work. I don't personally agree that it constitutes "selling your data," but people certainly describe Google/Meta ads, which function the same way, in those terms. By framing it this way, OpenAI is clearly trying to fool users who really bought into the idea that Google et al. literally sell user data, when they only provide targeting. I guess the hope is that OpenAI's cleaner reputation will make people think there's some actual difference here.
I refuse to believe OpenAI won't sell the data. They'll add the ads, and a couple of days later you'll get a "Changes to our privacy policy" notice. That's how every company has done it.
Having more data than others is a competitive advantage in both the ad and the ai industries. It's why Google and Facebook don't sell your data (unlike, eg, many medium-sized businesses today), they "just" collect it heavily.
I work on ads as a SWE at a company you've heard of (albeit for less than a few years so far).
Maybe OpenAI does things differently, but as soon as an OKR around ad performance gets committed to, the experience will degrade. Sure, they're not selling data; however, they'll almost certainly have a direct-response channel where advertisers tell OpenAI what you've interacted with and when. Ads will be placed and displayed in increasingly aggressive positions, even if it starts out non-intrusive.
I'm curious how their targeting will work and how much control they'll give advertisers to start. Will they allow businesses of all sizes? Will they let advertisers control how their ads work? I bet Amazon is foaming at the mouth to get their products fed into ChatGPT results.
I think Google has already shown that, in the long run, people accept ads and prefer them to paying a subscription fee. If that weren't true, YouTube Premium would have a double-digit percentage of YouTube users and Kagi Search would be huge.
Right but it is widely acknowledged that despite acceptance (we lack other options) this process eventually degrades the quality of the tool as successive waves of product managers decide “just a little bit more advertisement”.
It is not a choice between ads and subscriptions. The choice is between ads, ad blockers, and subscriptions. Hardly anyone will pay a subscription when they have a free way of blocking the ads. It's wild that an AI company is banking on ad funding when the second major use of the tech will be to block ads entirely, even in the physical world once AR tech is good enough. Now that's a use for the AI chip in my next PC that I can get behind.
The difference here is the qualitative difference that has existed between Google Search results and other competitors. Switching away from Google Search is a high friction move for most people. I'm not sure the same goes for AI chat.
The problem that providers like Youtube have with the "pay to remove ads" model is that the people with enough disposable income that they're willing to pay $14/month to remove ads are the same demographic of people that advertisers are willing to pay the most to show ads to. It's the same reason why if you watch TV during the middle of the day, the ads are all for medicine (paid for by your insurance), personal injury attorneys, (paid for by the person you're suing), and cash advances for structured settlements (i.e. if you already have a settlement paying $500/mo for 30 years but you'd rather have $20,000 now) rather than for anything you actually have to buy.
What will coca cola pay me to sign a contract where I drink nothing but coca cola for this year under penalty of imprisonment? Think I can crack six figs?
Lots of negative comments here. OpenAI has to make a move; they are not profitable and have massive costs and debt. I think this is one of the least bad moves, given all the data they have on users. They could have monetized that data so much more, and sooner.
I think it's damning that they possibly have the most advanced AI on the planet right now, unfiltered, no guard rails, and as much compute as they could possibly want for deep reasoning and inference, and despite that, the best move they could make was enshittification.
I've been bullish for OpenAI, but that's starting to fade. Sama is a master of the artful dodge, though, so it'll be interesting to see what happens. Between the burn rate and the lawsuits and the need for more compute, there's a ton of pressure on them right now.
Once they put ads in it, the algorithms will optimize for engagement and time on platform, not for returning useful (let alone correct) information. This works for Facebook because Facebook is essentially entertainment, but I think it will kill ChatGPT as a useful tool.
The long con is already happening. Some unis are going full tilt on AI. Having mandatory AI courses. Buying chatgpt subscriptions for the students. Making them use ai for certain exercises. Ostensibly it is "preparing them for AI in the workforce" but in practice it is actually shaping up the workforce to be dependent on AI. Get an entire generation of workers to reach for chatgpt to do anything at all and suddenly it doesn't matter if it is less efficient than older methods since no one working will know them.
They aren't going to do this right now, but they almost certainly will in the medium term. It would be legitimately shocking if they didn't continue to follow the same path as Google, Facebook, and pretty much every other big tech company. In OpenAI's case they have even more incentive to abuse their users, since they collect so much detailed personal data and have ways to make ads unblockable by including them in outputs and skewing model weights. I've seen absolutely nothing from the company, its CEO, or its investors that makes me think they won't do the normal thing of gradually making the product worse to wring more value out of their users.
Oh, you sweet summer child. Promises like these are made to be broken [0][1][2]. They would need a mechanism for contractual or regulatory enforcement for these words to carry any weight at all. What makes you think we should give these promises any more weight than promises that OpenAI already[3][4][5] broke?
3: (2024) "OpenAI is developing Media Manager, a tool that will enable creators and content owners to tell us what they own and specify how they want their works to be included or excluded from machine learning research and training." https://openai.com/index/approach-to-data-and-ai/
> Ads do not influence the answers ChatGPT gives you. Answers are optimized based on what's most helpful to you. Ads are always separate and clearly labeled.
I've heard this before from other companies.
OpenAI should just reject all advertisements. That's the only real solution.
At least in the US, ads must be labeled as such by law, so at a bare minimum I expect ad blocker devs will be able to remove them with some work.
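If that mandated label ever surfaces as a machine-readable flag (an assumption; the actual response schema is unknown and the `sponsored` field name below is hypothetical), filtering would be close to trivial:

```python
def strip_sponsored(messages):
    """Keep only entries not flagged as sponsored.

    Assumes each message dict carries a boolean 'sponsored' field,
    a hypothetical schema, not any actual OpenAI API shape.
    """
    return [m for m in messages if not m.get("sponsored", False)]

# Toy example feed mixing an organic answer with a labeled ad.
feed = [
    {"text": "organic answer", "sponsored": False},
    {"text": "sponsored result", "sponsored": True},
]
print(strip_sponsored(feed))  # only the organic answer survives
```

The hard part, of course, is whether the label stays distinguishable at all once ads are woven into the generated text itself, which is the concern raised below.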
There's a whole design niche dedicated to making that label as subtle and hard to see as possible.
And I'm skeptical ads will remain outside of the ChatGPT output for very long. You can hide a div tag, but you can't hide an advertisement streamlined into the "conversation" with ChatGPT. Is ChatGPT recommending product X because they're an advertiser, or because that's what it "learned" on the internet? Did it learn from another advertisement?
I fully expect them to exploit the plausible deniability.
I wonder if the current laws are written in a way that accounts for these models. Sure, if a specific tool call results in a paid product card for pepsi, that ought to be labeled. But what if the number on some pepsi-related weights is massaged just a bit, way early on in the process? What if the training data is tweaked to include some additional pro-pepsi inputs?
I look grimly forward to the future of adblock, which I predict will literally involve a media interception and re-rendering agent that sits between us and everything we see, hear, read, etc. AR goggles that put beach pictures over bus stop posters and red squigglies under sentences with a high enough adtech confidence score. This shit's gonna get real weird in our lifetimes.
These promises are worth nothing without a contract that a consumer can sue them for violating. And hell will freeze over before megacorps offer consumers contracts that bind themselves to that degree.
The next step is to have them natively in the output. And it'll happen at a scale never seen.
Google had a lot more push-back, because they used to be the entity that linked to other websites, so showing the AI overview was a change of path.
OpenAI embedding advertisements in a natural way is much, much easier for them. The public already expects links to products when they ask for advice, so why not tweak the text a little to glorify a product when you ask for a comparison between products A and B?
I think advertising was inevitable for this platform. It is surprising, though, that it was not introduced alongside a groundbreaking new model or service as a form of justification.
Logically, it seems they have either strategized this poorly (unlikely), are under immense immediate financial pressure to produce revenue (most likely, I presume), or there is simply no development on the horizon big enough to justify the shift, so they're just doing it now.
"Logically it seems they either have strategised this poorly (seems unlikely)"
I'm not sure that the company that gave us AI slop charts in the GPT-5 launch should be presumed to be master strategists until proven otherwise.
When the training data is 'the Internet', I don't see why you would pay to relegate yourself to a box people will train themselves to ignore. Instead, why not astroturf and ensure the training data will promote you organically?
To the people trying to read between the lines here, do you think OpenAI cares about what they said or didn't say and won't do a 180 if it means more profits? Like a blog post will stop them?
"Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission and making AI more accessible."
Also, anything that benefits OpenAI or keeps our runway just a bit longer is (by definition) in support of our mission, so we can do anything that we want and say that it is for the good of humanity.
I remember defending OpenAI for still being relatively open, because they weren't gatekeeping the tech advancements they made in model training or inference. But their patent count is shooting up, and I'm sure the next breakthrough they make will get patented as well. The name "OpenAI" will feel so weird in a couple more years, when it'll be the complete opposite with no way to justify the "open" in their name.
It's going to be interesting to see what shenanigans one can pull off by paying to advertise on OpenAI.
Of course they are going to "anonymize" the chats and only extract keyword summaries.
But since some people are generally more candid with chatbots, de-anonymization through keyword selection is trivially possible.
It won't stop at ultra-precise demographic selection (i.e., all males 35-40, living in London, worried about hair loss). They will offer scenarios that Facebook/Instagram could only infer or dream of:
"middle aged woman with disposable income unhappy with spouse."
Where it gets interesting is how they will provide proof that the advert has landed/reached eyeballs.
AI makes it possible to do active ads, for example: "gradually steer users in group A to do B and C." This is possible because AI imitates humans so well and many have made it their secret most trusted advisor. Imagine your best friend sold his soul to adtech and started steering you into a certain direction over a course of months or even years, while providing the adtech with the most intimate knowledge about you, skillfully bootlicking your ego to earn your trust. Very few will be able to resist this.
I wonder if the adverts in the "personal super-assistant", per the blog post, ("that helps you do almost anything"!) will have the same triggers as the shopping assistant, which pops up underneath messages right now in the web UI.
When first trying 5.2, on a "Pro" plan, I was - and still am - able to trigger the shopping assistant via keyword-matching, even if the conversation context, or the prompt itself, is wildly inappropriate (suicide, racism, etc).
Keyword-matching seems a strange ad strategy for a (non-profit) company selling QKV. It's all very confusing!
Hopefully, for fans of personal super-assistants--and advertising--worldwide, this will improve now that ads have been formalised.
I already don't use ChatGPT. I use OpenWeb UI with OpenRouter, and the API costs for my usage are peanuts. Switching to a different interface is so easy that many people will. (You don't need to self-host - T3 Chat, for example.) This is the difference between Google Search and ChatGPT.
> we’re also planning to start testing ads in the U.S. for the free and Go tiers, so more people can benefit from our tools with fewer usage limits or without having to pay
No, that is not why they're doing it. They're doing it to make money.
> Our mission is to ensure AGI benefits all of humanity
No, that is not their mission. Their mission is to make money.
If they wanted to benefit all humanity they would axe the entire operation, do a complete 180, and use all their money to fight as hard as they can against everyone else who is doing what they're doing now.
I'm surprised, and more than a little relieved, that they didn't allow chats to be steered by ads. This could have been a whole new kind of marketing, where product plugs are e.g. slipped into the system prompt and come across as sincere recommendations. I have to wonder if this is still coming down the road.
I guess in the meantime, they will be able to use chat histories to personalize ads on a whole new level. I bet we will see some screenshots of uncomfortably relevant ads in the coming months.
From an ethical standpoint, I think it's... murky. Not the ads themselves, but because the AI is, at least partially, likely trained on data scraped from the web, which is then more or less regurgitated (in a personalized way) and presented alongside ads that pay nothing to the original content creators. So it's kind of like: let's consume what other people created, repackage it, and profit off of it.
Who says we're falling for it? I expect it, as in I believe that's how it should be. I know that offerings can change and that there are paid services that include ads. I know what I'm getting if I sign up for a paid plan with ads. I also think anyone who offers such a thing should be publicly flogged.
(I continue to be shocked how many people—who should know better—are in denial that the entire "industry" of Generative AI is completely and utterly unsustainable and furthermore on a level of unsustainability we've never before seen in the history of computer technology.)
Seems like a big opportunity for Google to consider keeping Gemini ad-free as a differentiator. They can afford to burn cash on it for a long time to come if they choose to do so.
All enterprise users already pay for it. They’ve bundled it by force into the base subscription (and about 30% of our company actively uses it, according to the in-app stats I see as an admin).
The difference here though is that ads are baked into the response via plain text.
How far away are we from an offline, model-based ad blocker? Imagine a model trained to detect whether a response contains ads, blocking them on the fly. I'm not sure how else you could block ads embedded in responses.
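A minimal sketch of that idea: a real blocker would presumably run a small local classifier model over each paragraph of the response, but a keyword/phrase heuristic can stand in for the model here. All signal patterns and thresholds below are illustrative assumptions, not from any shipping product.

```python
# Toy sketch of an on-device ad filter for LLM responses.
# A keyword heuristic stands in for a trained local classifier;
# patterns and the threshold are illustrative only.
import re

AD_SIGNALS = [
    r"\bsponsored\b",
    r"\bpartner(?:ed)? (?:link|offer)\b",
    r"\bus(?:e|ing) code [A-Z0-9]+\b",
    r"\blimited[- ]time (?:offer|deal)\b",
    r"\bcheck out \w+ at https?://",
]

def ad_score(paragraph: str) -> float:
    """Fraction of ad signals (0.0-1.0) that fire on this paragraph."""
    hits = sum(1 for p in AD_SIGNALS if re.search(p, paragraph, re.IGNORECASE))
    return hits / len(AD_SIGNALS)

def strip_ads(response: str, threshold: float = 0.2) -> str:
    """Drop paragraphs whose ad score meets the threshold."""
    kept = [p for p in response.split("\n\n") if ad_score(p) < threshold]
    return "\n\n".join(kept)
```

For example, `strip_ads("Here is how to fix your sink.\n\nSponsored: use code PIPES10 for a limited-time offer!")` keeps only the first paragraph. The hard case, of course, is an ad woven into the same sentence as the answer, which is exactly where a trained model would be needed instead of regexes.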
I work in marketing, this is already a thing but it's called AEO (Answer Engine Optimization). Generally it's not _hard_ to write in such a way that models hook into the desired messages in text, but if you're not careful you look like a cult leader when you do it. I hate it but this is the Internet we got.
Do you have an example of a text or site written in a way that's been AEO'd? I'd be interested to know what that looks like, especially if it sounds cult-ish.
This is going to be very bad. Clearly defined ads are the start, but they will eventually mix ads into responses in the form of sponsored content. It's just the natural progression of things.
I question whether it matters any more. AI chat is clearly going to be the search interface of the future. Phones are the channel for users, with Chrome/Android being one half and iPhone the other. Google just signed up Apple to be the engine for Siri. We also know that users rarely change defaults.
So Google would appear to have boxed OpenAI out of the #1 use case, and they already have all the pieces in place to monetize it. This move by OpenAI isn't surprising, but is it too late to matter?
I'm not sure your logic connects. With respect to "OpenAI being boxed out from [Siri]", advertisement revenue comes neither too late nor too early. Whether or not OpenAI had advertising would not have substantially affected Apple's decision to go with Google's LLM at this time.
If you meant it in a different context, you didn't explain any of the actual context you had in mind.
“Conversation privacy: We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.”
The same sleight of hand that’s been used by surveillance capitalists for years. It’s not about “selling your data” because they have narrowly defined data to mean “the actual chats you have” and not “information we infer about you from your usage of the service,” which they do sell to advertisers in the form of your behavioral futures.
Fuck all this. OpenAI caved to surveillance capitalism in record time.
What you’re reacting to isn’t just “ads.” It’s the feeling of someone monetizing the collective output of human thought while quietly severing the link back to the humans who produced it. That triggers a very old and very valid moral instinct.
“Sleazy” is an accurate word here because it usually means: technically allowed, strategically clever, morally evasive.
I think they realize the end of their moat has come. I see 5.2 doesn't try as hard and gives worse answers. I don't like Elon, but I've found Grok to be better on many questions.
Enshittified, the bright golden AI age began to brown, and regression to the mean once again cast another bleak spell onto humanity. And with that, just as quickly as it broke, another AI winter began. As it turns out, those datacenters were just there to generate shareholder value.
I mean, they certainly know that introducing ads will be a huge motivation for consumers to seek other options.
The primary differentiator of OpenAI is first mover advantage; the product itself is not particularly unique anymore.
IMHO consumers will quickly realize that switching to an alternative AI provider is easy and probably fun.
This seems premature - giving up their moat in the name of revenue. Are they feeling real financial pressure all of a sudden? Maybe I'm missing something. Looks like a big win for Google and Anthropic.
Obviously disappointing, but not entirely shocking given how much capital they've already burned through. Convincing individual users to pay $8/mo was never going to even out the balance sheet.
Somewhat unrelated, but I've been playing this game with Amazon: when they pop open Rufus and start spewing text at me, I remove everything from my cart and see how many weeks I can go without shopping at Amazon. My current record is 3 weeks, but I think I can do better.
More related: I pay for Kagi, because Google results are horrible.
More related: ChatGPT isn't the only model out there, and I've recently stopped using 5 because it's just slow, while other models come back and work just as well. So when ChatGPT starts injecting crap, I'll just switch to something else.
If every time you walked into Walmart the greeter spat in your face and told you to go F yourself, would you still shop there?
If you had told me in 2011, when I first started discussing artificial intelligence, that in 2026 a trillion dollar company would earnestly publish the statement “Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission”, I would have tossed my laptop into the sea and taken up farming instead.
I thought your quote was hyperbole or an exaggerated summary of the post. Nope. It's literally taken verbatim. I can't believe someone wrote that down with a straight face... although to be honest it was probably written with AI
In 2011 I would've had trouble believing there could be a trillion-dollar AI company, but if there were such a company, I could almost have expected it to make such an asinine statement.
I actually use ChatGPT for creating recipes from time to time. I wouldn't be too offended by an "add to Amazon cart" button or a similar type of ad.
What I'm not okay with is being served ads while using Codex CLI, or Codex CLI gathering data outside of my context to send to advertisers. As long as they're not doing that, I won't complain.
If they start doing that, I'll complain, and I'll need to more heavily sandbox it.
Calling it now: people are going to use an LLM layer on top of this in a browser extension that takes the ChatGPT text and removes the ads from it.
No company can survive without advertising. When Google first launched, it was the same. ChatGPT will follow a similar path, and half a century from now the cycle will still continue in the same way. Advertising, regardless of scale, is the art of turning data into revenue. Even if this planning seems insignificant for a company's future today, it will most likely become its greatest advantage.
You’ve equated selling ads, like a newspaper does, with tracking user behavior, collating it with other information purchased on the market, and targeting people to change their behavior. Disingenuous.
Scale changes, time changes, but at its core it's similar. What I look at is ChatGPT's roadmap - a lifeline.
It doesn't save my life, but at least I'm seeing more relevant ads now :) Not getting detergent ads while searching for perfume is still nice, all things considered.
Also, your newspaper is selling the data points it has. If it had more, it would sell more. See: your local paper isn't selling ads to a car wash six towns over. They do, however, sell ads that align with the political affinities of your local newsroom's area.
"advertising, regardless of scale, is the art of turning data into revenue."
This is disingenuous. Putting up a billboard over a highway to make people aware of a certain brand of beer is not the same as building detailed profiles on people in order to sell to the highest bidder the opportunity to change your behavior right when you're likely to do so. But somehow, this user puts them together with the very convenient "regardless of scale."
Maybe you're OK with an entire industry that makes money trying to get you to do what they want -- buy what they want, think what they want. Maybe you're OK with your past behavior being written on a shadow ledger and sold to the highest bidder, traded on the dark web, and used by governments. It's your right to be okay with that, since it's your life. But you being okay with that doesn't change the fact that this is a fundamentally different type of behavior than what is commonly called "advertising." It's a curious equivocation, this sane-washing, and it does make one wonder why an otherwise intelligent person feels the need to do it.