WYSIWYG is a concept that pre-dates the web and what this article is talking about is not the same thing. WYSIWYG was coined as a term to describe word processing and desktop publishing software where the appearance of your text matched the final printed output; the same fonts, weights, sizes, styles, etc. That's it.
It's something we mostly take for granted today but was a real advancement over earlier, often text-based, programs that used simple text effects like highlighting or different colors to represent visual effects that were only fully realized when you printed your document.
It has nothing to do with being able to view source, or copy other designs, or any of that.
The term was later also extended to things like visual GUI builders, where the appearance in the editing interface matches the appearance of the final GUI (e.g. the Visual Basic form editor). That particular flavor of WYSIWYG largely went away and hasn't come back, unfortunately.
> It's something we mostly take for granted today but was a real advancement over earlier, often text-based, programs that used simple text effects like highlighting or different colors to represent visual effects that were only fully realized when you printed your document.
I am tasked with maintaining documentation in Confluence and Notion, and I wasn't enjoying it. Then I built a system with bidirectional sync between the two of them and a Git repo full of Markdown documents, and now I find the task to be much more pleasant.
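For what it's worth, a minimal sketch of one direction of that kind of sync (Markdown from a Git repo pushed into Confluence) might look like the following. It assumes the Confluence Cloud REST content API; the base URL, page ID, credentials, and file path are placeholders, not details from the comment above.

```python
# Hypothetical one-way push: render a Markdown file and update an existing
# Confluence Cloud page through the REST content API.
import markdown   # pip install markdown
import requests   # pip install requests

BASE = "https://example.atlassian.net/wiki/rest/api/content"  # placeholder site
PAGE_ID = "12345"                                             # placeholder page
AUTH = ("user@example.com", "api-token")                      # placeholder creds

def push_markdown(path: str) -> None:
    # Convert Markdown to HTML; Confluence stores pages as XHTML ("storage" format).
    html = markdown.markdown(open(path, encoding="utf-8").read())

    # Fetch the current page so we can bump its version number.
    page = requests.get(f"{BASE}/{PAGE_ID}", auth=AUTH,
                        params={"expand": "version"}).json()

    requests.put(
        f"{BASE}/{PAGE_ID}",
        auth=AUTH,
        json={
            "id": PAGE_ID,
            "type": "page",
            "title": page["title"],
            "version": {"number": page["version"]["number"] + 1},
            "body": {"storage": {"value": html, "representation": "storage"}},
        },
    ).raise_for_status()

push_markdown("docs/runbook.md")  # hypothetical file in the Git repo
```

The reverse direction (Confluence or Notion back to Markdown) would go through their content/export APIs, which is where most of the real work in such a system presumably lives.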
WYSIWYG came about when displays became bit-mapped graphics with enough dots per inch.
Previously, displays used a character generator ROM chip which mapped each ASCII code to a fixed dot-matrix glyph. For a terminal I designed and built in those days, I used an off-the-shelf character generator chip with a 5x7 font.
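For anyone who hasn't met one of these chips: a character generator is essentially a lookup table from a character code to a fixed dot pattern that gets shifted out to the display row by row. A minimal sketch in Python, with a single illustrative 5x7 glyph (the bit pattern is typical of such fonts but not taken from any particular ROM):

```python
# Character-generator lookup: an ASCII code selects a glyph of 7 rows x 5 dots.
FONT_5X7 = {
    ord("A"): [0b01110, 0b10001, 0b10001, 0b11111, 0b10001, 0b10001, 0b10001],
}

def render(ch: str) -> str:
    """Return one character's 5x7 dot pattern as text art."""
    rows = FONT_5X7.get(ord(ch), [0] * 7)  # blank glyph for unknown codes
    return "\n".join(
        "".join("#" if row & (1 << (4 - col)) else "." for col in range(5))
        for row in rows
    )

print(render("A"))
```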
Aside from the LLM writing vibes, or perhaps because it was written by an LLM, I think this article has very little tether to reality.
> It’s bringing back something we collectively gave away in the 2010’s when the algorithmic feed psycho-optimized its way into our lives: being weird.
It's really not. Prompting an LLM for a website is the exact opposite of being weird. It spits out something bland that follows corporate design fads and which contains no individuality. If you want to see weird websites, people are still making those by hand; the recently posted webtiles[1] is a nice way to browse a tiny slice of the human internet, with all its weirdness and chaotic individuality.
> It's really not. Prompting an LLM for a website is the exact opposite of being weird. It spits out something bland that follows corporate design fads and which contains no individuality. If you want to see weird websites,
I see your point, but I disagree. You consider part of the "weirdness" to be how it's done; and yes, it is indeed "weird" to learn several languages, consisting mostly of punctuation, in order to create an online self-promotion. But I think for most people, the "weirdness" (or its absence) is to be found in the end result. To that end, if a person wants a personal web page with animated tentacles around the edges and flying bananas in the background and pictures of demonic teddy bears, that is something an AI can easily do when asked.
Back in the bad old days, people created websites because they had no choice in the matter. You simply had to do that to share anything with the rest of the world. Most of the tools we had back then still exist. The barrier to entry has never been lower, and those that are motivated to tinker do just that. But going through history... once mainstream blogging became a thing, and then social media conquered all, the motivation to share with others became monetized, as did the methods of sharing with others. AI isn't going to fix that. On the flip side, those same monsters that destroyed the world we knew through monetizing everything are the same ones spending trillions of dollars on AI.
"You're not the customer, you're the product being sold."
It's all very "free" until ICE rams my car and drags me away because someone sold them the geolocation and facial-recognition data being automatically collected by that "excellent" software.
OK, sure, that's a dramatic example, but the same principle holds for plenty of other scenarios involving employers, insurance rates, etc.
The government already has your facial features recorded and databased (from your passport photo, your driver's license photo, and whenever you get on an airplane). License-plate readers (LPRs) are being installed all over town by the government.
I'm not sure what you mean here: Are you agreeing and layering on other depressing considerations, or are you downplaying that kind of privacy breach as having no effect?
It does matter. Imagine if Anne Frank (or Anna Franco) is hiding in my attic, and then I or a guest accidentally takes a picture, perhaps without disabling the internet connection.
There's also the social graph it allows someone to construct.
This reads like a love letter to our collective youth. I like the perspective! It's interesting too, because I feel a lot of programmer types might see WYSIWYG and AI both as stepping stones towards a more disciplined approach to engineering.
> The barrier to entry is lower than it’s ever been.
I don't see a web full of projects created by people who aren't technical. A substantial number of young people grew up on phones and iPads and might not even understand filesystems well enough to have the imagination to create things like this. So the power exists, but the people taking best advantage of it seem to me to be the same people who were building stuff before the LLMs came along.
> I don't see a web full of projects created by people who aren't technical.
Sure, but this is very new technology. It will take some time for the idea of building software easily to seep into the public consciousness. In that time, AI will get better and the barrier to entry will get even lower.
For comparison, the internet has been around in some form since the 1960s (more or less, depending on which technology you consider its beginning), but it took until the late 1990s or even early 2000s before most people were aware of it, and longer than that before it became central to their lives. I would expect the development of AI-coding-for-the-masses to happen much faster, but not instantaneously.