Hacker News | 9dev's comments

I feel like it’s a niche feature that got way too much attention. In a past job, we maintained a widget customers could embed in their own pages. How much trouble we had with parent styles creeping into our widget and ruining the layout! This would have been so much easier with shadow DOM effectively isolating it from the customer site; that is the only valid use case for it, I feel.

Yet, for actual web components, I entirely agree with you.


> There's exactly zero arguments for any kind of flat or minimalistic design outside of art, or if you want to make a statement.

If that were true, road signage would look a lot different than it does.

Minimalistic design clearly has advantages when quickly grasping intent is key.


Road signage has a lot of constraints that don't apply to computer UIs.

There’s a neat little story called "Anecdote on the Lowering of the Work Ethic" (Anekdote zur Senkung der Arbeitsmoral) that you might like. You can read the plot on the Wikipedia page here: https://en.wikipedia.org/wiki/Anekdote_zur_Senkung_der_Arbei...

"More reach" seems a valid enough goal/desire in and of itself (even if you deride it as a shallow form of communication, shallow attention is what provides the opportunity for deeper connections); this sets the goal-activity of creative pursuits apart from "lounging alone at the beach" (which is itself a flawed representation of retirement, but that's another story).

The reason OP gave for trying to achieve more reach was this:

> it's been pointed out to me in harsh ways I could be easily growing if I tried a little harder, so I've invested more resources into the channel, equipment, actually trying growth, etc.

…which made me think of the tourist in the story.

Is it really more reach that they desire, transforming their content into whatever sates the algorithm, chasing metrics, investing time and money? Or is their current level of reach perhaps already enough as it is, a work of love and dedication, without someone—something—else deciding what’s best?


Not the person you responded to, but I very much liked this. Thank you for sharing.

> The apocalypse is delayed, permanently.

Until it isn't. The Cuban Missile Crisis could have put a very permanent end to it all had cooler minds not prevailed, and that was a binary moment. There's absolutely no guarantee the coin won't come up tails on the next toss.


The Cuban Missile Crisis, I would say, was a lot less precarious than Able Archer or the 1983 Soviet nuclear false alarm incident, which was averted by, ahem, an engineer!

https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alar...


There is an incredibly good minute-by-minute account of the Cuban crisis: "One Minute to Midnight: Kennedy, Khrushchev and Castro on the Brink of Nuclear War". It covers a lot of episodes that aren't often mentioned, such as the U-2 flight near the North Pole going astray, or the Soviet nuclear cruise missile teams targeting Guantanamo. Taken together with the better-known events, they make it seem remarkable to me that we survived.

In the first 20 minutes of the documentary The Fog of War, Robert McNamara goes over the Cuban Missile Crisis in detail. Even he admits it came down to luck.

His meeting with Cuba in the 90s and the new information presented that McNamara didn’t have during the crisis was especially sobering. McNamara ended the meeting early because he was “unprepared” to learn there were missiles already operational and authorization was already granted to launch if the Cuban build sites were struck.


Didn't watch it. So what about the million dollar question: would nuclear (or global) war have started if the US hadn't had nuclear weapons? I mean, it's the basis of US nuclear strategy, after all.

It must be quite depressing to live life always wondering what could have been.

Not as depressing as life in a world where nobody ever stops to reflect.

Or a life where everyone operates in absolutes, with no shades of grey allowed.

Zero reflection and total analysis paralysis are both non-viable.


Oh, I absolutely agree!

These are questions worth posing. Everyone working in tech right now plays a part in many of the horrors haunting the world, and all of us are partly guilty.

If your answer to this is, "I don't care about the environment, everyone's right to privacy, psychological effects of social media use, or any of those other adverse effects as long as I get a good salary"—that's a valid answer for sure, if you aren't bothered by it. If that is not your answer, maybe it's time to change some things.


I mean “begging the question” in the traditional philosophical sense – assuming the conclusion (societal collapse) as part of the premises. Not the more common vernacular usage, which has come to mean “raising the question”.

Plus the sky wasn’t falling in the last few times I checked.


I looked into the iOS issue once, and in the EU at least, it should be possible to add a minimal implementation of the store API to an app, so other iPhones could download the app from an iPhone hosting it.

After discovering the amount of pain involved with that API, though, I quickly discarded the idea.


You can just airdrop iOS apps to people. I don’t think the recipient needs network connectivity to receive it.

Source? I guess you're thinking of long-pressing an app and sharing it via AirDrop, but that essentially shares a link to the App Store. You're not transferring the app itself.

Using your brain is so vastly more energy efficient that we might only need half of that 30 GW capacity if fewer people had these leftpad-style knee-jerk reactions.

A Gemini query uses about a kilojoule. The brain runs at 20 W (though the whole human costs 100 W). So, the human uses less energy if you can get it done in under 50 seconds.
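For the curious, the arithmetic behind that breakeven, taking the ~1 kJ per query and the 20 W / 100 W figures above as given (both are rough assumptions, not measured values):

```python
# Breakeven: how long can a human "run" on the energy of one LLM query?
query_energy_j = 1000   # assumed: ~1 kJ per Gemini query
brain_power_w = 20      # brain alone
human_power_w = 100     # whole human

brain_breakeven_s = query_energy_j / brain_power_w   # 50 s
human_breakeven_s = query_energy_j / human_power_w   # 10 s
print(brain_breakeven_s, human_breakeven_s)
```

So by the brain-only figure the crossover is 50 seconds; counting the whole human, only 10.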

Where does that number come from? Does it factor in the energy required to build the servers used to train the model? Does it factor in… the training?

There is no end to this energy comparison...

For example does it factor in the 18-24 years needed to train a human and the energy used for that?


No, but humans existing is the one, fundamental constant of our existence. AI, and its usage for the most minuscule of tasks, is a choice.

Hopefully we aren’t doing too much AI training to work out 200 * 1000. If a computer is involved at all, it's disappointing; if AI is used, more so.

It doesn't matter what the model is actually doing at the end of the day, when training and hosting it involves massive amounts of energy.

If humans aren’t more efficient the energy is still used, as they remain alive. Maybe the AI will notice this?

Each person uses about 100W (2000kcal/24h=96W). Running all of humanity takes about 775GW.
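The numbers check out, assuming a world population of roughly 8 billion (an assumption, not stated in the comment):

```python
# 2000 kcal/day expressed as average power, scaled to all of humanity.
kcal_per_day = 2000
joules_per_day = kcal_per_day * 4184        # 1 kcal = 4184 J
watts_per_person = joules_per_day / (24 * 3600)   # ≈ 96.9 W
population = 8.0e9                                # assumed
total_gw = watts_per_person * population / 1e9    # ≈ 775 GW
print(watts_per_person, total_gw)
```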

Sure, using or not using your brain is a negligible energy difference, so if you aren't using it, you really should, for energy efficiency's sake. But I don't think the claim that our brains are more energy efficient is obviously true on its own. The issue is more about induced demand from having all this external "thinking" capacity at your fingertips.


Is there an AI system with functionality at or equal to a human brain that operates on less than 100W? It's currently the most efficient model we have. You compare all of humanity's energy expenditure, but to make the comparison, you need to consider the cost of replicating all that compute with AI (assuming we had an AGI at human level in all regards, or a set of AIs that when operated together could replace all human intelligence).

>all human intelligence

So, this is rather complex because you can turn AI energy usage to nearly zero when not in use. Humans have this problem of needing to consume a large amount of resources for 18-24 years with very little useful output during that time, and have to be kept running 24/7 otherwise you lose your investment. And even then there is a lot of risk they are going to be gibbering idiots and represent a net loss of your resource expenditure.

For this I have a modern Modest Proposal: that we use young children as feedstock for biofuel generation before they become a resource sink. Not only do you save the child from a life of being a wage slave, you can now power your AI data center. I propose we call this the Matrix Efficiency Saving System (MESS).


No one will ever agree on when AI systems have equivalent functionality to a human brain. But lots of jobs consist of things a computer can now do for less than 100W.

Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation.


> Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation.

Add to that the tier-n dependencies this urban lifestyle has—massive supply chains sprawling across the planet, for example involving thousands upon thousands of people and goods involved in making your morning coffee happen.


Wikipedia quoted global primary energy production at 19.6 TW, or about 2400W/person. Which is obviously not even close to equally distributed. Per-country it gets complicated quickly, but naively taking the total from [1] brings the US to 9kW per person.

And that's ignoring sources like food from agriculture, including the food we feed our food.

To be fair, AI servers also use a lot more energy than their raw power demand if we use the same metrics. But after accounting for everything, an American and an 8xH100 server might end up in about the same ballpark.

Which is not meant as an argument for replacing Americans with AI servers, but it puts AI power demand into context.

https://www.eia.gov/energyexplained/us-energy-facts/
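The per-capita figures above follow directly from the totals quoted. Assumptions here: ~8 billion people worldwide, ~340 million in the US, and a US primary energy consumption of roughly 94 quadrillion BTU per year (a recent EIA-order-of-magnitude figure, not from the comment):

```python
# Global and US primary energy per capita, from the quoted totals.
global_power_w = 19.6e12                 # 19.6 TW global primary energy
world_population = 8.0e9                 # assumed
per_person_w = global_power_w / world_population    # ≈ 2450 W

us_energy_j = 94e15 * 1055               # ~94 quads/year, 1 BTU ≈ 1055 J
us_power_w = us_energy_j / (365 * 24 * 3600)
us_per_person_w = us_power_w / 340e6     # ≈ 9.2 kW
print(per_person_w, us_per_person_w)
```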


>Is there an AI system with functionality at or equal to a human brain that operates on less than 100W?

Obviously not equal to a human brain, but my GPU takes about 150W and can draw an image in a minute that would take me forever to replicate.


Obviously we don't have AGI, so we can't compare many tasks. But on tasks where AI does perform at comparable levels (certain subsets of writing, greenfield coding, and art), it performs fairly well. AI uses more power but is also much faster, and that roughly cancels out. There are plenty of studies that try to put numbers on the exact tradeoff, usually focused more on CO2. Plenty find AI better by some absurd degree (800 times more efficient at 3D modelling, 130 to 1,500 times more efficient at writing, or 300 to 3,000 times more efficient at illustrating [1]). The one I'd trust the most is [2], where GPT-4 was 5 to 19 times less CO2-efficient than humans at solving coding challenges.

1: https://www.nature.com/articles/s41598-024-54271-x?fromPaywa...

2: https://www.nature.com/articles/s41598-025-24658-5


I did some math for this particular case by asking Google’s Gemini Pro 3 (via AI Studio) to evaluate the press release. Nvidia has since edited the release to remove the “tons of copper” claim, but it evaluated the other numbers at a reported API cost of about 3.8 cents. If the stated pricing just recovers energy cost, that implies 1500kJ of energy as a maximum (less if other costs are recovered in the pricing). A human thinking for 10 minutes would use about 6kJ of direct energy.

I agree with your point about induced demand. The “win” wouldn’t be looking at a single press release with already-suspect numbers, but rather looking at essentially all press releases of note, a task not generally valuable enough to devote people towards.

That being said, we normally consider it progress when we can use mechanical or electrical energy to replace or augment human work.


While I don't know how more or less efficient it is, WolframAlpha works well for these sorts of approximations, and shows its work more clearly than the AI chatbots I've used.

The way I use them is for annotating possible extension points or refinements that would improve things but are more of the "nice to have" kind, or stuff someone coming across the code might want to take care of later. Many of these don’t warrant real issues in a tracker, as they would just clog the backlog and eventually get deleted anyway.

Hey Felix, would love to give you feedback, but the language redirect of the website is trying to route me to de-de, and thus I can't see the page.

You might want to fix this.


I think this should be fixed now. If not, can you tell me the URL you're getting redirected to?

You could make that argument about HTTP, SMTP, Slack, or the English language. It turns out that yes, the actual concrete points of collaborative interaction do need to be standardised to a degree, unless you're imagining everyone shouting into their own particular flavour of void, with no means of communicating. You can have different clients speaking the same protocol, but you can't have different protocols.

Even before git, you generally had to use what your team was using, or the FOSS project you were trying to contribute to. So it's kind of a moot point.

