Hacker News | mk12's comments

It does not, at all. Forming that judgment because of “Enter X” is ridiculous. I recognize my friend Claude in disguise all the time on HN and this is not one of those cases.

Notice the “quiet” at the end. LLMs love to shoehorn “quiet” or “quietly” into their writing. I learned this from Sam Kriss’s NYT piece and I keep noticing it now.

This is one of the best posts I’ve read on this topic in the years since ChatGPT launched. I was hoping it would get more discussion here!

The "knowledge base" at the bottom is 100% slop. Why? Why inflict this on people?


Yeah, you're right — that part is pretty rough. I wanted to help people actually understand compound interest (it's kind of life-changing once it clicks), but I got lazy and let AI do it without proper editing. Defeats the whole point.

I'll figure out a better way. Thanks for calling it out.


I think the words are "you're absolutely right".


You're absolutely right too.


lol, the chances of the same person using that kind of phrase and an em dash are vanishingly low


[flagged]


Please don't be hostile to newcomers. That's a way to destroy this place.

https://news.ycombinator.com/newsguidelines.html


These posts will destroy this place. Post your AI-written tools if you like - fine, but using an LLM to reply to comments is just insulting, and will turn this place into a wasteland of LLM output. I wouldn’t post this if I didn’t care about the usual good quality of the discussions on this site.


I appreciate and share your concern for the quality of HN! But there are much better ways to take care of that than aggression towards newcomers.

It's common for communities that feel they're under attack to shoot first and ask questions later when outsiders show up. This is itself a step toward community decline and has particularly bad side effects. We need to consciously avoid that here.

Most HN users don't want to see LLM-generated posts [1] or even LLM-filtered posts [2], and we share those views. But this is a long-term trend that we're all slowly learning how to navigate. Brutalizing noobs doesn't need to be part of that, and we should all be careful about not doing so—partly because it's wrong to treat others that way, and equally because it's necessary to the life of this community to welcome newcomers.

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


I second this.

Vibe coded projects can be cool (if they're impressive), articles about using AI can be cool (from the right people), articles about the future of AI can be cool. All of these can sometimes be too much and some of them are just poor projects / articles etc. But they should definitely be allowed; some of them are genuinely interesting / thought provoking.

Someone prompting gpt-4o "Write a nice reply comment for this <paste>" and then pasting it here is never cool. If you can't write in English, you can use Google Translate or even ask an LLM to translate, but not to write a comment for you!


How do we know if newcomers are real? I think bigDinosaur is reacting to the fact that OP’s entire post and replies appear LLM-generated.


We can't always know for sure and people frequently guess wrong. Because of that, it's important not to lead by attacking the other person, especially when their account is new, because newcomers usually don't know anything about the conventions of this site. Under these circumstances, it's mean and destructive to blast them with rudeness. Rather, we should be welcoming them and gently educating them. If they're a good new user, they'll respond well to that (you can see an example of this here: https://news.ycombinator.com/item?id=46682308). If they aren't a good new user, that will eventually become clear and the immune system (flagging, moderation, community members emailing hn@ycombinator.com, etc.) will eventually take care of it.


OP could be a bot/agent, whatever. Some signs point to it. It could be a funny experiment they are running on HN.


Just another AI-generated website with 5000 calculators thrown together that looks like every single other one. From a brand-new account with a post that looks like it was also written by ChatGPT. Somehow getting enough votes to show up on my homepage.

Things are definitely changing around HN compared to when it first started.


Fair call — it did kind of explode from one calculator to 60+. I’m a real person (long-time lurker, finally posting), but I get why it looks sus. Things are changing fast, and I’m just happy to be part of the messy early wave. Thanks for the honesty.


> Thanks for the honesty.

It's impossible to tell if this is AI or not. Another version of Poe's law. The only thing to do is assume everything is AI, just as you must assume all posts have ulterior (generally profit-driven) motives, all posters have a conflict of interest, etc.

Maybe the only thing to do is stop trying to understand posters' motivations, stop reading things charitably, stop responding, just look for things that are interesting (and be sure to check sources).


Reader, keep in mind that OP being "a real person" has nothing to do with whether their content is appropriate for HN.

Every spammer and scammer, even a bot, is ultimately controlled by a real person in some sense. That doesn't mean we want their content here.


people are hurt because something which defined them as a person can now be done by a machine; don't let them dissuade you


A fair amount of AI hype traffic is likely to be astroturfed and automated. Just serving AI investors.

Anyone who disagrees with the above is just hurt that their manual hyping has been replaced by machines.


People are hurt when people turn person-to-person communication into person-to-machine communication. It's dismissive of their use of genuine wall-clock time trying to engage with you.


I would add to this: skills mean nothing if you don’t use them.

OP made a site with a bunch of calculators. Their critics didn’t make that!


We're busy building real software, not toys. I routinely write all kinds of calculators in my game development, in addition to having 100x more complex code to contend with. This task is as trivial as it gets in coding, considering computers were literally made to calculate and calculation functions are part of standard libraries. OP definitely didn't use Claude to implement math functions from scratch, they just did the basic copy-and-paste work of tying it to a web interface on a godawful JS framework stack which is already designed for children to make frontends with at the cost of extreme bloat and terrible performance. Meanwhile I actually did have to write my own math library, since I use fixed-point math in my game engine for cross-CPU determinism rather than getting to follow the easy path of floating-point math.

It's cool that ChatGPT can stitch these toys together for people who aren't programmers, but 99% of software engineers aren't working on toys in the first place, so we're hardly threatened by this. I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.


> I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.

"Software engineering" doesn't matter to anyone except software engineers. What matters is executing that idea that's been gathering dust for ages, or scratching that pain point that keeps popping up on a daily basis.


Software engineering matters very much to anyone who has ideas or pain points that are beyond the capabilities of a next-token prediction engine to solve.


Some day in the future, this could be a lot like saying “hand-building engines matters to employees at Aston Martin.”


Not really. Those ideas or pain points are simply ignored or endured by anyone who isn't a software engineer until the tools (no-code platform, LLM, etc) become good enough, or someone else builds the thing and makes it available.


Idk, your superiority complex about the whole issue does make it sound like you’re feeling threatened. You seem determined to prove that AI can’t really make any decent output.

What’s even the point of writing out that first paragraph otherwise?


> What’s even the point of writing out that first paragraph otherwise?

I was correcting your misguided statement:

> Their critics didn’t make that!

by pointing out that we, among other things, build the libraries that you/Claude are copy-and-pasting from. When you make an assertion that is factually incorrect, and someone corrects you, that does not mean they are threatened.


Did you build a library?

If you did, did you put yourself in a clean room and forget about every existing library you’ve ever seen?

Have you made sure your code doesn’t repeat anything you’ve seen in a CS101 textbook? Is your hello world completely unique and non-identical to the one in the book?

When you write a song do you avoid using any chord progression that has been used by someone else?

LLMs are just doing a dumbed down version of human information processing. You can use one to make an app and tell it not to use any libraries. In fact, I’d argue that using an LLM negates the need for many libraries that mostly serve to save humans from repetitive hand-writing.

You can even tell AI to build a new library which essentially defeats your entire argument here. Are you trying to imply that LLMs can’t work at an assembly language level? I’m pretty sure they can because they’ve read every CS textbook you have and then some.

Will it be quality work? The answer to that question changes every day.

But the fact remains that you are indeed acting threatened. You’re not “correcting” me at all, because I didn’t claim that AI-assisted developers are doing anything in some kind of “pure” way.

My claim is that they’re seeing something they want to exist and they’re making it exist and putting it out there, while the vast majority of haters aren’t exactly out there contributing to much of anything in terms of “real software engineering.”

Imitation is a form of flattery. When something “copies” you and makes it better/cheaper/more customized, that’s a net gain. If AI is just a fancy copy machine, that functionality alone is a net benefit.


> My claim is that they’re seeing something they want to exist and they’re making it exist and putting it out there, while the vast majority of haters aren’t exactly out there contributing to much of anything in terms of “real software engineering.”

Except that they didn't. They thought of something and then asked a tool to make it, badly. I know it's hard for a lot of people to separate the two, and it makes them feel like they made something. It's especially bad when that thing then has stuff on it like "Made with attention to accuracy" or some similar marketing claim when there is zero accuracy and a bunch of mistakes in there.

But me running the cmd to create the Hello World Angular example does not mean I made anything.


> We're busy building real software

My response is perhaps a bit raw, but so is the quote above.

Stop with the gatekeeping. I've studied CS to understand coding, not to take some sort of pride in building "real software". Knowledge is a tool, nothing more, nothing less.

There are enough developers whose whole job it is to edit one button per week and not much more. And yes, there are also enough developers that actually apply their CS skills.

> but 99% of software engineers aren't working on toys in the first place

Go outside of your bubble. It's way more nuanced than that.

> I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.

Moving goal posts. Always has been.

It's not that I fully disagree with you either. And I'm excited about your accomplishments. But just the way it reads... man...

I guess it hits me because I used to be disheartened by comments like this. It just feels so snarky as if I am never good enough.

The vibe is just "BUH BUH BUH and that's it." That's how it comes across.

And I've matured enough to realize I shouldn't feel disheartened. I've followed enough classes at VUSEC with all their Rowhammer variations and x86-64 assignments to have gotten a taste of what deep tech can be. And the thing is, it's just another skill. It doesn't matter if someone works on a web app or a deep game-programming problem.

What matters (to me at least) is that you feel the flow of it and you're going somewhere, touching an audience. Maybe his particular calculator app has a better UX for some people. If that's the case, then his app is a win. If your game touches people, then that's a win. If you feel alive because you're doing complex stuff, then that's a win (in the style of "A Mathematician's Apology"). If you're doing complex stuff and you feel it's rough and you're reaching no one with it, it's neutral at best in my book (positive: you're building a skill; negative: no one is touched, not even you).

Who cares what the underlying technology is. What's important is usability.


> Moving goal posts.

Feel free to point out where I moved goal posts. To say that I moved goal posts would imply that at one point I stated that creating a trivial website was software engineering. If you're comparing my statement to what some other person said, who made arguments I did not make, then we cannot have any kind of constructive dialogue. At that point you are not talking to me, but talking to an imaginary projection of me meant to make yourself feel better about your argument.

> Stop with the gate keeping.

I'm not gatekeeping anything. You can disagree with my descriptive terms if you want, but the core point I'm trying to get across is: what people are doing with Claude can not replace what I do. I would know, I've tried extensively. Development is a lot of hard work and I would love it if my job were easier! I use LLMs almost every day, mostly for trivial tasks like reformatting text or writing advanced regex because I can't be bothered to remember the syntax and it's faster than looking it up. I also routinely pose SOTA models problems I'm working on to have them try to solve them, and I am routinely disappointed by how bad the output is.

So, in a thread where people were asserting that critics are merely critics because they're afraid of being replaced, I pointed out that this is not factually correct: no, we're not actually afraid of being replaced, because those of us who do "real" engineering (feel free to suggest a different term to substitute for "real" if the terminology is what bothers you) know that we cannot be replaced. People without experience start thinking they can replace us, that the exhilarating taste of coding they got from an LLM is the full depth of the software engineering world, but in fact it is not even close.

I do think that LLMs fill a useful gap, for projects where the time investment would be too large to learn to code and too unimportant to justify paying anyone to program, but which are simple enough that a non-engineer can have an LLM build something neat for themselves. There is nothing wrong with toys. Toys are a great thing to have in the world, and it's nice that more people can make them[1]. But there is a difference between a toy and what I do, and LLMs cannot do the thing I do. If you're taking "toy" in a derogatory manner, feel free to come up with another term.

[1] To some extent. While accessibility is generally a great thing, I have some misgivings. Software is dangerous. The web is arguably already too accessible, with frameworks enabling people who have no idea what they're doing to make professional-looking websites. These badly-made websites then go on to have massive security breaches that affect millions of users. I wish there was a way to make basic website development accessible, whether through frameworks or LLMs, in a way that did not give people using them the misplaced self-confidence to take on things way above their skill level at the cost of other people's security.


You're right that this is simple compared to what real engineers build. I have a lot of respect for people like you who write things like custom math libraries for cross-CPU determinism — that's way beyond my level.

I'll keep learning and try to make this less of a toy over time. And hopefully I can bring what I've learned from years in investing into my next product to actually help people. Thanks for the perspective.


This response makes it 100% clear to me that this is just a bot. The verbatim use of a completely random thing like 'custom math libraries for cross-CPU determinism', combined with the agreeable tone and em dash use are pretty much a dead giveaway.

The internet has become so useless, everywhere is just marketing nonsense and bots.


So true. I sometimes wonder how many AI bots there really are. I often see the telltale signs, but I probably often miss them.


What are you implying? He would have had to hire a good developer for at least a full month's salary to build something like this.

And if you are thinking enterprise, it would take 2-3 developers, 2 analysts, 2 testers, 1 lead, and 1 manager 2-3 months to push something like this. (Otherwise why would leading banks spend billions and billions on IT development every year? What tangible difference do you see in their websites/services?)

5000 calculators may look excessive, but in this case it showcases what AI will be capable of in the future - both in terms of quality and quantity.


> (Otherwise why would leading banks spend billions and billions on IT development every year? What tangible difference do you see in their websites/services?)

Well, I don't think all those people are spending their time making simple calculators.


Twitter/X incentivizes you to chase engagement because with a blue checkmark you get paid for it, so people shill aggressively and post idiotic comments on purpose, trying to ragebait you. It's like LinkedIn for entrepreneurs. Reddit and its power-hungry moderators (shadow)ban people often. The number of popular websites where people can shill their trash is dwindling, so I assume it gets worse here too as a result.


This has been happening a lot recently, where an article immediately sets off all my AI alarm bells but most people seem to be happily engaging with it. I’m worried we’re headed for a dystopian future where all communication is outsourced to the slop machine. I hope instead there is a societal shift to better recognize it and stigmatize it.


I've noticed some of this in recent months. I've also noticed people editing out some of the popular tells, like replacing em-dashes with commas, or at least I think so, because of odd formatting/errors in places where it sounds like the LLM would have used a dash.

But at this point I'm not confident that I'm catching most LLM-generated text, or that I'm avoiding false positives.


>instead there is a societal shift to better recognize it

Unlikely. AI keeps improving, and we are already at the point where real people are accused of being AI.


I hope someone will create a Debian package for Immich. I’m running a bunch of services and they are all nicely organized with user foo, /var/lib/foo, journalctl -u foo, systemctl start foo, except for Immich which is the odd one out needing docker compose. The nix package shows it can be done but it would probably be a fair amount of work to translate to a Debian package.
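For illustration, here is a hypothetical unit file of the kind such a package might ship. Everything in it (the user name, binary path, and dependencies) is an assumption about a package that doesn't exist yet; it just sketches the "user foo, /var/lib/foo" layout described above.

```ini
# Hypothetical /etc/systemd/system/immich.service -- the ExecStart binary,
# user, and dependency list are assumptions, not from any real package.
[Unit]
Description=Immich photo server
After=network-online.target postgresql.service

[Service]
User=immich
# StateDirectory=immich tells systemd to create and own /var/lib/immich
StateDirectory=immich
ExecStart=/usr/bin/immich-server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With a unit like this, `journalctl -u immich` and `systemctl start immich` would work exactly like the other services mentioned.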


I'll try to install it in a short-ish while, and look into its installation in a detailed manner.

I may try to package it, and if it proves to be easy to maintain, I might file an ITP.


You could try setting it up as a Podman Quadlet; those hook into systemd so you can treat them like a normal service.
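A minimal sketch of what that could look like. The image name, port, and volume here are illustrative assumptions; Immich also needs Postgres and Redis, which would be separate `.container` files, so check the Immich docs for the real values.

```ini
# Hypothetical ~/.config/containers/systemd/immich.container
[Unit]
Description=Immich server (Podman Quadlet)

[Container]
Image=ghcr.io/immich-app/immich-server:release
PublishPort=2283:2283
Volume=immich-data:/usr/src/app/upload

[Install]
WantedBy=default.target
```

Quadlet generates an `immich.service` from this at daemon-reload, so `systemctl --user start immich` and `journalctl --user -u immich` behave like any other unit.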


I have a few different types of content on my website and wanted to offer both individual feeds and a combined feed, so I was disappointed that nothing seems to support the category tag. I settled for prefixing the titles in the combined feed, e.g. "[Blog]" for blog posts.
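For reference, this is the per-item element in question (RSS 2.0; Atom has an equivalent `<category term="…">`), shown with hypothetical titles alongside the prefix workaround:

```xml
<!-- One item from a hypothetical combined feed: the <category> tag carries
     the section, but since most readers ignore it, the title is prefixed too. -->
<item>
  <title>[Blog] Example post</title>
  <link>https://example.com/blog/example-post</link>
  <category>Blog</category>
</item>
```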


I think the new async IO is great in simple examples like the one shown in the article. But I’m much less sure how well it will work for more complex I/O like you need in servers. I filed an issue about it here: https://github.com/ziglang/zig/issues/26056


It’s not the same situation because with async/await you end up with two versions of every function or library (see Rust’s std and crates like async_std, Node’s readFile and readFileSync). In Zig you always pass the “io” parameter to do I/O and you don’t have to duplicate everything.


Colors for two ways of doing IO vs. colors for doing IO or not are so different that it’s confusing to call both of them the “function coloring problem”. Only the former leads to having to duplicate everything (a sync version and an async version). If only the latter were a thing, no one would have coined the term and written the blog post.


IMO the problem was never about it actually doing IO or an async action or whatever. It's about not being able to call an async function from a sync function. Because in my experience you almost never wholesale move from sync to async everywhere. In fact I would consider that an extremely dangerous practice.

