Hacker News | josecodea's comments

You should definitely read "The Ones Who Walk Away From Omelas" by Ursula Le Guin. It is short too!


> 3. With equity like you're serious, make the salaries low-ish. Not so low that it's nonviable for modest family cost of living, but low enough to self-select out the people who aren't committed to the company being successful, or who don't actually believe in the company.

This is how you select for people who can't land a better pay rate elsewhere. There are startups that get better funding and can pay real salaries.


To leverage their pre-existing motivations*, which is the argument on OP's side: this pre-existing motivation is hired for, not generated on the job.


Fully agree.

No one plans to hire their assistant based on how much they will motivate the other people who have to deal with the assistant. Sure, it is important that they are pleasant, but that's it. Their role is really an administrative one of brokering information. Managers are essentially the same role with higher stakes; trying to make it about anything deeper seems to be main character syndrome in full effect.


but we are a family /s


It stands to reason that if a clown can upset you at work, they can also cheer you up. (no, not really lol)


You are being too generous by saying that there are big words in the text. I find it blunt and uncouth. Actually, that's the problem that I see in the text, an attitude of pessimism and lack of self-reflection. An LLM would certainly give me something more interesting to read!


Tailscale Funnel, no?

For the permissions, just add basic auth in the reverse proxy and choose whom to share the password with.

Now if you want OAuth or something like that... well, tough luck, you'll need to set up OIDC or whatever, and that's going to take you some time, but it still works how you want.
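A minimal sketch of the basic-auth route, assuming nginx (with apache2-utils for htpasswd) in front of a local service; the port numbers, username, and realm name here are made up for illustration:

```shell
# Create a credentials file with one user ("friend" is a placeholder)
htpasswd -cb /etc/nginx/.htpasswd friend 'choose-a-real-password'

# Conceptual nginx location block protecting the upstream app:
#   location / {
#     auth_basic "private";
#     auth_basic_user_file /etc/nginx/.htpasswd;
#     proxy_pass http://127.0.0.1:3000;
#   }

# Then publish nginx's listen port over Tailscale Funnel
tailscale funnel 8080
```

Anyone you give the password to can reach the app through the Funnel URL; everyone else hits the 401 at the proxy.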


Perhaps try asking it a question that other people on HN could also answer, lol...


> state it as confidently incorrect

It's funny to me to read this. They don't exhibit "confidence". You are just getting the most plausible text the model can produce. Of course, the training data doesn't contain "I don't know" as answers to questions; that would be really bad training data! If you are getting "attitudes", it's because you are triggering some kind of dialogue-esque data with your prompts (or the system prompt might be doing that).

Expecting the LLM to say "sorry I don't know" would be like expecting google search to return "we found some pages but deemed them wrong, so we won't show you any".

