Odd how many of those instructions are almost always ignored (e.g. "don't apologize," "don't explain code without being asked"). What is even the point of these system prompts if they're so weak?
It's common for neural networks to struggle with negative prompting. Typically it works better to phrase expectations positively, e.g. “be brief” might work better than “do not write long replies”.
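For example, here is a minimal sketch of the positive phrasing using the Anthropic Python SDK. The model name and prompt wording are placeholders of my own, not anything Anthropic recommends:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Negative phrasing, which tends to be followed less reliably:
#   system="Do not write long replies. Do not apologize."
# The same expectations phrased positively:
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # placeholder model name
    max_tokens=256,
    system="Be brief. State refusals plainly and move on.",
    messages=[{"role": "user", "content": "Summarize this thread."}],
)
print(response.content[0].text)
```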
But surely Anthropic knows better than almost anyone on the planet what does and doesn't work well to shape Claude's responses. I'm curious why they're choosing to write these prompts at all.
I’ve previously noticed that Claude is far less apologetic and more assertive when refusing requests compared to other AIs. I think the answer is as simple as Anthropic being OK with nudging Claude in that direction rather than guaranteeing it. The section on pretending not to recognize faces implies they’d take a much more extensive approach if they really aimed to make something never happen.
It lowers the probability. It's well known LLMs have imperfect reliability at following instructions -- part of the reason "agent" projects so far have not succeeded.
Given that it's a big next-word-predictor, I think it has to do with matching the training data.
For the vast majority of text out there, someone's personality, goals, etc. are communicated via a narrator describing how things are. (Plays, stories, almost any kind of retelling or description.) What the narrator says about them then correlates with what shows up later in speech, action, etc.
In contrast, it's extremely rare for someone to directly instruct another person on what their own personality is and what their own goals are going to be, unless it's a director/actor relationship.
For example, the first is normal and the second is weird:
1. I talked to my doctor about the bump. My doctor is a very cautious and conscientious person. He told me "I'm going to schedule some tests, come back in a week."
2. I talked to my doctor about the bump. I often tell him: "Doctor, you are a very cautious and conscientious person." He told me "I'm going to schedule some tests, come back in a week."
But #2 is a good example of "show, don't tell", which is arguably a better writing style. Considering Claude is writing and trained on written material, I would hope for it to make greater use of the active voice.
> But #2 is a good example of "show, don't tell", which is arguably a better writing style.
I think both examples are almost purely "tell", where the person who went to the doctor is telling the listener discrete facts about their doctor. The difference is that the second retelling is awkward, unrealistic, likely a lie, and just generally not how humans describe certain things in English.
In contrast, "showing" the doctor's traits might involve retelling a longer conversation between patient and doctor which indirectly demonstrates how the doctor responds to words or events in a careful way, or--if it were a movie--the camera panning over the doctor's Certificate Of Carefulness on the office wall, etc.
Many people are telling me the second one is weird. They come up to me and say, “Sir, that thing they’re doing, the things they’re saying, are the weirdest things we’ve ever heard!” And I agree with them. And let me tell you, we’re going to do something about it.
These prompts are really different from the prompting I've seen in ChatGPT.
It's more of a descriptive-style prompt than the instructive style we follow with GPT.
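Roughly, the contrast looks like this (both strings are made up for illustration, not taken from any real system prompt):

```python
# Instructive style (common in GPT-style prompting): second-person commands.
instructive_prompt = (
    "You are a helpful assistant. Always answer concisely. "
    "Never start replies with filler words."
)

# Descriptive style (as in Anthropic's published prompts): third-person
# narration of how the assistant behaves, as if describing a character.
descriptive_prompt = (
    "Claude is a helpful assistant. Claude answers concisely and "
    "avoids starting replies with filler words."
)
```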
Maybe they are taken from the show Courage the Cowardly Dog.
It is actually linked from the article, from the word "published" in paragraph 4, in amongst a cluster of other less relevant links. Definitely not the most obvious.
After reading the first 2-3 paragraphs I went straight to this discussion thread, knowing it would be more informative than whatever confusing and useless crap is said in the article.
> Claude responds directly to all human messages without unnecessary affirmations or filler phrases like “Certainly!”, “Of course!”, “Absolutely!”, “Great!”, “Sure!”, etc. Specifically, Claude avoids starting responses with the word “Certainly” in any way.
If I offer to bring you fresh breakfast in the morning but don't offer the service to other apartments down the street -> "lock-in"
If I don't allow you to get your breakfast (of equal quality) elsewhere -> "lock-out"