In fact, I would say the opposite is true. LLMs must guard against this as a security measure: in unified models, anything the LLM 'sees' could otherwise be faked.
If, for example, someone could trick you into seeing a $1 bill as a $10 bill, it would be considered a huge failure on your part, and it would be trained out of you if you wanted to remain employed.