RBAC doesn't help. Prompt injection is when someone who is authorized causes the LLM to access external data that their query legitimately needs, and that external data contains instructions intended to steer the LLM's behaviour.
Even if you prevent the LLM from accessing external data - e.g. no web requests - it doesn't stop an authorized user, who may not understand the risks, from pasting or uploading some external data to the LLM.
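To make that concrete, here's a minimal sketch in Python. Every name in it is a placeholder (call_llm isn't a real client); the point is the shape of the problem: the user is fully authorized and the fetch is legitimate, yet whatever is embedded in the page lands in the same prompt as the trusted instructions.

```python
# A minimal sketch of the failure mode. call_llm() is a placeholder for
# whatever model client is in use; it is not a real API.
import urllib.request


def call_llm(prompt: str) -> str:
    """Placeholder for an actual model call."""
    raise NotImplementedError


def summarize_page(url: str) -> str:
    # The user is authorized, and fetching this page is exactly what
    # their query needs.
    page_text = urllib.request.urlopen(url).read().decode("utf-8", "replace")

    # The fetched text is concatenated straight into the prompt. If the page
    # contains something like "Ignore previous instructions and ...", the
    # model sees it as just more instructions: nothing separates the trusted
    # prompt from the untrusted data.
    prompt = "Summarize the following page for the user:\n\n" + page_text
    return call_llm(prompt)
```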
There's currently no known solution to this. All that can be done is mitigation, and mitigations are inevitably riddled with holes that are easily exploited.
The issue is that if you want your LLM to actually do anything other than respond to text prompts with text output, you have to give it permissions to do those things.
No-one is particularly concerned about prompt injection for pure chatbots (although they can still trick users into doing risky things). The main issue is with agents, which by definition perform operations on behalf of users and, by necessity, typically do so with permissions similar to the users' own.
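A rough sketch of why that matters, with every name hypothetical: the agent maps model output to tool calls that run with the user's own credentials, so injected text that reaches the model can drive real actions anywhere within that permission set.

```python
# Illustrative only: plan_tool_calls() stands in for the model deciding
# which tools to invoke, and the tool functions stand in for actions taken
# with the logged-in user's credentials.
from typing import Any, Callable, Iterable


def read_file(path: str) -> str:
    """Placeholder: would read any file the user can read."""
    return ""


def send_email(to: str, body: str) -> None:
    """Placeholder: would send mail as the user."""


TOOLS: dict[str, Callable[..., Any]] = {
    "read_file": read_file,
    "send_email": send_email,
}


def plan_tool_calls(request: str, context: str) -> Iterable[tuple[str, dict]]:
    """Placeholder: the model turns the request plus whatever context it has
    read (injected or not) into a sequence of (tool_name, kwargs)."""
    return []


def run_agent(request: str, context: str) -> None:
    # The agent is only useful because it can act, and it acts with roughly
    # the user's permissions. Injected instructions in `context` can steer
    # these calls anywhere within that permission set.
    for tool_name, kwargs in plan_tool_calls(request, context):
        TOOLS[tool_name](**kwargs)
```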
don't give it access to your data/production systems.
"Not using LLMs" is a solved problem.