Unlike the Quality evaluators (which talk to an LLM and cache the LLM responses using ResponseCachingChatClient), the Safety evaluators, which talk to the Azure AI Content Safety service, currently do not cache the evaluation responses returned by the service. This issue tracks the task of adding response caching support for the Safety evaluators as well.
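
For illustration, here is a minimal sketch of the kind of caching decorator this would involve, analogous to what ResponseCachingChatClient does for LLM responses. The `IContentSafetyService` abstraction and all member names below are hypothetical placeholders, not the library's actual API; the sketch just shows responses being keyed and stored via `IDistributedCache`:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical abstraction over calls to the Azure AI Content Safety service.
public interface IContentSafetyService
{
    Task<string> EvaluateAsync(string payload, CancellationToken cancellationToken = default);
}

// Decorator that caches service responses, mirroring the approach used for LLM responses.
public sealed class CachingContentSafetyService : IContentSafetyService
{
    private readonly IContentSafetyService _inner;
    private readonly IDistributedCache _cache;

    public CachingContentSafetyService(IContentSafetyService inner, IDistributedCache cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public async Task<string> EvaluateAsync(string payload, CancellationToken cancellationToken = default)
    {
        // Key the cache entry on a hash of the request payload so that identical
        // evaluation requests reuse the previously stored service response.
        string key = Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(payload)));

        string? cached = await _cache.GetStringAsync(key, cancellationToken);
        if (cached is not null)
        {
            return cached;
        }

        // Cache miss: call the real service and store its response for reuse.
        string response = await _inner.EvaluateAsync(payload, cancellationToken);
        await _cache.SetStringAsync(key, response, cancellationToken);
        return response;
    }
}
```

The actual implementation would presumably plug into the same cache store and cache-key scheme that the Quality evaluators already use, so that cached Safety responses participate in the existing reporting and cache-expiry behavior.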