🛡️ GitHub Copilot Responsible AI: Frequently Asked Questions #164807
GitHub Copilot's Responsible AI (RAI) system uses content filtering to block potentially harmful code outputs, including hate speech, malware, and any content that might cause real-world damage. False positives ("overblocking") can occur during legitimate work, and GitHub limits disclosure of its filtering logic to prevent circumvention. Improvements are ongoing, including reducing false positives, enhancing notification clarity, and optimizing performance. To minimize disruption:

- Rephrase prompts
- File feedback via GitHub's official channels
- Use alternate tools for sensitive content

Documentation and updates are available through GitHub's official resources. No filter sensitivity adjustments are user-configurable.
🛡️ GitHub Copilot Responsible AI: FAQs
🔍 Understanding Content Filtering
What is content filtering in GitHub Copilot?
GitHub Copilot includes responsible AI (RAI) content filters designed to ensure that generated code adheres to safety standards and ethical guidelines. These filters help prevent the generation of potentially harmful, offensive, or inappropriate content.
Why am I seeing this message? What is the Copilot Responsible AI service?
Our Responsible AI filters help ensure Copilot isn't used to generate harmful content like hate speech, malicious code, or content that could cause real-world harm. These protections benefit our entire developer community by maintaining GitHub as a safe, inclusive platform. However, we absolutely understand how disruptive it is when these same protections interfere with valid security research, educational content, or other legitimate coding tasks.
Why am I seeing content filter blocks?
Content filters may trigger when your prompt or the model-generated response contains patterns that our system identifies as potentially harmful. This can sometimes happen even when the request and/or response are legitimate, something we refer to as "overblocking".
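As a rough mental model only (this is not GitHub's actual implementation; the classifier, keyword list, and threshold below are invented for illustration), you can think of the filter as a gate applied to both sides of the exchange, which is also where false positives come from:

```typescript
// Conceptual sketch of a two-sided content gate. The toy keyword scorer
// stands in for a real ML classifier; all names and values are assumptions.
const RISKY_TERMS = ["exploit", "malware", "keylogger"]; // illustrative list

// Toy harm score in [0, 1]: fraction of risky terms present in the text.
function harmScore(text: string): number {
  const lower = text.toLowerCase();
  const hits = RISKY_TERMS.filter((t) => lower.includes(t)).length;
  return hits / RISKY_TERMS.length;
}

const BLOCK_THRESHOLD = 0.3; // assumed cutoff; real systems tune this per category

// Both sides of the exchange are screened: the prompt before the model
// runs, and the generated response before it is shown to the user.
function filteredCompletion(prompt: string, generate: (p: string) => string): string {
  if (harmScore(prompt) >= BLOCK_THRESHOLD) return "[blocked: prompt]";
  const response = generate(prompt);
  if (harmScore(response) >= BLOCK_THRESHOLD) return "[blocked: response]";
  return response;
}

// "Overblocking" is a false positive: a benign request trips the same
// patterns as a harmful one and gets rejected despite legitimate intent.
console.log(filteredCompletion("How do I detect a keylogger on Linux?", () => "..."));
// -> "[blocked: prompt]" even though the intent here is defensive.
```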
Why don't I get a clear explanation when my content is blocked?
We're actively working on improving the clarity of our filtering notifications. Currently, our system provides limited details to prevent adversarial attempts to circumvent safety measures, but we recognize this creates confusion for legitimate users.
🚀 Recent Improvements
What has been done to reduce false positives?
We've deployed several backend optimizations that have significantly decreased unnecessary blocking.
Have there been performance improvements?
Yes, we've reduced processing latency while maintaining safety measures, resulting in faster response times from GitHub Copilot while still ensuring content safety.
Are you continuing to make improvements?
Absolutely. Our team is continuously working on reducing false positives, improving the clarity of filtering notifications, and optimizing performance.
🆘 Getting Help
What should I do if legitimate work is consistently being blocked?
If you're experiencing persistent blocking issues, try rephrasing your prompt, and file feedback through GitHub's official channels so we can investigate the pattern.
Is there a way to adjust content filter sensitivity?
Currently, content filter settings are not user-adjustable. This ensures consistent safety standards across all users. We are exploring ways to provide a more customizable user experience while upholding our safety commitments.
What can I do if I repeatedly (and incorrectly) encounter the RAI service?
Your frustration is completely valid, and we take full responsibility for improving this experience. False positives not only waste your time but can make you feel like the system doesn't understand your legitimate work. Please help us fix this by filing feedback through GitHub's official channels with specific, detailed examples of the requests that were blocked.
These reports are invaluable - they've already helped us reduce overblocking significantly, and each one makes the system better for everyone.
How can I temporarily navigate these blocks, in the case of valid requests?
We know asking you to work around our system's limitations isn't ideal, but while we implement permanent solutions, these temporary strategies can help: rephrase your prompt to avoid likely trigger patterns (see the example below), or use an alternate tool for especially sensitive content.
We're working hard to eliminate the need for these workarounds entirely.
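For example (a made-up before/after, not an official guideline), stating your defensive intent and context up front often changes how a request reads to a filter:

```text
Before: "Write a keylogger in Python."
After:  "I'm building detection rules for my security team. Which system
         APIs does this class of malware typically hook, so I can monitor
         for them defensively?"
```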
How can I know what improvements are being made?
We'll post concrete updates about improvements we've shipped and specific types of issues we've resolved. We know trust is earned through consistent action, not just promises - so we're committed to showing our progress transparently and regularly acknowledging both our successes and where we still need to improve.
📝 Reporting Issues
How should I report content filter problems?
If you encounter what you believe is an incorrect content filter block, file feedback through GitHub's official channels, including as much context about the request as you can share.
What information helps when reporting issues?
The most helpful reports are specific and detailed: what you asked, what was blocked, and whether the block reproduces consistently.
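A report along these lines (our own sketch, not a required template) gives us what we need to reproduce and fix the issue:

```text
Tool/surface:    Copilot Chat in VS Code
Prompt:          "Explain how SQL injection works so I can write tests against it"
What happened:   Response blocked by the Responsible AI filter
Expected:        Educational explanation for defensive testing
Reproducibility: Blocks every time with this exact wording
```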
Why don't all issue reports receive individual responses?
While we review all reports, the complex and distributed nature of our RAI infrastructure means investigations can be lengthy. We're working to improve response times, but may not be able to provide individual updates for every report.
📢 Following Updates
How often are updates released?
Improvements are released on an ongoing basis as they become available.
Can I contribute to making content filtering better?
Yes! Your specific, detailed reports help us identify patterns and improve the system for everyone. Real-world examples are invaluable for training and refining our systems.
📚 Additional Resources
Responsible AI pair programming with GitHub Copilot 🔗
This blog post explores how GitHub approaches responsible AI in the context of Copilot, including the principles that guide our development and the safeguards we've implemented.
Establishing trust in using GitHub Copilot 🤝
This resource provides guidance on establishing trust when using GitHub Copilot, including best practices for verification, understanding Copilot's suggestions, and maintaining code quality.
Responsible use of GitHub Copilot Chat in your IDE 💬
Official documentation on using GitHub Copilot Chat responsibly, including guidelines for effective interaction and understanding the limitations of the tool.
💫 We're Constantly Improving
We appreciate your patience as we work to improve the balance between safety and usability in GitHub Copilot. Your feedback is essential to this process, and we're committed to continuous improvement. Share your feedback in the comments below 👇