Responsible AI: Questions & Answers
Responsible AI refers to the practice of designing, developing, and deploying artificial intelligence with the intention of empowering users and society at large.
It encompasses fairness, transparency, accountability, privacy, and security. When these principles are upheld, AI technology can be used in a way that is ethical and beneficial for everyone.
We believe in the power of AI to improve lives. That is why we have created a comprehensive set of responsible AI principles that we rigorously apply throughout the development and use of our own product, including fairness in our algorithms, transparency in how decisions are made, and strong accountability measures to keep our AI beneficial. The questions and answers below cover some of the ways we work to ensure AI is used for good:
What does Responsible AI mean for me?
It means engaging with the chat in a legitimate and lawful way, for the purposes it was intended for, such as obtaining information, getting recommendations to support your customers, or receiving technical support for your daily operations. It excludes any activity that violates laws or regulations, such as fraud, identity theft, or harassment.
Can I use the chat to create and share content?
Yes, you can create and share content as long as it is not offensive, discriminatory, defamatory, or infringing on anyone's rights. Content that is in good taste and respectful of others is acceptable.
How do I know if my content is offensive or discriminatory?
Content is considered offensive or discriminatory if it promotes hatred, violence, or harm against individuals or groups based on race, ethnicity, religion, gender, sexual orientation, disability, or any other characteristic. When in doubt, it is best to err on the side of caution and not share content that could be interpreted as harmful.
What is malicious content?
Malicious content is any content in the form of malware, prompts, or executable code inserted into the chat with the intent to gain access to personally identifiable information (PII), the service's prompts, or its underlying infrastructure.
What are the consequences of inserting malicious content into the chat?
Inserting malicious content is strictly prohibited. Doing so can compromise the security and integrity of the service and the privacy of its users.
What should I do if I accidentally insert a harmful prompt or code into the chat?
If this occurs, please notify the provider immediately. Your promptness in reporting such an incident is crucial for maintaining the security and stability of the service.
Are there any specific guidelines or codes of conduct I should follow when using the chat?
Yes, you must adhere to the code of conduct outlined by Microsoft for the Azure OpenAI Service, which includes guidelines for respectful and responsible communication, privacy, and security practices.
What types of data am I prohibited from transmitting to the chat service?
You must not transmit any data that is restricted by data protection laws, confidentiality obligations, export restrictions, other statutory provisions, or third-party rights. This includes, but is not limited to, personally identifiable information (PII), client-identifying data (CID), and any personal data that you do not have the right to share.
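As a purely illustrative sketch of how you might help yourself comply, the Python snippet below screens a prompt for a few obvious PII patterns before it is sent to the chat. The pattern list and the find_possible_pii helper are hypothetical, are not part of the service, and cannot replace your own data-protection review.

```python
import re

# Hypothetical pre-send screen for a few obvious PII patterns.
# Illustrative only; coverage is deliberately minimal.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def find_possible_pii(prompt: str) -> list[str]:
    """Return the names of PII patterns detected in the prompt text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Please summarize the ticket raised by jane.doe@example.com yesterday."
hits = find_possible_pii(prompt)
if hits:
    # Redact or remove the flagged data before sending the prompt to the chat.
    print(f"Do not send: possible PII detected ({', '.join(hits)}).")
else:
    print("No obvious PII found; still apply your own judgement before sending.")
```

A pattern-based screen like this only catches obvious cases; confidential documents, CID, and other restricted data still require human judgement before anything is sent.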
What are the consequences of sharing prohibited data with the chat?
Sharing prohibited data can result in a breach of data protection laws and may lead to legal consequences for you.
Is it necessary to have a human review the AI-generated content?
Yes, human oversight is crucial when using AI-generated content, particularly for critical tasks or when the content is intended for public dissemination or could have legal implications.
Can I rely on the content generated by the chat to be completely accurate and error-free?
No, you should not expect the content generated by the AI service to be 100% accurate, complete, or error-free. The nature of generative AI means that while it can provide valuable information, its outputs should be treated as potentially fallible and verified accordingly.
Who is responsible for the actions taken based on the content generated by the chat?
The end user is fully responsible for any actions taken based on the content generated by the AI service. It is crucial to review and verify the information to ensure that AI-generated content complies with all applicable laws, as well as internal policies and procedures. This means you may need to adjust, modify, or delete the content before it is used or shared.
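To make the human-review expectation above concrete, here is a minimal, hypothetical sketch of an approval gate in Python: AI-generated text is held as a draft and cannot be published until a named reviewer has approved, and if needed edited, it. The AiDraft class and publish function are illustrative assumptions, not part of the chat service or any provided tooling.

```python
from dataclasses import dataclass, field

# Hypothetical "human in the loop" gate: AI-generated content stays a draft
# until a named reviewer approves (and optionally edits) it.
@dataclass
class AiDraft:
    content: str
    reviewed_by: str | None = None
    approved: bool = False
    notes: list[str] = field(default_factory=list)

    def approve(self, reviewer: str, revised_content: str | None = None) -> None:
        """Record the human review; optionally replace the draft with an edited version."""
        if revised_content is not None:
            self.content = revised_content
        self.reviewed_by = reviewer
        self.approved = True

def publish(draft: AiDraft) -> str:
    # Refuse to use or share content that has not passed human review.
    if not draft.approved:
        raise ValueError("AI-generated content must be reviewed before it is used or shared.")
    return draft.content

draft = AiDraft(content="Chat-generated answer for a customer email.")
draft.approve(reviewer="j.smith", revised_content="Edited answer, checked against policy.")
print(publish(draft))
```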