Revised 02/23/2026
Purpose
This Acceptable Use Policy (AUP) establishes guidelines for the responsible, secure, and ethical use of Artificial Intelligence (AI) and Machine Learning (ML) technologies within UDC. Its purpose is to:
- Protect company assets
- Safeguard customer and employee data
- Ensure compliance with applicable laws, regulations, and contractual obligations
- Promote the ethical use of AI/ML in line with UDC’s values and virtues
Scope
This policy applies to the following:
Users: All employees, contractors, partners, subcontractors, and third parties who use or interact with AI/ML systems on behalf of UDC.
Devices: Any corporate or personal device used to access UDC accounts, licenses, or AI/ML systems.
Systems: All AI/ML tools, platforms, and services used by UDC, including UDC’s enterprise Copilot Chat.
Environments: On-premises, cloud, and hybrid deployments.
Acceptable Use
Users may use AI/ML for legitimate business purposes to support their daily work, as described below.
Permitted Activities
Users may utilize AI/ML for legitimate business purposes in line with approved roles and responsibilities, subject to the following requirements:
- Complete a risk review with UDC’s Data Protection and Cyber Security practice before use. The AI tools listed in Appendix B have already completed a risk review.
- Comply with UDC’s data classification and handling policies when training, testing, or deploying AI/ML models.
- Apply least-privilege access controls to AI/ML systems and related data.
- Document model development processes, including inputs, outputs, assumptions, and decision-making logic.
- Validate the accuracy, fairness, and performance of AI/ML models before production deployment.
- Retain datasets and models only as long as required, following company data retention policies.
- Use AI/ML for project-related deliverables only upon written approval from the client.
Unacceptable Use
Users must not engage in any of the following.
Prohibited Activities
Users must not:
- Input, process, or expose personally identifiable information (PII), protected health information (PHI), or confidential data in unapproved AI/ML systems (e.g., public AI tools). Unapproved AI/ML systems have not been evaluated by the standards and guidelines governance body.
- Deploy AI/ML models without required review, testing, and approval by UDC’s standards and guidelines governance body.
- Rely solely on AI/ML outputs for high-risk or legally binding decisions without human oversight.
- Develop or deploy AI/ML in ways that violate license agreements or UDC policies, or that introduce bias, discrimination, or unethical practices.
- Circumvent established security controls, monitoring, or logging requirements.
- Use AI/ML systems for personal, non-business, or unauthorized commercial purposes.
- Use UDC or client data in consumer Copilot, unapproved Copilot plugins/agents, or other unapproved consumer AI tools or agents.
Licensing & Cost Awareness
All AI/ML licenses must receive prior approval through established governance processes and shall be purchased, managed, and provisioned by UDC strictly based on documented business necessity and in accordance with applicable policies and approval authorities.
Requests for paid services must include a business justification and approval from the business area lead if the expense is budgeted; otherwise, approval must come from the department’s Executive Leadership. Requests should be submitted via email to the UDC Help Desk.
Misuse or abuse of AI/ML licenses may result in revocation of access and disciplinary action.
Monitoring & Enforcement
Accounts and licenses are managed exclusively by UDC IT.
AI/ML usage is monitored and logged for compliance and security purposes using the following tools:
- Purview auditing of Copilot interactions
Violations of this policy may result in disciplinary measures, including revocation of access, termination of employment, or legal action as applicable under local, state, federal, or international law.
Exceptions
Any exception to this policy must be requested in writing and approved by UDC’s COO. Requests must include:
- A description of the requested exception
- The business justification
- The duration of the exception
The Director of Data Protection and Cybersecurity (DPC Director) will review requests and determine eligibility for exception review. Eligible requests will be escalated to the UDC COO for approval.
All approved exceptions must be documented and reviewed annually to confirm ongoing need and compliance.
Acknowledgment
By using artificial intelligence or machine learning, users acknowledge they have read, understood, and agree to comply with this Acceptable Use Policy.
Appendix A
AI Usability vs Security: Key Takeaways & Best Practices
AI systems cannot be risk-free, but they can be made risk-aware. The focus should be on building strong guardrails, monitoring, and governance frameworks that allow business usability while maintaining regulatory compliance and security integrity.
Balancing Usability and Security
Tension exists
- Data science/AI teams want to maximize available data for training.
- Security/compliance teams want to minimize exposure and restrict what enters the model.
Best practice
Define the business goals of your LLM deployment, then align security controls to acceptable levels of risk. Recognize that no environment can be risk-free.
Core Risks & Vulnerabilities in Large Language Models
Bias & Accuracy
- Guard against bias and misinformation; establish workflows for correction and removal of inaccurate outputs.
- Regulators increasingly require demonstrable fairness, explainability, and auditability.
Common Vulnerabilities
- Model Extraction – adversaries replicate a model’s functionality by repeated queries.
- Data Inference – attackers validate whether sensitive data is included in the training set.
- Data Extraction – attempts to recover private or proprietary training data.
- Model Poisoning – malicious fine-tuning or injection of harmful data.
- Prompt Injection – direct and indirect manipulations that override intended safeguards. This is where most defensive effort should be concentrated.
Unavoidable Reality
Some level of vulnerability is inevitable. The defense strategy must therefore focus on mitigation, monitoring, and resilience.
Mitigation Strategies
Guardrails & Controls
- Apply proxying and filtering to sanitize inputs (e.g., scanning PDFs before ingestion).
- Use policy-driven guardrails for safe use cases and data access boundaries.
- Avoid blind adversarial training unless risks are well-understood—it can create instability.
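The proxying-and-filtering guardrail above can be sketched as a simple input sanitizer that screens prompts before they reach a model. This is a minimal illustration, not a production control: the pattern names and regexes below are hypothetical placeholders, and a real deployment would use a dedicated PII-detection service behind the proxy.

```python
import re

# Hypothetical patterns -- illustrative only; real deployments should rely
# on a vetted PII-detection service rather than hand-rolled regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize_prompt(text: str) -> tuple[str, list[str]]:
    """Redact likely PII from a prompt before it is sent to an LLM.

    Returns the redacted text plus the list of pattern names that fired,
    so the proxy can log *what kind* of data was filtered without
    storing the sensitive values themselves.
    """
    findings = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, findings
```

In a proxy deployment, the returned findings would feed the monitoring pipeline (e.g., Purview or a SIEM) while only the redacted text continues on to the model.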
Authentication, Authorization, and Accounting (AAA)
- Most LLMs lack native AAA controls.
- Surround the model with infrastructure-level protections (API gateways, monitoring, and Security Operations Center (SOC) integration).
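Because most LLMs lack native AAA, the wrapper pattern above can be sketched as a gateway function that authenticates the caller, authorizes the requested action, and records an audit entry before the request ever reaches the model. All names here (the key table, roles, and log sink) are hypothetical stand-ins for an organization's real identity provider and SIEM.

```python
import hashlib
import time

# Hypothetical credential and role tables -- in practice these come from
# the identity provider, not from in-code dictionaries.
API_KEYS = {"key-analyst-01": "analyst", "key-admin-01": "admin"}
ROLE_PERMISSIONS = {"analyst": {"chat"}, "admin": {"chat", "fine_tune"}}

audit_log: list[dict] = []  # stand-in for a real log sink (e.g., SIEM forwarder)

def gateway_call(api_key: str, action: str, prompt: str) -> str:
    """Authenticate, authorize, and account for a model request.

    The model itself enforces no AAA, so every request must pass
    through this wrapper before reaching the (stubbed) backend.
    """
    role = API_KEYS.get(api_key)                 # authentication
    if role is None:
        raise PermissionError("unknown API key")
    if action not in ROLE_PERMISSIONS[role]:     # authorization
        raise PermissionError(f"role '{role}' may not perform '{action}'")
    audit_log.append({                           # accounting
        "ts": time.time(),
        "key_hash": hashlib.sha256(api_key.encode()).hexdigest()[:12],
        "action": action,
    })
    return f"model response to: {prompt}"        # stubbed model backend
```

Note that the audit entry stores a hash of the key rather than the key itself, so the accounting trail does not become a new credential store.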
Benchmarks & Testing
- Develop benchmarks for LLM “breakability”—measuring leakage, manipulation resistance, and reliability.
- Similar to penetration testing in traditional security, but requires LLM-specific frameworks.
- Vendors like NetSPI are working on standardized benchmarking.
Compliance & Regulatory Considerations
Data Privacy & Anonymization
Personally identifiable information (PII) must be anonymized or excluded in line with General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and other regulations.
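One common anonymization technique consistent with the requirement above is keyed pseudonymization: replacing a direct identifier with an irreversible token so datasets remain joinable without exposing the original value. The sketch below is illustrative only; the salt value is a placeholder, and whether hashing alone satisfies GDPR/CCPA in a given case is a legal determination, not a technical one.

```python
import hashlib
import hmac

# Hypothetical secret salt -- in production this lives in a key vault and
# is rotated per the organization's key-management policy.
SALT = b"rotate-me-regularly"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token.

    The same input always yields the same token, so records can still
    be joined across datasets, but the original value cannot be
    recovered without the salt.
    """
    return hmac.new(SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]
```

An HMAC is used instead of a bare hash so that an attacker who knows the scheme cannot confirm a guessed identifier by hashing it themselves.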
Auditability & Transparency
Maintain usage logs and monitoring to show accountability and traceability.
Bias and Fairness
Regulators increasingly demand that companies identify, document, and mitigate bias in AI.
Incident Response & Reporting
Organizations should prepare for scenarios like model poisoning or data theft with formal incident response procedures.
Industry Best Practices
1. Define AI Governance Framework – align goals, guardrails, and controls with business objectives.
2. Integrate Security into SOC – use AI both as a tool to enhance detection and as an asset to be protected.
3. Layer Security – protect not just the model, but the surrounding infrastructure and data pipelines.
4. Prioritize Risk by Use Case – identify which vulnerabilities are most relevant for your business context.
5. Adopt Continuous Monitoring – LLMs evolve over time; establish continuous benchmarking and policy review.
6. Security & Data Liaison – designate roles bridging AI teams and security teams to balance usability with compliance.