Artificial Intelligence at Work: US

This Artificial Intelligence (AI) at Work policy is a US-wide best-practice policy that sets clear expectations for how employees can use AI tools responsibly, even though no single US law universally requires an AI workplace policy. It helps your organization encourage innovation while reducing common risks: sharing confidential or personal data with third-party tools, relying on inaccurate outputs, creating intellectual property and disclosure issues, and using AI in ways that could lead to discrimination or other compliance problems. Because AI rules are changing quickly at the federal, state, and local levels, this policy is designed to be practical today and flexible enough to update as new requirements and guidance emerge.

The History Behind AI at Work Policies in the US

Artificial intelligence at work forced employers to revisit old rules on company property and confidential information. Early workplace automation mostly lived in back-office systems, so the usual controls focused on access, passwords, and data classification. Generative AI changed the risk profile because employees could paste sensitive material into a public tool and have it stored, reused, or exposed in ways nobody intended. Employers started writing AI policies as a practical extension of trade secret protection under the Defend Trade Secrets Act and state Uniform Trade Secrets Act laws, plus long-standing privacy and security duties tied to customer data and regulated information.


Employment law also caught up fast because AI started showing up in hiring, performance management, and scheduling. The Equal Employment Opportunity Commission (EEOC) signaled early that existing discrimination laws still apply when an algorithm makes or influences an employment decision, including through its 2022 guidance on ADA-related risks from software that can screen out people with disabilities. High-profile enforcement and settlements pushed the point further, including the EEOC's 2023 settlement with iTutorGroup over age discrimination claims tied to automated hiring screening. Courts also kept reminding employers that "the vendor did it" is not a defense when the tool produces discriminatory outcomes.


States and cities then began regulating specific AI use cases, which made a one-size-fits-all approach hard to defend. Illinois put biometric rules on the map with BIPA litigation over fingerprints and face scans, and employers learned how quickly a "cool feature" can turn into a class action. New York City added a gatekeeper in 2023 with Local Law 144, which requires bias audits and notices for certain automated employment decision tools. Colorado followed with a broad AI law in 2024 that targets "high-risk" systems and sets expectations around risk management and discrimination, even though the law is not limited to employment. Policies became the glue that connects these legal pressures to day-to-day behavior, especially around data inputs, transparency, and how AI is used for people decisions.

Which Law Is the AI at Work Policy Meant to Comply With?

There's no federal law that specifically requires an Artificial Intelligence at Work policy for US-based employees. We include this policy anyway because it is either (1) a common best practice that answers employee FAQs and sets clear expectations, or (2) a topic that is regulated in many states, so employers often use one company-wide policy that meets or exceeds the toughest state requirements.

How to Write a US-Specific Artificial Intelligence at Work Policy

  • Start with "why" and introduce the concept.
  • Define what counts as AI tools covered by the policy.
  • Require approval before employees use AI tools for work.
  • Set expectations for responsible use, including protecting confidential information and respecting privacy and intellectual property.
  • Make employees responsible for reviewing and verifying AI outputs before using them.
  • Require appropriate disclosure when work product was created with AI.
  • Prohibit unethical or unlawful AI use, including discrimination, harassment, or misleading content.
  • Ban certain high-risk uses without written approval, including employment decisions, personal data handling, professional advice, and bypassing security controls.
  • Maintain an approved AI tool list and a process to propose new tools or use cases.
  • Explain that AI use may be monitored and that the policy will be updated as technology and laws change.
  • Provide a way to report suspected misuse or risky outputs and prohibit retaliation for good-faith reports.
  • Reinforce that AI does not replace human judgment, ethics, or creativity.

When to Include this Policy in Your Employee Handbook

No federal law requires you to publish this policy or issue a specific notice, though some state and local rules (such as NYC Local Law 144) impose notice obligations for specific AI uses. Either way, you still have to comply with the requirements that apply to you as an employer.


This is a "depends on your workplace" policy. Include it if your employees use (or could use) AI tools, you operate in a setting where this comes up, you have a state-specific rule that differs from your national approach, or you've had issues in this area before. If you already have a clear all-employee policy that covers the same ground (and it meets applicable US requirements), you may not need a separate policy here.

Other Considerations

None.

Exceptions

None.

Model Policy Template for an Artificial Intelligence at Work Policy

Artificial Intelligence at Work

We believe AI can be a powerful tool for creativity, productivity, and problem-solving when used responsibly. As the technology continues to evolve, so will our approach to using it at {{Organization Name}}.

This policy outlines what we expect when it comes to using AI at work. It helps protect our company, {{employees}}, customers, and communities while still encouraging innovation.

What We Mean by AI

This policy applies to AI tools that generate content or automate decisions based on prompts, data, or training. These tools can help with things like drafting text, summarizing information, or generating code or other content.


Not sure whether a tool is covered? Ask your {{manager}} or {{the HR Team}}.

Getting Started: Approval and Training

You must have approval from your {{manager}} before using any AI tools for work. This includes browser-based tools, apps, or built-in features that use AI to generate content or make recommendations.

If your role requires AI tools, you may be asked to complete training on how to use them responsibly and securely.

Rules for Using AI at Work

When using AI, whether it's an approved standalone AI tool or a feature built into apps you already use, you're expected to:

  • Protect confidential information. Don’t input anything sensitive, private, or proprietary unless explicitly allowed. This includes customer data, financials, product designs, and internal communications.
  • Respect privacy and intellectual property. Avoid uploading copyrighted material, trade secrets, or personal information without approval.
  • Check your work. AI-generated content may sound convincing but can still be wrong or misleading. You’re responsible for verifying all facts, outputs, and recommendations before using them.
  • Be transparent. If you use AI to help create something, disclose it when appropriate, especially in work shared externally or with clients.
  • Stay ethical and lawful. Don’t use AI in ways that discriminate, mislead, harass, or otherwise violate company policies, laws, or industry standards. Understand that content generated by AI tools may have unclear ownership and don’t assume the output is free to use.


The following are never permitted without written approval:

  • Using AI tools to make decisions about hiring, promotion, or discipline.
  • Uploading or generating customer or {{employee}} personal information.
  • Relying on AI for legal, financial, or medical advice.
  • Bypassing company security protocols to access AI tools.

AI Tool Directory

We maintain an internal list of approved AI tools and their acceptable use cases. If you’d like to suggest a new tool or use case, talk with your {{manager}} or {{the IT Team}}.

Monitoring and Updates

We may monitor how AI tools are used at work, especially when company systems, data, or accounts are involved. This policy will evolve as AI technology and the laws surrounding it change. Please check back periodically for updates or ask {{the HR Team}} or {{the IT Team}} if you’re unsure about anything.

Reporting Concerns

If you suspect an AI tool is being misused or is producing biased, offensive, or otherwise risky content, let us know. You can talk to your {{manager}} or contact {{the HR Team}} directly. You will not face retaliation for speaking up in good faith.

We’re in This Together

AI isn’t a replacement for judgment, ethics, or creativity. We trust you to use it wisely and help shape a safe, effective, and forward-looking workplace.

Reminder

The information provided here does not, and is not intended to, constitute legal advice. Only your own attorney can determine whether this information, and your interpretation of it, applies to your particular situation. You should contact legal counsel for advice on any specific legal matter.