Global and U.S. AI Regulation: Striking a Balance Between Innovation and Ethics
- Claire Chen
- Jul 11
- 6 min read
By Global Mandarin Academy

Introduction
As generative AI systems increasingly power global communication, education, and governance, questions about who should regulate AI, and how, have become central. While the United States debates the role of federal vs. state oversight, the international community is establishing frameworks grounded in ethics, human rights, and risk classification. This blog offers a comparative look at both regulatory spheres and suggests adaptive strategies for language and education platforms operating in this evolving legal landscape.
隨著生成式人工智慧系統日益推動全球溝通、教育與治理,誰應該監管 AI、以及如何監管,成為全球焦點議題。美國正辯論聯邦與州政府的監管角色,而國際社會則致力於建立以倫理、人權及風險分類為基礎的規範架構。本文比較這兩種監管模式,並為語言與教育平台在快速變化的法律環境中提出調適策略。
随着生成式人工智能系统日益驱动全球交流、教育与治理,关于谁应监管 AI,以及如何监管的问题,已成为核心议题。美国正在讨论联邦与州政府的监管分工,而国际社会则着力构建以伦理、人权与风险分类为基础的监管框架。本文比较这两种监管路径,并为语言与教育平台在变化中的法律环境下提供应变策略。
Key Regulatory Developments
United States: Federal vs. State Tensions
House Proposal (2025): Sought a 10-year ban on state-level AI regulation to avoid legal fragmentation and support national strategy.
眾議院提案(2025): 提議禁止州政府在十年內制定 AI 法規,以避免法律碎片化並支持全國戰略。
众议院提案(2025): 提议在十年内禁止州级 AI 法规,以避免法律碎片化并支持国家战略。
Senate Rejection: A 99–1 vote struck down the proposal after opposition from governors, civil society, and privacy advocates.
參議院否決: 以 99 比 1 的票數否決該提案,州長、公民社會及隱私倡議者表示反對。
参议院否决: 以 99 比 1 的投票结果否决该提案,因州长、公民社会及隐私倡导者反对。
Current State:
No unified federal AI law yet.
目前尚無統一的聯邦 AI 法律。
目前尚无统一的联邦 AI 法律。
Executive Orders (e.g., EO 14179) guide AI risk management in national security, civil rights, and public use.
總統行政命令(如 EO 14179)指導 AI 在國安、公民權利與公共用途上的風險管理。
总统行政命令(如 EO 14179)指导 AI 在国家安全、公民权利与公共用途方面的风险管理。
Over 26 states have passed or are developing laws on AI use in hiring, child protection, algorithmic bias, and deepfakes.
已有 26 個以上州制定或草擬了 AI 法律,涵蓋招聘、兒童保護、演算法偏見與深度偽造。
已有 26 个以上州制定或起草了 AI 法律,涵盖招聘、儿童保护、算法偏见与深度伪造。
International and Multilateral Efforts
| Initiative | Scope | Key Features |
| --- | --- | --- |
| EU AI Act (2024) | Binding EU-wide | Risk-based framework; bans unacceptable-risk uses; strict rules for high-risk systems and GPT-like models. |
| UNESCO AI Ethics Recommendation (2021) | 193 member states | Universal ethical principles: fairness, transparency, sustainability. |
| Council of Europe AI Treaty (2025) | Intercontinental | Legally binding on human rights, democracy, and the rule of law in AI governance. |
| Global Partnership on AI (GPAI) | OECD & G7 countries | Policy collaboration on responsible AI, data governance, and the future of work. |
| Hiroshima AI Process (G7 2024) | 49 nations | Promotes shared guardrails for foundation models (e.g., GPT, G…). |
Core Tensions: U.S. vs. Global Models
| Dimension | United States | International Approach |
| --- | --- | --- |
| Governance Model | Federated (federal vs. state) | Centralized in regional blocs (EU); multilateral cooperation elsewhere |
| Binding Mechanisms | Mostly executive orders and voluntary codes | EU AI Act and Council of Europe treaty are legally binding |
| Industry Role | Big Tech drives voluntary compliance; self-regulation contested | Global frameworks emphasize external accountability and transparency |
| Risk Classification | Emerging in federal draft policies | Central in the EU (risk-based tiers); UNESCO promotes ethical, risk-based frameworks |
| Enforcement Tools | Patchy enforcement; FTC involvement; lawsuits and agency oversight vary by sector | Robust enforcement via independent regulators (e.g., EU AI Board, data protection authorities) |
| Privacy Standards | Sector-specific (e.g., HIPAA, COPPA); no comprehensive federal privacy law | GDPR-style comprehensive data protection and privacy mandates |
| Public Participation | Limited to public comment periods; tech lobbying influential | Broader stakeholder inclusion (civil society, academia, labor unions) in policymaking |
| International Coordination | Bilateral dialogues (e.g., U.S.–EU TTC); cautious on binding global rules | Active in shaping international norms through OECD, UNESCO, and UN AI advisory bodies |
Policy Recommendations for AI Usage in Education
1. Integrate Risk-Aware Design
Align courseware and platforms with EU-style risk classifications: tutoring apps generally sit in lower-risk tiers, while features that interact with minors or power adaptive testing fall into higher tiers (see the sketch after this recommendation).
整合風險意識設計: 教材與平台應對應歐盟風險分級制度,補教類屬低風險,涉及未成年或自適應測驗功能者為高風險。
整合风险意识设计: 教材与平台应对接欧盟风险分类制度,补教类属低风险,涉及未成年或自适应测验功能者为高风险。
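As a rough illustration of risk-aware design, the sketch below maps a hypothetical feature profile to tiers loosely inspired by the EU AI Act's risk categories. The tier names, field names, and classification rules are assumptions for demonstration only, not a legal determination.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely modeled on the EU AI Act's risk categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class FeatureProfile:
    """Hypothetical description of an edtech feature; field names are assumptions."""
    interacts_with_minors: bool
    drives_assessment_or_grading: bool   # e.g., adaptive testing that affects outcomes
    generates_open_ended_content: bool   # e.g., a chatbot tutor


def classify(profile: FeatureProfile) -> RiskTier:
    """Return an illustrative risk tier; real classification requires legal review."""
    if profile.drives_assessment_or_grading or profile.interacts_with_minors:
        return RiskTier.HIGH
    if profile.generates_open_ended_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# Example: a vocabulary-drill app for adult learners vs. an adaptive placement test for minors.
print(classify(FeatureProfile(False, False, False)))  # RiskTier.MINIMAL
print(classify(FeatureProfile(True, True, True)))     # RiskTier.HIGH
```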
2. Adopt Global Ethical Standards
Use UNESCO’s AI Ethics Recommendation as a baseline: build human oversight, cultural inclusion, and data privacy into platform design and teacher training (a configuration sketch follows this recommendation).
採納全球倫理標準: 以 UNESCO AI 倫理原則為基準,納入人為監管、文化包容與資料隱私,於平台設計與師資培訓中實踐。
采纳全球伦理标准: 以 UNESCO AI 伦理原则为基准,纳入人为监管、文化包容与数据隐私,在平台设计与师资培训中落实。
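To show how such principles might be encoded in software, here is a minimal sketch of a hypothetical platform configuration with simple self-checks for human oversight, data minimization, and linguistic inclusion. All field names and thresholds are illustrative assumptions, not an official UNESCO checklist.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EthicsConfig:
    """Hypothetical platform settings reflecting UNESCO-style principles.

    Field names and defaults are illustrative assumptions.
    """
    require_teacher_review: bool = True          # human oversight before AI feedback reaches learners
    store_only_pseudonymous_ids: bool = True     # data-privacy / minimization
    supported_locales: List[str] = field(default_factory=lambda: ["zh-TW", "zh-CN", "en-US"])

    def gaps(self) -> List[str]:
        """List principle areas the current configuration leaves uncovered."""
        issues = []
        if not self.require_teacher_review:
            issues.append("no human oversight on AI-generated feedback")
        if not self.store_only_pseudonymous_ids:
            issues.append("learner data is not minimized")
        if len(self.supported_locales) < 2:
            issues.append("limited cultural/linguistic inclusion")
        return issues


# Example: flag a configuration that skips teacher review of AI feedback.
print(EthicsConfig(require_teacher_review=False).gaps())
```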
3. Monitor U.S. State-Level Laws
If operating in the U.S., ensure your tools comply with emerging state-specific laws (see the sketch after this recommendation) on:
Employment bias (e.g., New York, California)
Child safety (e.g., Texas, Utah)
Data transparency
留意美國州級法規: 若在美國營運,務必符合各州新興法規,如:
招聘偏見(如紐約、加州)
兒童安全(如德州、猶他州)
數據透明度
留意美国州级法规: 若在美国运营,务必遵守各州新兴法规,如:
招聘偏见(如纽约、加州)
儿童安全(如德州、犹他州)
数据透明度
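As a concrete way to track this, the sketch below shows a hypothetical lookup table mapping U.S. jurisdictions to the compliance checks a rollout might trigger. The entries are placeholders drawn from the examples above, not legal advice or a complete inventory of state laws.

```python
from typing import Dict, List

# Hypothetical, illustrative mapping of U.S. jurisdictions to compliance checks.
# Entries are placeholders for whatever your counsel identifies, not a legal inventory.
STATE_REQUIREMENTS: Dict[str, List[str]] = {
    "NY": ["bias audit for AI-assisted hiring features"],
    "CA": ["automated-decision disclosures", "minor data-privacy review"],
    "TX": ["child-safety review for AI chat features"],
    "UT": ["child-safety review for AI chat features"],
}


def checks_for_rollout(states: List[str]) -> List[str]:
    """Collect the deduplicated compliance checks triggered by a rollout footprint."""
    seen: List[str] = []
    for state in states:
        for requirement in STATE_REQUIREMENTS.get(state, []):
            if requirement not in seen:
                seen.append(requirement)
    return seen


# Example: launching an AI tutor with a hiring-screening add-on in New York and Texas.
print(checks_for_rollout(["NY", "TX"]))
```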
4. Support U.S. Federal–State Hybrid Models
Advocate for a U.S. approach that combines federal safety baselines with state autonomy to address local cultural and educational contexts—especially in multilingual settings.
支持聯邦與州並行模式: 推動聯邦基本安全標準與州政府文化與教育自主權並存,特別是在多語環境中。
支持联邦与州并行模式: 倡导联邦基本安全标准与州政府文化和教育自主权并行,特别适用于多语言场景。
5. Engage in International Policy Communities
Join global conversations like GPAI or AI policy panels in language education to voice needs for:
Cross-cultural AI fairness
Representation of non-Western languages in LLMs
Global access to ethical language learning technologies
參與國際政策社群: 加入 GPAI 等國際論壇或語言教育相關 AI 政策組織,倡議:
跨文化的 AI 公平性
大型語言模型中非西方語言的代表性
公平與合規的全球語言學習技術可近性
参与国际政策社群: 加入 GPAI 等国际论坛或语言教育相关 AI 政策组织,倡议:
跨文化的 AI 公平性
大型语言模型中非西方语言的代表性
合规的全球语言学习技术可达性
Conclusion
The global governance of AI is coalescing around two complementary principles: ethical alignment and adaptive compliance. The United States remains a contested space, balancing federal innovation policy with state-level protections. Internationally, the push is toward harmonized, enforceable rules. For language educators and edtech developers, the path forward lies in building platforms that are ethically grounded, globally informed, and locally responsive.
全球 AI 治理正逐步形成兩大核心原則:倫理一致性與彈性合規。美國仍是政策競合之地,試圖在聯邦創新與州層保護之間取得平衡;而國際社會則朝向一致且可執行的規範努力。對語言教育者與教育科技開發者而言,未來的方向在於打造兼具倫理基礎、全球視野與在地回應的學習平台。
全球 AI 治理正逐步围绕两个核心原则整合:伦理对齐与适应性合规。美国仍是政策博弈的焦点,联邦创新与州级保护并存;而国际社会则倾向于推动统一且可执行的规则。对语言教育者与教育科技开发者而言,未来之路在于构建具备伦理基础、全球意识与本地回应的学习平台。



