What is the South Korea AI Basic Act?
A Detailed Guide to Compliance
Effective January 10, 2026
Risk-Based Requirements Similar to EU AI Act
Fines for Failures to Disclose, Designate or Comply
FAQs About the South Korea AI Basic Act
Steps to Achieve Compliance
High-Level Overview
What are the core concepts of the South Korea AI Basic Act?
1) The first comprehensive AI legislation in East Asia
The AI Basic Act is the second such legislation globally, after the EU AI Act.
2) Risk-based approach to governance, similar to the EU AI Act
There is a special emphasis on “high-impact systems,” defined by their impact on fundamental rights and similar to the EU’s “high-risk” classification, which will be subject to additional controls.
3) Penalties for non-compliance for corporations
Companies may face fines of up to ₩30,000,000 (approximately $20,000 USD) for failing to designate a domestic representative, violating transparency requirements for generative AI, or failing to comply with regulators' corrective orders.
South Korea AI Basic Act Scope
Who does the AI Basic Act apply to?
The AI Basic Act covers any “AI business operator,” which includes both developers of AI tools, as well as individuals or organizations that are deployers of tools developed by others.
Importantly, the law explicitly states that it covers not only conduct within South Korea, but also “acts conducted abroad that affect the domestic market or users in the Republic of Korea.” In other words, AI systems trained on the data of South Korean residents, or services hosted outside the country but accessible to South Koreans, will be subject to the Act’s provisions.
How does the AI Basic Act define “artificial intelligence”?
The Act defines artificial intelligence as “an electronic implementation of intellectual capabilities that humans possess, such as learning, reasoning, perception, judgment, and language comprehension.” It defines an AI system as “an AI-based system that, with various levels of autonomy and adaptability, infers predictive, advisory, or decision-making outputs for given goals to influence real or virtual environments.”
Compliance Requirements
What are the compliance requirements of the AI Basic Act?
The AI Basic Act encourages all AI business operators to establish an AI Ethics Committee. It also sets out certain requirements for all AI business operators, and then prescribes additional requirements for those offering AI products or services employing generative AI, systems exceeding certain thresholds, and those that qualify as high-impact based on their use cases.
AI Ethics Committees
The government will publish a set of AI Ethics Principles covering safety and reliability, accessibility, and values to ensure that AI contributes to human life and prosperity.
Following the publication of these principles, the government encourages numerous organizations, including AI business operators, to set up a “Private Autonomous AI Ethics Committee” to comply with the Ethics Principles.
Obligations to Ensure AI Transparency
AI business operators providing products or services employing high-impact or generative AI must notify users in advance that the product or service employs AI. Generative AI products must clearly indicate that their outputs are AI-generated, especially in the case of systems producing voice, images, or video that may be difficult to distinguish from reality.
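In practice, the generative-AI disclosure obligation can start with something as simple as attaching a notice to every output. The sketch below is illustrative only; the Act does not prescribe a specific wording or format, and the label text here is an assumption.

```python
# Illustrative only: the Act does not prescribe a specific disclosure format.
def label_output(text: str) -> str:
    """Prepend a notice that content was produced by generative AI."""
    return "[AI-generated content] " + text

print(label_output("Here is your requested summary..."))
# [AI-generated content] Here is your requested summary...
```

For voice, image, or video outputs, the analogous step would be an embedded watermark or metadata tag rather than a text prefix.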
Obligations to Ensure AI Safety
AI business operators offering systems that exceed certain (yet-to-be-determined) thresholds for computation or training data must establish a risk management system. This system must identify, assess, and mitigate risks throughout the AI lifecycle, and it must monitor and respond to any AI-related safety incidents. The results of this implementation must be submitted to the Minister of Science and ICT; further guidelines will be provided later.
Designation of a Domestic Representative
All AI business operators meeting certain criteria who do not otherwise have an address or office in South Korea will be required to designate a “domestic representative” with a South Korean address. This individual will be responsible for submitting the results of the risk management system to the Minister of Science and ICT, applying for confirmation that systems qualify as high-impact, assisting in implementing safety and reliability measures, and preparing documentation of those measures.
High-Impact AI
The law imposes additional requirements on AI business operators providing “high-impact AI,” a category that seems broadly similar to the EU AI Act’s definition of “high risk.” South Korea defines high-impact as “an AI system that may have significant effects on or pose risks to human life, bodily safety, or fundamental rights.” The list of high-impact uses includes:
- Energy supply
- Food production
- Health care provision
- Medical devices
- Nuclear materials
- Biometric identification and analysis for law enforcement
- Material decision-making impacting rights (e.g. employment, credit or loan screening)
- Major transportation systems
- State decision-making (e.g. qualification assessments, tax collection)
- Student evaluations in primary or secondary education
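The categories above can be encoded as a simple checklist for first-pass triage of an AI inventory. The sketch below is a simplified illustration, not an official or exhaustive classification; the category names and inventory fields are assumptions for the example.

```python
# Illustrative triage helper: flags systems whose use case falls into one of
# the high-impact categories listed in the Act. Category names are informal
# labels for this sketch, not official designations.
HIGH_IMPACT_CATEGORIES = {
    "energy_supply",
    "food_production",
    "health_care",
    "medical_devices",
    "nuclear_materials",
    "law_enforcement_biometrics",
    "rights_impacting_decisions",  # e.g. employment, credit/loan screening
    "major_transportation",
    "state_decision_making",       # e.g. qualification assessments, tax
    "student_evaluation",
}

def triage(inventory):
    """Partition an AI inventory into likely high-impact and other systems."""
    high_impact, other = [], []
    for system in inventory:
        if system["use_case"] in HIGH_IMPACT_CATEGORIES:
            high_impact.append(system["name"])
        else:
            other.append(system["name"])
    return high_impact, other

inventory = [
    {"name": "resume-screener", "use_case": "rights_impacting_decisions"},
    {"name": "marketing-copy-bot", "use_case": "content_generation"},
]
flagged, rest = triage(inventory)
print(flagged)  # ['resume-screener']
```

Any system flagged this way would still need legal review (or confirmation from the Minister of Science and ICT, as described below) before being treated as definitively high-impact.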
Companies may consult with the Minister of Science and ICT to confirm whether an AI system qualifies as high-impact.
Before offering products or services that qualify as high-impact AI, companies must conduct an impact assessment on the system’s effect on “human fundamental rights.” Government institutions will “give priority” to products that have already undergone an impact assessment. More information about the contents of impact assessments will be provided later.
In addition to the impact assessment, business operators providing high-impact AI or products/services based on it must:
- Develop and operationalize a risk management plan
- As far as technically feasible, implement measures to explain AI outputs, the criteria from which they derive, and provide an overview of the training data
- Develop and operationalize user protection measures
- Ensure human oversight and supervision of all high-impact AI
- Prepare and retain documentation of measures taken to ensure safety and reliability
Non-Compliance Penalties
What are the non-compliance penalties for the AI Basic Act?
South Korea’s Ministry of Science and ICT (MSIT) is charged with enforcing the Act. Companies may face fines of up to ₩30,000,000 (approximately $20,000 USD) for:
- Failure to designate a domestic representative
- Failure to notify users that they are interacting with generative AI
- Failure to comply with an order to correct other violations
Status
When does the AI Basic Act go into effect?
The Act became law on January 10th, 2025 and will enter into effect on January 10th, 2026.
Steps To Compliance
How can organizations ensure compliance with the AI Basic Act?
Drawing from our extensive work in AI governance and compliance, we’ve identified five best practices to ensure compliance:
- Adopt an AI Governance or Risk Management program. Although specific requirements differ across jurisdictions, the basic principles in frameworks such as the NIST AI RMF or ISO 42001 will be broadly useful around the world.
- Designate a Domestic Representative if your organization does not already have a physical presence in South Korea.
- Build an inventory of your AI applications, starting with a risk assessment that can help determine whether your products and services will qualify as high-impact and therefore be subject to greater scrutiny and documentation requirements.
- Begin standardizing documentation such as model cards, testing, and risk management. Although the South Korean government will release further information about the exact requirements for impact assessments and documentation, so far its approach appears broadly aligned with other regulatory regimes around the world, such as the EU AI Act.
- Stay informed and engaged; this will be key to ensuring compliance with the South Korea AI Basic Act.
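One lightweight way to begin the documentation step above is a model-card skeleton paired with a completeness check. The field names below are assumptions for illustration, not a format mandated by the Act.

```python
# Illustrative model-card skeleton for standardizing AI documentation.
# Field names are assumptions, not a prescribed format under the Act.
model_card = {
    "name": "resume-screener",
    "owner": "hr-analytics-team",
    "use_case": "employment screening",
    "high_impact": True,  # per internal risk assessment
    "training_data_overview": "anonymized historical applications",
    "explainability_measures": ["feature attributions", "decision criteria doc"],
    "human_oversight": "recruiter reviews every automated rejection",
    "last_risk_review": "2025-06-01",
}

def missing_fields(card, required):
    """Return required documentation fields absent from a model card."""
    return [f for f in required if f not in card]

REQUIRED = ["name", "owner", "use_case", "high_impact", "human_oversight"]
print(missing_fields(model_card, REQUIRED))  # []
```

A check like this can run in CI or as part of an inventory review, so gaps in documentation surface before regulators ask for it.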
AI Compliance Tools
How FairNow’s AI Governance Platform Helps
Developed by specialists in AI risk management, testing and compliance, FairNow’s AI Governance Platform is tailored to tackle the unique challenges of AI risk management. FairNow provides:
- Streamlined compliance processes, reducing reporting times
- Centralized AI inventory management with intelligent risk assessment
- Clear accountability frameworks and human oversight integration
- Ongoing testing and monitoring tools
- Efficient regulation tracking and comprehensive compliance documentation
FairNow enables organizations to ensure transparency, reliability, and unbiased AI usage, all while simplifying their compliance journey.
Experience how our industry-informed platform can simplify AI governance.
AI compliance doesn't have to be so complicated.