Expert Comments: What Should Other Countries Take From Denmark's New AI Laws?

Denmark is getting ready to update its copyright laws to give people more control over how their faces, voices and bodies are used online. The new law would make it illegal to use someone's image or voice in deepfakes without their permission.

Culture minister Jakob Engel-Schmidt says the change is about sending a clear message: people should have the right to decide how they appear and sound in the digital world.

Right now, he says, the law does not fully protect that. With AI making it easy to copy someone's appearance or voice, Denmark wants to make sure those copies can't be used without consent.

The law has support from most political parties, and the plan is to bring it to parliament in the autumn. If it passes, anyone in Denmark will be able to ask for deepfake content to be removed if it was shared without their agreement.


What Exactly Will The New Rules Cover?


The proposed law focuses on "realistic, digitally generated imitations." That includes AI-generated videos, photos and audio that closely copy how someone looks or sounds. It also protects performances, such as those in music or acting.

The rules will not apply to satire or parody, as long as the content isn't misleading or harmful. But if someone uses a fake video to spread lies or damage someone's reputation, that would fall under the law.

The law also applies to people from other European Economic Area countries, so it could have an effect even outside Denmark. Engel-Schmidt has said that if tech platforms don't co-operate, they could face fines or action from the European Commission.


What Should Countries Take From Denmark’s Law?


Our Experts:


  • James Kirkham, Founder, ICONIC
  • Eugeny Malyutin, Head of LLM, Sumsub
  • Natália Fritzen, AI Compliance & Policy Specialist, Sumsub
  • Oana Leonte, IP Expert and Founder, Unmtchd
  • Arshad Khalid, Technology Advisor, No Strings Public Relations
  • Camden Woollven, Group Head of AI Product Marketing, GRC International Group


James Kirkham, Founder, ICONIC


“More countries should urgently follow Denmark鈥檚 lead in enshrining AI laws that give people copyright over their own faces, voices and bodies to help fight the spread of deepfakes. This means legally recognising a person鈥檚 likeness as intellectual property鈥攋ust like music writing or inventions鈥攁nd putting clear penalties in place for unauthorised use by AI systems, platforms or third parties.

“Giving people copyright over their own face voice and body is the most powerful line in the sand we鈥檝e seen so far. It reframes identity as personal property not something to be scraped borrowed or cloned at will.It forces platforms and AI developers to think twice before building on stolen likeness and鈥攃rucially鈥攊t gives people not corporations the default rights to who they are. I think that鈥檚 a profound shift. This isn鈥檛 just about protecting privacy but about rebuilding trust in the entire system.

“If people don鈥檛 feel safe and if they don鈥檛 feel seen as real they鈥檒l opt out entirely from content culture and participation. Copyrighting the self won鈥檛 stop deepfakes overnight but it gives the next generation a foundation of ownership in this overly synthetic world. And in a time when everything can be replicated edited re-skinned and faked then owning your own image might be the most human act we have.”


Eugeny Malyutin, Head of LLM, Sumsub


"Denmark's move to expand copyright and intellectual property law to protect individuals' likeness, including their face, voice and other biometric traits, is a bold and novel approach to identity protection.

"Fraudsters use a range of tactics, often either creating fully synthetic identities and documents or hijacking real identities and enhancing them with AI-generated content such as deepfakes. They then attempt to use these to bypass verification checks.

"Synthetic identities are often easier to detect, especially in systems with strong document-free verification and access to authoritative databases. Identity hijacking, on the other hand, is more complex and harder to catch, particularly when combined with convincing AI-driven synthetic identity document forgeries.

"Our data shows that this 'synthetic document fraud' grew by 378% in Europe between the first quarter of 2024 and the first quarter of 2025, more than in any other region, and by 275% in the UK. Worryingly, deepfake fraud increased by 900% in the UK over the same period. Denmark's new law may help individuals reclaim control over their identity, but it remains a 'reactive' measure and doesn't replace technical or biometric safeguards."
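For readers translating those percentages into plain terms: a Y-on-Y increase of X% means the new level is (1 + X/100) times the old one. The short sketch below is purely illustrative arithmetic; the percentages are Sumsub's quoted figures, but the code itself is not theirs.

```python
# Illustrative arithmetic only: converts the year-on-year percentage
# increases quoted above into growth multipliers.

def growth_multiplier(percent_increase: float) -> float:
    """A +X% increase means the new value is (1 + X/100) times the old one."""
    return 1 + percent_increase / 100

# Figures quoted for Q1 2024 -> Q1 2025:
quoted = {
    "synthetic document fraud, Europe": 378,  # roughly 4.8x the Q1 2024 level
    "synthetic document fraud, UK": 275,      # roughly 3.8x
    "deepfake fraud, UK": 900,                # 10x
}

for label, pct in quoted.items():
    print(f"{label}: +{pct}% -> {growth_multiplier(pct):.2f}x the previous level")
```

So a 900% rise is not "nine times worse" but ten times the original level, which is worth keeping in mind when comparing the regional figures.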


Natália Fritzen, AI Compliance & Policy Specialist, Sumsub


"Denmark is trying to amend its copyright laws to give people legal ownership over their likeness (facial features, voice, etc.) in an attempt to curb the continuing rise in deepfakes, making it illegal to publish such content without consent. Anyone would have the right to request the removal of such content from online platforms and, in certain cases, seek financial compensation, while platforms that do not comply could be fined.

"This is a truly inventive and uniquely proactive approach. Most other bills try to curb deepfakes by focusing on watermarks and other transparency requirements, but those put the responsibility onto the creators of deepfakes, rather than the websites used to share them, and offer few remedies to victims. Denmark's strategy is a clear contrast to the EU AI Act, which takes a more technical and disclosure-based approach that lacks strong enforcement or remediation for individuals.

"Although Denmark is not the first to try this (a bill introduced in the US Congress, the NO FAKES Act, attempted something similar but has not yet moved through the legislative process), Denmark's political landscape makes it more likely that this bill will come into force. Furthermore, the country's new position as President of the Council of the EU may give it the influence to push this solution as a universal benchmark."


Oana Leonte, IP Expert and Founder, Unmtchd


“Denmark’s move to give people copyright over their faces and voices isn’t just about protecting individuals, it signals a monumental shift in how we think about identity as intellectual property in the AI age.

“More countries absolutely should follow Denmark’s lead. When AI can replicate anyone’s likeness in seconds, personal identity becomes a business asset that needs systematic protection. What Denmark recognises is that in an AI-first world, if you don’t own your assets, someone else will.

“But this thinking needs to extend beyond personal identity. If individuals need copyright protection over their faces, brands desperately need the same systematic protection for their assets. Most companies still have their brand IP scattered across folders, legal docs, and forgotten systems, completely vulnerable to AI replication.

“Denmark’s law shows that governments are starting to understand that in the age of AI, everything becomes code, and everything that’s coded needs protection. Personal identity, brand identity, it’s all strategic infrastructure now.”

Arshad Khalid, Technology Advisor, No Strings Public Relations


“Giving individuals copyright-like control over their own likeness is a necessary step in the age of AI. Denmark鈥檚 approach recognises that people should have a say in how their face, voice or identity is replicated or manipulated by machines. Without clear legal ownership, there鈥檚 no effective recourse when deepfakes are used to spread misinformation, humiliate someone, or exploit their image commercially.

“This isn鈥檛 just a celebrity problem anymore. Anyone can be targeted. What Denmark is proposing could help set a standard where platforms and AI developers are legally required to respect people鈥檚 digital identities – and remove unauthorised content when asked.

“More countries should follow suit by embedding these rights into copyright, privacy or personality laws. It鈥檚 about modernising regulation to match the speed of the technology. Without this, people are left to deal with the fallout of deepfakes on their own, while those creating or hosting them face little accountability.”


Camden Woollven, Group Head of AI Product Marketing, GRC International Group


What impact could Denmark's new AI law, giving people copyright over their face, voice and body, have on global biometric data protection standards?

"It's a big shift. If this passes, Denmark becomes the first country to treat your face, voice and body as something you legally own. That goes beyond GDPR and directly tackles how AI models use biometric data, especially in deepfakes.

"The timing matters. Denmark just took over the EU Council presidency, so it's in a strong position to push this across Europe. If that happens, it could set a new baseline for biometric protections that puts pressure on other regions to follow.

"It also changes the conversation globally. Most laws treat biometric data as something that needs safeguarding, not something people own. This flips that and puts platforms on the hook to take down unauthorised deepfakes or face penalties. That level of accountability will stand out.

"Long term, this could move us toward treating personal likeness more like intellectual property. That has real implications for how AI training data is sourced and how platforms handle content that blurs the line between creativity and identity theft."

Would classifying biometric data as copyrighted material help strengthen personal data rights, or could it conflict with laws like GDPR or CCPA?

“It could do both. Giving people legal ownership of their likeness makes it easier to go after misuse, especially with deepfakes or AI-generated content. It gives clearer ground to demand takedowns or compensation, which is something privacy laws often struggle with. But it does create friction with existing laws.

“GDPR treats biometric data as personal, not owned. It鈥檚 about consent, access and erasure. Copyright鈥檚 about control and permanence, which doesn鈥檛 always line up. You could end up in situations where someone has the right to delete their data but also holds permanent copyright over it. That gets messy fast.

“There鈥檚 also a practical side. Adding copyright protections on top of GDPR and CCPA could make day-to-day things like biometric logins harder to manage, especially if you need licensing just to use someone鈥檚 face or voice in a system.

“So while the idea makes sense for specific harms like deepfakes, it probably needs to be more targeted. You want stronger enforcement without creating legal overlap that slows everything down.”

What steps could help countries work together to protect people's digital identities across borders from AI misuse?

“It starts with getting aligned. Right now, countries are handling digital identity and biometric data in totally different ways, which leaves gaps that are easy to exploit. We need shared standards, clearer rules and actual ways to enforce them across borders.

“That includes agreeing on what counts as secure infrastructure, how platforms handle biometric data and where the red lines are. The EU鈥檚 already pushing on this with the AI Act and digital ID wallet. Other regions could build on that and make sure systems are interoperable.

“There鈥檚 a tech angle too. Tools like decentralised ID, better authentication and smarter deepfake detection already exist. Countries should be investing in those together and sharing intel on how threats are evolving.

“But none of that works without proper oversight. It鈥檚 not just about fraud. It鈥檚 about rights. We need accountability for platforms, but also for how governments use this data. If countries want to keep people safe from cross-border AI misuse, they need to act like a team.”

What are the main pros and cons of giving people legal control over their likeness to fight deepfakes?

“The biggest upside is control. If you own your likeness, you don鈥檛 have to wait for a platform or regulator to step in. You can go after misuse directly and stop content at the source. It also gives you stronger tools. You don鈥檛 have to prove harm. You can demand takedowns, claim compensation and push for enforcement without jumping through hoops. That creates a deterrent, not just for the people making deepfakes but for the platforms hosting them.

“But there are risks. It鈥檚 not always clear what counts as a likeness. Does a voice pattern count? What about parody or satire? If the rules aren鈥檛 clear, platforms might over-remove just to stay on the safe side.

“It also complicates legitimate use. Biometric data鈥檚 already heavily regulated in places like the EU. Add copyright-style protections and you make things harder for researchers, artists and companies using face or voice tech in normal ways.

“And then there鈥檚 enforcement. Most deepfakes come from anonymous users in other countries. Having rights is one thing, but getting anyone to respect them is another. So in theory, it鈥檚 a strong step forward. But it needs clear limits, solid definitions and proper enforcement. Otherwise it risks overreaching and doing more harm than good.”

How realistic is it for other countries to follow Denmark's lead, and what legal or tech obstacles would need to be addressed?

“Some will move faster than others, but this isn鈥檛 going global any time soon. EU countries are the most likely to adopt it first, especially with Denmark leading the Council this year. The AI Act has already laid the groundwork by classifying biometric use as high risk.

“Outside the EU, it鈥檚 harder. Countries like the US and UK don鈥檛 have a legal tradition of treating personal likeness as copyrightable. In the US, it would clash with First Amendment rights. And in countries with limited resources or weaker legal systems, enforcement just isn鈥檛 a priority.

“Tech鈥檚 another hurdle. Saying people own their likeness is one thing. Detecting and acting on misuse is another. Deepfake detection is still patchy, false positives are a problem and platforms aren鈥檛 aligned on how to handle this across borders.

“What鈥檚 more realistic is phased rollout. Start with the EU, maybe a few other privacy-focused countries, and focus on the worst cases like non-consensual deepfakes. Build from there once the systems and legal clarity are in place.”