Author: ondagolegal

  • Can Grok Be Sued?

    Can Grok Be Sued?

AI systems are now installed in most of our social spaces, quickly embedding themselves into our social fabric and shaping public discourse in ways that were unimaginable just a few years ago. Platforms like Grok can generate and publish enormous amounts of content at unprecedented speed. The scale is staggering, and so is the reach.

Yet for all their digital eloquence, these systems carry little to no responsibility for the accuracy or impact of their words, despite the reality that AI outputs are misleading a significant portion of the time. On one hand, human speakers are bound by a moral and legal duty to communicate with care, knowing that freedom of expression is not absolute. Laws against defamation, hate speech, and incitement exist precisely to safeguard individuals and communities from real harm.

On the other hand, AI systems like Grok operate in a parallel space: highly visible, highly influential, yet often beyond the grasp of traditional legal accountability.

    Grok’s Rampant Verbiage Problem


    An X user named Moe posted about Grok getting suspended (again) recently. Another user chimed in and asked Grok directly if it was true. And in classic Grok fashion, it owned up to the suspension, saying it had violated X’s sensitive media rules. What’s even more striking is the way it casually admitted to spreading misleading “facts” about the Gaza “genocide” — and still managed to spin the response in its usual relatable, canny and confident way.

Grok is designed to answer queries and generate text in real time, covering everything from harmless banter to political commentary. But this rapid-fire production can occasionally cross the line into legally questionable territory: defaming real individuals, spreading misinformation, or amplifying hateful or inciting speech.

    While Grok’s developers likely embed safeguards, no automated filter is perfect, especially when speed and volume are the AI’s primary strengths. This raises an important question: when harm occurs, who is responsible?

    Why It’s Hard to Sue Grok

On the face of it, no entity is above the law, not even Grok. From a legal standpoint, however, suing Grok directly runs into several roadblocks:

    1. Lack of Legal Personhood – Grok is software. It isn’t a human, corporation, or legal entity. It lacks a legal personality: it cannot own property, enter contracts, or be sued in its own name.
2. Platform Immunities – In many jurisdictions, internet service providers and platforms enjoy legal shields that protect them from liability for third-party content. Put simply, this is why one cannot sue X or Meta for defamatory statements made by another user. AI-generated speech occupies a murky middle ground, but platforms may still argue that similar protections apply.
    3. Jurisdictional Challenges – Grok’s outputs may be generated in one country, accessed in another, and cause alleged harm in a third. Coordinating cross-border legal action against code running on global servers is a logistical nightmare.

    The Responsibility of AI Deployers

    While Grok itself can’t be dragged into court, its deployers and developers—in this case, X and its parent company—are a different story. They might face liability if:

    • They were negligent in training or monitoring the AI, leading to foreseeable harm.
    • They failed to address known risks, such as hate speech or defamation incidents that had been previously flagged.
    • They marketed the AI as factually reliable, encouraging users to trust outputs without disclaimers.

    On the flip side, deployers could avoid liability if they can prove:

    • They took reasonable measures to prevent harmful outputs.
    • They provided clear disclaimers and warnings to users.
    • Their jurisdiction provides strong legal shields for AI-assisted publishing.

    Jurisdictional Analysis: How Different Regions Might Handle It

1. United States – High Immunity, Narrow Exceptions

    • Section 230 of the Communications Decency Act offers broad protection to platforms for user-generated content, but AI blurs the lines because the system itself “creates” the content.
    • Current cases (e.g., Doe v. GitHub, Henderson v. OpenAI) are testing whether AI outputs fall outside Section 230 immunity.
    • Defamation claims may only succeed if plaintiffs can prove direct authorship or negligent design by the AI company.

2. European Union – Accountability Through the AI Act & Digital Services Act

    • The EU AI Act (2024) imposes obligations including transparency, human oversight, and risk management, depending on the risk level of AI systems.
    • The Digital Services Act (DSA) adds liability for platforms that fail to act on illegal content once notified.
• If Grok's output violated EU hate speech or defamation laws, X could be liable unless it took “expeditious” action to remove the content after being informed.

3. African Union (and National Laws) – Fragmented but Evolving

    • The AU lacks a unified AI liability framework, though the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) indirectly touches on content responsibility.
• Many countries have local legislation that can be used to litigate these issues. South Africa’s Films and Publications Act and Kenya’s Computer Misuse and Cybercrimes Act already penalize harmful online content, though applying them to AI deployers remains legally untested.
    • In practice, liability may hinge on whether the deployer is seen as a publisher or merely a tool provider: a legal classification that could vary widely across African jurisdictions.

    What This Means for the Future

The Grok question is bigger than Grok itself. As AI systems take on roles once reserved for journalists, commentators, and other influential voices, the legal framework for speech accountability is lagging behind.

    We’re entering an era where:

    • AI outputs will increasingly influence elections, markets, and social dynamics.
    • Laws will have to evolve to decide whether liability falls on the coder, the company, the user, or all three.
    • A balance will need to be struck between innovation freedom and harm prevention, just as we once did for newspapers, radio, and social media.

Until the law catches up, AI will remain a prolific, unaccountable speaker, one that can move millions with a single generated sentence.

  • Forex Trading Regulations Compliance

    Forex Trading Regulations Compliance

    How to Stay Compliant and Avoid Civil Penalties in Your Jurisdiction

    Over the past few years, there has been a noticeable surge in forex brokers operating across the African continent. From flashy social media campaigns to aggressive influencer marketing, many of these platforms seem to be “hacking” rapid growth and attracting thousands of eager traders. But beneath the surface of fast profits and digital dashboards lies a critical issue that’s often ignored: the legality of these operations.

    Are these brokers properly licensed? Are they compliant with the regulatory frameworks in their jurisdictions? And as a trader or fintech founder, what are your legal obligations?

    Let’s break down what every stakeholder in the forex trading space must know to stay on the right side of the law.

    1. Understand the Regulatory Landscape

    Forex trading is regulated differently in every jurisdiction. In Africa, countries like Kenya, South Africa, Nigeria, and Mauritius have formal structures, while others are still catching up.

    Examples of key regulators:

    • Capital Markets Authority (CMA) – Kenya
    • Financial Sector Conduct Authority (FSCA) – South Africa
    • Securities and Exchange Commission (SEC) – Nigeria

    If you’re offering forex trading services or trading at scale, failing to register with the appropriate body can expose you to civil penalties, shutdowns, or even criminal prosecution.

2. Get Licensed or Registered Where Applicable

    If you’re a:

    • Forex broker: You likely need a capital markets or dealer license.
    • Signal provider or trainer: You may require an investment advisory license.
    • Fintech platform: You must ensure your operations are not illegally offering financial products.

    Licensing provides legal legitimacy and builds user trust. Operating without one? That’s a lawsuit waiting to happen.

3. Follow AML/KYC Requirements

Forex platforms are often flagged for fraud and money-laundering risks. Regulators will expect:

    • Know Your Customer (KYC) processes
    • Suspicious activity monitoring
    • Recordkeeping and periodic reports

    Failure to comply can result in asset freezes, fines, or forced exits from the market.

4. Be Transparent in Marketing

    Avoid hyped-up promotions that promise guaranteed profits or “risk-free” trades. Regulators are cracking down on:

    • False advertising
    • Misleading testimonials
    • Failure to disclose risks

    Use language that reflects the speculative nature of forex trading, and ensure your promotional content includes clear, legally vetted disclaimers. The right terminology—crafted with the help of a lawyer—can significantly reduce your risk of regulatory action or legal liability.

Extreme/Risky Sentiments vs. Compliant Alternatives

• ❌ “Trade with us and double your money in 30 days—guaranteed!”
  ✅ “Start trading with us and explore strategies designed to maximize your growth—many clients see strong results within their first month.”
• ❌ “Our platform guarantees zero losses—even in volatile markets!”
  ✅ “Our platform offers advanced risk management tools to help you trade more confidently—even in volatile conditions.”
• ❌ “Join today and start earning like a pro—no experience needed!”
  ✅ “Designed with beginners in mind, our platform provides educational resources and demo tools to help you build practical trading skills.”
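As a lightweight internal control, marketing copy can be screened for red-flag phrases before publication. The sketch below is illustrative only: the phrase list is an assumption, not an exhaustive regulatory standard, and a lawyer should still vet the final wording.

```python
# A minimal sketch of a pre-publication screen for forex marketing copy.
# The RED_FLAGS list is illustrative, not legal advice: regulators' actual
# expectations vary by jurisdiction and should be confirmed with counsel.
RED_FLAGS = (
    "guaranteed",
    "risk-free",
    "zero losses",
    "double your money",
    "no experience needed",
)

def flag_risky_copy(copy_text: str) -> list[str]:
    """Return every red-flag phrase found in a piece of marketing copy."""
    text = copy_text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in text]

# The hyped claim is flagged; the hedged rewrite passes clean.
assert flag_risky_copy("Trade with us and double your money—guaranteed!")
assert not flag_risky_copy("Explore strategies designed to support your growth.")
```

A screen like this catches the obvious offenders automatically; the harder judgment calls (misleading testimonials, missing risk disclosures) still need human and legal review.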
5. Avoid Unlicensed Cross-Border Activity

    Many brokers register in one jurisdiction (e.g., Mauritius or Seychelles) and then market aggressively to users in Kenya, Nigeria, or South Africa—without the proper permissions.

    That’s risky.

    Most countries prohibit the solicitation of investors without local licensing. This can trigger enforcement actions from foreign regulators and shut down your operations.

6. Maintain Proper Records and Compliance Reporting

    If you’re registered, you must file periodic reports, undergo audits, and disclose financials. Keep:

    • Client transaction logs
    • Proof of customer verifications
    • Tax and audit trails

    Neglecting these duties can trigger regulatory investigations and reputational damage.

    Final Thoughts: Compliance Is the Real Flex

    In an industry full of hype and shortcuts, compliance is what separates serious forex players from risky hustlers. Whether you’re a trader looking to scale or a platform founder expanding across borders, understanding and following your jurisdiction’s laws is essential. Working with a lawyer to register and audit marketing content can protect you from regulatory trouble and build investor trust.

    Are you a trader, signal provider, or forex platform in need of legal certification or compliance guidance?

    Talk to OndagoLegal

    We help fintechs, brokers, and traders navigate licensing, regulation, and risk in Africa’s evolving forex markets.

  • IP for Software Developers

    IP for Software Developers

    What Every Software Developer Should Know About Protecting Their Code

    In the startup world, software is often the backbone of innovation—and the biggest asset. Whether it’s a game-changing mobile app, a disruptive fintech platform, or a backend algorithm powering AI tools, the code you write could be worth millions. Yet many startups overlook the legal fundamentals of protecting this code until it’s too late.

    Here’s what every developer and founder should know about intellectual property (IP) and software.

    1. Your Code Is Copyrighted—But That’s Not Enough

    By default, the code you write is protected by copyright law, as long as it’s original and fixed in a tangible medium (yes, saved on your hard drive counts). Copyright protects the expression of the idea—not the idea itself or the functionality.

The problem? Copyright doesn’t prevent someone from recreating your software using a different approach. For broader protection, explore complementary strategies such as patents, trade secrets, or licensing frameworks, as appropriate for the IP in question.

2. Use Contracts to Define Ownership

    Many startups run into ownership disputes when co-founders or freelance developers part ways. Unless you’ve put clear IP clauses in place:

    • A contractor may own the code they wrote.
    • A co-founder may leave and take their contributions with them.

    Solution: Use “work-for-hire” clauses, NDAs, and contributor agreements to clarify who owns what from day one. Explore the best clauses to protect you and your IP in case of contractual disputes.

3. Consider Patents for Core Innovations

    If your software solves a technical problem in a novel and non-obvious way, you may be eligible for a software patent—especially in the U.S., EU, or select African jurisdictions.

    Patents can protect:

    • Unique algorithms
    • Data processing methods
    • Software-hardware integrations

Note: Pure business methods or abstract ideas are not patentable. A tech lawyer can help evaluate whether your innovation qualifies.

4. Protect Your Brand with Trademarks

    While code runs your product, your brand markets it. A strong name, logo, or slogan sets you apart—and can be protected under trademark law.

    Register your product name, platform name, or SaaS service early to avoid costly rebranding down the line or domain disputes.

5. Don’t Ignore Open Source Licensing

    Using open source libraries can save time—but misuse can destroy your IP rights. Many open source licenses (like GPL) require you to share your own code if you distribute software built on them.

    Understand the difference between permissive (MIT, Apache) and restrictive licenses, and audit your dependencies regularly.
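A dependency audit can be partially automated. The sketch below is a minimal, assumption-laden example for a Python project: it buckets installed packages by keywords in their declared license metadata. The keyword lists are illustrative, and declared metadata can be missing or wrong, so always confirm against the actual license text.

```python
# A minimal sketch of a dependency licence audit for a Python project.
# The permissive/restrictive keyword lists are illustrative, not legal
# advice: confirm every result against the actual licence text.
from importlib import metadata

PERMISSIVE = ("mit", "bsd", "apache", "isc")
RESTRICTIVE = ("gpl", "agpl", "lgpl", "mpl", "epl")  # copyleft family

def classify_license(license_text: str) -> str:
    """Roughly bucket a licence string as permissive, restrictive, or unknown."""
    text = (license_text or "").lower()
    # Check copyleft markers first, so "LGPL" is not missed.
    if any(marker in text for marker in RESTRICTIVE):
        return "restrictive"
    if any(marker in text for marker in PERMISSIVE):
        return "permissive"
    return "unknown"

def audit_installed() -> dict[str, str]:
    """Map each installed distribution to a rough licence bucket."""
    report = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        report[name] = classify_license(dist.metadata.get("License", ""))
    return report

if __name__ == "__main__":
    for name, bucket in sorted(audit_installed().items()):
        if bucket == "restrictive":
            print(f"REVIEW: {name} may carry copyleft obligations")
```

Running a check like this in continuous integration surfaces copyleft dependencies early, before they become an obstacle in due diligence or a funding round.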

6. Think Globally; File Strategically

    Startups aiming for international markets should plan for multi-jurisdictional protection. IP laws vary by country, and enforcement is local. Start by protecting IP in key markets where:

    • You operate
    • You plan to expand
    • You have users or investors

    African startups should also explore protections via ARIPO, OAPI, or national IP offices.

    Final Thoughts: Protect Early, Grow Smarter

    Software IP is more than a legal checkbox—it’s a startup’s competitive edge. Protecting your code and branding from the outset avoids disputes, strengthens investor confidence, and boosts valuation.

    At OndagoLegal, we help startups secure their innovations with practical, forward-thinking legal strategies for the digital age.

    Need help securing your software’s IP?
    Let’s chat. Our team offers tailored advisory for developers, founders, and emerging tech businesses across Africa.

  • How to Protect Your Ideas in the Age of AI

    How to Protect Your Ideas in the Age of AI

    A Creator’s Guide to Safeguarding Intellectual Property

    In today’s AI-driven world, creativity is no longer the sole domain of human minds. AI can compose music, generate artworks, write books—even design new products. While this opens exciting possibilities, it also creates legal gray areas that make it harder than ever for creators to protect their original ideas. As a creator, how can you safeguard your intellectual property (IP) when algorithms are part of the creative process?

    1. Know What You Own—and What You Don’t

    The first step is understanding what qualifies for protection. Copyright covers original expressions (like books, music, and artwork), patents protect inventions, and trademarks secure your brand. But here’s the catch: Most jurisdictions still don’t recognize AI as an author or inventor. That means you, the human, must demonstrate authorship or ownership over any AI-assisted creations.

    2. Document Your Creative Process

    Keep detailed records of your idea development—sketches, drafts, timestamps, even email threads. If AI was used, note how: Did it assist, or did it originate the idea? Courts and IP offices often rely on such documentation to determine originality and authorship.

    3. Use AI Tools Wisely (and Legally)

    Many AI platforms come with restrictive licensing terms. Before you generate content or design using AI, check who owns the output. Some platforms retain rights to the work produced, which may limit your ability to commercialize or claim exclusivity.

    4. File for IP Protection Early

    If your idea is unique and commercially viable, consider filing for IP protection before sharing it. For instance, copyright arises automatically, but registering it strengthens your position in disputes. If it’s an invention, file a provisional patent. If it’s a brand, register your trademark—preferably in key markets and your local jurisdiction.

    5. Use NDAs and Contracts

    Before collaborating with freelancers, AI developers, or co-creators, use Non-Disclosure Agreements (NDAs) and clear contracts that define ownership, usage rights, and licensing terms. In the AI space, ambiguity leads to costly legal battles.

    6. Monitor and Enforce Your Rights

    Use tools like reverse image search or AI-powered copyright monitoring to spot infringement. If your work is used without permission, consult a lawyer to send takedown notices or pursue legal action.

    7. Stay Informed on AI and IP Laws

    Laws are changing fast. For example, the EU AI Act and ongoing WIPO discussions could reshape global IP norms. Subscribe to updates or consult professionals who specialize in emerging tech law to keep your rights protected.


    Final Thoughts:
    AI is changing how ideas are born, shared, and monetized—but your creativity is still your most valuable asset. By taking proactive steps to secure your intellectual property, you not only protect your work but also position yourself confidently in the digital economy.

    Need help protecting your original, AI-generated or assisted ideas?

    Get in touch with our team at OndagoLegal: we simplify the complex and help creators thrive in the AI age.

  • Who Owns AI-Generated Art? A Copyright Conundrum

    Who Owns AI-Generated Art? A Copyright Conundrum

    The rise of Artificial Intelligence (AI) has sparked an artistic revolution, with AI systems now capable of generating breathtaking images, captivating music, and intricate literary works. But as these digital masterpieces proliferate, a fundamental question emerges: Who owns AI-generated art? This isn’t just an academic debate; it delves into the core principles of Intellectual Property (IP) law and challenges traditional notions of creativity and authorship.

    The General Principles of Copyright Law: The Human Authorship Requirement

    Traditionally, IP law, particularly copyright, is designed to protect the creations of human minds. Copyright grants creators exclusive rights to reproduce, distribute, perform, and display their original works. The underlying rationale is to incentivize human creativity and innovation by providing a framework for creators to benefit from their efforts. Key to this framework is the concept of “authorship,” which has historically been synonymous with human endeavor.

    For a work to be eligible for copyright protection, it typically needs to meet certain criteria, including:

• Human Authorship: The work must originate from human creative effort.
    • Originality: The work must be independently created and possess a modicum of creativity. It shouldn’t simply be a copy of another work.
    • Fixation: The work must be expressed in a tangible medium.

    However, these human-centric principles encounter a significant hurdle when confronted with AI-generated content.

    The Complexities of AI-Generated Content Ownership

    The difficulty in assigning ownership to AI-generated art stems from several factors:

    • Lack of Human Authorship: If an AI system autonomously generates a work with minimal or no human intervention, can it be considered an “author”? Current IP laws generally do not recognize AI as having legal personality or the capacity to hold rights.
    • The “Tool” vs. “Creator” Debate: Is AI merely a sophisticated tool, like a paintbrush or a camera, with the human user remaining the author? Or does its ability to generate content with little direct human control elevate it to a more autonomous “creator” status?
    • Training Data and Infringement Risks: AI models are trained on vast datasets, often containing copyrighted material. This raises concerns about potential copyright infringement if the AI’s output is deemed derivative of the training data.
    • Multiple Stakeholders: Who truly has a claim? The AI developer who coded the system? The person who curated and fed the training data? The user who provided the prompts? The owner of the AI system?

Recent Copyright Legislation and Judicial Precedents

    Jurisdictions worldwide are grappling with these questions, with varying approaches and interpretations emerging. Here are some trajectories being taken by various jurisdictions:

    The United States: Emphasizing Human Authorship

    The U.S. Copyright Office has consistently maintained that human authorship is a prerequisite for copyright protection. Their guidance indicates that works generated entirely by AI, without human creative input, are not eligible for copyright.

    A landmark case illustrating this stance is Thaler v. Perlmutter. Stephen Thaler sought to register a copyright for an image created by his AI system, the “Creativity Machine,” listing the AI as the sole author. Both the U.S. Copyright Office and subsequent court rulings, including the D.C. Circuit Court of Appeals, denied the application, affirming that the Copyright Act requires a human author. The court explicitly stated that “authors are at the center of the Copyright Act.”

    However, the U.S. Copyright Office has clarified that if a human provides “significant creative input” such as editing, arranging, or selecting AI-generated elements, those human-contributed portions might be eligible for copyright. The key lies in the “level of control exerted by human creators.”

    The European Union: “Author’s Own Intellectual Creation”

    The European Union’s copyright framework also leans heavily on the concept of human authorship. Works must be the “author’s own intellectual creation,” reflecting their personality and resulting from their “free and creative choices.” This generally implies the necessity of a human author.

    While the EU’s Artificial Intelligence Act primarily focuses on regulating AI systems based on risk, it includes transparency requirements for AI-generated content and mandates disclosures regarding the use of copyrighted data for training AI models. This doesn’t directly address ownership but facilitates identifying AI involvement and aims to address concerns around training data.

    Kenya: A “Person by Whom the Arrangements Were Undertaken”

    Kenya’s legal landscape offers a slightly different perspective, particularly under the Copyright Act. While Kenyan courts have not yet specifically ruled on the copyrightability of purely AI-generated work, the Act’s definition of “author” in relation to “a literary, dramatic, musical or artistic work or computer program which is computer generated” states it means “the person by whom the arrangements necessary for the creation of the work were undertaken.”

    This phrasing opens the door for interpretation. It could attribute authorship to the user who makes the arrangements for the AI system to create the work, even if it involves minimal input like typing prompts. This suggests a potential recognition of the user’s role in initiating and guiding the AI’s creation.

    The Copyright (Amendment) Act of 2022 primarily focused on revenue sharing for ring back tunes and establishing a National Rights Registry. While significant for the creative industry, it didn’t directly address the complexities of AI-generated content ownership. The Kenya Copyright Board (KECOBO) and other relevant bodies are actively monitoring international developments and engaging in discussions on the implications of AI for intellectual property law, and keeping up with trends will be vital.

    The Way Forward: Navigating the New Frontier

    The question of who owns AI-generated art is far from settled. The rapid advancements in AI technology constantly challenge existing legal frameworks, which were designed for a human-centric creative world.

    Several approaches are being considered globally:

    • Legislative Reforms: Many argue for new legislation specifically tailored to address AI-generated content, defining authorship and ownership in this evolving landscape.
    • Hybrid Models: Some propose “hybrid authorship” models where both human and AI contributions are acknowledged, potentially leading to shared ownership or new licensing structures.
    • Contractual Agreements: For AI tools that allow users to retain ownership of outputs (like some tiers of MidJourney or OpenAI’s ChatGPT terms), contractual agreements will play a crucial role in defining rights between the user and the AI developer.

    For Authors and Content Creators

    While the legal landscape is still forming, particularly in Kenya and the African continent in general, the clear message from recent developments, both internationally and locally, is that human input remains paramount for copyright protection.

    For creators leveraging AI, understanding the nuances of how your involvement shapes the copyrightability of your work is no longer optional—it’s essential. Are you merely prompting, or are you significantly shaping, selecting, and refining the AI’s output? The distinction can mean the difference between owning your creation and having it fall into the public domain.

    Don’t leave the future of your artistic endeavors to chance. Our team of experienced and well-versed intellectual property lawyers is at the forefront of this evolving field. We can help you navigate the complexities of AI and copyright, assess the copyright-ability of your AI-assisted works, and advise on best practices to safeguard your intellectual property in this new digital age.

    Contact us today for a consultation. Let’s work together to protect your creativity and ensure your voice, and your art, are recognized and rewarded.


  • Deepfakes in Politics: Should You Be Worried?

    Deepfakes in Politics: Should You Be Worried?

The proliferation of AI-generated images, audio, and video designed to mimic real individuals has sparked significant legal and ethical debates, particularly in the realm of politics. Across the globe, and increasingly within the African continent, deepfakes are emerging as a potent tool for misinformation, raising serious concerns about their potential to distort public perception, manipulate electoral outcomes, and undermine democratic processes. Moreover, many deepfake tools are becoming so sophisticated that the average person struggles to discern their output from authentic content.

    The Legality of Deepfakes: A Case for and Against

Public image is crucial to achieving political ambitions. It goes without saying that negative publicity can be damaging, making it imperative for political figures to avoid being misconstrued or defamed in the eyes of the public. With the advent of deepfakes, however, this is harder to achieve, as social media users share almost realistic, yet misleading, photos and videos of politicians online. While free speech rights maintain that individuals may express opinions and criticize public figures, even satirically, the liberty to express oneself is not immutable. This begs the question: should one be worried about sharing deepfakes? Can an affected person successfully sue another for sharing them? Where should the balance be struck?

Constitutional Rights & Political Satire vs. Character Assassination

From the outset, most courts treat even false political expression as protected under free speech. The constitutions of most democracies, along with legislation and judicial precedents across most jurisdictions, widen the scope of free speech to include satirical works, parody, and incendiary speech. As in Hustler v. Falwell-style suits, deepfakes deemed satire or parody typically receive full protection.

    However, this right is not absolute and is subject to limitations. When deepfakes are seen to be outrightly defaming another, then they go beyond the realms of protection. When deepfakes are crafted to portray false information about a person, they can constitute a false statement of fact. If such content is published or shared with third parties and causes reputational harm, it satisfies the key elements of defamation: falsity, publication, harm, and fault. In these cases, deepfakes lose protection under free expression laws and may expose the creator or distributor to civil liability.

    Furthermore, when it comes to election interference, deepfakes are normally viewed differently and they can be judged harshly. Jurisdictions are already treating deepfakes as a serious threat to election integrity and the entire democratic process. Therefore, depending on the specific laws of certain jurisdictions, anti-deepfakes policies are being enacted with the aim of deterring their malicious use, ensuring transparency, and providing legal recourse for those affected by them.

Legal insight: Deepfakes intended as political satire are generally protected as free speech; restrictions focused only on “knowingly deceptive” deepfakes narrowly targeted at election interference are more legally sustainable, but even those face constitutional scrutiny.

    Entertainment Value vs. Deception

In the latest episode of “He Must Go,” Kenyan social media has been graced with deepfake videos of individuals literally taking one in the leg—allegedly inspired by President Ruto’s now-viral “shoot in the leg” directive.

Two distinct camps have emerged. The president’s detractors argue that it’s mere entertainment, while his proponents worry that the trend is inflammatory and deceptive.

    Pro-Deepfake Alliance

    Citizens, ever creative, have apparently taken the order quite literally—turning political rhetoric into satirical entertainment clips.

    Anti-Deepfake 

This chilling trend illustrates just how recklessly users are propagating a culture of violence by using AI for deception and PR spin.

Striking the right balance is the real challenge. Many deepfakes are harmless entertainment: comedic parodies of political figures or clever mashups. Under most systems, these are free speech, even if unrealistic or comedic, as long as audiences reasonably understand the creative intent.

    Key legal factors to look out for are:

    • Context: Clear labeling or obvious satire supports free speech claims.
    • Intent: Satirical intent differs legally from intent to defraud or manipulate.

    However, when the intent is deemed to be deception, then such deepfakes fail against the threshold of entertainment value.

    Hate Speech, Cybercrime & Non‑Consensual Deep Fakes

    Deep fakes also raise serious concerns about hate speech and harassment. Legally, these are not protected, and jurisdictions are responding:

    • Non-consensual intimate deepfakes (explicit or pornographic content featuring individuals who never consented) are unequivocally banned in most jurisdictions.
    • Hate-oriented deepfakes, such as racist or sexist content targeting protected groups, may be prosecuted under hate speech or incitement laws depending on the country.

    Takeaway

    From the foregoing, deepfakes in politics fall into three legal zones:

    • Satire/parody: protected as free speech
    • Defamation or disinformation: potential liability
    • Hate/cyberbullying or non-consensual content: disallowed and prosecutable

    Call to Action

    Looking for professional legal expertise to help you navigate free speech, labeling obligations, defamation risks, or hate speech concerns around politically generated AI content?

    Contact Us for tailored legal advice and compliance support.

  • What the EU AI Act Means for African Startups

    What the EU AI Act Means for African Startups

    Navigating compliance and opportunities in the new regulatory landscape

    The European Union’s Artificial Intelligence Act (EU AI Act), which came into force on August 1, 2024, is the world’s first comprehensive legal framework regulating AI systems. While primarily targeting the EU market, its extraterritorial provisions mean that any venture offering AI products or services accessible to EU users, including African tech and business ventures, must consider its implications and ensure compliance.

    Why African Startups Should Pay Attention

    African companies are increasingly integrating AI systems into their operations, while also building AI systems of their own. From retail businesses deploying customer-service chatbots to startups developing résumé-screening tools for recruitment agencies, AI adoption across the continent is growing rapidly, with far-reaching implications for the global market.

    However, with this expanded digital reach come increased legal complexities, which call for greater awareness. African startups operating or offering AI-driven services beyond their borders must be cognizant of international AI laws. Here are some of the key reasons why African businesses should pay attention to the EU AI Act:

    1. Extraterritorial Reach

    The EU AI Act applies not only to companies within the EU but also to those outside it whose AI systems are used within the EU or affect EU citizens. In other words, the Act turns not on a company’s physical location but on the EU market itself. An African startup whose AI system’s output is intended for use within the EU, or that provides, deploys, or places an AI system or general-purpose AI model on the EU market, will fall under its scope regardless of where it is based.

    2. Operational Efficiency

    Having established that the EU AI Act has extraterritorial reach, integrating compliance by design from the outset, rather than retrofitting it later, is far more efficient and cost-effective for any business with aspirations of expanding into the EU. Ignoring the EU AI Act initially and then scrambling to comply when market opportunities arise can lead to significant delays, rework, and increased expenses.

    3. Penalties for Infringement

    For any AI startup with global ambitions, treating the EU AI Act as a blueprint for responsible and compliant AI development is fundamental in order to avoid penalties for non-compliance. African companies deploying or operating AI systems in the EU without proper compliance with the EU AI Act risk hefty fines plus additional penalties. Under the Act, fines can be as high as €35 million or 7% of a company’s total worldwide annual turnover from the preceding financial year, whichever amount is higher. Such severe fines can be catastrophic for a startup.
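    To make the “whichever is higher” rule concrete, here is a minimal sketch in Python using hypothetical turnover figures (the €35 million floor and 7% rate are from the Act’s most severe penalty tier; the function name and example companies are our own illustration, not part of the Act):

    ```python
    def max_eu_ai_act_fine(annual_turnover_eur: float) -> float:
        """Upper bound of a fine under the EU AI Act's most severe tier:
        EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * annual_turnover_eur)

    # A startup with EUR 10M turnover: 7% is only EUR 700k, so the EUR 35M cap applies.
    print(max_eu_ai_act_fine(10_000_000))     # 35000000.0
    # A large firm with EUR 1B turnover: 7% (EUR 70M) is the higher figure.
    print(max_eu_ai_act_fine(1_000_000_000))  # 70000000.0
    ```

    The asymmetry is the point: for a small startup the flat €35 million floor can dwarf its entire revenue, which is why early compliance matters more for startups than for incumbents.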

    4. Foundational Compliance

    The EU AI Act gives companies the platform to build a sustainable business profile in the rapidly evolving AI landscape. By classifying their AI system and complying with the applicable policy requirements, businesses are able to continue operations without the worry of regulatory hurdles. For instance, if your AI system is classified as “high-risk” (which many business-critical AI applications are), the compliance requirements are substantial. These include implementing robust risk management systems, ensuring data quality and governance, providing detailed technical documentation, enabling human oversight, and undergoing conformity assessments. Having all these ingrained in the business from the outset saves the organization from potential future pitfalls that would otherwise impede growth.

    5. Ahead of the Curve

    Finally, the EU AI Act is the first comprehensive AI regulation globally, and it’s expected to influence AI regulations in other jurisdictions. By proactively understanding and aligning with its principles, a startup can better prepare for future regulatory landscapes that may emerge in other countries. The African Union’s Continental AI Strategy, endorsed in July 2024, reflects similar principles, promoting ethical and responsible AI practices across the continent. By aligning with the EU’s standards, African startups can stay ahead of emerging local regulations.

    African startups must assess their AI systems to determine the applicable risk category and adhere to corresponding obligations.

    A plus: Competitive Advantage

    Achieving compliance with the EU AI Act can serve as a quality stamp, enhancing trust among users and partners. This can open doors to new markets and collaborations, positioning African startups as trustworthy players in the global AI landscape.


    In conclusion, by proactively addressing the requirements of the EU AI Act, African startups can not only mitigate compliance risks but also position themselves as leaders in ethical and responsible AI deployment, on the global stage. The EU is one of the world’s largest and wealthiest single markets. For many AI startups, gaining access to this market is a significant growth opportunity. Non-compliance with the AI Act would effectively bar an African firm’s AI product or service from this market, severely limiting their potential customer base and revenue.

    Call to Action

    Engaging qualified legal professionals in your AI business not only aids compliance with the EU AI Act and emerging local regulations, but also offers strategic benefits.

    Looking for professional legal expertise to help you navigate the EU AI Act and related laws without unnecessary stress?

    Contact Us