Tag: programme

  • TikTok launched community notes. Why are social media sites betting on crowdsourced fact-checking?

    Is TikTok the New Fact‑Checking Fan Club?

    Feel like you’re scrolling through a circus of short‑form clips and wondering who’s actually going to tell you whether that viral dance is legit or just a hoax? TikTok’s got the answer—cue the community‑powered review system.

    How the Crowd‑Sourced Crew Works

    • Write a verdict: Users can draft a quick note explaining why a clip is trustworthy or suspect.
    • Rate the context: After reading a note, you can grade it on a scale—think “top‑tier truth” vs. “needs more evidence.”
    • Build the breadcrumb: These crowd‑generated notes stick to the original post, acting like a breadcrumb trail that guides future viewers.

    Why It Matters…

    Because today’s clip‑culture can feel as shaky as a toddler on roller‑blades. If a quick note from a peer can help cut through the noise, you’re less likely to end up buying an online pyramid scheme or a brand‑new “haunted” Bluetooth speaker.

    What to Expect
    • Dashboard of flagged content that users are already looking at.
    • More transparency about who’s adding context.
    • And the best part—no heavy‑handed algorithm deciding the truth on its own.

    In short, TikTok is turning its millions of viewers into a massive fact‑checking squad. That’s a move your brain will thank you for—and maybe you’ll finally have a “trusted” comment for that dancing pizza guy.

    Hey TikTok Users: Meet the New “Footnotes” Fact-Check Feature

    Oooh, TikTok just rolled out a brand‑new tool—think of it as a fact‑checker’s passport that lets you add real, community‑verified context to any video. The feature, Footnotes, has launched first in the U.S., and it’s a game‑changer for how we fight misinformation on the “short‑form” playground.

    What’s Footnotes All About?

    • Drop a Note: When you spot a clip that might need a bit of extra clarity, you can attach a quick note to the video. Think “Here’s a research‑grade explanation” or “Check out the latest stats.”
    • Vote on Visibility: You can cast a vote on whether a note should appear under the video. Everyone’s voice matters.
    • Help the Community: The more helpful content you vote for, the higher the chance it gets highlighted for fellow viewers.

    Why Is This a Big Deal?

    TikTok, Instagram, X—they’re all taking steps to make sure online claims stay honest rather than herd‑driven. Each platform is cleaning up the noise by letting users contribute their own, trustworthy context.

    What You Gotta Know

    • Footnotes can cover everything from “a complicated STEM concept” to “new stats on a breaking story.”
    • US roll‑out first; others are likely to follow—so keep an eye on the app updates.
    • If you’re skeptical of something you see, feel free to add a note—just remember the collective voice decides whether it sticks around.

    It’s all about building a community that values accuracy over hype. So next time you scroll, arm yourself with a footnote—because a little extra context can go a long way!

    What is community fact-checking?

    When Social Media Turns Itself Into a Fact-Checking Squad

    Meet the Protagonists

    • Scott Hale – Associate professor at the Oxford Internet Institute and the brains behind research on Twitter’s Birdwatch experiment.
    • Otavio Vinhas – Researcher at Brazil’s National Institute of Science and Technology in Informational Disputes and Sovereignties, the self‑appointed commentator on the latest meta‑notes craze.
    • Elon Musk – Bought Twitter (now X) in 2022 and decided to keep the community notes train running.
    • Virgil the TikTok Publicist – Champion of the platform’s new “Footnotes” feature, which differs slightly from Meta’s and X’s crowd‑sourcing adventures.

    Why It Started… and Why It’s Still Going on

    Back in 2021, Twitter introduced a little program called Birdwatch, an earnest attempt to let users flag factual inaccuracies. Even after Elon Musk bought the platform, the experiment was nudged forward—proof the idea had more staying power than the often‑mistrusted algorithmic curation.

    Meanwhile, Meta—owning Facebook and Instagram—rolled out its own Community Notes this year, aiming to spread the same crowdsourced approach across all its social realms. That launch comes hot on the heels of a trend championed by US President Donald Trump, who has pushed for a more “libertarian” stance on free speech.

    Vinhas on Accuracy and Your Freedom

    According to Otavio, the demand is simple: platforms should “commit to this libertarian view”. In other words, content moderation should be as hands‑off as possible, letting users weigh in on truth without the platforms stepping in to sanitize or filter narratives.

    He told Euronews Next that “fair moderation” would prioritize free speech over worries about potential harm or false claims—an approach that amounts to giving internet users a “vote” of their own.

    Science Backing the Crowd

    Scott’s research confirms that crowdsourcing can be surprisingly reliable. Studies show that a sufficiently diverse group of ordinary users can almost match professionals when checking facts. This means the modern “fact‑checkers” on your timeline are, on average, no fools.

    Footnotes vs. Other Programs

    Someone asked whether TikTok’s new “Footnotes” is just retreading the same ground. Virgil (yes, the TikTok ally who invites you to add source links) pointed out that it is a tad different from X’s or Meta’s initiatives: TikTok users have to attach a source behind their note, something X doesn’t require.

    While all these platforms promise a free‑speech‑friendly environment, they all demand that you do the heavy lifting of providing proof. So, if you’re not a fan of digging for citations, you might as well pay attention to the footnotes.

    Bottom Line

    Social media’s newest “fact‑checking brigade” is a mix of coffee‑shop deliberations and the sheer force of numbers. Whether you think this democratic approach will save the internet or just add another layer of “online proof‑reading” depends on who’s reading and how prolific the crowds are. For now, every comment is a potential saga of truth, humour, and a dash of libertarian flair—all under the watchful eye of the modern‑day community notes program.

    Most notes don’t end up on the platforms

    How Social Media’s “Community Notes” Are Failing to Spark Real Debate

    Social media platforms promise to surface the smartest ideas, yet the reality is a bit more… quiet. According to Oxford researcher Scott Hale, the crux of the mess is that the people who actually get to see these community notes are simply the wrong ones for the job.

    What’s the Idea Behind the Notes?

    All three big services—X, Meta’s platforms, and TikTok—use a “bridging‑based ranking” method. The tech looks at what you follow or watch, then spots other users with a similar consumption profile. If the algorithm sees two of you as totally different users, the platform will show each of you a note to gauge how useful it feels.

    Notes that pass the test get published and become permanently visible on the site. Sound fast? Nope.
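    That gatekeeping step can be sketched in a few lines. This is a toy illustration of the bridging idea, not any platform’s actual algorithm: assume each rater has a single “viewpoint” score inferred from past ratings, and a note is published only when raters from opposite ends of the spectrum both mark it helpful.

```python
def should_publish(ratings, threshold=0.5):
    """Toy bridging check for a community note.

    ratings: list of (viewpoint, helpful) pairs, where viewpoint is a
    float in [-1, 1] inferred from a rater's past behaviour and helpful
    is a bool. The note surfaces only if helpful votes come from BOTH
    sides of the divide. Thresholds and scoring here are illustrative.
    """
    helpful_left = any(v < -threshold and h for v, h in ratings)
    helpful_right = any(v > threshold and h for v, h in ratings)
    return helpful_left and helpful_right

# A note praised only by like-minded raters never surfaces:
partisan = [(-0.9, True), (-0.8, True), (-0.7, True)]
# ...but cross-divide agreement does publish it:
bridging = [(-0.9, True), (0.8, True), (0.2, False)]
print(should_publish(partisan))  # False
print(should_publish(bridging))  # True
```

    The requirement of agreement across dissimilar users is what keeps partisan pile‑ons from publishing notes—and also why so few notes ever clear the bar.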

    The Nightmare of Unseen Notes

    • Vinhas has a telling line: “The vast majority of notes are basically invisible.”
    • DDIA’s June study uncovered that over 90% of 1.7 million English and Spanish community notes on X never made it online.

    Even when notes do get through, the waiting game lingers. The average time for a note to be published fell from 100 days in 2022 to 14 days—still an eternity in the life of a viral clip.

    Echo Chambers Are a Hard Nut to Crack

    Hale points out that social media’s “echo chambers”—where you’re only fed material that echoes your existing beliefs—make it tough for a user to stumble across content that actually challenges their views. “You’ll find yourself in a network that feels eerily like your own thoughts,” he said.

    Let’s Add Some Gamification!

    One bright idea Hale floated: take a page from Wikipedia. There, contributors have their own profile pages that showcase their edits, and they can earn badges for longevity and impact. Social media could replicate that vibe: give note writers awards, let them run contests, and even start fundraisers.

    The Bottom Line

    Vinhas believes that whether platforms deliver on their lofty promise to level the playing field—or create a marketplace of ideas—remains in doubt. It’s a tangled mess, but that’s precisely why the conversation deserves better.

    What else do social media sites do to moderate content on their platforms?

    Social Media’s Digital Band-aids: How Meta, X, and TikTok Keep the Internet From Turning into a Bad Joke

    1. Meta’s AI Safari

    Imagine a robo‑herder that zips across the digital savannah, sniffing out rogue posts that break the platform’s house rules. Meta’s AI is built for that exact job. Posts that match known violations are instantly snatched away, leaving the virtual crowd free to share memes and cat videos unimpeded.

    Why the AI Strays

    • Training bias: The system has learned to flag only the claims it has seen before. New, sneaky lies can slip through because the AI simply hasn’t met them yet.
    • The human backup: Once a post is flagged, a moderator looks it over to confirm the violation or spot context that the machine missed.

    2. X’s (formerly Twitter) Community Notes

    X has traded its old fact‑checking arrangements for a tool called Community Notes. Think of it as a crowdsourced “newsflash” that lets users add clarifying context to a post.

    • Community-Generated Paragraphs: Anyone can write a note explaining why a tweet might be misleading.
    • Delay risk: Notes can take days to surface, so the same misinformation can keep circulating before the context ever arrives.

    3. TikTok’s Growing Global Fact‑Checking Network

    TikTok is partnering with seasoned fact‑checkers across the world, integrating their insights into a “global fact‑checking program.” The company’s approach is upbeat, but some worry whether it will be sustainable once the initial excitement fades.

    Professional vs. Community: Two Teams, One Playbook

    • Professional fact‑checkers: Train rigorously, consult experts, and dive deep into official sources.
    • Community notes: Fast, informal, and more likely to reflect the everyday user’s voice.

    While each platform leans on machines for the first pass, the real magic happens when a human hand is on deck, or when the community dives in to add that extra layer of truth‑checking. The balance between automated filters and human oversight—plus community-driven context—is like having both a steel‑toothed bouncer and a chill barista keeping the digital bar functional.

    Experts say that while the AI may miss a new trick, dedicated fact‑checkers work around the clock, catching political crises that a casual user might miss. So whether it’s Meta’s AI, X’s community notes, or TikTok’s professional fact‑checkers, the hope is that a blend of technology, community, and seasoned professionals will keep the internet a tad safer—and maybe a bit less full of conspiracies.

  • Is Europe ready to police AI? Supervision and sanctions start soon

    Is Europe ready to police AI? Supervision and sanctions start soon

    A range of provisions under the EU’s AI rulebook will enter into force, dealing with national oversight, penalties and general purpose AI models.


    Significant changes in terms of oversight and penalties are around the corner for AI suppliers in Europe as new provisions of the EU’s AI Act enter into force from 2 August.
    Here’s what will change this month regarding the EU’s rulebook on AI, which has been in force for exactly one year but has been implemented gradually.

    National oversight

    On 2 August, member states will have to notify the European Commission about which market surveillance authorities they appoint to oversee businesses’ compliance with the AI Act.
    That means that providers of AI systems will face scrutiny as of then.
    Euronews reported in May that with just three months to go until the early August deadline, it remained unclear in at least half of the member states which authority will be nominated.
    The EU executive declined to comment back in March on which countries were ready, but expectations are that member states that recently went through elections will be delayed in setting up these regulators.
    According to a Commission official, some notifications have now been received, and they are under consideration. 

    Laura Lazaro Cabrera, Programme Director for Equity and Data at the Centre for Democracy and Technology, told Euronews that many member states are set to miss the 2 August deadline to appoint their regulators. 
    She said it’s “crucial” that national authorities are appointed as soon as possible, and “that they are competent and properly resourced to oversee the broad range of risks posed by AI systems, including those to fundamental rights.”
    Artur Bogucki, an associate researcher at the Centre for European Policy Studies (CEPS), echoed the likely delays. 
    “This isn’t surprising when you consider the sheer complexity of what’s required. Countries need to establish market surveillance authorities, set up notifying bodies, define sanction regimes, and somehow find staff with expertise spanning AI, data computing, cybersecurity, fundamental rights, and sector-specific knowledge. That’s a tall order in today’s competitive tech talent market,” he said. 

    Bogucki said it doesn’t stop there, because it remains to be seen how multiple bodies at both EU and national levels need to coordinate together. 
    “This complexity becomes even more challenging when you consider how the AI Act must interact with existing regulations like GDPR, the Digital Services Act, and the Digital Markets Act. We’re already seeing potential for overlaps and conflicts, reminiscent of how different data protection authorities across Europe have taken divergent approaches to regulating tech companies,” he said.

    Related

    Europe’s top CEOs call for Commission to slow down on AI Act
    EU should simplify its AI Act and digital rulebook, says Danish minister

    Penalties

    Also entering into force are provisions enabling penalties. Companies may be fined up to €35 million for breaches of the AI Act, or up to 7% of total worldwide annual turnover, whichever is higher.
    EU countries will need to adopt implementing laws that set out penalties for breaches and empower their authorities. For smaller companies, lower fines will apply. 
    The AI Act sets a ceiling, not a floor, for fines. According to Lazaro Cabrera, there is likely going to be “significant variability on how member states choose to fine their public authorities for non-compliance of the AI Act, if at all.”
    She said that while there will be some divergence in how member states set the level of fines applicable, “forum-shopping in this context has its limits.”
    “Ultimately market surveillance authorities have jurisdiction to act in connection to any product entering the EU market as a whole, and fines are only one of many tools at their disposal,” she said.
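    The headline figure above works out to a simple maximum. A minimal sketch of the ceiling for the most serious breaches (the function name is illustrative; member states set actual fines, and lower caps apply to smaller companies):

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious AI Act breaches:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher. Illustrative only -- national implementing
    laws decide the fine actually imposed."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A firm with EUR 100M turnover: 7% is EUR 7M, so the EUR 35M figure applies.
print(max_ai_act_fine(100_000_000))    # 35000000
# At EUR 1bn turnover, 7% (EUR 70M) is the larger figure.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```

    The turnover-based prong means the ceiling scales with company size, which is why the same breach can expose a Big Tech firm to far larger fines than a start-up.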
    Bogucki said that the governance structure also needs to grapple with questions about prohibited AI practices, for example when it comes to biometric identification. 
    “Different member states may have very different political appetites for enforcement in these areas, and without strong coordination mechanisms at the EU level, we could see the same fragmentation that has plagued GDPR enforcement,” he said. 

    Related

    Italy and Hungary fail to appoint fundamental rights bodies under AI Act
    Trump to release AI action plan that borrows from Silicon Valley’s ideas

    GPAI

    Lastly, the rules on general purpose AI systems – which include large language models such as X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT – will enter into force.
    In July the Commission released a much-debated Code of Practice on GPAI. This voluntary set of rules that touches on transparency, copyright, and safety and security issues, aims to help providers of GPAI models comply with the AI Act.
    The Commission has recently said that those who don’t sign can expect more scrutiny, whereas signatories are presumed compliant with the AI Act. Even so, companies that sign the code will still need to comply with the AI rulebook.
    US tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation; others, like Google and OpenAI, said they will sign up.
    To make things more complicated, all products that were placed on the market before 2 August have a two-year period to implement the rules, while all new tools launched after that date have to comply straight away.
    The EU AI Act continues to roll out in phases, each with new obligations for providers and deployers. Two years from now, on 2 August 2027, the AI Act will be applicable in full.