  • The growing debate over expanding age-verification laws

    Technologists and policymakers are reckoning with a generation-defining problem on the internet: While it can be a revolutionary force for unprecedented education and connection across the globe, it can also pose dangers to children when they have completely unfettered access.

    There is no simple way, however, to monitor children’s internet access without surveilling adults, paving the way for disastrous online privacy violations.

    While some advocates praise these laws as victories for children’s safety, many security experts warn that they are being proposed and passed with flawed implementation plans that pose dangerous security risks for adult users as well. In the United States alone, 23 states have enacted age-verification laws as of August 2025, with two more set to follow in late September. Meanwhile, the United Kingdom’s Online Safety Act, which took effect in July, requires many online platforms to verify users’ identities before granting access.

    Here’s a primer on where the debate over age and identity verification stands.

    What exactly is age verification?

    When we talk about age verification laws, we aren’t talking about when you made a Neopets account as a kid and checked a box to affirm that you were at least 13 years old. In the United States, those types of age checks are a result of the Children’s Online Privacy Protection Act (COPPA), an internet safety law passed in 1998. But as anyone who had a Neopets account at age 10 already knows, COPPA-era age checks are easy to get around: you simply check the box that says you’re 13.

    In the context of the laws that have cropped up during the 2020s, age verification usually refers to a user uploading an official ID to a third-party verification system to prove who they are. Users might also upload biometric facial scans, like the ones that power Face ID on iPhones.
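    To make the mechanics concrete, here is a minimal sketch of what that kind of flow can look like from a website’s side, assuming a hypothetical third-party verification service; the endpoint, field names, and response shape below are invented for illustration and do not describe any real provider’s API.

    ```python
    # Purely illustrative sketch of an ID-upload age check against a
    # hypothetical third-party verification service. The URL, fields,
    # and token are made up for illustration.
    import requests

    def verify_age(id_image_path: str, api_token: str) -> bool:
        """Upload an ID image and return True if the service asserts the user is 18+."""
        with open(id_image_path, "rb") as id_image:
            response = requests.post(
                "https://verification-provider.example/v1/age-checks",  # hypothetical endpoint
                headers={"Authorization": f"Bearer {api_token}"},
                files={"document": id_image},
                timeout=30,
            )
        response.raise_for_status()
        # Note: the verification service, not the website, ends up handling a
        # copy of the ID -- which is the data-handling risk critics point to.
        return response.json().get("age_over_18", False)
    ```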

    What is the point of age verification?

    Of course, internet safety is not really about preventing children from playing games like Neopets. Parents and lawmakers are concerned about children accessing content that’s potentially dangerous for minors, like online pornography, information about illicit drug use, and social media sites where they may encounter strangers with bad intentions.

    These concerns are not unfounded. Parents have turned to lawmakers to share horrific stories of how their children died after purchasing fentanyl-laced drugs on Facebook, or how they took their own lives after facing incessant bullying on Snapchat.

    As technology becomes more sophisticated, the problem is getting worse: Meta’s AI chatbots have reportedly flirted with children, while Character.AI and OpenAI are facing lawsuits over the suicides of children who were allegedly encouraged by the companies’ chatbots.

    We know the internet isn’t all bad, though. Without leaving your home or spending any money, you can learn to play guitar or write code. You can forge meaningful friendships with people from the other side of the world. You can access specialized telehealth care, even if you live somewhere where no doctor is trained in your diagnosis. You can find the answer to just about any question you want at any given moment (the capital of Madagascar is Antananarivo, by the way).

    This is how global lawmakers have arrived at what they believe to be a sound compromise: They won’t nuke the whole internet; they’ll instead put certain content behind a gate that you can only unlock by proving you’re an adult. But in this case, you’re not just clicking a box to confirm your age — you’re uploading your government ID or scanning your biometric data to prove you can access certain content.

    Is it safe to verify your identity by uploading a government ID or a biometric scan?

    The safety of any digital security measure depends on its implementation.

    Apple builds out products like Face ID so that these biometric scans of your face never leave your iPhone — they’re never shared over the cloud, which massively limits the potential for hackers to gain access.
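    As a conceptual illustration of that design difference (not Apple’s actual implementation), on-device verification compares a fresh scan against a template that never leaves local storage, so there is no raw biometric data sitting on a server to breach. The names and the toy embedding function below are invented for the example.

    ```python
    # Conceptual sketch of on-device biometric matching: the scan and the
    # stored template stay local, and nothing is uploaded to a server.
    import math

    def embed(scan: list[float]) -> list[float]:
        """Stand-in for a real face-embedding model running on the device."""
        norm = math.sqrt(sum(x * x for x in scan)) or 1.0
        return [x / norm for x in scan]

    def matches_local_template(new_scan: list[float],
                               stored_template: list[float],
                               threshold: float = 0.95) -> bool:
        """Compare locally; a server breach can't expose what was never sent."""
        similarity = sum(a * b for a, b in zip(embed(new_scan), stored_template))
        return similarity >= threshold
    ```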

    But when any sort of connection to another network gets involved, that’s when identity verification can get risky. We’ve already watched these measures play out poorly when the technology is anything but rock-solid.

    “No method of age verification is both privacy-protective and entirely accurate,” the Electronic Frontier Foundation writes. “These methods don’t each fit somewhere on a spectrum of ‘more safe’ and ‘less safe,’ or ‘more accurate’ and ‘less accurate.’ Rather, they each fall on a spectrum of ‘dangerous in one way’ to ‘dangerous in a different way.’”

    In recent memory, we have some strong examples of how badly things can go when a company slips up on its security.

    On Tea, an app that women use to share information about men they meet on dating apps, users have to upload selfies and photos of their IDs to prove that they are who they say they are. But users on 4chan, a misogynistic web forum, found that Tea left users’ data exposed, meaning that bad actors could access tens of thousands of users’ government IDs, selfies, and even direct messages on the platform, where women shared sensitive information about their dating experiences. What was once purported to be an app for women’s safety ended up exposing its users to vicious harassment, giving bad actors access to personal information like their home addresses.

    These hacks were possible despite Tea’s promise that these images were not stored anywhere and were deleted immediately (evidently, those claims were false).

    This kind of thing happens all the time — just look at TechCrunch’s security coverage. But it’s not just happening to new apps like Tea. World governments and trillion-dollar tech giants are certainly not exempt from data breaches.

    Does it really matter if I lose my anonymity on the internet? I’m not doing anything shady

    These laws have inspired much backlash, but it’s not just because people are shy about linking their porn viewership to their government IDs.

    In places where people can be prosecuted for political speech, anonymity is vital to allow people to meaningfully discuss current events and critique those in power without fear of retribution. Corporate whistleblowers could be unable to report a company’s wrongdoing if all of their online activity is linked to their identity, and victims of domestic abuse will find it even more difficult to flee dangerous situations.

    In the U.S., the idea of being prosecuted for one’s political beliefs is becoming less theoretical. President Trump has threatened to send his political opponents to prison, and the government has revoked visas from international students who have criticized the Israeli government or participated in protests against the country’s military actions.

    What age-verification laws have gone into effect in the U.S.?

    In the United States, 23 states have enacted age-verification laws as of August 2025, while two more states have laws slated to take effect in late September 2025.

    These laws mostly impact websites where a certain percentage of the content is “sexual material harmful to minors,” with the exact threshold varying from state to state.

    In practice, this means that pornographic websites must verify a user’s identity before they can access the website. But some sites, like Pornhub, have opted to simply block traffic from certain states.

    “Since age verification software requires users to hand over extremely sensitive information, it opens the door to the risk of data breaches,” Pornhub wrote on its blog. “Whether or not your intentions are good, governments have historically struggled to secure this data.”

    What counts as “sexual material harmful to minors”?

    The definition of this term varies depending on who is enforcing the law.

    At a time when LGBTQ rights are under attack in the U.S., activists have warned that laws like this could be used to classify non-pornographic information about the LGBTQ community, as well as basic sex education, as “sexual material harmful to minors.” These concerns appear well-founded, given that President Trump’s administration has removed references to civil rights movements and LGBTQ history from some government websites.

    Texas’ age-verification law — which was upheld in a Supreme Court ruling in June — was passed around the same time the state imposed other legal restrictions on the LGBTQ community, including limits on public drag shows and bans on gender-affirming care for minors. The drag show law was later deemed unconstitutional for violating the First Amendment.

    What’s going on with age verification in the U.K.?

    The United Kingdom enacted the Online Safety Act in July 2025, requiring many online platforms to verify a user’s identity before allowing them access. If a user is identified as a minor, they won’t be allowed on certain websites. The Act applies to search engines, social media platforms, video-sharing platforms, instant messaging services, cloud storage sites — pretty much anywhere that you may encounter media or talk to someone.

    In practice, this means that websites like YouTube, Spotify, Google, X, and Reddit are requiring U.K. users to verify their identity before accessing certain content. These requirements don’t just apply to pornographic or violent content — people in the U.K. have been barred from viewing vital education and news sources, making it difficult to access information without exposing themselves to potential privacy concerns.

    The U.K. does not use one specific way of verifying one’s identity — individual websites can decide what mechanism to use, and Ofcom, the U.K.’s communications regulator, is supposed to oversee this implementation. But as we explained with the Tea example, we can’t trust that any given authentication tool will be safe.

    Now, users who are subject to identity verification must decide whether to forgo free access to information or expose themselves to privacy risks in order to get it.

    Does the U.K. age verification law affect me if I live elsewhere?

    Even if you don’t live in the U.K., you may be impacted by tech platforms that are preemptively complying with these regulations.

    In the U.S., YouTube has already begun to roll out technology that is supposed to estimate users’ ages based on their activity, regardless of what age they listed when registering their account.

    Can’t you just use a VPN to get around these barriers?

    Yes, and the App Store charts in the U.K. prove it — after the Online Safety Act took effect, half of the top 10 free apps on iOS were VPNs (virtual private networks). We also saw VPN downloads spike after Pornhub access was blocked in many U.S. states.

    When Pornhub was suspended in France, Proton VPN said that registrations had spiked by 1,000% within half an hour — the company said this was a bigger spike than when TikTok temporarily blocked American users.

    You may have used a VPN before if you logged into your office desktop computer remotely, or if you spoofed your location so that you could watch British sitcoms for free from the U.S.

    This introduces another issue: Free VPNs don’t always have great privacy practices, even when they’re advertised as privacy tools.

    If you want to learn more about VPNs, TechCrunch has guides on what you need to know about VPNs and how you can decide if you need to use one.

  • Vibe coding has turned senior devs into ‘AI babysitters,’ but they say it’s worth it

    Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe coded. 

    Rover has been in the industry for 15 years, mainly working as a web developer. She’s now building a startup, alongside her son, that creates custom machine learning models for marketplaces. 

    She called vibe coding a beautiful, endless cocktail napkin on which one can perpetually sketch ideas. But dealing with AI-generated code that one hopes to use in production can be “worse than babysitting,” she said, as these AI models can mess up work in ways that are hard to predict. 

    She had turned to AI coding out of a need for speed at her startup, which is exactly what AI tools promise.

    “Because I needed to be quick and impressive, I took a shortcut and did not scan those files after the automated review,” she said. “When I did do it manually, I found so much wrong. When I used a third-party tool, I found more. And I learned my lesson.” 

    She and her son wound up restarting their whole project — hence the tears. “I handed it off like the copilot was an employee,” she said. “It isn’t.” 

    Rover is like many experienced programmers turning to AI for coding help. But such programmers are also finding themselves acting like AI babysitters — rewriting and fact-checking the code the AI spits out. 

    A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed said they spend extra time fixing AI-generated code, with the load of such verification falling most heavily on the shoulders of senior developers.

    These experienced coders have discovered issues with AI-generated code ranging from hallucinated package names to deleted data and security risks. Left unchecked, AI-generated code can leave a product far buggier than code written by humans.
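    Hallucinated package names are the easiest of these to check mechanically. As one possible review habit (not a practice described by the developers quoted here), a reviewer can confirm that an AI-suggested dependency actually exists on PyPI before adding it, using PyPI’s public JSON endpoint; the example package list below is hypothetical.

    ```python
    # Sanity-check AI-suggested dependencies against PyPI's JSON API
    # (https://pypi.org/pypi/<name>/json returns 404 for nonexistent packages).
    import requests

    def package_exists_on_pypi(name: str) -> bool:
        """Return True if `name` is a real package published on PyPI."""
        response = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return response.status_code == 200

    suggested_packages = ["requests", "definitely-not-a-real-helper-lib"]  # hypothetical AI output
    for pkg in suggested_packages:
        status = "ok" if package_exists_on_pypi(pkg) else "possibly hallucinated"
        print(f"{pkg}: {status}")
    ```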

    Working with AI-generated code has become such a problem that it’s given rise to a new corporate coding job known as “vibe code cleanup specialist.” 

    TechCrunch spoke to experienced coders about their time using AI-generated code and what they see as the future of vibe coding. Thoughts varied, but one thing remained certain: The technology still has a long way to go.

    “Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” Rover said. 

    Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren’t going to tell you. “It doesn’t make the kid less clever,” she continued. “It just means you can’t delegate [a task] like that completely.” 

    “You’re absolutely right!” 

    Feridoon Malekzadeh also compared vibe coding to a child.

    He’s worked in the industry for more than 20 years, holding various roles in product development, software, and design. He’s building his own startup and heavily using vibe-coding platform Lovable, he said. For fun, he also vibe codes apps like one that generates Gen Alpha slang for Boomers. 

    He likes that he’s able to work alone on projects, saving time and money, but agrees that vibe coding is not like hiring an intern or a junior coder. Instead, vibe coding is akin to “hiring your stubborn, insolent teenager to help you do something,” he told TechCrunch. 

    “You have to ask them 15 times to do something,” he said. “In the end, they do some of what you asked, some stuff you didn’t ask for, and they break a bunch of things along the way.” 

    Malekzadeh estimates he spends around 50% of his time writing requirements, 10% to 20% of his time on vibe coding, and 30% to 40% of his time on vibe fixing — remedying the bugs and “unnecessary script” created by AI-written code. 

    He also doesn’t think vibe coding is good at systems thinking — understanding how a change to one part of a complex system affects the whole. AI-generated code, he said, tends to solve more surface-level problems.

    “If you’re creating a feature that should be broadly available in your product, a good engineer would create that once and make it available everywhere that it’s needed,” Malekzadeh said. “Vibe coding will create something five different times, five different ways, if it’s needed in five different places. It leads to a lot of confusion, not only for the user, but for the model.”
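    A toy sketch of the point Malekzadeh is making: the duplicated version re-implements the same formatting rule slightly differently everywhere it’s needed, while the shared helper defines it once so any change happens in one place. The function and field names here are invented for the example.

    ```python
    # "Create it five different times, five different ways":
    def render_profile(user):
        return f"{user['first'].title()} {user['last'].title()}"

    def render_invoice(user):
        # Subtly different rule -- the kind of drift that confuses users and the model.
        return user["first"].capitalize() + " " + user["last"].capitalize()

    # "Create it once and make it available everywhere it's needed":
    def format_full_name(user) -> str:
        """Single source of truth for how a user's name is displayed."""
        return f"{user['first'].title()} {user['last'].title()}"

    def render_profile_dry(user):
        return format_full_name(user)

    def render_invoice_dry(user):
        return format_full_name(user)
    ```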

    Meanwhile, Rover finds that AI “runs into a wall” when data conflicts with what it was hard-coded to do. “It can offer misleading advice, leave out key elements that are vital, or insert itself into a thought pathway you’re developing,” she said. 

    She also found that rather than admit to making errors, it will manufacture results.

    She shared another example with TechCrunch, where she questioned the results an AI model initially gave her. The model started to give a detailed explanation pretending it used the data she uploaded. Only when she called it out did the AI model confess.

    “It freaked me out because it sounded like a toxic co-worker,” she said.

    On top of this, there are the security concerns.

    Austin Spires is the senior director of developer enablement at Fastly and has been coding since the early 2000s. 

    He’s found through his own experience — along with chatting with customers — that vibe code likes to build what is quick rather than what is “right.” This can introduce into the code the kinds of vulnerabilities that very new programmers tend to introduce, he said.

    “What often happens is the engineer needs to review the code, correct the agent, and tell the agent that they made a mistake,” Spires told TechCrunch. “This pattern is why we’ve seen the trope of ‘you’re absolutely right’ appear over social media.” 

    He’s referring to how AI models, like Anthropic Claude, tend to respond “you’re absolutely right” when called out on their mistakes.

    Mike Arrowsmith, the chief technology officer at the IT management software company NinjaOne, has been in software engineering and security for around 20 years. He said that vibe coding is creating a new generation of IT and security blind spots to which young startups in particular are susceptible.

    “Vibe coding often bypasses the rigorous review processes that are foundational to traditional coding and crucial to catching vulnerabilities,” he told TechCrunch.

    NinjaOne, he said, counters this by encouraging “safe vibe coding,” where approved AI tools have access controls, along with mandatory peer review and, of course, security scanning. 

    The new normal

    While nearly everyone we spoke to agrees that AI-generated code and vibe-coding platforms are useful in many situations — like mocking up ideas — they also agree that human review is essential before building a business on it.

    “That cocktail napkin is not a business model,” Rover said. “You have to balance the ease with insight.” 

    But for all the lamenting over its errors, vibe coding has changed the present and the future of the job.

    Rover said vibe coding helped her tremendously in crafting a better user interface. Malekzadeh simply said that, despite the time he spends fixing code, he still gets more done with AI coders than without them.

    “Every technology carries its own negativity, which is invented at the same time as technical progress,” Malekzadeh said, quoting the French theorist Paul Virilio, who spoke about inventing the shipwreck along with the ship.

    The pros far outweigh the cons.

    The Fastly survey found that senior developers were twice as likely to put AI-generated code into production compared to junior developers, saying that the technology helped them work faster. 

    Vibe coding is also part of Spires’ coding routine. He uses AI coding agents on several platforms for both front-end and back-end personal projects. He called the technology a mixed experience but said it’s good in helping with prototyping, building out boilerplate, or scaffolding out a test; it removes menial tasks so that engineers can focus on building, shipping, and scaling products. 

    It seems the extra hours spent combing through the vibe weeds will simply become a tolerated tax on using the innovation.

    Elvis Kimara, a young engineer, is learning that now. He just graduated with a master’s in AI and is building an AI-powered marketplace. 

    Like many coders, he said vibe coding has made his job harder, and he has often found it a joyless experience.

    “There’s no more dopamine from solving a problem by myself. The AI just figures it out,” he said. At one of his last jobs, he said, senior developers didn’t go out of their way to help young coders as much — some didn’t understand the new vibe-coding tools, while others delegated mentorship to the AI models themselves.

    But, he said, “the pros far outweigh the cons,” and he’s prepared to pay the innovation tax. 

    “We won’t just be writing code; we’ll be guiding AI systems, taking accountability when things break, and acting more like consultants to machines,” Kimara said of the new normal for which he’s preparing.  

    “Even as I grow into a senior role, I’ll keep using it,” he continued. “It’s been a real accelerator for me. I make sure I review every line of AI-generated code so I learn even faster from it.”