This tool strips away anti-AI protections from digital art

By Solega Team
July 10, 2025


To be clear, the researchers behind LightShed aren’t trying to steal artists’ work. They just don’t want people to get a false sense of security. “You will not be sure if companies have methods to delete these poisons but will never tell you,” says Hanna Foerster, a PhD student at the University of Cambridge and the lead author of a paper on the work. And if they do, it may be too late to fix the problem.

AI models work, in part, by implicitly creating boundaries between what they perceive as different categories of images. Glaze and Nightshade change enough pixels to push a given piece of art over this boundary without affecting the image’s quality, causing the model to see it as something it’s not. These almost imperceptible changes are called perturbations, and they mess up the AI model’s ability to understand the artwork.
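
In machine-learning terms, what this paragraph describes is an adversarial perturbation. As a rough illustration of the principle only (Glaze and Nightshade use their own, more sophisticated optimizations), here is the classic fast-gradient-sign method in PyTorch, which caps each pixel's change at a small epsilon while stepping the image toward the model's decision boundary:

```python
# Illustrative only: the classic fast-gradient-sign method (FGSM), not
# Glaze or Nightshade's actual algorithm. It shows how a nearly invisible
# per-pixel change can push an image across a model's decision boundary.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def perturb(image: torch.Tensor, true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Return `image` nudged toward misclassification.

    `image` is a (3, H, W) tensor in [0, 1]; `epsilon` caps the per-pixel
    change, which is what keeps the edit almost imperceptible.
    """
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))                      # add batch dim
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss,
    # i.e., away from the label the model currently gets right.
    poisoned = image + epsilon * image.grad.sign()
    return poisoned.clamp(0.0, 1.0).detach()
```

Glaze and Nightshade solve harder, targeted versions of this problem, but the small per-pixel budget is the same reason the changes are almost imperceptible.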

Glaze makes models misunderstand style (e.g., interpreting a photorealistic painting as a cartoon). Nightshade instead makes the model see the subject incorrectly (e.g., interpreting a cat in a drawing as a dog). Glaze is used to defend an artist’s individual style, whereas Nightshade is used to attack AI models that crawl the internet for art.

Foerster worked with a team of researchers from the Technical University of Darmstadt and the University of Texas at San Antonio to develop LightShed, which learns how to see where tools like Glaze and Nightshade splash this sort of digital poison onto art so that it can effectively clean it off. The group will present its findings at the USENIX Security Symposium, a leading global cybersecurity conference, in August.

The researchers trained LightShed by feeding it pieces of art with and without Nightshade, Glaze, and other similar programs applied. Foerster describes the process as teaching LightShed to reconstruct “just the poison on poisoned images.” Identifying a cutoff for how much poison will actually confuse an AI makes it easier to “wash” just the poison off. 
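
One way to read that training recipe is as residual prediction: given a poisoned image, learn to output the poison itself, then subtract it off. The sketch below frames the objective in PyTorch under that assumption; the tiny network, loss, and training step are illustrative stand-ins, not LightShed's actual architecture.

```python
# A sketch of the training idea described above, under the assumption that
# it can be framed as residual prediction. Everything here (the tiny CNN,
# the MSE loss) is an illustrative stand-in, not LightShed's architecture.
import torch
import torch.nn as nn

class PoisonEstimator(nn.Module):
    """Maps a poisoned image to an estimate of the poison it carries."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # same shape as input
        )

    def forward(self, poisoned: torch.Tensor) -> torch.Tensor:
        return self.net(poisoned)

def training_step(model: PoisonEstimator, clean: torch.Tensor,
                  poisoned: torch.Tensor, opt: torch.optim.Optimizer) -> float:
    """One step on a (clean, poisoned) image pair: regress the true residual."""
    target = poisoned - clean                      # "just the poison"
    loss = nn.functional.mse_loss(model(poisoned), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Cleaning is then subtraction: cleaned = poisoned - model(poisoned)
```

On this reading, the cutoff the researchers mention would correspond to a threshold on the estimated residual's strength: below it, the perturbation is too weak to confuse a model and can be left alone.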

LightShed is incredibly effective at this. While other researchers have found simple ways to subvert poisoning, LightShed appears to be more adaptable. It can even apply what it’s learned from one anti-AI tool—say, Nightshade—to others like Mist or MetaCloak without ever seeing them ahead of time. While it has some trouble performing against small doses of poison, those are less likely to kill the AI models’ abilities to understand the underlying art, making it a win-win for the AI—or a lose-lose for the artists using these tools.

Around 7.5 million people, many of them artists with small and medium-size followings and fewer resources, have downloaded Glaze to protect their art. Those using tools like Glaze see it as an important technical line of defense, especially when the state of regulation around AI training and copyright is still up in the air. The LightShed authors see their work as a warning that tools like Glaze are not permanent solutions. “It might need a few more rounds of trying to come up with better ideas for protection,” says Foerster.

The creators of Glaze and Nightshade seem to agree with that sentiment: the website for Nightshade warned that the tool wasn't future-proof before work on LightShed ever began. And Shawn Shan, the University of Chicago researcher who led development of both tools, still believes defenses like his have meaning even if there are ways around them.