Slashdot

News for nerds, stuff that matters

Curl Creator Mulls Nixing Bug Bounty Awards To Stop AI Slop

Wed, 2025-07-16 10:00
Daniel Stenberg, creator of the curl utility, is considering ending its bug bounty program due to a surge in low-quality, AI-generated reports that are overwhelming the small volunteer team. Despite attempts to discourage AI-assisted submissions, these reports now make up about 20% of all entries in 2025, while genuine vulnerabilities have dropped to just 5%. The Register reports: "The general trend so far in 2025 has been way more AI slop than ever before (about 20 percent of all submissions) as we have averaged about two security report submissions per week," he wrote in a blog post on Monday. "In early July, about 5 percent of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years." The situation has prompted Stenberg to reevaluate whether to continue curl's bug bounty program, which he says has paid out more than $90,000 for 81 awards since its inception in 2019. He said he expects to spend the rest of the year mulling possible responses to the rising tide of AI refuse. Presently, the curl bug bounty program -- outsourced to HackerOne -- requires the bug reporter to disclose the use of generative AI. It does not entirely ban AI-assisted submissions, but does discourage them. "You should check and double-check all facts and claims any AI told you before you pass on such reports to us," the program's policy explains. "You are normally much better off avoiding AI." Two bug submissions per week on average may not seem like a lot, but the curl security team consists of only seven members. As Stenberg explains, three or four team members review each submission, a process that takes anywhere from 30 minutes to three hours. "I personally spend an insane amount of time on curl already, wasting three hours still leaves time for other things," Stenberg lamented. "My fellows however are not full time on curl. They might only have three hours per week for curl. Not to mention the emotional toll it takes to deal with these mind-numbing stupidities." [...] Stenberg says it's not clear what HackerOne should do to reduce reckless use of AI, but insists something needs to be done. His post ponders charging a fee to submit a report or dropping the bug bounty award, while also expressing reservations about both potential remedies. "As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood," he concludes.

Read more of this story at Slashdot.

Categories: Linux fréttir

AI Creeps Into the Risk Register For America's Biggest Firms

Wed, 2025-07-16 07:00
America's largest corporations are increasingly listing AI among the major risks they must disclose in formal financial filings, despite bullish statements in public about the potential business opportunities it offers. The Register: According to a report from research firm The Autonomy Institute, three-quarters of companies listed in the S&P 500 stock market index have updated their official risk disclosures to detail or expand upon mentions of AI-related risk factors during the past year. The organization drew its findings from an analysis of Form 10-K filings that the top 500 companies submitted to the US Securities and Exchange Commission (SEC), in which they are required to outline any material risks that could negatively affect their business and its financial health.

Read more of this story at Slashdot.

Categories: Linux fréttir

Music Insiders Call for Warning Labels After AI-Generated Band Gets 1 Million Plays On Spotify

Wed, 2025-07-16 03:30
Bruce66423 shares a report from The Guardian: They went viral, amassing more than 1m streams on Spotify in a matter of weeks, but it later emerged that hot new band the Velvet Sundown were AI-generated -- right down to their music, promotional images and backstory. The episode has triggered a debate about authenticity, with music industry insiders saying streaming sites should be legally obliged to tag music created by AI-generated acts so consumers can make informed decisions about what they are listening to. [...] Several figures told the Guardian that the present situation, where streaming sites, including Spotify, are under no legal obligation to identify AI-generated music, left consumers unaware of the origins of the songs they're listening to. Roberto Neri, the chief executive of the Ivors Academy, said: "AI-generated bands like Velvet Sundown that are reaching big audiences without involving human creators raise serious concerns around transparency, authorship and consent." Neri added that if "used ethically," AI has the potential to enhance songwriting, but said at present his organization was concerned with what he called "deeply troubling issues" with the use of AI in music. Sophie Jones, the chief strategy officer at the music trade body the British Phonographic Industry (BPI), backed calls for clear labelling. "We believe that AI should be used to serve human creativity, not supplant it," said Jones. "That's why we're calling on the UK government to protect copyright and introduce new transparency obligations for AI companies so that music rights can be licensed and enforced, as well as calling for the clear labelling of content solely generated by AI." Liz Pelly, the author of Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, said independent artists could be exploited by people behind AI bands who might create tracks that are trained using their music. She referred to the 2023 case of a song that was uploaded to TikTok, Spotify and YouTube, which used AI-generated vocals purporting to be by the Weeknd and Drake. Universal Music Group said the song was "infringing content created with generative AI" and it was removed shortly after it was uploaded. Aurelien Herault, the chief innovation officer at the music streaming service Deezer, said the company uses detection software that identifies AI-generated tracks and tags them. He said: "For the moment, I think platforms need to be transparent and try to inform users. For a period of time, what I call the 'naturalization of AI,' we need to inform users when it's used or not." Herault did not rule out removing tagging in future if AI-generated music becomes more popular and musicians begin to use it like an "instrument." At present, Spotify does not label music as AI-generated and has previously been criticized for populating some playlists with music by "ghost artists" -- fake acts that create stock music. Bruce66423 comments: "Artists demand 'a warning' on such material. Why? If it is what the people want..."

Read more of this story at Slashdot.

Categories: Linux fréttir
