Slashdot
YouTube's Likeness Detection Has Arrived To Help Stop AI Doppelgangers
An anonymous reader quotes a report from Ars Technica: AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators. [...] The likeness detection tool, which is similar to the site's copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.
Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing "Content detection" menu. In YouTube's demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It's unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so-stealable faces, but rules are rules.
After a creator signs up, YouTube will flag videos from other channels that appear to contain their face. Because YouTube's algorithm can't know for sure what is and is not an AI video, some of the face-match results may be false positives from channels that have used a short clip under fair use guidelines. If creators do spot an AI fake, they can add some details and submit a report in a few minutes. If the video includes content copied from the creator's channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request. However, just because a person's likeness appears in an AI video does not necessarily mean YouTube will remove it.
US Investigates Waymo Robotaxis Over Safety Around School Buses
U.S. regulators have opened a new investigation into about 2,000 Waymo self-driving cars after reports that one of the company's robotaxis illegally passed a stopped school bus with flashing lights and children disembarking.
Waymo says it's "already developed and implemented improvements related to stopping for school buses and will land additional software updates in our next software release." The company added: "Driving safely around children has always been one of Waymo's highest priorities. ... [The Waymo vehicle] approached the school bus from an angle where the flashing lights and stop sign were not visible and drove slowly around the front of the bus before driving past it, keeping a safe distance from children."

Reuters reports: NHTSA opened the investigation after a recent media report aired video of an incident in Georgia in which a Waymo did not remain stationary when approaching a school bus with its red lights flashing and stop arm deployed.
The report said the Waymo vehicle initially stopped then maneuvered around the bus, passing the extended stop arm while students were disembarking.
Waymo's automated driving system surpassed 100 million miles of driving in July and is logging 2 million miles per week, the agency said. "Based on NHTSA's engagement with Waymo on this incident and the accumulation of operational miles, the likelihood of other prior similar incidents is high," the agency said. NHTSA said the vehicle involved was equipped with Waymo's fifth-generation Automated Driving System and was operating without a human safety driver at the time of the incident.
ISP Deceived Customers About Fiber Internet, German Court Finds
Germany's Koblenz Regional Court has banned the internet service provider 1&1 from marketing its fiber-to-the-curb service as "fiber optic DSL." The court found that the company misled customers because its network uses copper cables for the final leg of connections, which can extend up to a mile from the distribution box to subscribers' homes.
Customers who visited the ISP's website and checked connection availability received a notification stating that a "1&1 fiber optic DSL connection" was available, even though the fiber optic cables terminate at street-level distribution boxes or building service rooms. The company pairs the copper lines with vectoring technology to boost DSL speeds to 100 megabits per second. The Federation of German Consumer Organizations filed the lawsuit, and its chairperson, Ramona Pop, said that anyone who promises fiber optics but delivers only DSL is deceiving customers.
JetBrains Survey Declares PHP Declining, Then Says It Isn't
JetBrains released its annual State of the Developer Ecosystem survey in late October, drawing more than 24,000 responses from programmers worldwide. The survey declared that PHP and Ruby are in "long term decline" based on usage trends tracked over five years. Shortly after publication, JetBrains posted a separate statement asserting that "PHP remains a stable, professional, and evolving ecosystem." The company offered no explanation for the apparent contradiction, The Register reports.
The survey's methodology involves weighting responses to account for bias toward JetBrains users and for regional distribution factors. The company acknowledges that some bias likely remains, since its own customers are more inclined to respond. The survey also found that 85% of developers now use AI coding tools.
TikTok's New Policies Remove Promise To Notify Users Before Government Data Disclosure
TikTok changed its policies on sharing user data with governments earlier this year, as the company negotiated with the Trump Administration to continue operating in the United States. The company added language allowing data sharing with "regulatory authorities, where relevant" beyond law enforcement. Until April 25, 2025, TikTok's website stated that the company would notify users before disclosing their data to law enforcement. The policy now says TikTok will inform users only where required by law, and the notification timing changed from before disclosure to if disclosure occurs, a shift that prevents users from challenging subpoenas before their data is handed over. The company also softened its language from stating that it "rejects data requests from law enforcement authorities" to saying it "may reject" such requests. TikTok declined to answer repeated questions from Forbes about whether it has shared or is sharing private user information with the Department of Homeland Security or Immigration and Customs Enforcement.

