Google Sites lure leads to bogus root certificate
Imagine getting asked to do something by a person in authority. An unknown malware slinger targeting open source software developers via Slack impersonated a real Linux Foundation official and used pages hosted on Google.com to steal developers' credentials and take over their systems.…
Booking.com says hackers accessed customer reservation data in a breach that may have exposed booking details, names, email addresses, phone numbers, addresses, and messages shared with accommodations. PCMag reports: On Sunday, users reported receiving emails from Booking.com, warning them that "unauthorized third parties may have been able to access certain booking information associated with your reservation." The email suggests the hackers have already exploited customer information.
"We recently noticed suspicious activity affecting a number of reservations, and we immediately took action to contain the issue," Booking.com wrote. "Based on the findings of our investigation to date, accessed information could include booking details and name(s), emails, addresses, phone numbers associated with the booking, and anything that you may have shared with the accommodation."
Amsterdam-based Booking.com has now generated new PINs for customer reservations to prevent hackers from accessing them. Still, the incident risks exposing affected customers to potential phishing scams. The Australian Broadcasting Corporation and several Reddit users say they received scam messages from accounts posing as Booking.com.
Read more of this story at Slashdot.
GG noob, who cleared you to land?
The Federal Aviation Administration continues to face an air traffic controller shortage, and it's hoping that a new demographic of potential applicants can fill the ranks: Video gamers. …
According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, "so that employees might feel more connected to the founder through interactions with it." The Verge reports: Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. [...] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta's other AI projects and participating in technical reviews.
Advisers say fewer staff could mean slower answers and tougher renewals
Oracle customers have been warned to watch for changes in support and pricing as Larry Ellison’s company makes huge datacenter spending commitments to support its AI ambitions.…
Maine is on track to become the first U.S. state to impose a temporary statewide ban on new data center construction. "Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027," reports CNBC. "The measure, which is expected to get final passage in the next few days, also creates a council to suggest potential guardrails for data centers to ensure they don't lead to higher energy prices or other complications for Maine residents." From the report: Maine's bill has a few steps to go through before becoming law, notably whether Gov. Janet Mills will exercise her veto power. Mills asked lawmakers to include an exemption for several areas of the state where data center construction could continue. However, an amendment to do so was struck down in the House, 29 to 115. Complicating Mills' decision is her campaign to become Maine's next senator. Mills is facing off against Graham Platner, an oyster farmer, in a high-profile Democratic primary. Platner is leading Mills in most recent polls by double digits.
Dev reports suggest long sessions now burn through usage much faster
Anthropic last month reduced the TTL (time to live) for the Claude Code prompt cache from one hour to five minutes for many requests, but said this should not increase costs, despite users reporting faster-depleting quotas.…
AI gubbins still there, just tucked under 'Writing Tools'
Copilot is on its way out of Notepad, but a return to the basic text editor is not on the cards.…
An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.
During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."
In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."
Travel giant says names, contact details, dates, and hotel messages potentially exposed
Booking.com is warning customers that their reservation details may have been exposed to unknown attackers, in the latest reminder that the travel giant still can't quite keep a lid on the data flowing through its platform.…
Controlled Feature Rollouts headed for the trash among other changes
Microsoft is giving the Windows Insider program another makeover in the hope of making it less baffling.…
Department putting systems in place to manage 'restrictive licensing practices'
A federal spending watchdog has found the Department of Veterans Affairs (VA) faced "challenges" in understanding the correct number of licenses it should hold for the top five vendors in its $985 million annual software expenditure.…
MoD plans rapid procurement of Cambridge Aerospace's Skyhammer system at home and abroad
Britain is set to buy interceptors from a homegrown startup to counter Iranian Shahed-style attack drones, equipping both its own armed forces and allies in the Persian Gulf region.…
Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google:
"AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert."
"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."
The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.
Names, addresses, dates of birth, and bank details accessed, though not passwords
Basic-Fit, Europe's largest gym chain, has confirmed data including the bank details of around a million customers was stolen from its systems.…
Gang claims it accessed Snowflake metrics via third-party tool
ShinyHunters is back, this time pinning Rockstar Games to its leak site and claiming it didn't so much hack its way in as walk through a door someone else left wide open.…
Linux Foundation Europe boss predicts EU will run as fast as it can from US tech companies
Opinion You want to know who's even sicker of President Donald Trump than American liberals? European governments and companies that are realizing that putting all their eggs in one US basket was a stupid move.…
Benchmarking contract lays groundwork for renegotiating £774M software agreement
NHS England is spending £46,000 on "benchmarking" as it gears up for what looks like the next round of negotiations behind one of the UK public sector's biggest software deals.…
Not viral as in cat videos. Viral as in we need a vaccine
Opinion For a sector at the heart of US economic growth, AI claims and counter-claims remain curiously hard to reconcile. Models are improving at the speed of light, AI firms claim, yet the message from the codeface remains that benefits are still more than balanced by the downsides.…