news aggregator
GG noob, who cleared you to land?
The Federal Aviation Administration continues to face an air traffic controller shortage, and it's hoping that a new demographic of potential applicants can fill the ranks: Video gamers. …
According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, "so that employees might feel more connected to the founder through interactions with it." The Verge reports: Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. [...] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta's other AI projects and participating in technical reviews.
Read more of this story at Slashdot.
Advisers say fewer staff could mean slower answers and tougher renewals
Oracle customers have been warned to watch for changes in support and pricing as Larry Ellison’s company makes huge datacenter spending commitments to support its AI ambitions.…
Maine is on track to become the first U.S. state to impose a temporary statewide ban on new data center construction. "Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027," reports CNBC. "The measure, which is expected to get final passage in the next few days, also creates a council to suggest potential guardrails for data centers to ensure they don't lead to higher energy prices or other complications for Maine residents." From the report: Maine's bill has a few steps to go through before becoming law, notably whether Gov. Janet Mills will exercise her veto power. Mills asked lawmakers to include an exemption for several areas of the state where data center construction could continue. However, an amendment to do so was struck down in the House, 29 to 115. Complicating Mills' decision is her campaign to become Maine's next senator. Mills is facing off against Graham Platner, an oyster farmer, in a high-profile Democratic primary. Platner is leading Mills in most recent polls by double digits.
Dev reports suggest long sessions now burn through usage much faster
Anthropic last month reduced the TTL (time to live) for the Claude Code prompt cache from one hour to five minutes for many requests, but said this should not increase costs despite users reporting faster-depleting quotas.…
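To illustrate why a shorter cache TTL can feel like faster quota burn, here is a minimal, generic TTL-cache sketch (not Anthropic's implementation; the class and key names are illustrative). Once a session idles past the TTL window, the next request misses the cache and the prefix must be reprocessed at full cost:

```python
import time

class TTLCache:
    """Minimal TTL cache: entries expire a fixed number of seconds after insertion."""

    def __init__(self, ttl):
        self.ttl = ttl      # lifetime of each entry, in seconds
        self.store = {}     # key -> (value, expiry timestamp)

    def put(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            # Expired: the caller must recompute, paying full cost again.
            del self.store[key]
            return None
        return value
```

With a one-hour TTL, a developer pausing between prompts almost always hits the cache; with a five-minute TTL, any gap longer than five minutes forces a recompute, which is consistent with the reports of long sessions burning usage faster.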
AI gubbins still there, just tucked under 'Writing Tools'
Copilot is on its way out of Notepad, but a return to the basic text editor is not on the cards.…
An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.
During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."
In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."
Travel giant says names, contact details, dates, and hotel messages potentially exposed
Booking.com is warning customers that their reservation details may have been exposed to unknown attackers, in the latest reminder that the travel giant still can't quite keep a lid on the data flowing through its platform.…
Controlled Feature Rollouts headed for the trash among other changes
Microsoft is giving the Windows Insider program another makeover in the hope of making it less baffling.…
Department putting systems in place to manage 'restrictive licensing practices'
A federal spending watchdog has found the Department of Veterans Affairs (VA) faced "challenges" in understanding the correct number of licenses it should hold for the top five vendors in its $985 million annual software expenditure.…
MoD plans rapid procurement of Cambridge Aerospace's Skyhammer system at home and abroad
Britain is set to buy interceptors from a homegrown startup to counter Iranian Shahed-style attack drones, equipping both its own armed forces and allies in the Persian Gulf region.…
Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google:
"AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.
"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."
The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.
Names, addresses, dates of birth, and bank details accessed, though not passwords
Basic-Fit, Europe's largest gym chain, has confirmed data including the bank details of around a million customers was stolen from its systems.…
Gang claims it accessed Snowflake metrics via third-party tool
ShinyHunters is back, this time pinning Rockstar Games to its leak site and claiming it didn't so much hack its way in as walk through a door someone else left wide open.…
Linux Foundation Europe boss predicts EU will run as fast as it can from US tech companies
Opinion You want to know who's even sicker of President Donald Trump than American liberals? European governments and companies who are realizing that putting all their eggs in one US basket was a stupid move.…
Benchmarking contract lays groundwork for renegotiating £774M software agreement
NHS England is spending £46,000 on "benchmarking" as it gears up for what looks like the next round of negotiations behind one of the UK public sector's biggest software deals.…
Not viral as in cat videos. Viral as in we need a vaccine
Opinion For a sector at the heart of US economic growth, AI claims and counter-claims remain curiously hard to reconcile. Models are improving at the speed of light, AI firms claim, yet the message from the codeface remains that benefits are still more than balanced by the downsides.…
Après ça, le déluge, as plans call for move away from plenty more American software and hardware
France’s Interministerial Directorate for Digital Affairs (DINUM) will drop Windows desktops and adopt Linux instead.…
Anthropic recently "hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world" for a two-day summit, reports the Washington Post:
Anthropic staff sought advice on how to steer Claude's moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a "child of God."
"They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. "We've got to build in ethical thinking into the machine so it's able to adapt dynamically." Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations...
Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude's popularity with programmers, businesses, government agencies and the military.... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character...
Some Anthropic staff at the meeting "really don't want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty," the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional "about how this has all gone so far [and] how they can imagine this going," the participant said.
Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.
"Anthropic's March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University."