news aggregator
The U.S. Postal Service plans to impose its first-ever fuel surcharge on packages (source paywalled; alternative source), adding an 8% fee starting in April as it struggles with rising fuel costs and ongoing financial pressure. The surcharge will not apply to letter mail and is currently expected to remain in place until January 2027. The Wall Street Journal reports: Other parcel carriers, including FedEx and United Parcel Service, have imposed fuel surcharges, as well as a basket of other surcharges and fees, for years. Both FedEx and UPS have dramatically raised their fuel surcharges in recent weeks as the price of oil has increased amid the turmoil in the Middle East. [...] The post office has been trying to increase the volume of packages it delivers. It previously differentiated itself from commercial carriers by saying that it doesn't apply residential, Saturday delivery or fuel or remote-delivery surcharges.
Read more of this story at Slashdot.
New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."
The applicant's lawyer was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here."
The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.
Apple reportedly has full access to customize Google's Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports: The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power.
Apple is also able to edit Gemini as needed to make sure that it responds to queries in a way that Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which doesn't always meet Apple's needs.
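The distillation process described above — querying a large teacher model, then training a smaller student to reproduce its outputs — can be sketched generically. This is not Apple's or Google's actual pipeline; it is a minimal illustration of the standard technique of matching a student's temperature-softened output distribution to a teacher's, and the `distill_loss` helper and all logits below are hypothetical:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) over softened distributions.

    The student is trained to minimize this, so it learns to mimic
    the teacher's full output distribution, not just its top answer.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits are close to the teacher's incurs a smaller
# distillation loss than one whose logits disagree.
teacher = [4.0, 1.0, 0.5]
close_student = [3.5, 1.2, 0.6]
far_student = [0.0, 2.0, 1.0]
assert distill_loss(teacher, close_student) < distill_loss(teacher, far_student)
```

In practice the loss is minimized by gradient descent over many teacher queries, and (per the report) the teacher's reasoning traces can also be included as training targets.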
A proof-of-concept attack on Context Hub suggests there's not much content sanitization
A new service that helps coding agents stay up to date on their API calls could be dialing in a massive supply chain vulnerability.…
They cleverly mimic most traits of a real phone
Smartphones have fast become the basis of our digital identities, securing payment systems and bank accounts. Now virtual devices that pretend to be real handsets have become a key tool for financial scammers, according to one company. …
Longtime Slashdot reader JackSpratts writes: The Supreme Court unanimously said on Wednesday that a major internet provider could not be held liable for the piracy of thousands of songs online in a closely watched copyright clash. Music labels and publishers sued Cox Communications in 2018, saying the company had failed to cut off the internet connections of subscribers who had been repeatedly flagged for illegally downloading and distributing copyrighted music. At issue for the justices was whether providers like Cox could be held legally responsible and required to pay steep damages -- a billion dollars or more in Cox's case -- if they knew that customers were pirating music but did not take sufficient steps to terminate their internet access.
In its opinion released (PDF) on Wednesday, the court said a company was not liable for "merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights." Writing for the court, Justice Clarence Thomas said a provider like Cox was liable "only if it intended that the provided service be used for infringement" and if it, for instance, "actively encourages infringement." Justice Sonia Sotomayor, joined by Justice Ketanji Brown Jackson, wrote separately to say that she agreed with the outcome but for different reasons. [...] Cox called the court's unanimous decision a "decisive victory" for the industry and for Americans who "depend on reliable internet service."
"This opinion affirms that internet service providers are not copyright police and should not be held liable for the actions of their customers," the company said.
Ex-CISA boss also says no reason to panic about AI and security
RSAC 2026 "Everybody feels massive FOMO if they don't get to RSAC," Jen Easterly says.…
An anonymous reader quotes a report from CNN: Stephen Colbert already has a new job lined up for when he ends his 11-year run as host of "The Late Show" in May -- the comedian and well-known J.R.R. Tolkien superfan announced he will co-write and develop a new film in the blockbuster "Lord of the Rings" franchise. Colbert joined "LOTR" director Peter Jackson to reveal the news in a video announcement.
"I'm pretty happy about it. You know what the books mean to me and what your films mean to me," the late-night host told Jackson, who led the Oscar-winning team behind the nearly $6 billion original "Lord of the Rings" and "The Hobbit" trilogies. [...] Colbert said the next installment will be based on parts of Tolkien's "The Fellowship of the Ring" book that didn't make it into the original movies. "The thing I found myself reading over and over again were the six chapters early on in (The Fellowship of the Ring) that y'all never developed into the first movie back in the day ... and I thought, 'Oh, wait, maybe that could be its own story that could fit into the larger story.'" he said.
Colbert said he discussed the idea with his son, screenwriter Peter McGee, to work out the framing of the story. "It took me a few years to scrape my courage into a pile and give you a call, but about two years ago, I did. You liked it enough to talk to me about it," Colbert told Jackson. Colbert said he, McGee and Jackson have been working alongside screenwriter Philippa Boyens on the development of the story. "I could not be happier to say that they loved it, and so that's what we're going to be working on," Colbert said. Colbert's LOTR movie, tentatively titled "Shadow of the Past," will be the second of two new upcoming films in the franchise from Warner Bros. Discovery. The first, "The Hunt for Gollum," is due to be released in 2027.
Four former NSA bosses walk onto the stage at RSAC…
RSAC 2026 There's a theoretical red line with cyber warfare. Cross it, and the US will respond with a physical attack like missile strikes. And that line "is whatever the President says it is," according to former NSA boss retired General Paul Nakasone.…
Forget the metaverse
Meta has begun laying off employees as it focuses more of its cash on building out datacenters, training its own large language models, and recruiting talent for AI.…
A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.
The jury of seven women and five men will deliberate further to decide what punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products. The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.
Fusion Agentic Applications promise autonomous enterprise decisions. Gartner urges caution
Oracle says it's building a suite of AI agents into its cloud-based enterprise applications, claiming they can make and execute decisions autonomously within business processes. But analysts are urging caution given unresolved questions around data integration and liability.…
Plus one actual physicist
Donald Trump has named the first members of his President's Council of Advisors on Science and Technology (PCAST), largely comprising Trump allies in the tech industry and one actual scientist.…
Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.
Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes against children on its apps -- including child sexual abuse materials (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.
Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough." Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
AWS, Google, Broadcom, or Netscape?
OpenAI on Wednesday announced the death of its controversial Sora video creation tool, just two days after publishing a guide on how to use it well.…
An anonymous reader quotes a report from Vanity Fair: Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you're even slightly interested in what's going on with AI, it's required viewing: The film touches on all aspects of the technology, from how it's currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user -- the point at which machines become autonomous, like Skynet in the Terminator franchise. [...]
[Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google's AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. "Have you seen that guy speak? He's like a lizard man," Roher says regarding Zuckerberg. "Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while," adds [codirector Charlie Tyrell]. Altman, arguably AI's greatest mascot, is prominently featured in the documentary. But Roher wasn't buying it. "That guy doesn't know what genuine means," he says. "Every single thing he says and does is calculated. He is a machine. He's like AI, and it's in the service of growth, growth, growth. You can be disingenuous and media savvy." [...]
How, exactly, is Roher an apocaloptimist? "We are preaching a worldview," he says, "in a world that's asking you to either see this as the apocalypse or embrace it with this unbridled optimism." He and his film are taking a stance that rests between those two poles. "It's both at the same time. We have to try and embrace a middle ground so this technology doesn't consume us, so we can stay in the driver's seat," says Roher -- meaning, it's up to all of us to chart the course. "You have to speak up," says Tyrell. "Things like AI should disclose themselves. If your doctor's office is using an AI bot, you have to say, I don't like that." The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. [...]
Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge valuations of these companies -- and Roher gets going yet again. "This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we've seen," he says. "I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It's fucking crazy." [...] "These people are prospectors, and they are going up to the Yukon because it's the gold rush."
In other browser news, Opera now caters to penguinista gamers
Firefox 149 is here, and although we've already talked about one of the big new features on the way, the release version has some others that will be very welcome.…
Longtime Slashdot reader cusco writes: A private company in China has developed hypersonic missiles that cost the same as a Tesla Model X. This missile, the YKJ-1000, is being marketed for sale at a reported price of $99,000, and it's in mass production now after successful tests. That is far below what countries will spend to target and shoot down the missile if it's heading their way.
Besides the low cost, they can be launched from anywhere. The launcher looks like any one of the tens of millions of shipping containers floating around on the ocean, or sitting at ports, or riding along on trucks, or sitting on industrial lots. The launchers for these missiles are hiding in plain sight, in other words. Whatever tactical advantages great-power countries have in ballistics are going away, fast; 1,300 kilometers is 800 miles, and so the range covers anything within 800 miles of wherever someone can send a shipping container. To keep the price down, the missile reportedly uses civilian-grade materials and widely available commercial parts, along with simpler manufacturing methods like die-casting. There are also broader savings from tapping mature supply chains and using China's large-scale civilian industrial base.
Effort includes permitting and planning
Microsoft is working with Nvidia on nuclear power. Not to build it, but to offer AI-driven tools to deal with all the red tape, help with the design work, and optimize operations for nuclear projects.…
Exactly how will astronauts get to and from that moonbase?
Opinion NASA's Ignition presentation was heavy on space hardware, but light on details. Not least of which was how astronauts are supposed to get from Earth to its moonbase and back.…