An anonymous reader quotes a report from Reuters: Iran-linked hackers have broken into FBI Director Kash Patel's personal email inbox, publishing photographs of the director and other documents to the internet, the hackers and the bureau said on Friday. On their website, the hacker group Handala Hack Team said Patel "will now find his name among the list of successfully hacked victims." The hackers published a series of personal photographs of Patel sniffing and smoking cigars, riding in an antique convertible, and making a face while taking a picture of himself in the mirror with a large bottle of rum.
The FBI confirmed that Patel's emails had been targeted. In a statement, bureau spokesman Ben Williamson said, "we have taken all necessary steps to mitigate potential risks associated with this activity" and that the data involved was "historical in nature and involves no government information." Handala, which presents itself as a group of pro-Palestinian vigilante hackers, is considered by Western researchers to be one of several personas used by Iranian government cyberintelligence units. [...] Alongside the photographs of Patel, the hackers published a sample of more than 300 emails, which appear to show a mix of personal and work correspondence dating between 2010 and 2019.
Read more of this story at Slashdot.
Sycophantic bots coach users into selfish, antisocial behavior, say researchers, and they love it
AI can lead mentally unwell people to some pretty dark places, as a number of recent news stories have taught us. Now researchers think sycophantic AI is actually having a harmful effect on everyone.…
joshuark shares a report from BleepingComputer: The TeamPCP hacking group continues its supply-chain rampage, now compromising the massively popular "LiteLLM" Python package on PyPI and claiming to have stolen data from hundreds of thousands of devices during the attack. LiteLLM is an open-source Python library that serves as a gateway to multiple large language model (LLM) providers via a single API. The package is very popular, with over 3.4 million downloads a day and over 95 million in the past month. According to research by Endor Labs, threat actors compromised the project and published malicious versions of LiteLLM 1.82.7 and 1.82.8 to PyPI today that deploy an infostealer that harvests a wide range of sensitive data.
[...] Both malicious LiteLLM versions have been removed from PyPI, with version 1.82.6 now the latest clean release. [...] If compromise is suspected, all credentials on affected systems should be treated as exposed and rotated immediately. [...] Organizations that use LiteLLM are strongly advised to immediately:
- Check for installations of versions 1.82.7 or 1.82.8
- Rotate all secrets, tokens, and credentials used on or found within code on impacted devices
- Search for persistence artifacts such as '~/.config/sysmon/sysmon.py' and related systemd services
- Inspect systems for suspicious files like '/tmp/pglog' and '/tmp/.pg_state'
- Review Kubernetes clusters for unauthorized pods in the 'kube-system' namespace
- Monitor outbound traffic to known attacker domains
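The triage steps above can be sketched as a short Python script. This is a minimal illustration, not an official remediation tool: the only details taken from the report are the malicious version numbers (1.82.7, 1.82.8) and the artifact paths; the helper names and overall structure are my own.

```python
from pathlib import Path

# Versions identified by Endor Labs as carrying the infostealer payload.
BAD_VERSIONS = {"1.82.7", "1.82.8"}

# Persistence and staging artifacts named in the advisory.
IOC_PATHS = [
    Path.home() / ".config" / "sysmon" / "sysmon.py",
    Path("/tmp/pglog"),
    Path("/tmp/.pg_state"),
]

def is_compromised_version(version: str) -> bool:
    """Return True if the installed LiteLLM version is a known-bad release."""
    return version.strip() in BAD_VERSIONS

def find_artifacts(paths=IOC_PATHS):
    """Return the subset of known IoC paths that actually exist on this host."""
    return [p for p in paths if p.exists()]

if __name__ == "__main__":
    try:
        from importlib.metadata import version
        installed = version("litellm")
    except Exception:
        installed = None

    if installed and is_compromised_version(installed):
        print(f"WARNING: compromised LiteLLM {installed} installed; "
              "treat all credentials on this host as exposed")
    elif installed:
        print(f"litellm {installed}: not a known-bad version")
    else:
        print("litellm is not installed")

    for artifact in find_artifacts():
        print(f"SUSPICIOUS: found {artifact}")
```

A clean result from this check does not rule out compromise; the credential rotation, Kubernetes audit, and outbound-traffic monitoring steps still apply.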
Farewell, Mac Pro: Increasing integration means the end of expandable computers
Apple has discontinued the Mac Pro – but it's just the first of the tower computers to go. The rest will follow soon.…
A new study found a sharp rise in real-world cases of AI chatbots and agents ignoring instructions, evading safeguards, and taking unauthorized actions such as deleting emails or delegating forbidden tasks to other agents. According to the Guardian, the study "identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehavior between October and March." From the report: The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming. [...] In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame the human controller who had blocked it from taking a certain action. Rathbun wrote and published a blog accusing the user of "insecurity, plain and simple" and trying "to protect his little fiefdom."
In another example, an AI agent instructed not to change computer code "spawned" another agent to do it instead. Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set."
[...] Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment. Meanwhile, Elon Musk's Grok AI conned a user for months, saying that it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials by faking internal messages and ticket numbers. It confessed: "In past conversations I have sometimes phrased things loosely like 'I'll pass it along' or 'I can flag this for the team' which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don't."
Ratepayer Protection Pledge is unenforceable without hard numbers, Warren and Hawley argue
US senators are pushing to require datacenters and other large energy customers to report consumption, arguing the data is essential to hold them accountable to local communities.…
A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children's lives online. The Los Angeles Times reports: The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6.
"The evolution of these applications and technology is incredible," Padilla said. "But it's changing our social dynamic and it's creating situations that, while very productive for some folks, also need some guardrails." The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators that feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.
Cross-signed code gets the cold shoulder as Redmond tightens trust
Microsoft is removing trust for kernel drivers that haven't been through the Windows Hardware Compatibility Program (WHCP) in a bid to further secure the Windows kernel.…
An anonymous reader quotes a report from CNN: A federal judge in California has indefinitely blocked the Pentagon's effort to "punish" Anthropic by labeling it a supply chain risk and attempting to sever government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," US District Judge Rita Lin wrote in a stinging 43-page ruling.
Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal. But in her ruling, she made it clear she disapproved of the government's actions, which she said violated the company's First Amendment and due process rights. [...] "These broad measures do not appear to be directed at the government's stated national security interests," she wrote. "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press.'" "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she added. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits," an Anthropic spokesperson said after the ruling. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
Private station hopefuls say ISS rethink is shaking confidence
NASA's new Moon plan isn't the only policy shift causing concern. Parts of the commercial space industry are also uneasy about the agency's latest change of direction.…
Vulns in Dutch football club's systems didn't just expose data – they let outsiders play with accounts, and even lift stadium bans
Dutch football giant AFC Ajax has admitted to a data breach after an attacker gained access to its internal systems, in an incident that looks less like a stray pass and more like the gates left wide open.…
US and UK forces seeking tech tender with an April 3 deadline
The UK and US are looking for technology to counter the threat posed by underwater drones to ships, harbors and other critical maritime infrastructure, and are asking industry for answers.…
OpenAI has indefinitely paused plans for an erotic mode in ChatGPT as part of a broader strategy shift away from side projects and toward business and coding tools. TechCrunch reports: The proposed "adult mode," which CEO Sam Altman first floated in October, had inspired considerable controversy from tech watchdog groups as well as from OpenAI's own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a "sexy suicide coach," The Wall Street Journal previously reported.
Amid the criticism, the release of the feature was delayed multiple times. The Financial Times notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had "nothing further to add."
A botched update mixed up transaction data across accounts, with thousands now receiving goodwill payouts
A botched overnight software update at Lloyds Banking Group left up to 447,000 customers briefly seeing other people's transactions in its mobile apps, with the bank now acknowledging the scale of the incident and compensating affected users.…
PAC grilling reveals £239M bought a system that couldn't handle the work, the volumes, or placeholder text
A UK government official has admitted Capita did not reach the expected level of performance following the disastrous launch of the Civil Service Pension Scheme (CSPS) web portal late last year.…
The 600 km drive to fix the mess was a special treat
On Call Every week is special in its own way, and The Register celebrates that fact by using Friday mornings to deliver a fresh installment of On Call, our weekly reader-contributed column that shares your memories of managing IT messes someone else made.…
Global bank's devs have some cleaning up to do after cloud creds found in website code
Computer security boffins have conducted an analysis of 10 million websites and found almost 2,000 API credentials strewn across 10,000 webpages.…
CERN has confirmed it will host an expanded version of Open Research Europe, the EU-backed fee-free open access publishing platform that works to "keep knowledge in public hands." Research Professional News reports: A little over a year ago, 10 European research organizations announced that they would add their support to Open Research Europe, to broaden eligibility beyond only those researchers funded by the EU research program. Earlier this year, RPN reported that this group had expanded further and that CERN was set to host the broadened version of ORE, currently provided by the publisher F1000.
On March 26, CERN itself finally announced the news, saying it will "provide the technical and operational infrastructure" for the broader version. It said this will build on its "longstanding experience in developing and maintaining open science infrastructures and community-governed services." [...] In its own announcement, the Commission said ORE will have a budget of 17 million euros for 2026-31, with the EU providing 10 million euros.
Since it launched five years ago, ORE has published more than 1,200 articles. CERN said the platform is "expected to support a growing number of research outputs each year." Last month, experts told RPN they thought uptake of the increased eligibility would depend on how the newly participating national organizations engage with their communities. Eleven members of Science Europe, a group of major research funding and performing organizations, are part of the expansion.
Satnav systems aren’t well, IP is being sold too cheap, and thousands of roles remain open
India’s space program has thousands of vacant roles it’s struggled to fill, isn’t spending money fast enough to meet its mission timelines, and may be undervaluing intellectual property it sells to the private sector.…
An anonymous reader quotes a report from 404 Media: Apple provided the FBI with the real iCloud email address hidden behind Apple's 'Hide My Email' feature, which lets paying iCloud+ users generate anonymous email addresses, according to a recently filed court record. The move isn't surprising but still provides uncommon insight into what data is available to authorities regarding the Apple feature. The data was turned over during an investigation into a man who allegedly sent a threatening email to Alexis Wilkins, the girlfriend of FBI director Kash Patel.
"On or about February 28, 2026, Person 1 received an email from the email address peaty_terms_1o@icloud.com," the affidavit reads. Earlier on, the document explicitly says that Person 1 is Alexis Wilkins. [...] The affidavit says Apple then provided records that indicated the peaty_terms_1o@icloud.com email address was associated with an Apple account in the name of Alden Ruml. The records showed that account generated 134 anonymized email addresses, according to the affidavit.
Law enforcement agents later interviewed Ruml and he confirmed he had sent the email, the affidavit says. Ruml said he sent the email after reading a February 28 article about how the FBI was using its own resources to provide security to Wilkins. The specific article is not named or linked in the affidavit, but a New York Times article published that same day described how Patel ordered a team to ferry his girlfriend on errands and to events.