Linux News
Fluent Bit has 15B+ deployments … and 5 newly assigned CVEs
A series of "trivial-to-exploit" vulnerabilities in Fluent Bit, an open source log collection tool that runs in every major cloud and AI lab, was left open for years, giving attackers an exploit chain to completely disrupt cloud services and alter data.…
Apple's next major iPhone software update will prioritize stability and performance over flashy new features, according to Bloomberg's Mark Gurman, who reports that iOS 27 is being developed as a "Snow Leopard-style" release [non-paywalled source] focused on fixing bugs, removing bloat and improving underlying code after this year's sweeping Liquid Glass design overhaul in iOS 26.
Engineering teams are currently combing through Apple's operating systems to eliminate unnecessary code and address quality issues that users have reported since iOS 26's September release. Those complaints include device overheating, unexplained battery drain, user interface glitches, keyboard failures, cellular connectivity problems, app crashes, and sluggish animations.
iOS 27 won't be feature-free. Apple plans several AI additions: a health-focused AI agent tied to a Health+ subscription, expanded AI-powered web search meant to compete with ChatGPT and Perplexity, and deeper AI integration across apps. The company has also been internally testing a chatbot app called Veritas as a proving ground for its re-architected Siri, though a standalone chatbot product isn't currently planned.
Read more of this story at Slashdot.
SitusAMC rules out ransomware, but accounting records for major institutions potentially affected
Real estate finance business SitusAMC says thieves sneaked into its systems earlier this month and made off with confidential client data.…
Ubisoft has revealed Teammates, a first-person shooter built around AI-powered squadmates that the company is calling its "first playable generative AI research project" -- not long after the publisher went all-in on NFTs and the metaverse only to largely move on from both. Built in the Snowdrop Engine that powers The Division 2 and Star Wars Outlaws, the game features an AI assistant named Jaspar and two AI squadmates called Pablo and Sofia. Players can issue natural voice commands to direct the squadmates in combat or puzzle-solving, while Jaspar handles mission tracking and guidance. The project comes from the same team behind Ubisoft's Neo NPCs, demonstrated at GDC 2024.
Read more of this story at Slashdot.
Trojanized npm packages spread new variant that executes in pre-install phase, hitting thousands within days
Self-propagating malware targeting the Node Package Manager (npm) registry is back for a second round, according to Wiz researchers, who say more than 25,000 developers had their secrets compromised within three days.…
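A minimal sketch of the mechanism the researchers describe: npm runs a package's `preinstall` lifecycle script automatically when the package is installed, so a trojanized package can execute arbitrary code the moment a developer runs `npm install`, before any of its code is imported or reviewed. The package and script names below are hypothetical, for illustration only:

```json
{
  "name": "some-trojanized-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node bundle.js"
  }
}
```

Here the hypothetical `bundle.js` ships inside the package tarball; because the `preinstall` hook fires during installation itself, simply pulling the dependency in is enough to run it (unless installs are done with `--ignore-scripts`).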
An anonymous reader shares a report: With the release of its third version last week, Google's Gemini large language model surged past ChatGPT and other competitors to become the most capable AI chatbot, as determined by consensus industry-benchmark tests. [...] Aaron Levie, chief executive of the cloud content management company Box, got early access to Gemini 3 several days ahead of the launch. The company ran its own evaluations of the model over the weekend to see how well it could analyze large sets of complex documents. "At first we kind of had to squint and be like, 'OK, did we do something wrong in our eval?' because the jump was so big," he said. "But every time we tested it, it came out double-digit points ahead."
[...] Google has been scrambling to get an edge in the AI race since the launch of ChatGPT three years ago, which stoked fears among investors that the company's iconic search engine would lose significant traffic to chatbots. The company struggled for months to get traction. Chief Executive Sundar Pichai and other executives have since worked to overhaul the company's AI development strategy by breaking down internal silos, streamlining leadership and consolidating work on its models, employees say. Sergey Brin, one of Google's co-founders, resumed a day-to-day role at the company helping to oversee its AI-development efforts.
Read more of this story at Slashdot.
WordPad died for this?
Microsoft is shoveling yet more features into the venerable Windows Notepad. This time it's support for tables, with some AI enhancements lathered on top.…
Chocolate Factory wins contract to build fully disconnected systems for training and operational support
NATO has hired Google to provide "air-gapped" sovereign cloud services and AI in "completely disconnected, highly secure environments."…
Months after China-linked spies burrowed into US networks, regulator tears up its own response
The Federal Communications Commission (FCC) has scrapped a set of telecom cybersecurity rules introduced after the Salt Typhoon espionage campaign, reversing course on measures designed to stop state-backed snoops from slipping back into America's networks.…
In 2018 researchers claimed evidence of a lake beneath the surface of Mars, detected by the Mars Advanced Radar for Subsurface and Ionosphere Sounding instrument (MARSIS for short).
But new Mars observations "are not consistent with the presence of liquid water in this location and an alternative explanation, such as very smooth basal materials, is needed." Phys.org explains:
Aboard the Mars Reconnaissance Orbiter, the Shallow Radar (SHARAD) uses higher frequencies than MARSIS. Until recently, though, SHARAD's signals couldn't reach deep enough into Mars to bounce off the base layer of the ice where the potential water lies — meaning its results couldn't be compared with those from MARSIS. However, the Mars Reconnaissance Orbiter team recently tested a new maneuver that rolls the spacecraft on its flight axis by 120 degrees — whereas it previously could roll only up to 28 degrees. The new maneuver, termed a "very large roll," or VLR, can increase SHARAD's signal strength and penetration depth, allowing researchers to examine the base of the ice in the enigmatic high-reflectivity zone. Gareth Morgan and colleagues, for their article published in Geophysical Research Letters, examined 91 SHARAD observations that crossed the high-reflectivity zone.
Only when using the VLR maneuver was a SHARAD basal echo detected at the site. In contrast to the MARSIS detection, the SHARAD detection was very weak, meaning it is unlikely that liquid water is present in the high-reflectivity zone.
Read more of this story at Slashdot.
Report warns of 2030s capacity crunch without expanding mid-band airwaves
The GSMA says 6G networks will need up to three times the spectrum currently allocated to mobile operators to meet anticipated demands for data.…
Agencies have until December 12 to mitigate flaw that was likely exploited before Big Red released fix
CISA has ordered US federal agencies to patch against an actively exploited Oracle Identity Manager (OIM) flaw within three weeks – a scramble made more urgent by evidence that attackers may have been abusing the bug months before a fix was released.…
Costs a tenner a shot instead of £1M per anti-aircraft missile
Britain's Royal Navy ships will be fitted with the DragonFire laser weapon by 2027 – five years earlier than planned – following recent successful trials involving fast-moving drones.…
Unusual holiday drive raises cash for the people keeping critical code alive
The Open Source Pledge organization is working to combat the problems of FOSS maintainers not getting paid, and the closely related issue of developer burnout, with a Thanksgiving-themed campaign.…
In May MIT announced "no confidence" in a preprint paper on how AI increased scientific discovery, asking arXiv to withdraw it. The paper, authored by 27-year-old grad student Aidan Toner-Rodgers, had claimed an AI-driven materials discovery tool helped 1,018 scientists at a U.S. R&D lab.
But within weeks his academic mentors "were asking an unthinkable question," reports the Wall Street Journal. Had Toner-Rodgers made it all up?
Toner-Rodgers's illusory success seems in part thanks to the dynamics he has now upset: an academic culture at MIT where high levels of trust, integrity and rigor are all — for better or worse — assumed. He focused on AI, a field where peer-reviewed research is still in its infancy and the hunger for data is insatiable. What has stunned his former colleagues and mentors is the sheer breadth of his apparent deception. He didn't just tweak a few variables. It appears he invented the entire study. In the aftermath, MIT economics professors have been discussing ways to raise standards for graduate students' research papers, including scrutinizing raw data, and students are going out of their way to show their work isn't counterfeit, according to people at the school.
Since parting with the university, Toner-Rodgers has told other students that his paper's problems were essentially a mere issue with data rights. According to him, he had indeed burrowed into a trove of data from a large materials-science company, as his paper said he did. But instead of getting formal permission to use the data, he faked a data-use agreement after the company wanted to pull out, he told other students via a WhatsApp message in May... On Jan. 31, Corning filed a complaint with the World Intellectual Property Organization against the registrar of the domain name corningresearch.com. Someone who controlled that domain name could potentially create email addresses or webpages that gave the impression they were affiliated with the company. WIPO soon found that Toner-Rodgers had apparently registered the domain name, according to the organization's written decision on the case. Toner-Rodgers never responded to the complaint, and Corning successfully won the transfer of the domain name. WIPO declined to comment...
In the WhatsApp chat in May, in which Toner-Rodgers told other students he had faked the data-use agreement, he wrote, "This was a huge and embarrassing act of dishonesty on my part, and in hindsight it clearly would've been better to just abandon the paper." Both Corning and 3M told the Journal that they didn't roll out the experiment Toner-Rodgers described, and that they didn't share data with him.
Read more of this story at Slashdot.
Coding purists once considered BASIC harmful. AI can't even manage that
Opinion It is a truth universally acknowledged that a singular project possessed of prospects is in want of a team. That team has to be built from good developers with experience, judgement, analytical and logical skills, and strong interpersonal communication. Where AI coding fits in remains strongly contentious. Opinion on vibe coding in corporate IT is more clearly stated: you're either selling the stuff or steering well clear.…
Lack of effective data flows and reduced scientific investment hampered response
During the early stages of the Covid-19 pandemic in the UK, it took up to three weeks for confirmed cases to be recorded on the health database used at the time.…
Customer signed off and a remaining staffer triggered the mess
Who, Me? Welcome to Monday morning and therefore to a new instalment of Who, Me? It's The Register's weekly column that shares your tales of workplace errors and absolution.…
The shoemaker’s children have new friends
The International Association for Cryptologic Research will run a second election for new board members and other officers, after it was unable to complete its first poll due to a lost encryption key.…
An anonymous reader shared this report from the Guardian:
James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".
"If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as a part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...
For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback from ChatGPT or using AI-generated images in courses.
"I feel like a bit of my life was stolen," James told the Guardian (which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment".) But the article also points out that a survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.
Read more of this story at Slashdot.