Linux fréttir
Ravin Academy confirms the intrusion in a Telegram post, says investigation continues
Iran's school for state-sponsored cyberattackers admits it suffered a breach exposing the names and other personal information of its associates and students.…
Australia's competition regulator sued Microsoft today, accusing it of misleading millions of customers into paying higher prices for its Microsoft 365 software after bundling it with AI tool Copilot. From a report: The Australian Competition and Consumer Commission alleged that from October 2024, the technology giant misled about 2.7 million customers by suggesting they had to move to higher-priced Microsoft 365 personal and family plans that included Copilot.
After the integration of Copilot, the annual subscription price of the Microsoft 365 personal plan increased by 45% to A$159 ($103.32) and the price of the family plan increased by 29% to A$179, the ACCC said. The regulator said Microsoft failed to clearly tell users that a cheaper "classic" plan without Copilot was still available.
Read more of this story at Slashdot.
The U.S. has formed a $1 billion partnership with AMD to construct two supercomputers that will tackle large scientific problems ranging from nuclear power to cancer treatments to national security, said Energy Secretary Chris Wright and AMD CEO Lisa Su. From a report: The U.S. is building the two machines to ensure the country has enough supercomputers to run increasingly complex experiments that require harnessing enormous amounts of data-crunching capability. The machines can accelerate the process of making scientific discoveries in areas the U.S. is focused on.
Energy Secretary Wright said the systems would "supercharge" advances in nuclear power and fusion energy, technologies for defense and national security, and the development of drugs. Scientists and companies are trying to replicate fusion, the reaction that fuels the sun, by jamming light atoms in a plasma gas under intense heat and pressure to release massive amounts of energy. "We've made great progress, but plasmas are unstable, and we need to recreate the center of the sun on Earth," Wright told Reuters.
Read more of this story at Slashdot.
Nations previously exempt from scraping now in the firing line
If you thought living in Europe, Canada, or Hong Kong meant you were protected from having LinkedIn scrape your posts to train its AI, think again. You have a week to opt out before the Microsoft subsidiary assumes you're fine with it.…
Cloud giant says choice and flexibility matter more than standardization – for now
Interview As agentic AI solutions flood the market, users will face a complex mix of deployment and commercial models, with standard practices yet to be established, says Olawale Oladehin, AWS director of solutions architecture.…
Countries signed their first UN treaty targeting cybercrime in Hanoi on Saturday, despite opposition from an unlikely band of tech companies and rights groups warning of expanded state surveillance. From a report: The new global legal framework aims to strengthen international cooperation to fight digital crimes, from child pornography to transnational cyberscams and money laundering. More than 60 countries signed the declaration on Saturday, meaning it will go into force once ratified by those states. UN Secretary General Antonio Guterres described the signing as an "important milestone" but said it was "only the beginning".
"Every day, sophisticated scams, destroy families, steal migrants and drain billions of dollars from our economy... We need a strong, connected global response," he said at the opening ceremony in Vietnam's capital on Saturday. The UN Convention against Cybercrime was first proposed by Russian diplomats in 2017, and approved by consensus last year after lengthy negotiations. Critics say its broad language could lead to abuses of power and enable the cross-border repression of government critics.
Read more of this story at Slashdot.
Brussels' framework muddies the waters and could hand advantage to foreign hyperscalers, says trade body
Europe's efforts to reduce reliance on US hyperscalers are under fire from many of the local cloud providers they are designed to help.…
Electronic Arts has spent the past year pushing its nearly 15,000 employees to use AI for everything from code generation to scripting difficult conversations about pay. Employees in some areas must complete multiple AI training courses and use tools like the company's in-house chatbot ReefGPT daily.
The tools produce flawed code and hallucinations that employees then spend time correcting. Staff say the AI creates more work rather than less, according to Business Insider. They fix mistakes while simultaneously training the programs on their own work. Creative employees fear the technology will eventually eliminate demand for character artists and level designers. One recently laid-off senior quality-assurance designer says AI performed a key part of his job -- reviewing and summarizing feedback from hundreds of play testers. He suspects this contributed to his termination when about 100 colleagues were let go this past spring from the company's Respawn Entertainment studio.
Read more of this story at Slashdot.
NeuralTrust shows how the agentic browser can interpret bogus links as trusted user commands
Researchers have found more attack vectors for OpenAI's new Atlas web browser – this time by disguising a potentially malicious prompt as an apparently harmless URL.…
Social media site dispatches crucial clarification days after curious announcement
X (formerly Twitter) sparked security concerns over the weekend when it announced users must re-enroll their security keys by November 10 or face account lockouts — without initially explaining why.…
Jen Easterly says most breaches stem from bad software, and smarter tech could finally clean it up
Ex-CISA head Jen Easterly claims AI could spell the end of the cybersecurity industry, as the sloppy software and vulnerabilities that criminals rely on will be tracked down faster than ever.…
OpenAI's rival Anthropic has a different approach — and "a clearer path to making a sustainable business out of AI," writes the Wall Street Journal:
Outside of OpenAI's close partnership with Microsoft, which integrates OpenAI's models into Microsoft's software products, OpenAI mostly caters to the mass market... which has helped OpenAI reach an annual revenue run rate of around $13 billion, around 30% of which it says comes from businesses.
Anthropic has generated much less mass-market appeal. The company has said about 80% of its revenue comes from corporate customers. Last month it said it had some 300,000 of them... Its cutting-edge Claude language models have been praised for their aptitude in coding: A July report from Menlo Ventures — which has invested in Anthropic — estimated via a survey that Anthropic had a 42% market share for coding, compared with OpenAI's 21%. Anthropic is also now ahead of OpenAI in market share for overarching corporate AI use, Menlo Ventures estimated, at 32% to OpenAI's 25%. Anthropic is also surprisingly close to OpenAI when it comes to revenue. The company is already at a $7 billion annual run rate and expects to get to $9 billion by the end of the year — a big lead over its better-known rival in revenue per user.
Both companies have backing in the form of investments from big tech companies — Microsoft for OpenAI, and a combination of Amazon and Google for Anthropic — that help provide AI computing infrastructure and expose their products to a broad set of customers. But Anthropic's growth path is a lot easier to understand than OpenAI's. Corporate customers are devising a plethora of money-saving uses for AI in areas like coding, drafting legal documents and expediting billing. Those uses are likely to expand in the future and draw more customers to Anthropic, especially as the return on investment for them becomes easier to measure...
Demonstrating how much demand there is for Anthropic among corporate customers, Microsoft in September said Anthropic's leading language model, Claude, would be offered within its Copilot suite of software despite Microsoft's ties to OpenAI.
"There is also a possibility that OpenAI's mass-market appeal becomes a turnoff for corporate customers," the article adds, "who want AI to be more boring and useful than fun and edgy."
Read more of this story at Slashdot.
AI wasn't the cause, and multi-cloud is for rubes
Column AWS put out a hefty analysis of its October 20 outage, and it's apparently written in a continuing stream of consciousness before the Red Bull wore off and the author passed out after 36 straight hours of writing.…
Poor data standards across government hamper scaling, says Parliament spending watchdog
The UK government's Department for Work and Pensions (DWP) has saved £4.4 million over three years by using machine learning to tackle fraud, according to the National Audit Office (NAO). However, the public spending watchdog found the department's ability to expand this work is limited by fragmented IT systems and poor cross-government data standards.…
No, it's just good at mass-production copy and paste. And yes, we're correctly applying Betteridge's Law
Opinion Remember ELIZA? The 1966 chatbot from MIT's AI Lab convinced countless people it was intelligent using nothing but simple pattern matching and canned responses. Nearly 60 years later, ChatGPT has people making the same mistake. Chatbots don't think – they've just gotten exponentially better at pretending.…
When it rains, it pours – and nobody packed an umbrella
Opinion When your cabbie asks you what you do for a living, and you answer "tech journalist," you never get asked about cloud infrastructure in return. Bitcoin, mobile phones, AI, yes. Until last week: "What's this AWS thing, then?" You already knew a lot of people were having a very bad day in Bezosville, but if the news had reached an Edinburgh black cab driver, new adjectives were needed.…
"Mozilla is introducing a new privacy framework for Firefox extensions that will require developers to disclose whether their add-ons collect or transmit user data..." reports the blog Linuxiac:
The policy takes effect on November 3, 2025, and applies to all new Firefox extensions submitted to addons.mozilla.org. According to Mozilla's announcement, extension developers must now include a new key in their manifest.json files. This key specifies whether an extension gathers any personal data. Even extensions that collect nothing must explicitly state "none" in this field to confirm that no data is being collected or shared.
This information will be visible to users at multiple points: during the installation prompt, on the extension's listing page on addons.mozilla.org, and in the Permissions and Data section of Firefox's about:addons page. In practice, this means users will be able to see at a glance whether a new extension collects any data before they install it.
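As a rough sketch of what such a declaration might look like for an extension that collects nothing (the extension name and id are placeholders, and the data_collection_permissions key and its placement under browser_specific_settings.gecko are assumed here from Mozilla's add-on documentation, so check the current schema before relying on it):

    {
      "manifest_version": 3,
      "name": "Example Extension",
      "version": "1.0",
      "browser_specific_settings": {
        "gecko": {
          "id": "example@example.org",
          "data_collection_permissions": {
            "required": ["none"]
          }
        }
      }
    }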
Read more of this story at Slashdot.
Four back-to-back weekends of work – and disastrously bad documentation – will do that to a techie
Who, Me? Welcome to Monday morning and another installment of Who, Me? For the uninitiated, it's The Register's weekly reader-contributed column that tells tales of your greatest misses, and how you rebuilt a career afterward.…
FOSS feud re-ignites with massive counter-claim
The long battle between Automattic and WP Engine has flared again, this time with accusations that the latter company engaged in "false advertising" and "deceptive business practices."…
Slashdot reader joshuark writes: Microsoft says that File Explorer (formerly Windows Explorer) now automatically blocks previews for files downloaded from the Internet to prevent credential theft attacks via malicious documents, according to a report from BleepingComputer. This attack vector is particularly concerning because it requires no user interaction beyond selecting a file to preview and removes the need to trick a target into actually opening or executing it on their system.
For most users, no action is required since the protection is enabled automatically with the October 2025 security update, and existing workflows remain unaffected unless you regularly preview downloaded files. "This change is designed to enhance security by preventing a vulnerability that could leak NTLM hashes when users preview potentially unsafe files," Microsoft says in a support document published Wednesday.
It is important to note that this may not take effect immediately and could require signing out and signing back in.
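For context, Windows marks files downloaded from the Internet with a Zone.Identifier alternate data stream (the "Mark of the Web"), which is how such downloads are typically identified. A minimal Python sketch for inspecting that marker on an NTFS volume, assuming a hypothetical file path:

    # Check for the "Mark of the Web" that Windows attaches to files downloaded
    # from the Internet. On NTFS it lives in an alternate data stream named
    # Zone.Identifier; ZoneId=3 denotes the Internet zone.
    def read_zone_identifier(path: str):
        try:
            # "file.docx:Zone.Identifier" addresses the alternate data stream.
            with open(path + ":Zone.Identifier", encoding="utf-8", errors="replace") as stream:
                return stream.read()
        except OSError:
            return None  # no marker present, or not an NTFS volume

    marker = read_zone_identifier(r"C:\Users\example\Downloads\report.docx")  # hypothetical path
    if marker and "ZoneId=3" in marker:
        print("Internet-zone Mark of the Web present; Explorer now skips previewing this file.")
    else:
        print("No Internet-zone marker found.")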
Read more of this story at Slashdot.