Linux fréttir
Electronic Arts has spent the past year pushing its nearly 15,000 employees to use AI for everything from code generation to scripting difficult conversations about pay. Employees in some areas must complete multiple AI training courses and use tools like the company's in-house chatbot ReefGPT daily.
The tools produce flawed code and hallucinations that employees then spend time correcting. Staff say the AI creates more work rather than less, according to Business Insider. They fix mistakes while simultaneously training the programs on their own work. Creative employees fear the technology will eventually eliminate demand for character artists and level designers. One recently laid-off senior quality-assurance designer says AI performed a key part of his job -- reviewing and summarizing feedback from hundreds of play testers. He suspects this contributed to his termination when about 100 colleagues were let go this past spring from the company's Respawn Entertainment studio.
Read more of this story at Slashdot.
NeuralTrust shows how agentic browsers can interpret bogus links as trusted user commands
Researchers have found more attack vectors for OpenAI's new Atlas web browser – this time by disguising a potentially malicious prompt as an apparently harmless URL.…
Social media site dispatches crucial clarification days after curious announcement
X (formerly Twitter) sparked security concerns over the weekend when it announced users must re-enroll their security keys by November 10 or face account lockouts — without initially explaining why.…
Jen Easterly says most breaches stem from bad software, and smarter tech could finally clean it up
Ex-CISA head Jen Easterly claims AI could spell the end of the cybersecurity industry, as the sloppy software and vulnerabilities that criminals rely on will be tracked down faster than ever.…
OpenAI's rival Anthropic has a different approach — and "a clearer path to making a sustainable business out of AI," writes the Wall Street Journal.
Outside of OpenAI's close partnership with Microsoft, which integrates OpenAI's models into Microsoft's software products, OpenAI mostly caters to the mass market... which has helped OpenAI reach an annual revenue run rate of around $13 billion, around 30% of which it says comes from businesses.
Anthropic has generated much less mass-market appeal. The company has said about 80% of its revenue comes from corporate customers. Last month it said it had some 300,000 of them... Its cutting-edge Claude language models have been praised for their aptitude in coding: A July report from Menlo Ventures — which has invested in Anthropic — estimated via a survey that Anthropic had a 42% market share for coding, compared with OpenAI's 21%. Anthropic is also now ahead of OpenAI in market share for overarching corporate AI use, Menlo Ventures estimated, at 32% to OpenAI's 25%. Anthropic is also surprisingly close to OpenAI when it comes to revenue. The company is already at a $7 billion annual run rate and expects to get to $9 billion by the end of the year — a big lead over its better-known rival in revenue per user.
Both companies have backing in the form of investments from big tech companies — Microsoft for OpenAI, and a combination of Amazon and Google for Anthropic — that help provide AI computing infrastructure and expose their products to a broad set of customers. But Anthropic's growth path is a lot easier to understand than OpenAI's. Corporate customers are devising a plethora of money-saving uses for AI in areas like coding, drafting legal documents and expediting billing. Those uses are likely to expand in the future and draw more customers to Anthropic, especially as the return on investment for them becomes easier to measure...
Demonstrating how much demand there is for Anthropic among corporate customers, Microsoft in September said Anthropic's leading language model, Claude, would be offered within its Copilot suite of software despite Microsoft's ties to OpenAI.
"There is also a possibility that OpenAI's mass-market appeal becomes a turnoff for corporate customers," the article adds, "who want AI to be more boring and useful than fun and edgy."
Read more of this story at Slashdot.
AI wasn't the cause, and multi-cloud is for rubes
Column AWS put out a hefty analysis of its October 20 outage, and it reads as if it were written in one continuous stream of consciousness before the Red Bull wore off and the author passed out after 36 straight hours of writing.…
Poor data standards across government hamper scaling, says Parliament spending watchdog
The UK government's Department for Work and Pensions (DWP) has saved £4.4 million over three years by using machine learning to tackle fraud, according to the National Audit Office (NAO). However, the public spending watchdog found the department's ability to expand this work is limited by fragmented IT systems and poor cross-government data standards.…
No, it's just good at mass-production copy and paste. And yes, we're correctly applying Betteridge's Law
Opinion Remember ELIZA? The 1966 chatbot from MIT's AI Lab convinced countless people it was intelligent using nothing but simple pattern matching and canned responses. Nearly 60 years later, ChatGPT has people making the same mistake. Chatbots don't think – they've just gotten exponentially better at pretending.…
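To make the column's point concrete, here is a minimal, purely illustrative Python sketch of the technique ELIZA relied on: a few regular-expression rules paired with canned reply templates, and no model of meaning behind any of it. The rules and phrasings here are invented for illustration, not taken from the original program.

```python
import random
import re

# A handful of pattern/response rules in the ELIZA style: match a phrase,
# echo part of it back inside a canned template.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]
FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(text: str) -> str:
    # Try each pattern in turn; the first match fills a canned template.
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(*match.groups())
    # Nothing matched: fall back to a generic prompt.
    return random.choice(FALLBACKS)

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
```

Nothing in that loop understands the input; it only reshuffles it, which is the column's point about pattern matching masquerading as intelligence.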
When it rains, it pours – and nobody packed an umbrella
Opinion When your cabbie asks you what you do for a living, and you answer "tech journalist," you never get asked about cloud infrastructure in return. Bitcoin, mobile phones, AI, yes. Until last week: "What's this AWS thing, then?" You already knew a lot of people were having a very bad day in Bezosville, but if the news had reached an Edinburgh black cab driver, new adjectives were needed.…
"Mozilla is introducing a new privacy framework for Firefox extensions that will require developers to disclose whether their add-ons collect or transmit user data..." reports the blog Linuxiac:
The policy takes effect on November 3, 2025, and applies to all new Firefox extensions submitted to addons.mozilla.org. According to Mozilla's announcement, extension developers must now include a new key in their manifest.json files. This key specifies whether an extension gathers any personal data. Even extensions that collect nothing must explicitly state "none" in this field to confirm that no data is being collected or shared.
This information will be visible to users at multiple points: during the installation prompt, on the extension's listing page on addons.mozilla.org, and in the Permissions and Data section of Firefox's about:addons page. In practice, this means users will be able to see at a glance whether a new extension collects any data before they install it.
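For illustration, here is a minimal Python sketch of what such a manifest declaration might look like for an extension that collects nothing. The key layout (browser_specific_settings.gecko.data_collection_permissions) reflects Mozilla's announced schema as we understand it; treat the field names as assumptions and check the official add-on documentation before relying on them.

```python
import json

# Illustrative manifest.json fragment for an extension declaring that it
# collects no data; the exact key names follow Mozilla's announcement as
# best we understand it and may differ from the final schema.
manifest_fragment = {
    "browser_specific_settings": {
        "gecko": {
            "data_collection_permissions": {
                # Even extensions that collect nothing must say so explicitly
                # rather than simply omitting the field.
                "required": ["none"]
            }
        }
    }
}

print(json.dumps(manifest_fragment, indent=2))
```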
Read more of this story at Slashdot.
Four back-to-back weekends of work – and disastrously bad documentation – will do that to a techie
Who, Me? Welcome to Monday morning and another installment of Who, Me? For the uninitiated, it's The Register's weekly reader-contributed column that tells tales of your greatest misses, and how you rebuilt a career afterward.…
FOSS feud re-ignites with massive counter-claim
The long battle between Automattic and WP Engine has flared again, this time with accusations that the latter company issued “false advertising” and employed “deceptive business practices.”…
Slashdot reader joshuark writes: Microsoft says that File Explorer (formerly Windows Explorer) now automatically blocks previews for files downloaded from the Internet to prevent credential theft attacks via malicious documents, according to a report from BleepingComputer. This attack vector is particularly concerning because it requires no user interaction beyond selecting a file to preview and removes the need to trick a target into actually opening or executing it on their system.
For most users, no action is required since the protection is enabled automatically with the October 2025 security update, and existing workflows remain unaffected unless you regularly preview downloaded files. "This change is designed to enhance security by preventing a vulnerability that could leak NTLM hashes when users preview potentially unsafe files," Microsoft says in a support document published Wednesday.
It is important to note that this may not take effect immediately and could require signing out and signing back in.
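The support document doesn't spell out how File Explorer decides a file came from the Internet, but the usual signal on Windows is the Mark of the Web: a Zone.Identifier alternate data stream attached to downloads, where ZoneId=3 denotes the Internet zone. A minimal, Windows-only Python sketch (with a hypothetical path) that inspects it:

```python
# Windows-only sketch: downloaded files normally carry a "Mark of the Web" in
# a Zone.Identifier alternate data stream, which is how Windows distinguishes
# Internet-sourced files from locally created ones.
path = r"C:\Users\example\Downloads\report.docx"  # hypothetical path

try:
    # NTFS exposes alternate data streams via the "filename:streamname" syntax.
    with open(path + ":Zone.Identifier", "r", encoding="utf-8") as stream:
        print(stream.read())  # typically "[ZoneTransfer]" followed by "ZoneId=3"
except OSError:
    print("No Mark of the Web found; the file is treated as locally created.")
```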
Read more of this story at Slashdot.
Allows surveillance and cross-border evidence sharing, which worries human rights groups
The United Nations on Saturday staged a signing ceremony for the Convention against Cybercrime, the world’s first agreement to combat online crime. And while 72 nations picked up the pen, critics continue to point out the convention’s flaws.…
America's largest university system, with 460,000 students, is the 22-campus "Cal State" system, reports the New York Times. And it's recently teamed with Amazon, OpenAI and Nvidia, hoping to embed chatbots in both teaching and learning to become what it says will be America's "first and largest AI-empowered" university — and prepare students for "increasingly AI-driven" careers.
It's part of a trend of major universities inviting tech companies into "a much bigger role as education thought partners, AI instructors and curriculum providers," argues the New York Times, where "dominant tech companies are now helping to steer what an entire generation of students learn about AI, and how they use it — with little rigorous evidence of educational benefits and mounting concerns that chatbots are spreading misinformation and eroding critical thinking..."
"Critics say Silicon Valley's effort to make AI chatbots integral to education amounts to a mass experiment on young people."
As part of the effort, [Cal State] is paying OpenAI $16.9 million to provide ChatGPT Edu, the company's tool for schools, to more than half a million students and staff — which OpenAI heralded as the world's largest rollout of ChatGPT to date. Cal State also set up an AI committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students' career opportunities... Cal State is not alone. Last month, California Community Colleges, the nation's largest community college system, announced a collaboration with Google to supply the company's "cutting edge AI tools" and training to 2.1 million students and faculty. In July, Microsoft pledged $4 billion for teaching AI skills in schools, community colleges and to adult workers...
[A]s schools like Cal State work to usher in what they call an "AI-driven future," some researchers warn that universities risk ceding their independence to Silicon Valley. "Universities are not tech companies," Olivia Guest and Iris van Rooij, two computational cognitive scientists at Radboud University in the Netherlands, recently said in comments arguing against fast AI adoption in academia. "Our role is to foster critical thinking," the researchers said, "not to follow industry trends uncritically...."
Some faculty members have pushed back against the AI effort as the university system faces steep budget cuts, calling the multimillion-dollar deal with OpenAI — which the university did not open to bidding from rivals like Google — wasteful. Faculty senates on several Cal State campuses passed resolutions this year criticizing the AI initiative, saying the university had failed to adequately address students using chatbots to cheat. Professors also said administrators' plans glossed over the risks of AI to students' critical thinking and ignored troubling industry labor practices and environmental costs.
Martha Kenney, a professor of women and gender studies at San Francisco State University, described the AI program as a Cal State marketing vehicle helping tech companies promote unproven chatbots as legitimate educational tools.
The article notes that Cal State's chief information officer "defended the OpenAI deal, saying the company offered ChatGPT Edu at an unusually low price."
"Still, California's community college system landed AI chatbot services from Google for more than 2 million students and faculty — nearly four times the number of users Cal State is paying OpenAI for — for free."
Read more of this story at Slashdot.
PLUS: China demotes tech self-sufficiency goal; Alibaba Cloud quietly quits VMware; India demands deepfake labels; and more!
Asia In Brief Australia’s Competition & Consumer Commission on Monday commenced legal proceedings against Microsoft for allegedly misleading users of its Microsoft 365 bundle.…
GM plans to dump Apple CarPlay and Android Auto on all its new vehicles "in the near future," reports the Verge.
In an episode of the Verge's Decoder podcast, GM CEO Mary Barra confirmed the upcoming change to "phone projections" for GM cars:
The timing is unclear, but Barra pointed to a major rollout of what the company is calling a new centralized computing platform, set to launch in 2028, that will involve eventually transitioning its entire lineup to a unified in-car experience.
In place of phone projection, GM is working to update its current Android-powered infotainment implementation with a Google Gemini-powered assistant and an assortment of other custom apps, built both in-house and with partners. GM's 2023 decision to drop CarPlay and Android Auto support in its EVs has proved controversial, though for now GM has maintained support for phone projection in its gas-powered vehicles.
Read more of this story at Slashdot.
PLUS: Judge spanks NSO; Mozilla requires data use disclosures; TARmageddon meets Rust; And more!
Infosec In Brief Former basketball star Shaquille O'Neal is 7'1" (215 cm), and therefore uses car customization companies to modify vehicles to fit his frame. But it appears cybercriminals have targeted Shaq’s preferred motor-modder.…
North Dakota experienced an almost 40% increase in electricity demand "thanks in part to an explosion of data centers," reports the Washington Post. Yet the state saw a 1% drop in its per kilowatt-hour rates.
"A new study from researchers at Lawrence Berkeley National Laboratory and the consulting group Brattle suggests that, counterintuitively, more electricity demand can actually lower prices..."
Between 2019 and 2024, the researchers calculated, states with spikes in electricity demand saw lower prices overall. Instead, they found that the biggest factors behind rising rates were the cost of poles, wires and other electrical equipment — as well as the cost of safeguarding that infrastructure against future disasters... [T]he largest costs are fixed costs — that is, maintaining the massive system of poles and wires that keeps electricity flowing. That system is getting old and is under increasing pressures from wildfires, hurricanes and other extreme weather. More power customers, therefore, means more ways to divvy up those fixed costs. "What that means is you can then take some of those fixed infrastructure costs and end up spreading them around more megawatt-hours that are being sold — and that can actually reduce rates for everyone," said Ryan Hledik [principal at Brattle and a member of the research team]...
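A toy calculation, with made-up numbers, shows the mechanism Hledik describes: retail rates recover a large fixed cost plus a per-unit generation cost, so spreading the fixed portion over more megawatt-hours lowers the average rate even though total costs don't fall.

```python
# Illustrative only: hypothetical fixed and variable costs, not figures from
# the Lawrence Berkeley/Brattle study.
def average_rate(fixed_cost: float, variable_cost_per_mwh: float, mwh_sold: float) -> float:
    # Average retail rate = fixed grid costs spread over sales, plus generation cost per MWh.
    return fixed_cost / mwh_sold + variable_cost_per_mwh

before = average_rate(fixed_cost=1_000_000, variable_cost_per_mwh=40, mwh_sold=20_000)
after = average_rate(fixed_cost=1_000_000, variable_cost_per_mwh=40, mwh_sold=28_000)  # ~40% more demand
print(f"{before:.2f} vs {after:.2f} dollars per MWh")  # 90.00 vs 75.71
```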
[T]he new study shows that the costs of operating and installing wind, natural gas, coal and solar have been falling over the past 20 years. Since 2005, generation costs have fallen by 35 percent, from $234 billion to $153 billion. But the costs of the huge wires that transmit that power across the grid, and the poles and wires that deliver that electricity to customers, are skyrocketing. In the past two decades, transmission costs nearly tripled; distribution costs more than doubled. Part of that trend is from the rising costs of parts: The price of transformers and wires, for example, has far outpaced inflation over the past five years. At the same time, U.S. utilities haven't been on top of replacing power poles and lines in the past, and are now trying to catch up. According to another report from Brattle, utilities are already spending more than $10 billion a year replacing aging transmission lines.
And finally, escalating extreme-weather events are knocking out local lines, forcing utilities to spend big to make fixes. Last year, Hurricane Beryl decimated Houston's power grid, forcing months of costly repairs. The threat of wildfires in the West, meanwhile, is making utilities spend billions on burying power lines. According to the Lawrence Berkeley study, about 40 percent of California's electricity price increase over the last five years was due to wildfire-related costs.
Yet the researchers tell the Washington Post that prices could still increase if utilities have to quickly build more infrastructure just to handle data centers. But their point is that "this is a much more nuanced issue than just, 'We have a new data center, so rates will go up.'"
As the article points out, "Generous subsidies for rooftop solar also increased rates in certain states, mostly in places such as California and Maine... If customers install rooftop solar panels, demand for electricity shrinks, spreading those fixed costs over a smaller set of consumers."
Read more of this story at Slashdot.
"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."
That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports:
Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.
The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."
"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."
O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.
Read more of this story at Slashdot.