Linux news
West Sussex County Council is using up to $31 million from the sale of capital assets to fund an Oracle-based transformation project, originally budgeted at $3.2 million but now expected to cost nearly $50 million due to delays and cost overruns. The project, intended to replace a 20-year-old SAP system with a SaaS-based HR and finance system, has faced multiple setbacks, renegotiated contracts, and a new systems integrator, with completion now pushed to December 2025. The Register reports: West Sussex County Council is taking advantage of the so-called "flexible use of capital receipts scheme" introduced in 2016 by the UK government to allow councils to use money from the sale of assets such as land, offices, and housing to fund projects that result in ongoing revenue savings. An example of the asset disposals that might contribute to the project -- set to see the council move off a 20-year-old SAP system -- comes from the sale of a former fire station in Horley, advertised for $3.1 million.
Meanwhile, the delays to the project, which began in November 2019, forced the council to renegotiate its terms with Oracle, at a cost of $3 million. The council had expected the new SaaS-based HR and finance system to go live in 2021, and signed a five-year license agreement until June 2025. The plans to go live were put back to 2023, and in the spring of 2024 delayed again until December 2025. According to council documents published this week [PDF], it has "approved the variation of the contract with Oracle Corporation UK Limited" to cover the period from June 2025 to June 2028 and an option to extend again to the period June 2028 to 2030. "The total value of the proposed variation is $2.96 million if the full term of the extension periods are taken," the council said.
Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica: On Thursday, Anthropic announced Citations, a new API feature that helps Claude models avoid confabulations (also called hallucinations) by linking their responses directly to source documents. The feature lets developers add documents to Claude's context window, enabling the model to automatically cite specific passages it uses to generate answers. "When Citations is enabled, the API processes user-provided source documents (PDF documents and plaintext files) by chunking them into sentences," Anthropic says. "These chunked sentences, along with user-provided context, are then passed to the model with the user's query."
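The flow described above — a source document plus a flag opting it into citation-aware answers, sent alongside the user's query — can be sketched as a request payload. This is a minimal illustration only: the exact field names (`"citations"`, `"enabled"`) and the model identifier are assumptions about the API shape, not copied from Anthropic's documentation.

```python
import json

# Hypothetical sketch of a Citations-style request body. Field names for the
# citations flag and the model string are assumptions for illustration.
def build_citations_request(document_text: str, question: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-latest",  # placeholder model name
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "document",
                        "source": {
                            "type": "text",
                            "media_type": "text/plain",
                            "data": document_text,
                        },
                        # Opt this document into source-linked answers.
                        "citations": {"enabled": True},
                    },
                    {"type": "text", "text": question},
                ],
            }
        ],
    }

request = build_citations_request(
    "Pebble shipped its first smartwatch in 2013.",
    "When did Pebble ship its first watch?",
)
print(json.dumps(request["messages"][0]["content"][0]["citations"]))
```

The point of the shape is that the document and the question travel in the same message, so the model can tie each sentence of its answer back to a chunk of the document.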
The company describes several potential uses for Citations, including summarizing case files with source-linked key points, answering questions across financial documents with traced references, and powering support systems that cite specific product documentation. In its own internal testing, the company says that the feature improved recall accuracy by up to 15 percent compared to custom citation implementations created by users within prompts. While a 15 percent improvement in accurate recall doesn't sound like much, the new feature still attracted interest from AI researchers like Simon Willison because of its fundamental integration of Retrieval Augmented Generation (RAG) techniques. In a detailed post on his blog, Willison explained why citation features are important.
"The core of the Retrieval Augmented Generation (RAG) pattern is to take a user's question, retrieve portions of documents that might be relevant to that question and then answer the question by including those text fragments in the context provided to the LLM," he writes. "This usually works well, but there is still a risk that the model may answer based on other information from its training data (sometimes OK) or hallucinate entirely incorrect details (definitely bad)." Willison notes that while citing sources helps verify accuracy, building a system that does it well "can be quite tricky," but Citations appears to be a step in the right direction by building RAG capability directly into the model. Anthropic's Alex Albert clarifies that Claude has been trained to cite sources for a while now. What's new with Citations is that "we are exposing this ability to devs." He continued: "To use Citations, users can pass a new 'citations [...]' parameter on any document type they send through the API."
Crouching tiger, hidden layer(s)
Barely a week after DeepSeek's R1 LLM turned Silicon Valley on its head, the Chinese outfit is back with a new release it claims is ready to challenge OpenAI's DALL-E 3.…
Facebook has banned posts mentioning Linux-related topics, with the popular Linux news and discussion site, DistroWatch, at the center of the controversy. Tom's Hardware reports: A post on the site claims, "Facebook's internal policy makers decided that Linux is malware and labeled groups associated with Linux as being 'cybersecurity threats.' We tried to post some blurb about distrowatch.com on Facebook and can confirm that it was barred with a message citing Community Standards." DistroWatch says that the Facebook ban took effect on January 19. Readers have reported difficulty posting links to the site on this social media platform. Moreover, some have told DistroWatch that their Facebook accounts have been locked or limited after sharing posts mentioning Linux topics.
If you're wondering whether there might be something specific to DistroWatch.com -- something on the site that the owners/operators perhaps don't even know about, for example -- it seems pretty safe to rule out that possibility. Reports show that "multiple groups associated with Linux and Linux discussions have either been shut down or had many of their posts removed." However, we tested a few other Facebook posts with mentions of Linux, and they didn't get blocked immediately. Copenhagen-hosted DistroWatch says it has tried to appeal against the Community Standards-triggered ban, but a Facebook representative reportedly said that Linux topics would remain on the cybersecurity filter. The DistroWatch writer subsequently got their Facebook account locked... DistroWatch points out the irony at play here: "Facebook runs much of its infrastructure on Linux and often posts job ads looking for Linux developers."
Despite impressive benchmarks, the Chinese-made LLM is not without some interesting issues
DeepSeek's open source reasoning-capable R1 LLM family boasts impressive benchmark scores – but its erratic responses raise more questions about how these models were trained and what information has been censored.…
An anonymous reader quotes a report from TechCrunch: TechCrunch gathered data from several sources and found similar trends. In 2024, 966 startups shut down, compared to 769 in 2023, according to Carta. That's a 25.6% increase. One note on methodology: Those numbers are for U.S.-based companies that were Carta customers and left Carta due to bankruptcy or dissolution. There are likely other shutdowns that wouldn't be accounted for through Carta, estimates Peter Walker, Carta's head of insights. [...] Meanwhile, AngelList found that 2024 saw 364 startup winddowns, compared to 233 in 2023. That's a 56.2% jump. However, AngelList CEO Avlok Kohli has a fairly optimistic take, noting that winddowns "are still very low relative to the number of companies that were funded across both years."
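As a quick sanity check, the year-over-year percentages quoted above follow directly from the raw counts reported by Carta and AngelList:

```python
def pct_increase(prev: int, curr: int) -> float:
    """Percentage increase from prev to curr, rounded to one decimal place."""
    return round((curr - prev) / prev * 100, 1)

print(pct_increase(769, 966))  # Carta shutdowns, 2023 -> 2024: 25.6
print(pct_increase(233, 364))  # AngelList winddowns, 2023 -> 2024: 56.2
```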
Layoffs.fyi found a contradictory trend: 85 tech companies shut down in 2024, compared to 109 in 2023 and 58 in 2022. But as founder Roger Lee acknowledges, that data only includes publicly reported shutdowns "and therefore represents an underestimate." Of those 2024 tech shutdowns, 81% were startups, while the rest were either public companies or previously acquired companies that were later shut down by their parent organizations. So many companies got funded in 2020 and 2021 at heated valuations with famously thin diligence that it's only logical that, up to three years later, an increasing number couldn't raise more cash to fund their operations. Taking investment at too high a valuation increases the risk that investors won't want to invest more unless the business is growing extremely well. [...]
Looking ahead, Walker also expects we'll continue to see more shutdowns in the first half of 2025, and then a gradual decline for the rest of the year. That projection is based mostly on a time-lag estimate from the peak of funding, which he estimates was the first quarter of 2022 in most stages. So by the first quarter of 2025, "most companies will have either found a new path forward or had to make this difficult choice." "Tech zombies and a startup graveyard will continue to make headlines," said Dori Yona, CEO and co-founder of SimpleClosure. "Despite the crop of new investments, there are a lot of companies that have raised at high valuations and without enough revenue."
Uncle Sam will 'no longer blindly dole out money,' State Dept says
US Secretary of State Marco Rubio has frozen nearly all foreign aid cash for a full-on government review, including funds to defend America's allies from cyberattacks as well as steer international computer security policies.…
Dangerous temperatures could kill 50% more people in Europe by the end of the century, a study has found, with the lives lost to stronger heat projected to outnumber those saved from milder cold. From a report: The researchers estimated an extra 8,000 people would die each year as a result of "suboptimal temperatures" even under the most optimistic scenario for cutting planet-heating pollution. The hottest plausible scenario they considered showed a net increase of 80,000 temperature-related deaths a year.
The findings challenge an argument popular among those who say global heating is good for society because fewer people will die from cold weather. "We wanted to test this," said Pierre Masselot, a statistician at the London School of Hygiene & Tropical Medicine and lead author of the study. "And we show clearly that we will see a net increase in temperature-related deaths under climate change." The study builds on previous research in which the scientists linked temperature to mortality rates for different age groups in 854 cities across Europe. They combined these with three climate scenarios that map possible changes in population structure and temperature over the century.
Google has open-sourced PebbleOS, with the original founder, Eric Migicovsky, starting a company to continue where he left off in 2016. "This is part of an effort from Google to help and support the volunteers who have come together to maintain functionality for Pebble watches after the original company ceased operations in 2016," said Google in a blog post. The Verge reports: The company -- which can't be named Pebble because Google still owns that -- doesn't have a name yet. For now, Migicovsky is hosting a waitlist and news signup at a website called RePebble. Later this year, once the company has a name and access to all that Pebble software, the plan is to start shipping new wearables that look, feel, and work like the Pebbles of old. The reason, Migicovsky tells me, is simple. "I've tried literally everything else," he says, "and nothing else comes close." Sure, he may just have a very specific set of requirements -- lots of people are clearly happy with what Apple, Garmin, Google, and others are making. But it's true that there's been nothing like Pebble since Pebble. "For the things I want out of it, like a good e-paper screen, long battery life, good and simple user experience, hackable, there's just nothing."
The core of Pebble, he says, is a few things. A Pebble should be quirky and fun and should feel like a gadget in an important way. It shows notifications, lets you control your music with buttons, lasts a long time, and doesn't try to do too much. It sounds like Migicovsky might have Pebble-y ambitions beyond smartwatches, but he appears to be starting with smartwatches. If that sounds like the old Pebble and not much else, that's precisely the point. [...] Migicovsky also hopes to be part of a broader open-source community around Pebble OS. The Pebble diehards still exist: a group of developers at Rebble have worked to keep many of the platform's apps alive, for instance, along with the Cobble app for connecting to phones, and the Pebble subreddit is surprisingly active for a product that hasn't been updated since the Obama administration. Migicovsky says he plans to open-source whatever his new company builds and hopes lots of other folks will build stuff, too.
Microsoft has launched an open-source document database platform built on PostgreSQL, partnering with FerretDB as a front-end interface. The solution includes two PostgreSQL extensions: pg_documentdb_core for BSON optimization and pg_documentdb_api for data operations.
FerretDB CEO Peter Farkas said the integration with Microsoft's DocumentDB extension has improved performance twentyfold for certain workloads in FerretDB 2.0. The platform carries no commercial licensing fees or usage restrictions under its MIT license, according to Microsoft.
Maxwell, Pascal and Volta, oh my! But fear not, driver support is still safe
The end of the road is nearing for a range of aging Nvidia graphics cards, as support for several architectures was marked as feature-complete in the latest release of its CUDA runtime this month.…
Nvidia has responded to the market panic over Chinese AI group DeepSeek, arguing that the startup's breakthrough still requires "significant numbers of NVIDIA GPUs" for its operation. The US chipmaker, which saw more than $600 billion wiped from its market value on Monday, characterized DeepSeek's advancement as "excellent" but asserted that the technology remains dependent on its hardware.
"DeepSeek's work illustrates how new models can be created using [test time scaling], leveraging widely-available models and compute that is fully export control compliant," Nvidia said in a statement Monday. However, it stressed that "inference requires significant numbers of NVIDIA GPUs and high-performance networking." The statement came after DeepSeek's release of an AI model that reportedly achieves performance comparable to those from US tech giants while using fewer chips, sparking the biggest one-day drop in Nvidia's history and sending shockwaves through global tech stocks.
Nvidia sought to frame DeepSeek's breakthrough within existing technical frameworks, citing it as "a perfect example of Test Time Scaling" and noting that traditional scaling approaches in AI development - pre-training and post-training - "continue" alongside this new method. The company's attempt to calm market fears follows warnings from analysts about potential threats to US dominance in AI technology. Goldman Sachs earlier warned of possible "spillover effects" from any setbacks in the tech sector to the broader market. The shares stabilized somewhat in afternoon trading but remained on track for their worst session since March 2020, when pandemic fears roiled markets.
Chinese AI startup DeepSeek has launched Janus Pro, a new family of open-source multimodal models that it claims outperform OpenAI's DALL-E 3 and Stable Diffusion on key benchmarks. The models, ranging from 1 billion to 7 billion parameters, are available on Hugging Face under an MIT license for commercial use.
The largest model, Janus Pro 7B, surpasses DALL-E 3 and other image generators on GenEval and DPG-Bench tests, despite being limited to 384 x 384 pixel images.
Meta's AI chatbot will now use personal data from users' Facebook and Instagram accounts for personalized responses in the United States and Canada, the company said in a blog post. The upgraded Meta AI can remember user preferences from previous conversations across Facebook, Messenger, and WhatsApp, such as dietary choices and interests. CEO Mark Zuckerberg said the feature helps create personalized content like bedtime stories based on his children's interests. Users cannot opt out of the data-sharing feature, a Meta spokesperson told TechCrunch.
VC Summer units 2 and 3, abandoned in 2017, are looking for a buyer; owners say tech industry needs are a perfect fit
Abandoned in 2017, a pair of incomplete South Carolina nuclear reactors may get a new lease on life due to the growing need to power AI datacenters.…
We're not in Kansas anymore
Microsoft has launched a document database platform constructed on a relational PostgreSQL back end.…
Vice President JD Vance said Saturday that "we believe fundamentally that big tech does have too much power," despite the prominent positioning of tech CEOs at President Trump's inauguration earlier this month. From a report: "They can either respect America's constitutional rights, they can stop engaging in censorship, and if they don't, you can be absolutely sure that Donald Trump's leadership is not going to look too kindly on them," Vance said on "Face the Nation with Margaret Brennan."
The comments came in response to the unusual attendance of a slate of tech CEOs at Mr. Trump's inauguration, including Meta's Mark Zuckerberg, Amazon's Jeff Bezos, Tesla's Elon Musk, Apple's Tim Cook, and Google's Sundar Pichai. The tech titans, some of whom are among the richest men in the world and directed donations from their companies to Mr. Trump's inauguration, were seated in some of the most highly sought after seats in the Capitol Rotunda.
Vance noted that the tech CEOs "didn't have as good of seating as my mom and a lot of other people who were there to support us." In an August interview on "Face the Nation", the vice president outlined his thinking on big tech, saying that companies like Google are too powerful and censor American information, while possessing a "monopoly over free speech" that he argued ought to be broken up.
Chinese AI startup grapples with consequences of sudden popularity
China's DeepSeek, which shook up American AI makers with the debut of its V3 and reasoning-capable R1 LLM families, has limited new signups to its web-based interface to its models due to what's said to be an ongoing cyberattack.…
Latest trope is tricky enough to fool even the technical crowd… almost
Google says it's now hardening defenses against a sophisticated account takeover scam documented by a programmer last week.…
Meta has set up four war rooms to analyze DeepSeek's technology, including two focusing on how High-Flyer reduced training costs, and one on what data High-Flyer may have used, The Information's Kalley Huang and Stephanie Palazzolo report. China's DeepSeek is an open-source large language model that its maker claims rivals offerings from OpenAI's ChatGPT and Meta Platforms, while using a much smaller budget.