news aggregator
Big Tech is Moving Data Through the Gulf Using Fiber-Optic Cables Alongside Iraq's Oil Pipelines
Major American cloud companies with data centers in the Persian Gulf "are channeling data out of the war zone through fiber-optic cables that an Iraqi telecom has strung alongside crude-oil pipelines," reports RestofWorld.org:
The data centers serve customers in more than 190 countries, processing transactions, storing files, and running applications for businesses and individuals from Latin America to South Asia. When Iranian drones struck Amazon's facilities in the United Arab Emirates and Bahrain on March 1, the effects spread across the region. Apps of major banks in the UAE, including Abu Dhabi Commercial Bank, stopped working. Payment and delivery platforms went offline. Snowflake, a U.S. enterprise software company used by thousands of businesses globally, reported Middle East service disruptions tied directly to the Amazon Web Services outage. Amazon told its customers to migrate their workloads out of the Middle East...
[Data from] banking, payment, and enterprise platforms normally travels to Europe through cables running under the Red Sea and the Strait of Hormuz, then connects onward to users across the world. The war has put those cables at risk. The overland route through Iraq is meant to serve as a backup if the sea cables are disabled... [Martin Frank, strategic adviser for IQ Networks, the company that built the network, told Rest of World this overland route is already carrying live traffic.] The company, based in Iraq's Kurdistan region, runs fiber from the southern tip of Iraq to the Turkish border. It is now extending the network through gas-pipeline corridors across Turkey to the European border, with the first link expected early next year, Frank said. When that extension is complete, cloud providers will — for the first time — have the option of an unbroken land-based fiber path from the Gulf into the European network, connecting onward to Frankfurt, Amsterdam, London, and Marseille, from where their data connects back to U.S. users.
The advantage of this alternative route is that oil and gas pipelines come with their own security perimeters, access roads, and maintenance corridors already built around them, allowing a telecom company to lay fiber without digging new trenches through difficult terrain. Iraq's route avoided the fate of earlier overland routes that collapsed, thanks to a sustained period of stability and because existing pipeline infrastructure provided ready-made corridors for laying fiber, Doug Madory, director of internet analysis at network intelligence firm Kentik, told Rest of World... IQ Networks' route, called the Silk Route Transit, has been running since November 2023. The network currently carries enough data to stream about 400,000 high-definition videos simultaneously, Frank said.
The land route is faster. Data traveling through submarine cables from the Gulf to Europe takes about 150 milliseconds. The Iraqi terrestrial route cuts that to roughly 70 milliseconds — a difference that matters for video calls, financial transactions, and applications that run on artificial intelligence, according to IQ Networks.
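A quick sanity check on those figures: propagation delay in fiber is roughly fixed per kilometer, so round-trip time scales with route length. Here is a minimal back-of-the-envelope sketch in Python; the two route lengths are illustrative assumptions chosen to reproduce the article's numbers, not surveyed cable distances, and real paths add switching and regeneration delay on top of pure propagation.

FIBER_KM_PER_MS = 200.0  # light in fiber covers ~200 km/ms (c / ~1.47 refractive index)

def rtt_ms(route_km: float) -> float:
    """Round-trip propagation delay over a fiber path of route_km kilometers."""
    return 2 * route_km / FIBER_KM_PER_MS

routes = {
    "subsea, Gulf to Europe via Red Sea (assumed length)": 15_000,
    "overland, Gulf to Europe via Iraq/Turkey (assumed length)": 7_000,
}
for name, km in routes.items():
    print(f"{name}: {km:,} km -> ~{rtt_ms(km):.0f} ms round trip")

The takeaway is simply that a materially shorter path buys a materially lower latency floor before any equipment overhead is counted.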
Read more of this story at Slashdot.
Categories: Linux fréttir
Challenging UPS and FedEx, Amazon Opens Its Shipping Network to All Businesses
This week Amazon opened up its parcel shipping, fulfillment, and distribution "to businesses of all types and sizes." Any business can now ship, store, and deliver "using the same supply chain that supports Amazon," according to Monday's announcement of "Amazon Supply Chain Services."
The move sent shares of UPS and FedEx "tumbling" Monday, writes GeekWire. And though both stocks bounced back as the week went on, GeekWire sees this as the latest example of Amazon "turning its internal capabilities into products and services for sale..."
"Amazon had already surpassed both carriers to become the nation's largest parcel shipper by volume, according to parcel-analytics firm ShipMatrix."
Initial customers include Procter & Gamble, which is using Amazon's freight network to transport raw materials; 3M, which is using it to move products to distribution centers; Lands' End, which is fulfilling orders across sales channels from Amazon's warehouses; and American Eagle Outfitters, which is using Amazon's parcel service for last-mile delivery. The service can fulfill orders placed through platforms that compete with Amazon's own marketplace, including Walmart, Shopify, TikTok, and others... Peter Larsen, vice president of Amazon Supply Chain Services, compared the launch to the origins of Amazon's cloud business...
In addition to putting Amazon in competition with existing players in the logistics industry, the move also raises questions about data privacy. Amazon has faced accusations of using nonpublic seller data to compete against merchants on its marketplace, which it has denied. Larsen told the Wall Street Journal that the company prohibits using supply chain customer data for its own marketplace decisions, noting that hundreds of thousands of Amazon sellers already trust the company to fulfill orders placed on rival platforms.
The article notes that in his annual shareholder letter Amazon's CEO "said the company is also exploring selling its custom AI chips and robotics to outside customers."
Read more of this story at Slashdot.
Categories: Linux fréttir
GM Secretly Sold California Drivers' Data, Agrees to Pay $12.75M In Privacy Settlement
"General Motors sold the data of California drivers without their knowledge or consent," says California's attorney general, "and despite numerous statements reassuring drivers that it would not do so."
In 2024, The New York Times "reported that automakers including GM were sharing information about their customers' driving behavior with insurance companies," remembers TechCrunch, "and that some customers were concerned that their insurance rates had gone up as a result."
Now General Motors "has reached a privacy-related settlement with a group of law enforcement agencies led by California Attorney General Rob Bonta..."
The settlement announcement from Bonta's office similarly alleges that GM sold "the names, contact information, geolocation data, and driving behavior data of hundreds of thousands of Californians" to Verisk Analytics and LexisNexis Risk Solutions, which are both data brokers. Bonta's office further alleges that this data was collected through GM's OnStar program, and that the company made roughly $20 million from data sales.
However, Bonta's office also said the data did not lead to increased insurance prices in California, "likely because under California's insurance laws, insurers are prohibited from using driving data to set insurance rates."
As part of the settlement, GM has agreed to pay $12.75 million in civil penalties and to stop selling driving data to any consumer reporting agencies for five years, Bonta's office said. GM has also agreed to delete any driver data that it still retains within 180 days (unless it obtains consent from customers), and to request that Lexis and Verisk delete that data.
"This trove of information included precise and personal location data that could identify the everyday habits and movements of Californians," according to the attorney general's announcement. The settlement "requires General Motors to abandon these illegal practices, and underscores the importance of the data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose."
"Modern cars are rolling data collection machines," said San Francisco District Attorney Brooke Jenkins. "Californians must have confidence that they know what data is being collected, how it is being used, and what their opt-out rights are... This case sends a strong message that law enforcement will take action when California privacy laws are not scrupulously followed."
Read more of this story at Slashdot.
Categories: Linux fréttir
Amazon Relents, Lets its Programmers Use OpenAI's Codex and Anthropic's Claude
An anonymous reader shared this report from Futurism:
In November, Amazon leaders sent an internal memo to employees, pushing them to use its in-house code generating tool, Kiro, over third-party alternatives from competitors. "While we continue to support existing tools in use today, we do not plan to support additional third party, AI development tools," the memo read, as quoted by Reuters at the time. "As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them."
It was an unusual development, considering the tens of billions of dollars the e-commerce giant has invested in its competitors in the space, including Anthropic and OpenAI... Half a year later, Amazon is singing a dramatically different tune. As Business Insider reports, Amazon is officially throwing in the towel, succumbing to growing calls among employees for access to OpenAI's Codex and Anthropic's Claude... Given the unfortunate optics of opening the floodgates for Codex and Claude Code, an Amazon spokesperson told the publication in a statement that teams are still "primarily using" Kiro, claiming that 83 percent of engineers at the company are leaning on it.
Read more of this story at Slashdot.
Categories: Linux fréttir
Rocket Lab Reports Growing Demand for Commercial Space Products. Stock Surges 34%
For just the first three months of 2026, Rocket Lab's launch business brought in $63.7 million in revenue, reports CNBC — plus another $136.7 million from its space systems business. Besides beating Wall Street's expectations, Rocket Lab also announced that its backlog has more than doubled from a year ago to $2.2 billion, and that it's buying space robotics company Motiv Space Systems.
Friday its stock price shot up 34% in one day...
Rocket Lab's stock has more than quadrupled over the past year, benefiting from skyrocketing demand for businesses tied to the space economy ahead of SpaceX's hotly anticipated IPO later this year. Demand for space systems and satellites is also escalating as President Donald Trump pursues his ambitious Golden Dome missile defense project and NASA's crewed Artemis missions rev up.
Rocket Lab said Thursday that it signed its largest contract ever with a confidential customer for its Neutron and Electron rockets through 2029, weeks after landing a $190 million deal for 20 hypersonic test flights... "The demand signal is clear," CEO Peter Beck said on an earnings call with analysts, calling the pace of new product releases from the company this year "relentless..." Rocket Lab's good news lifted other space companies. Firefly Aerospace and Intuitive Machines both jumped more than 20%, while Redwire gained 19%. Voyager Technologies rose 14%.
"The company anticipates revenue between $225 million and $240 million during the second quarter."
Read more of this story at Slashdot.
Categories: Linux fréttir
Unemployment Ticked Up in America's IT Sector
IT sector unemployment "increased to 3.8% in April from 3.6% in March," reports the Wall Street Journal.
But they add that the increase reflects "an ongoing uncertainty in tech as AI continues to play havoc with hiring. That's according to analysis from consulting firm Janco Associates, which bases its findings on data from the U.S. Labor Department."
On Friday, the department said the economy added 115,000 jobs, buoyed by gains in industries including retail, transportation and warehousing, and healthcare. The unemployment rate was unchanged at 4.3%. But the information sector lost 13,000 jobs in April.
While it's still too early to say exactly how AI is affecting employment overall, some businesses, especially in the tech industry, have said it's part of the reason they're cutting staff. In April, Meta Platforms said it would lay off 10% of its staff, or roughly 8,000 people, as it seeks to streamline operations and pay for its own massive investments in AI. Nike will reduce its workforce by roughly 1,400 workers, or about 2%, mostly in its tech department, as it simplifies global operations. And Snap is planning to eliminate 16% of its workforce, or about 1,000 positions, as it aims to boost efficiency. In other areas of IT, which includes telecommunications and data-processing, employment is now down 11%, or 342,000 jobs, from its most recent peak in November 2022.
But there's not just AI to blame. Inflation and economic uncertainty linked to the Iran conflict are giving some chief executives and tech leaders reason to pull back or pause their IT hiring, said Janco Chief Executive Victor Janulaitis.
The article even notes that postings for software developer jobs "are up 15% year-over-year on job-search platform Indeed, according to Hannah Calhoon, its vice president of AI". But employers do seem to be looking for experienced developers, which could pose a problem for recent college graduates.
Read more of this story at Slashdot.
Categories: Linux fréttir
Memory godboxes could offer relief from the RAMpocalypse
In modern datacenters, storage can live anywhere — local to the machine, remotely accessed over the network, and/or shared between systems. The next generation of servers will treat system memory in much the same way. Systems will still have some local DDR5, but the bulk of it will be remotely accessed from what some have taken to calling the memory godbox. The ongoing DRAM shortage has created a perfect storm for the proliferation of these appliances, which not only allow memory to be pooled, but also let data stored in that memory be shared by multiple machines simultaneously. In effect, memory becomes a fungible resource. More importantly, your next round of servers will probably support the tech, if they don't already.

CXL finally has its moment to shine

The technology at the heart of these memory godboxes isn’t new. Compute Express Link (CXL) has been slowly gaining traction since its introduction seven years ago. As a quick refresher, CXL defines a common, cache-coherent interface for connecting CPUs, memory, accelerators, and other peripherals. The technology comes in three flavors: CXL.mem, CXL.cache, and CXL.io, which, as a whole, have implications for disaggregated compute. Imagine a rack with a CPU node, GPU node, memory node, and storage node, which can talk to one another completely independently. That's the core idea behind CXL.

CXL piggybacks off the PCIe standard, which means in theory it should be broadly compatible, but, up to this point, it's primarily been used with memory devices. The 1.0 spec opened the door to memory expansion modules, which allow you to add more memory by slotting them into a CXL-compatible PCIe slot. To the operating system — assuming you’re running Linux, that is — the extra memory is largely transparent, showing up as if it were attached to another CPU socket, just one without any additional compute.

The 2.0 spec, which showed up in 2020, added basic support for switching, which meant memory could be pooled and then allocated to any number of connected systems. AMD and Intel’s current crop of Epycs and Xeons already support these appliances. But while the memory can be partitioned and reallocated to different machines as needed, two machines can’t work on the same data simultaneously. Unless you were memory-constrained, the added complexity of CXL 2.0 didn’t offer much benefit over simply using higher-capacity DIMMs in the first place. At least, not until memory prices went through the roof.

Where things really get interesting is when the 3.0 spec arrives in AMD and Intel’s next generation of Epycs and Xeons. In fact, from what we understand, Amazon’s Graviton5 CPUs we looked at in December already support the spec. CXL 3.0 introduces two key capabilities that make it particularly interesting for memory appliances. The first is support for larger topologies: multiple CXL switches can be stitched together into a fabric. The second is support for memory sharing: rather than partitioning memory into slices only accessible to one machine at a time, memory can be shared between machines. In theory this could allow two machines running the same set of workloads to make do with memory closer to that of one. It’s a bit like deduplication for memory. In fact, we already do this in virtualized environments like KVM, but it now works across machines. There are security and performance implications to all of this.
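An aside on the point above that CXL memory shows up like a socket with no compute attached: on Linux you can spot such CPU-less NUMA nodes from sysfs. A minimal sketch follows; the sysfs paths are the standard layout, but whether a CPU-less node is actually CXL-attached memory (rather than, say, a persistent-memory region) depends on the platform.

from pathlib import Path

NODE_ROOT = Path("/sys/devices/system/node")  # standard sysfs location for NUMA nodes

# Sort numerically so node10 doesn't land before node2
for node in sorted(NODE_ROOT.glob("node[0-9]*"), key=lambda p: int(p.name[4:])):
    cpulist = (node / "cpulist").read_text().strip()
    # The first line of the per-node meminfo file is "Node N MemTotal: ... kB"
    mem_total = (node / "meminfo").read_text().splitlines()[0].strip()
    label = f"CPUs {cpulist}" if cpulist else "CPU-less (candidate CXL/expansion memory)"
    print(f"{node.name}: {label} | {mem_total}")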
On the security front, thankfully, CXL 3.1 and later introduce confidential computing capabilities into the spec, allowing for isolation where necessary. On the performance end of things, CXL 3.0 moves to PCIe 6.0 as a baseline, which provides 16 GB/s of bidirectional bandwidth per lane (8 GB/s in each direction). Assuming 64 lanes of CXL per CPU, that works out to an additional 512 GB/s of bandwidth in each direction. So memory bandwidth shouldn’t be too much of an issue for most applications.

Latency, on the other hand, is a different story. CXL-attached memory is going to add some latency. However, as we’ve previously discussed, the latency isn’t as bad as you’re probably thinking — on the order of a NUMA hop, or about 170 to 250 nanoseconds of round-trip latency. Obviously, the farther the memory appliance is from the host CPU, the worse the latency is going to be.

Late last year, the CXL consortium ratified the 4.0 spec, which among other things doubles the bandwidth from 16 GB/s per lane to 32 GB/s by re-basing on PCIe 7.0. However, it'll be a while before we see appliances based on the spec.

Where’s my memory godbox?

There are several companies developing hardware for these kinds of networked memory appliances. Panmnesia’s CXL 3.2-compatible PanSwitch is one of the most sophisticated examples. The switch features 256 lanes of connectivity for CXL memory modules, devices, or CPUs to connect, pool, or share resources.

If you’re okay with memory pooling and don’t need the niceties of CXL 3.0, then there are already several memory appliances available that are compatible with the latest generation of Xeon 6 and Epyc Turin processors. Liqid’s composable memory platform, for example, can provide a pool of up to 100 TB of DDR5 to as many as 32 hosts. Meanwhile, UnifabriX Max systems provide CXL 1.1 or 2.0 connectivity to 16 or more systems, with support for CXL 3.2 already in the works. We suspect that as more CXL 3.0-compatible CPUs and GPUs hit the market, more of these memory godboxes will appear.

AI eats everything

Don’t get too excited. While network-attached memory has the potential to reduce an enterprise's infrastructure spend, those same qualities make it attractive for the very thing driving the memory shortage in the first place. AI adoption has driven demand for DRAM off the charts. In addition to the HBM used by GPUs, DDR5 is being used for key-value cache offload during inference. These KV caches store model state and can chew up significant amounts of memory — often more than the model itself — in multi-tenant serving scenarios. Rather than discard these caches and recompute them when the model state is restored, it’s more efficient to offload them to system memory and eventually flash storage. The problem with using flash storage is that it has a finite write endurance. After a while it wears out. Instead, CXL memory vendors are positioning the tech as a more resilient alternative. That’s bad news for enterprises looking to these memory godboxes for salvation from the RAMpocalypse. ®
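To put numbers behind the claim above that KV caches can outgrow the model itself, here's a rough sizing sketch. The formula (two tensors per layer, times KV heads, head dimension, and bytes per element, per token) is the standard transformer KV-cache calculation; the model shape and tenant count below are illustrative assumptions, not measurements from any particular deployment.

def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int,
                       dtype_bytes: int = 2) -> int:
    # 2 = one K tensor plus one V tensor per layer
    return 2 * layers * kv_heads * head_dim * dtype_bytes

# Illustrative 70B-class shape with grouped-query attention (assumed, not a real SKU)
per_token = kv_bytes_per_token(layers=80, kv_heads=8, head_dim=128)
print(f"KV cache per token: {per_token / 1024:.0f} KiB")   # ~320 KiB

context = 128_000                       # tokens held live per sequence
per_seq_gb = per_token * context / 1e9
print(f"Per 128k-token sequence: {per_seq_gb:.0f} GB")     # ~42 GB

weights_gb = 70e9 * 2 / 1e9             # 70B parameters at fp16
tenants = 8                             # concurrent long-context sequences
print(f"Weights: {weights_gb:.0f} GB vs {tenants} tenants' KV: "
      f"{tenants * per_seq_gb:.0f} GB")                    # 140 GB vs ~336 GB

Under those assumptions, eight concurrent long-context tenants hold more than twice the model's weight footprint in KV cache alone, which is exactly the kind of cold-but-reusable state that pooled memory is pitched at.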
Categories: Linux fréttir
Both Fedora and Ubuntu will get AI support – soon
Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances. An epic and still-growing thread in the Fedora forums states one of the goals for the next version: the Fedora AI Developer Desktop Objective. It is causing some discontent, and at least one Fedora contributor, SUSE’s Fernando Mancera, has resigned. Fedora Project Lead Jef Spaleta, who took over the role from Matthew Miller a year ago, remains resolute, saying:

I have zero evidence in front of me that users are being driven away from Fedora because of AI.

As far as Red Hat’s community distribution goes, while this may be controversial, this should not be a big shock. In October last year, The Register reported that the Fedora council approved a policy allowing AI-assisted contributions, and anyone following the IBM subsidiary’s movements will already know that last June’s RHEL 10 release includes access to an LLM-based online helper chatbot: we tried it out when the product was released. We also reported on the managers of Red Hat’s Global Engineering department being notably keen on the use of AI just last month.

Since Red Hat has other offerings for slow-moving stable server OSes – and arguably because Debian, Ubuntu, and their many derivatives have the stable-desktop-distro space nicely covered already – Fedora has a strong focus on providing a distro for developers, and Spaleta’s announcement makes this clear. The goal is:

to build a thriving community around AI technologies by focusing on three key areas: equipping developers with the necessary platforms, libraries, and frameworks; ensuring users experience painless deployment and usage of AI applications; and establishing a space to showcase the work being done on Fedora, connecting developers with a wider audience.

He also spells out what it doesn’t want to do:

Non-goals: The system image will not be pre-configured with applications that inspect or monitor how users interact with the system or otherwise place user privacy at risk. Tools and applications included in the AI Desktop will not be pre-configured to connect to remote AI services. AI tools will not be added to Fedora’s existing system images, Editions, etc, by the AI Desktop initiative.

In other words, tools for developers, not for end-users, with a strong emphasis on models that run locally, and which preserve the user's privacy. It’s also worth pointing out that Fedora has had an AI-Assisted Contributions Policy in place for six months, and earlier this month, Fedora community architect Justin Wheeler explained in some detail Why the Fedora AI-Assisted Contributions Policy Matters for Open Source.

Our impression is that the Fedora team feels that it needs to keep Fedora relevant for growing interest in LLM-bot assisted tooling, and that it can address concerns from hardcore FOSS types by ensuring that this means local models, built according to FOSS-respecting terms, deployed in privacy-respecting ways.

Fedora is not alone in this, though. There are also ructions across the border in Ubuntuland. Right after the release of Canonical’s new LTS version, Ubuntu 26.04 Resolute Raccoon, Canonical’s veep of engineering Jon Seager laid out the future of AI in Ubuntu. We interviewed Seager last year during the 25.10 Ubuntu Summit, and back in January this year, he published his views on Developing with AI on Ubuntu. Now the plans are firming up.
Like Fedora, there’s a strong focus on local models and confidential, privacy-first deployments – and ensuring that the OS and the tools support GPU acceleration from the big hardware players in that space. However, unlike Red Hat, Canonical isn’t pushing its developers towards these tools. In what we see as a veiled jab, Seager’s announcement says:

We are not setting shallow metrics on token usage, or percentages of code written with AI, but rather incentivising engineers to experiment and understand where AI tools add value.

Initially, the focus is on users instead: AI features in Ubuntu will come in two forms: first as a means of enhancing existing OS functionality with AI models in the background, and latterly in the form of “AI native” features and workflows for those who want them.

As Fernando Mancera’s exit shows, an emphasis on what could be termed FOSS-friendly AI – open models, privacy-centric, local execution and so on – is not enough to placate those who are really strongly averse to these tools. The Reg FOSS desk counts himself firmly in this camp. Back in January, we reported on the rise, fall, and resurrection of OpenSlopware, a list of FOSS projects which contain LLM-generated code, integrate LLMs, or even show the traces of the use of LLM agents. Soon, it seems inevitable that Fedora and Ubuntu will both feature here. Resistance, though, is also rising. Stop Slopware tries to help explain why and how to avoid it, and there’s also The No-AI Software Directory for projects that have explicit LLM-free policies, whether they’re FOSS or not.

Bootnote

It amuses us to note that both the Ubuntu and Fedora forums use the same software, called Discourse. (It’s a sort of web forum as designed by people who have heard of mailing lists, but don’t know how to use them and find the idea of bottom-posting confusing.) Some could interpret this shared adoption as a sign of underlying similarities between the two projects. ®
Categories: Linux fréttir
The EU Considers Restricting Use of US Cloud Platforms for Sensitive Government Data
CNBC reports:
The European Union is considering rules that would restrict its member governments' use of U.S. cloud providers to handle sensitive data, sources familiar with the talks told CNBC.
The European Commission — the EU's executive branch — is expected to present its "Tech Sovereignty Package" on May 27, which will include a range of measures aimed at bolstering the bloc's strategic autonomy in key digital areas. As part of preparations for that package, discussions are taking place within the Commission around limiting the exposure of sensitive public-sector data to cloud platforms provided by companies outside of the EU, two Commission officials, who asked to remain anonymous as they weren't authorized to discuss private talks, told CNBC... "The core idea is defining sectors that have to be hosted on European cloud capacity," one of the officials said. They added that companies providing cloud solutions from third countries, including the U.S., could be impacted. Proposals would not prohibit overseas companies' cloud platforms from government contracts entirely, but limit their use in processing sensitive data at public sector organizations, depending on the level of sensitivity, they added. The officials said that talks are ongoing and yet to be finalized...
The officials told CNBC there are discussions around proposing that financial, judicial and health data processed by governments and public-sector organizations require high levels of sovereign cloud infrastructure.
Read more of this story at Slashdot.
Categories: Linux fréttir
HP stuffed a PC into a keyboard. We took it for a spin
The early history of personal computers is stacked with systems such as the Apple II and the Commodore 64 that had the components living inside a keyboard. But as technology evolved, the keyboard became a peripheral and the PC itself was either in a separate box or the whole system was a laptop. Now, HP has a new spin on this decades-old idea. It embeds a full-fledged AI PC inside a 101-key keyboard you can carry with you from the office to home.

Unlike ‘80s microcomputers or hobbyist-oriented products like the Raspberry Pi 500, the EliteBoard G1a is squarely targeted at business. The system is part of HP’s commercial lineup, alongside its EliteBook laptops, and, for better or worse, it comes with HP Wolf Security preinstalled. The company clearly hopes organizations will buy these in bulk. But to benefit from it, you really have to prefer a mobile keyboard to a traditional laptop, all money aside.

Who’s it for?

The EliteBoard G1a is trying to create a new niche. When we talked with product managers at HP, they suggested IT departments would buy these computers for two types of workers. The first group is so-called "dual deskers" - knowledge workers who have a desk with a monitor at work and another at home. The second group includes deep-pocketed call centers or environments where desk space is at a premium.

From time immemorial, dual-deskers have carried laptops and closed their lids when they docked to a monitor at work. With the EliteBoard, they could simply schlep the keyboard, which weighs a mere 1.49 pounds – about half the weight of a lightweight laptop. To make this situation work in companies with managed systems, we have to assume that either the IT department would give out monitors to use at home or offer some reason (a subsidy? a mandate?) for employees to buy their own for home.

The EliteBoard connects to monitors using its USB4 port, so its ideal monitor is one that has Thunderbolt or USB video connectivity built in. Less-expensive and older monitors don’t have this type of connectivity, but select configs of the EliteBoard come with an optional USB-to-HDMI adapter that you can use with other monitors, and it has a USB pass-through for power. That said, HP demonstrated the EliteBoard at numerous press events by showing how much desk space it saves by using a single USB cable to get power, video out, and connectivity to peripherals via the monitor. So if companies want employees to be able to take advantage of this scenario at home, that means shelling out another few hundred bucks for a modern monitor, or making employees do it.

Today, companies with limited desk space for a call center or another cramped work area could just buy a tiny desktop to sit behind the monitor or next to it. However, building all of the PC’s guts into the keyboard makes a lot of sense for space savers, because a keyboard is something every PC needs and a desktop chassis is not. If a company wanted to, it could give each employee their own EliteBoard, have them plug it into a monitor during work time and then have them stick it in a drawer when they go off shift and someone else comes on.

The problem for call centers is that the HP EliteBoard G1a is much more powerful and much more expensive than what they need. At press time, the G1a was priced at $1,499 for the lowest end config. And most companies probably don’t need employees to each have their own PC that they lock away after they punch out.
“The call center angle is probably the stronger pitch, but those buyers are shopping entry-to-mid-market. They want something cheaper and simpler than a mini desktop, not a Copilot+ PC with up to 64GB of RAM,” said Kieren Jessop, a research manager with analyst firm Omdia. “HP has built an impressive piece of engineering in search of a problem that most enterprises have already solved with a laptop — or will solve with a thin client.”

Configurations

HP makes the EliteBoard G1a in a variety of configurations that vary by market. Companies can get it with various AMD Ryzen CPUs, up to 64GB of RAM and an SSD up to 2TB in capacity. It comes with either a detachable or embedded cord, and optionally with a 32 WHr battery that promises up to 3.5 hours of endurance. Why would you need a battery on a product that demands to be used at a desk and plugged in? The most likely reason is to let the keyboard go into sleep mode when it’s in your bag. Employees could also hook the EliteBoard G1a up to a portable monitor and use it unplugged that way, but then why not just buy them a laptop?

At press time, prices ranged from $1,499 to $3,423 in the US. The lowest-end config has a Ryzen AI 5 Pro 340, 16GB of RAM, an integrated cable, and a 256GB SSD. Fifty bucks more will get you the same configuration with a 512GB SSD, as per HP.com. The highest-end config listed comes with a Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD, and sells for only $1,999 at B&H but a whopping $3,423 at HP.com. Our review config, which sports 64GB of RAM, a Ryzen AI 7 Pro 350 CPU, and a 2TB SSD, has not been listed for sale in the US, and HP didn't answer when we asked how much it would cost. However, we’d assume that it would cost a lot more than $1,999.

Price vs a Laptop

If all you do is dock your PC at home and at work, you might think, “why pay for a laptop when I don’t need a built-in screen?” But it’s hard to make that argument when the laptop is actually less expensive. Right now, you can get an HP EliteBook 6 G1aN with the same AMD Ryzen AI 7 350 CPU, along with 24GB of RAM and a 512GB SSD, for just $1,299 – that's actually less than the cheapest EliteBoard. A custom-configured HP EliteBook 8 G1a with the Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD is just $1,799.

If you’re comparing the total cost of ownership versus a laptop, also consider the price of a monitor if your users don’t already have one. While you could use an adapter, the ideal use case involves a USB-C monitor that transmits data and power over a single wire. The cheapest HP-branded USB-C monitor I could find at press time was the HP E27k 4K monitor, which was selling for $504. However, I saw a Dell-branded USB-C monitor, the S2725DC, on sale for just $236 at Amazon. If you’re an IT department and you’re kitting out someone for home and office use, you might need to buy them two monitors.

Design

At 14.1 x 4.7 x 0.7 inches, the EliteBoard G1a is the size of a typical, full-size keyboard complete with numpad. It’s a boring but office-friendly dark gray color with a very thin bezel around the keys. At first glance, there aren’t many ways to know that this is more than just a keyboard. There’s a power button / fingerprint reader that’s located in the upper right corner of the keyboard, though you might easily mistake it for just another key, until you press it and see the blue light turn on. Turn the keyboard around, and on the back lip you’ll notice a thin vent for airflow.
This computer definitely has a fan and you can hear it quite prominently at times. There are also two USB-C ports, a USB4 40 Gbps port and a 10 Gbps port, unless you have the embedded cable, in which case you just have the 10 Gbps port. Clearly, the 40 Gbps port is the one you’ll want to use for docking, but you can use the 10 Gbps port to connect the dongle for the included wireless mouse or other peripherals. There’s also a security cable lock slot on the left side. So if you want to chain this to a desk, you can, but we’d argue that defeats the point of the machine.

But how well does it type?

Since this is a computer-in-a-keyboard, the most obvious question we need to answer is “how’s the typing experience?” Pretty decent. On the bright side, the EliteBoard G1a has a generous 2 mm of travel, which is more than you’ll find on most laptops, where even 1.5 mm is deep. The keys feel pretty snappy and are in the same feedback league as those on my Lenovo ThinkPad X1 Carbon, but the ThinkPad’s keys have a more curved shape, which is better than the flat tops on the EliteBoard. If you’re burning the midnight oil, there’s a built-in backlight which you can enable by hitting the F9 key. It has two different brightness settings so you can decide just how much you want it to shine through.

The layout is pretty standard for a full-size keyboard with a numpad. However, I don’t like how small the arrow keys are, and the Pg Up and Pg Dn keys are just tiny. There’s no empty space around these keys, which I use a lot when editing documents, so it’s far too easy to miss them. Even on most laptops, these keys are larger. Another downer is the lack of flip-up feet on its bottom. I like to angle my keyboard up at a 15 to 30 degree incline, but this one is short and flat to the desk. To save my wrists, I always use a gel-filled wrist rest when I type and, without feet to elevate the keyboard, I’m typing down onto the keys because it’s so much lower than the gel pad. This won’t be as much of an issue for folks who don’t use wrist rests.

In short, if you’re used to laptop keyboards or the low-cost keyboards that come with most desktop computers, the EliteBoard G1a will probably seem like a nice step up. However, if you want the best possible typing experience, there’s an entire ecosystem of mechanical keyboards out there with much deeper travel and more feedback. If you’re not a gamer and you want the best possible typing experience, I recommend a mechanical keyboard with either clicky or tactile switches. Unless you go for a low-profile keyboard, you’ll be getting between 3.6 and 4 mm of travel, so you won’t bottom out as easily when typing. I prefer clicky switches like the Kailh Box White (my favorite) or Cherry MX Blue, but those make some noise so, if you like quiet, Cherry MX Brown switches will do the trick.

To see the difference between my daily driver mechanical keyboard, an Akko 3098N with Kailh Box White switches, and the EliteBoard G1a, I performed the 10fastfingers.com typing test on both. On HP’s keyboard, I managed a strong 96 wpm, which is at the lower end of typical for me, with a six percent error rate. On my daily driver, the numbers were a better 101 wpm with a two percent error rate. Your mileage will vary.

Speaker and Microphone

The EliteBoard G1a has both built-in bottom-facing speakers and a microphone array. In our tests, the speaker was more than loud enough and it was clear enough for voice calls, though we wouldn’t recommend listening to music on it for too long.
The drums in AC/DC’s Back in Black sounded a little tinny, though there was a clear separation of sound with the vocals appearing to come from one side while the percussion came from another. The dual-array microphone was also passable, but not good enough for podcasts. When we talked to a coworker using the built-in mic, she said our voice was clearly audible but a little echoey.

In the box and preloaded

Depending on which config you get, your HP EliteBoard G1a may come with a variety of different accessories in the box. All versions come standard with an HP wireless 675M mouse that connects either by Bluetooth or by an included USB-C wireless 5-GHz dongle. It is not a particularly fancy mouse but it has a couple of side buttons and a scroll wheel. I found myself using my Logitech MX Master 3 mouse instead, because it’s ergonomically shaped and highly programmable.

My review unit also came with the optional soft canvas cover sleeve you can use to protect the EliteBoard G1a while you’re carrying it around. I found this add-on to be about as useful as a laptop sleeve. It might offer some protection and padding for when you stick the EliteBoard G1a in an existing backpack, but it’s not going to replace your briefcase or your backpack when you’re commuting.

I also got the optional HDMI multiport hub, which is a must-have if you don’t already have a Thunderbolt or USB4 docking station or a monitor with that kind of connectivity built in. The hub connects to the USB4 40 Gbps port on the EliteBoard and features two USB-C ports (one for power, one for connectivity), an HDMI out cable for connecting to a monitor, an Ethernet port for wired networking, and an HDMI-in port for a second monitor.

There’s an optional, slim 65W USB-C power adapter that’s helpful if you aren’t connecting to a monitor or docking station that supplies power. If you don’t get one in the box, it’s easy enough to find one for $15 to $30 on Amazon. Also, if your EliteBoard does not have an embedded cable — mine did not — you get a braided USB cable in the box. The less-expensive configs of the EliteBoard all have embedded cables, but we recommend getting a model without one because it’s easier to carry around without a cable hanging off of it.

HP does not preload a lot of software onto the EliteBoard but it does come with a three-year subscription to HP Wolf Security, which normally costs $36 a year for individual subscriptions. HP Wolf has a malware/virus scanner, a threat containment feature, a secure browser, OS resiliency (for recovering from corruption and doing a reinstall), and application persistence, which prevents unwanted changes to security software like HP Wolf itself.

Since it has an NPU (neural processing unit) that’s capable of more than 45 trillion operations per second (TOPS), the EliteBoard G1a qualifies as one of Microsoft’s Copilot+ PCs. This means that it has some added local AI features that not every PC gets from Windows 11, including Cocreate image generation in Paint, Windows Studio Effects handled locally for your webcam, translated Live Captions from any audio input, and Recall, a controversial feature that takes screenshots of all your work to help you “remember” what you were doing at any given time. Fortunately, Recall is disabled by default.

Performance

Equipped with an AMD Ryzen AI 7 Pro 350 CPU, 64GB of RAM, and a 2TB SSD, our review configuration of the EliteBoard G1a handled everything I threw at it.
I used the system on and off as my daily driver PC for work for a period of several weeks and it was always smooth and responsive, even as I had dozens of Chrome tabs open and Slack running across two 4K monitors I had connected via a Thunderbolt 3 docking station. I should note that, no matter what I was doing, the fan on the EliteBoard G1a was frequently running and was often quite audible. It’s no louder than most notebooks I’ve tested, but if you’re expecting total quiet, look elsewhere.

My editorial workload is not nearly as demanding as some folks’ day jobs so, to see how the EliteBoard G1a stacks up, I ran it through a series of benchmarks and compared the results to those from two laptops I had access to: a Lenovo Yoga Slim 7x with a Qualcomm Snapdragon X Elite X1E-78-100 CPU, and a Lenovo ThinkPad X1 Carbon with an Intel “Meteor Lake” Core Ultra 7 165U processor.

The Ryzen AI 7 Pro 350 in the EliteBoard debuted in 2025 with 8 cores, 16 threads, and a maximum boost clock of 5 GHz. It features built-in AMD Radeon 860M graphics and a Neural Processing Unit (NPU) that’s capable of achieving 50 TOPS for better local AI. Its DDR5 RAM runs at 5,600 MHz.

Released in 2024, the Snapdragon X Elite X1E-78-100 has 12 cores and threads with a boost clock that goes up to 3.4 GHz, along with an NPU that does 45 TOPS. It’s an Arm processor, so the laptop that runs it uses Windows on Arm. The Yoga Slim 7x laptop that we tested had 16GB of LPDDR5x RAM running at 8,448 MHz.

The oldest of our test group, vintage 2023, the Intel Core Ultra 7 165U has 12 cores and 14 threads, but only two of those cores are performance cores that can boost up to 4.9 GHz, while the others are a mix of efficient cores and low-power efficient cores that boost up to 3.8 and 2.1 GHz respectively. The ThinkPad X1 Carbon we tested with it had 64GB of LPDDR5x RAM running at 6,400 MHz.

In our tests, the EliteBoard G1a always eclipsed the ThinkPad X1 Carbon, which is not a surprise considering its much-older processor. However, the Snapdragon-enabled Yoga Slim 7x outpaced it on some benchmarks.

Primesieve

This test counts the prime numbers under one trillion and returns a result in millions of prime numbers per second. The benchmark is particularly heavy on SIMD instructions like AVX-512 or Arm’s Neon and SVE vector extensions, making it a good proxy for some of the more workstation-centric tests we’ll look at shortly. It runs across both single-thread and multi-thread workloads, with big performance boosts for parallel processing.

Using just a single thread, the EliteBoard edged out the competition with 415 million primes per second (MPS), compared to the Slim 7x’s 352. However, the Slim 7x slightly outperformed it when using multithreading, delivering 2,686 MPS to the EliteBoard’s 2,145. One thing to note is that, while the EliteBoard has more threads, it has fewer actual cores. The X1 Carbon wasn’t even in the same ballpark. This will become a theme across our test suite.

Blender

3D rendering is always a challenge and, to be honest, it’s hard to imagine somebody buying an EliteBoard for this purpose. However, it’s always worth noting what the system can do. We ran Blender, a very popular 3D modeling app, using three scenes: Monster, Junkshop, and Classroom. As you can see, the Slim 7x and its 12-core Snapdragon processor were anywhere from 34 to 75 percent quicker, depending on the content. Still, the EliteBoard turned in respectable scores on something you wouldn’t expect it to do.
Handbrake x265

Video transcoding is another resource-intensive task and one that occurs in many scenarios, including game streaming, video editing, and even video conferencing. To test how the EliteBoard handled video transcoding, we used Handbrake to convert a 4K 60 fps video to 1080p using an x265 encoder at the medium preset with a constant quality of 18. Our results are measured in frames per second (fps). Again, the EliteBoard was far superior to the ThinkPad, but was a good 45 percent behind the Yoga Slim 7x. Still, this is solid performance that’s more than workable.

Llama.cpp

One local AI task you might want to conduct is running an open-source model as a chatbot on your PC rather than sourcing it from the cloud. This will give you more privacy than using OpenAI, Claude, or Copilot on the web and it’s completely free. So we ran the GPT-OSS 20B open weights model using Llama.cpp as our client and timed the number of milliseconds it took to generate the first token. Here we see that the Snapdragon processor and faster RAM on the Yoga Slim 7x gave it a definite advantage, taking 39 percent less time than the EliteBoard to get there. The EliteBoard also generated about half as many tokens per second. However, it beat the pants off the ThinkPad X1 Carbon, getting to the first token more than twice as quickly while generating 30 percent more tokens per second. It’s worth noting that these tests were run on the CPU cores and didn’t harness the chips’ integrated GPUs or NPUs.

Whisper.cpp

One common local AI workload a business person might use is transcription. Let’s say you had an audio file and you wanted to convert it into readable and editable text. You might use a tool based on Whisper, a popular free model from OpenAI. For testing, we used Whisper.cpp, an implementation of Whisper written in C++, with the Whisper Medium EN model transcribing a 10-minute audio clip. Here, the EliteBoard transcribed the audio at 2.4x real-time speed, while the Yoga Slim 7x was faster at 3.4x. Those extra cores are doing a lot of heavy lifting here. That said, if you’re converting 10 minutes of audio in less than five minutes, that’s pretty good (see the sketch at the end of this review for the arithmetic).

LLVM Compile

For those using the EliteBoard for programming, compile times matter. So, we compiled the LLVM toolchain from its source and measured the time. This isn’t a trivial compile job and therefore represents a worst-case scenario for developers considering the EliteBoard. Here it took a modest 19 minutes and 44 seconds, which was more than double the time it took the Yoga Slim 7x. On high-end desktop workstation hardware, this same workload can be completed in under five minutes, so if your day job regularly requires compiling large projects, you might want to spring for something more capable, or perhaps not. “My code is compiling” is a pretty good excuse for taking a 20-minute break.

7-Zip

Compression and decompression are very taxing on a CPU and are very common scenarios we see today. So we fired up 7-Zip and measured its ability to do both tasks in both single-threaded and multi-threaded scenarios. With a single thread, the Slim 7x and the EliteBoard basically tie at compression, while HP’s computer holds the edge in decompression. However, when we move to multi-threaded scenarios, the Snapdragon X Elite’s 12 physical cores easily beat out the AMD Ryzen AI 7 Pro 350’s eight cores and 16 threads.

LibreOffice: ODT to PDF Conversion

We tested how long it takes LibreOffice to convert 50 image-heavy ODT files into PDFs.
This workload is lightly threaded so it favors higher clock speeds over more cores. The results bear this out as the EliteBoard, with its Ryzen AI 7’s higher-performing cores, beat out the Slim 7x by 22 percent. Despite its older processor, the ThinkPad actually manages to tie the Slim 7x in this test.

Repairability

For IT departments that do their own service, the EliteBoard G1a has plenty to offer. Its back surface is held on by just four screws and pops off easily. Underneath, you get full access to the motherboard and a number of easily-removable components, including the DDR5 SODIMM RAM, the M.2 SSD, the WLAN card, the fan, the optional battery, and the speakers. You can even replace the keyboard itself and leave the computer part intact.

Bottom line

The HP EliteBoard G1a delivers strong performance in a unique and compact form factor that saves desk space and reduces the weight you carry back and forth. If you don’t want a laptop but do want a portable computer, this is your best choice. It provides a better typing experience than most laptops and a more space-efficient design than most desktops. However, in the current marketplace, this device does not represent a significant savings over a similarly configured laptop. Depending on what laptop you choose to compare against, you might save a few hundred dollars, but when you add the cost of the monitors you need to pair with it - if you need to purchase those - it’s a wash.

HP has set out to make a unique product with the EliteBoard G1a and it has succeeded in building a very competent and capable computer-in-a-keyboard. If you’re an IT decision maker, you’d buy this device for folks who work out of one or two distinct locations (home and office or multiple offices) and never need to get online from the road or from a conference room. Whether that’s a common scenario in your workplace will determine if this product is right for you or your fleet. ®
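As promised in the Whisper.cpp section above, here is the real-time-factor arithmetic in sketch form; "Nx real-time" just means audio duration divided by wall-clock transcription time, and the clip length and speed factors are the ones quoted in the review.

def transcription_minutes(audio_minutes: float, realtime_factor: float) -> float:
    """Wall-clock time to transcribe a clip at a given real-time factor."""
    return audio_minutes / realtime_factor

CLIP_MIN = 10  # the 10-minute test clip from the review
for machine, rtf in [("EliteBoard G1a", 2.4), ("Yoga Slim 7x", 3.4)]:
    t = transcription_minutes(CLIP_MIN, rtf)
    print(f"{machine}: {rtf}x real-time -> {t:.1f} min for a {CLIP_MIN}-min clip")

At 2.4x, the 10-minute clip lands at roughly 4.2 minutes, which squares with the review's "less than five minutes."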
Categories: Linux fréttir
NYT: 'Meta's Embrace of AI Is Making Its Employees Miserable'
"Meta's embrace of AI is making its employees miserable," reports the New York Times.
And "After Meta said late last month that it would start tracking employees' computer use, hundreds of workers spoke up." (One employee even told Meta's CTO in an internal post, "Your callousness to the concerns of your own employees is concerning."
In an internal post last month, Meta told its U.S. employees that it was making a change that would affect tens of thousands of them. What employees typed into their computer, how they moved their mouse, where they clicked and what they saw on their screen would be tracked, Meta said. The goal, the company said, was to capture employee data so Meta's artificial intelligence models could learn "how people actually complete everyday tasks using computers." Many workers immediately revolted. In online comments, they blasted the tracking as a privacy violation, calling it antisocial and callous... [One engineering manager even asked "How do we opt out?"] "There is no option to opt-out on your corporate laptop," replied Andrew Bosworth, Meta's chief technology officer. Employees reacted by posting more than 100 angry and surprised emoji, according to the messages....
Meta is pushing its 78,000 employees to adopt AI tools and factoring their use of the technology in performance reviews. The company is also tracking employees' computer work to feed and train its AI models. And it is cutting jobs to offset its AI spending, saying last month that it would slash 10% of its workforce. That has led to anger and anxiety as employees await news of whether they are affected by the layoffs, which are slated to be carried out May 20, according to 11 current and former Meta employees. Some said they no longer saw Meta as a place for a long career. Others were looking for new jobs or trying to signal that they wanted to be laid off so they could receive severance pay, the current and former employees said. "It's incredibly demoralizing," an employee who does user research wrote in an internal post, which was reviewed by the Times...
Meta also introduced internal dashboards to track employees' consumption of "tokens," a unit of AI use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many AI agents that others had to introduce agents to find agents, and agents to rate agents, two people said.
Read more of this story at Slashdot.
Categories: Linux fréttir
'Changing of the Guard'? AMD, Intel, and Micron Soar While Nvidia Lags
While Nvidia has dominated the "infrastructure boom" since 2022's launch of ChatGPT and "the generative AI craze," CNBC writes that "This week offered the starkest illustration yet of what Mizuho analyst Jordan Klein said could be a 'changing of the guard in AI.'"
Chipmakers Advanced Micro Devices and Intel notched gains of about 25%, while memory maker Micron jumped more than 37% and fiber-optic cable maker Corning climbed about 18%. All four of those companies have more than doubled in value this year, with Intel leading the way, up well over 200%. Nvidia, meanwhile, is only slightly ahead of the Nasdaq in 2026, gaining 15% for the year, aided by an 8% rally this week. In spreading the wealth to a wider swath of hardware companies, investors are clearly betting that the bull market in AI has long legs and that data centers are going to need a wider array of advanced components for years to come.
Memory has been the biggest theme of late due to a global shortage that's driven up prices and turned Micron, a 47-year-old company tucked in a sleepy corner of the semiconductor market, into one of the hottest trades over the past 12 months. Micron blew past an $800 billion market capitalization for the first time this week, and the stock is now up over 750% in the past year. CEO Sanjay Mehrotra told CNBC in March that key customers are only getting "50% to two-thirds of their requirements" because of supply issues. The memory market is largely dominated by Micron, along with Korea-based Samsung and SK Hynix, which are also both in the midst of historic rallies...
Bank of America estimates the data center CPU market could more than double from $27 billion in 2025 to $60 billion in 2030. AMD's quarterly results this week underscored the emerging trend, as earnings, revenue and guidance sailed past estimates on strong data center growth. The company has long led the CPU charge, and CEO Lisa Su said on the earnings call that AMD now expects 35% growth over the next three to five years in the server CPU market, up from a forecast of 18% growth that the company provided in November.
The article cites two other big movers:
Intel "is in the midst of a revival sparked by a major investment from the U.S. government last year. Intel's stock had its best month on record in April, more than doubling, and has continued notching massive gains, rising 33% in the early days of May."
Nvidia remains the world's most valuable company "and is expected to show revenue growth of 70% this fiscal year," the article points out — adding that companies like Corning are also benefiting from Nvidia partnerships. "Glass maker Corning, which celebrated its 175th anniversary this week, signed a massive deal with Nvidia on Wednesday that involves the development of three new U.S. factories dedicated entirely to optical technologies... likely a major step in Nvidia's move away from copper cables and towards fiber-optic cables as it builds out its rack-scale systems."
Read more of this story at Slashdot.
Categories: Linux fréttir
Open Source Registries Join Linux Foundation Working Group to Address Machine-Generated Traffic
Under the nonprofit Linux Foundation, "a new Sustaining Package Registries Working Group will seek to identify concrete funding, governance, and security practices," reports ZDNet, "to keep code flowing as download counts grow.... Because software builds, continuous integration pipelines, and AI systems hammer registries at machine speed rather than human speed, the sites can't keep up.
"That growth has brought a surge in bot traffic, automated publishing, security reports, and outright abuse, exposing what the working group bluntly calls a 'sustainability gap'."
Sonatype CTO Brian Fox, who oversees the Maven Central Java registry, estimates open-source registries saw 10 trillion downloads in 2025. And "The same pattern is appearing across ecosystems. More machine traffic. More automation. More scanning. More expectations around uptime, integrity, provenance, and policy enforcement. More cost. More support burden. More dependency on infrastructure that the industry still talks about as though it runs on goodwill and spare time."
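To put Fox's 10-trillion figure in perspective, a quick back-of-envelope calculation (ours, not the article's) shows what "machine speed" means in requests per second:

```typescript
// Back-of-envelope: 10 trillion downloads in 2025, averaged over a
// year, in requests per second across the ecosystem's registries.
const downloadsPerYear = 10e12;
const secondsPerYear = 365 * 24 * 3600; // 31,536,000
console.log(Math.round(downloadsPerYear / secondsPerYear).toLocaleString("en-US"));
// -> "317,098" average downloads per second, before any traffic spikes
```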
ZDNet reports that "To tackle that, Sonatype has teamed up with the Linux Foundation and other package registry leaders, including Alpha-Omega, Eclipse Foundation (OpenVSX), OpenJS Foundation, OpenSSF, Packagist, Python Software Foundation, Ruby Central (RubyGems), and the Rust Foundation (Crates)."
The idea is to give operators a neutral forum to discuss money, governance, and shared operational burdens openly. Once that's dealt with, they'll coordinate how to explain those realities back to companies and organizations that have long assumed registries are "free." No, they're not. They never were. As the Linux Foundation pointed out, "Registries today run primarily on two things: (1) infrastructure donations and credits; and (2) heroic efforts from small paid teams (themselves funded by donations and grants) and unpaid volunteers that operate and maintain registry services. The bulk of donations and grants comes from a small set of donors and doesn't scale with demands on the registry."
The working group is explicitly positioned as a venue where registry leaders and ecosystem stakeholders can align on "practical, community-minded" ways to sustain that infrastructure, rather than each operator improvising its own survival plan in isolation.
ZDNet says the group will also coordinate security practices and information, and craft frameworks "that make it politically and legally possible to introduce sustainable funding models without fracturing communities." And they will also "align messaging and educational content so developers, companies, and policymakers finally understand what it costs to run these services."
Read more of this story at Slashdot.
Categories: Linux fréttir
Will Maryland's Utility Bills Increase $1.6B to Support Other States' Datacenters?
To upgrade its grid for data centers, PJM Interconnection (which serves 13 states) plans to spend $22 billion — and charge nearly $2 billion of that to customers in Maryland, argues Maryland's Office of People's Counsel. The money "will be recovered in rates for decades" and "drive up Maryland customer bills by $1.6 billion over the next ten years alone," they said Friday, announcing an official complaint filed with America's Federal Energy Regulatory Commission.
Extra demand is expected from Ohio, Pennsylvania, and Illinois "where demands driven by data centers are projected to grow substantially by 2036," they explain. But that means Maryland customers "are subsidizing data center-driven transmission buildout by virtue of geographic proximity..." Tom's Hardware explains:
That means an extra $823 million for residential (approx. $345 per customer), $146 million for commercial (approx. $673 per customer), and $629 million for industrial customers (approx. $15,074 per customer)... "Maryland customers have neither caused the need for these billions in new transmission projects nor will they meaningfully benefit from them," [according to Maryland People's Counsel David S. Lapp]....
This is one of the biggest reasons why many AI hyperscalers are facing pushback from the communities where they intend to place their data centers. At the moment, around 69 jurisdictions have passed some sort of moratorium on projects like these, and a survey has shown that nearly half of Americans do not want a data center in their neighborhood. Debates around these projects are passionate, with a few cases turning violent and even resulting in shootings (thankfully, without any casualties), especially as many feel that the construction of these power-hungry assets is threatening their lifestyles and quality of life.
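As a sanity check on those per-customer figures (our arithmetic, using only the numbers quoted above), dividing each class's total by its per-customer cost implies the approximate number of Maryland customers in each rate class:

```typescript
// Sanity check on the quoted figures: total class cost divided by
// per-customer cost implies the approximate customer count per class.
const rateClasses = [
  { name: "residential", totalUsd: 823e6, perCustomerUsd: 345 },
  { name: "commercial", totalUsd: 146e6, perCustomerUsd: 673 },
  { name: "industrial", totalUsd: 629e6, perCustomerUsd: 15_074 },
];
for (const c of rateClasses) {
  const customers = Math.round(c.totalUsd / c.perCustomerUsd);
  console.log(`${c.name}: ~${customers.toLocaleString("en-US")} customers`);
}
// residential: ~2,385,507   commercial: ~216,939   industrial: ~41,727
```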
Thanks to long-time Slashdot reader noshellswill for sharing the news.
Read more of this story at Slashdot.
Categories: Linux fréttir
Rush Rescue Mission for NASA's $500M Space Telescope Passes Key Milestone
NASA's $500 million Neil Gehrels Swift space observatory was launched in 2004. But it's now "at risk of falling back through the atmosphere and burning up without intervention," reports Spaceflight Now.
Fortunately, a mission to prevent that "just passed a notable prelaunch testing milestone."
On Friday, NASA announced that the Link spacecraft, manufactured by Katalyst Space Technologies to intervene before Swift's fate is sealed, completed its slate of environmental testing at the agency's Goddard Space Flight Center in Greenbelt, Maryland... "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude," [said John Van Eepoel, Swift's mission director at NASA Goddard, in a NASA press release]. "Katalyst has gotten to this point in just eight months, and we're glad they were able to use NASA's facilities to test Link and draw on our expertise to help tackle questions that popped up along the way...."
"Given how quickly Swift's orbit is decaying, we are in a race against the clock, but by leveraging commercial technologies that are already in development, we are meeting this challenge head-on," said Shawn Domagal-Goldman, acting director, Astrophysics Division, NASA Headquarters, at the time... "Attempting an orbit boost is both more affordable than replacing Swift's capabilities with a new mission, and beneficial to the nation — expanding the use of satellite servicing to a new and broader class of spacecraft...."
Swift is in an orbit inclined 20.6 degrees from the equator, which is why Katalyst selected Northrop Grumman's Pegasus XL air-launched rocket in November to fly the mission. "The versatility offered by Pegasus' unique air-launch capability provides customers with a space launch solution that can be rapidly deployed anywhere on Earth to reach any orbit," said Kurt Eberly, Director of Space Launch for Northrop Grumman.
The mission is set to launch in June.
Read more of this story at Slashdot.
Categories: Linux fréttir
The Trump Phone Either Is Or Isn't Closer To Delivery
September 2025? January 2026? Delivery dates keep slipping for the Trump Organization's "Trump Phone" — a gold-coloured Android smartphone priced at $499 (£370). But in March the Verge spotted signs the phone was moving forward:
FCC listings for a smartphone with the trade name "T1" show that it was tested late last year, and granted certification by the FCC in January... [T]he phone was submitted for testing by another company entirely: Smart Gadgets Global, LLC... Smart Gadgets Global's website promises "Top Quality Electronics created for 'YOUR' customer!"
But in April Trump Mobile revised the phone's "Terms and Conditions" for preorders. The new language?
A preorder deposit provides only a conditional opportunity if Trump Mobile later elects, in its sole discretion, to offer the Device for sale. A deposit is not a purchase, does not constitute acceptance of an order, does not create a contract for sale, does not transfer ownership or title interest, does not allocate or reserve specific inventory, and does not guarantee that a Device will be produced or made available for purchase....
Estimated ship dates, launch timelines, or anticipated production schedule are non-binding estimates only. Trump Mobile does not guarantee that: the Device will be commercially released... Trump Mobile will not be responsible for delay, modification, or failure to release a Device due to causes beyond its reasonable control, including but not limited to regulatory review, carrier certification delays, component shortages, labor disruptions, governmental orders, acts of God, transportation interruptions, or third-party supplier failures...
If Trump Mobile cancels or discontinues the Device offering prior to sale, Trump Mobile will issue a full refund of the deposit amount paid... If Trump Mobile cancels, delays, or does not release the Device, your sole and exclusive remedy is a full refund of the deposit amount actually paid, and you waive any claim for equitable, injunctive, or specific performance relief relating to preorder priority or Device allocation.
An unconfirmed social media report, cited by the International Business Times, said the updated Terms were also emailed to customers. The new language also hedges that for the gold T1 phone, "Images, prototypes, beta demonstrations, and marketing renderings are illustrative only and may not reflect final production units...."
But then eight days ago The Verge reported the phone "has just passed another milestone on its slow road to release," described as "a requirement for any phone launching in the US..."
"The phone has received the little-known PTCRB certification, a first step toward being certified to work on major networks and be issued with IMEI numbers."
[A]t least, I think it's been certified. What's actually been certified by the PTCRB is the SGG-06, a smartphone from Smart Gadgets Global, LLC, with support for 5G, 4G, 3G, and 2G networks.
Read more of this story at Slashdot.
Categories: Linux fréttir
Plant Seeds Do Something Incredible When the Sound of Rain Strikes
"Plant seeds can sense the vibrations generated by falling raindrops," reports ScienceAlert, "and respond by waking from their state of dormancy to welcome the water, new research shows.... to germinate in 'anticipation' of the coming deluge."
The finding, discovered by MIT mechanical engineers Nicholas Makris and Cadine Navarro, offers the first direct evidence that seeds and seedlings can sense and respond to sounds in nature... "The energy of the rain sound is enough to accelerate a seed's growth," [explains Makris].
Plants don't have the same aural equipment we do to actually hear sounds, of course. But the study suggests that seeds respond to the same vibrations that can produce a sound experience in our human ears. Across a series of experiments, the researchers submerged nearly 8,000 rice seeds in shallow tubs of water, at a depth of around 3 centimeters (1 inch), and exposed some of them to falling water drops over periods of six days... A hydrophone recorded the acoustic vibrations produced by the drops, confirming that the experiment mimicked the vibrations produced by actual raindrops falling in nature — such as the driving downpours that can sometimes pelt Massachusetts' puddles, ponds, and wetlands... In their study, the researchers observed that seeds exposed to the falling drops germinated up to around 37% faster, compared with seeds that did not receive the simulated rainstorm treatment but were housed in otherwise identical conditions.
More information in Scientific American and Scientific Reports.
Read more of this story at Slashdot.
Categories: Linux fréttir
Cisco Releases Open-Source 'DNA Test for AI Models'
Cisco has released an open-source tool "to trace the origins of AI models," reports SC World, "and compare model similarities for greater visibility into the AI supply chain."
[Cisco's Model Provenance Kit] is a Python toolkit and command-line interface (CLI) that looks at signals such as metadata and weights to create a "fingerprint" for AI models that can then be compared to other model fingerprints to determine potential shared origins. "Think of Model Provenance Kit as a DNA test for AI models," Cisco researchers wrote. "[...] Much like a DNA test reveals biological origins, the Model Provenance Kit examines both metadata and the actual learned parameters of a model (like a unique genome that comprises a model), to assess whether models share a common origin and identify signs of modification."
The tool aims to address gaps in visibility into the AI model supply chain. For example, many organizations utilize open-source models from repositories like HuggingFace, where models could potentially be uploaded with incomplete or deceptive documentation. The Model Provenance Kit provides a way for organizations to verify claims about a model's origins, such as claims that a model is trained from scratch, when in reality it may be copied from another model, Cisco said. This may put organizations at risk of using models with unknown biases, vulnerabilities or manipulations and make it more difficult to resolve any incidents that arise from these risks.
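To make the fingerprint-and-compare idea concrete, here is a minimal sketch in the general spirit the researchers describe. It is not Cisco's Model Provenance Kit (which is a Python toolkit that reads real checkpoint formats and metadata); it simply reduces each weight tensor to summary statistics and compares two models' resulting fingerprints:

```typescript
// Illustrative sketch only -- not Cisco's Model Provenance Kit.
// Each weight tensor is reduced to (mean, standard deviation), and two
// models with the same architecture are compared by cosine similarity
// of those summaries. WeightTensor is an assumed, simplified input.
type WeightTensor = number[]; // flattened weights of one layer

function fingerprint(tensors: WeightTensor[]): number[] {
  return tensors.flatMap((w) => {
    const mean = w.reduce((a, b) => a + b, 0) / w.length;
    const variance = w.reduce((a, b) => a + (b - mean) ** 2, 0) / w.length;
    return [mean, Math.sqrt(variance)];
  });
}

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Usage: const score = cosineSimilarity(fingerprint(modelA), fingerprint(modelB));
// A score near 1.0 suggests shared lineage (e.g., a fine-tune of the same
// base model); a low score suggests independently trained weights.
```

The point of examining weights and not just metadata is that weights are far harder to falsify than a model card with incomplete or deceptive documentation.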
Thanks to Slashdot reader spatwei for sharing the news.
Read more of this story at Slashdot.
Categories: Linux fréttir
Google tweaks Chrome AI privacy wording, insists processing stays on-device
Google has changed Chrome's disclosure language about how its on-device AI works, but that doesn't mean the company intends to capture on-device AI interactions. The Chrome menu modification, which isn't universally rolled out yet even in Chrome 148, was noted this week on Reddit.

The "On-device AI" message in Chrome's System settings previously read, "To power features like scam detection, Chrome can use AI models that run directly on your device without sending your data to Google servers. When this is off, these features might not work." But the message changed recently – it lost the phrase "without sending your data to Google servers."

That prompted privacy advocate Alexander Hanff to question whether the edit signaled an architectural change that would see local AI interactions processed by Google servers instead of remaining on-device. "Why was the sentence 'without sending your data to Google servers' removed from the on-device AI description in Chrome's Settings UI?" Hanff asked. "Was the previous text inaccurate? Has the architecture changed? Was the wording withdrawn on legal advice because Google was unwilling to defend it as a representation?"

Asked about this, a Google spokesperson said, "This doesn’t reflect a change to how we handle on-device AI for Chrome. The data that is passed to the model is processed solely on device."

It appears this situation deserves a more genteel rendering of Hanlon's Razor – "Never attribute to malice that which is adequately explained by stupidity." In this case, it's "Never attribute to malice that which is adequately explained by bad timing."

Word of the menu modification surfaced as Chrome was rolling out the Prompt API, which is designed to provide web pages with a programmatic way to interact with a browser-resident AI model. The API's arrival and public discussion of it drew attention to the fact that Chrome has been silently downloading Google's 4GB Nano model onto users' devices. The coincidence of these events made it seem that Google was preparing to capture on-device prompts and responses, which would be a significant privacy retreat.

In fact, Chrome has been letting Nano sleep on the couch for early adopters dating back two years, to when local AI was implemented in Chrome 126 as a preview program. While Google hasn't yet made model downloading and storage opt-in, the biz did earlier this year implement a way to deactivate and remove the space-hogging model.

"We’ve offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model," a Google spokesperson explained, pointing to relevant help documentation. "It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud. While this requires some local space on the desktop to run, the model will automatically uninstall if the device is low on resources. In February, we began rolling out the ability for users to easily turn off and remove the model directly in Chrome settings. Once disabled, the model will no longer download or update."

The edit to the "On-device AI" message occurred in early April. According to Google, Gemini Nano in Chrome processes all data on-device. But when websites interact with Gemini Nano in Chrome – via the Prompt API, for example – they can see the inputs and outputs of the model. In such cases, the data handling would fall under the privacy policy of the website interacting with the user's Nano instance.
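For readers unfamiliar with the Prompt API, the flow looks roughly like this. Treat the LanguageModel global and method names below as assumptions drawn from Chrome's developer documentation rather than a settled interface; the API surface has shifted across releases:

```typescript
// Hypothetical sketch of a web page querying Chrome's built-in model.
// The shape of LanguageModel is assumed, not verified against Chrome 148.
declare const LanguageModel: {
  availability(): Promise<"unavailable" | "downloadable" | "downloading" | "available">;
  create(): Promise<{ prompt(input: string): Promise<string> }>;
};

async function askOnDeviceModel(question: string): Promise<string | null> {
  if ((await LanguageModel.availability()) === "unavailable") {
    return null; // no on-device model on this machine
  }
  const session = await LanguageModel.create(); // inference runs on-device
  // The prompt never reaches Google's servers, but the calling page sees
  // both the input and the output -- the crux of the wording change.
  return session.prompt(question);
}
```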
Google decided to change its "On-device AI" message to avoid confusion – and perhaps to preclude legal claims alleging policy violations – when the user is interacting with a Google site that calls out to the Nano model on-device, in support of some service it provides. In that scenario, the Google site would have access to the prompts it sends and responses it gets from the user's on-device model. That interaction would happen "without sending your data to Google servers," at least in the context of a user querying a model running in Google Cloud. But since the user's on-device Chrome-resident Nano model would send data to the Google site in response to that site's API calls, that data transmission might be interpreted as a violation of the local AI commitment language. Hence the edit.

Google's decision to have Gemini Nano become a Chrome squatter is a novel way of doing things, given that co-opting people's computing resources has largely been the province of covert crypto-mining scripts. But perhaps after years of offering Gmail and Search at no monetary cost, Google feels entitled to a few gigabytes of Chrome users' local storage and occasional bursts of their on-device compute. ®
Categories: Linux fréttir
