Linux fréttir

Unemployment Ticked Up in America's IT Sector

Slashdot - Sun, 2026-05-10 14:34
IT sector unemployment "increased to 3.8% in April from 3.6% in March," reports the Wall Street Journal, adding that the increase reflects "an ongoing uncertainty in tech as AI continues to play havoc with hiring. That's according to analysis from consulting firm Janco Associates, which bases its findings on data from the U.S. Labor Department."

On Friday, the department said the economy added 115,000 jobs, buoyed by gains in industries including retail, transportation and warehousing, and healthcare. The unemployment rate was unchanged at 4.3%. But the information sector lost 13,000 jobs in April.

While it's still too early to say exactly how AI is affecting employment overall, some businesses, especially in the tech industry, have said it's part of the reason they're cutting staff. In April, Meta Platforms said it would lay off 10% of its staff, or roughly 8,000 people, as it seeks to streamline operations and pay for its own massive investments in AI. Nike will reduce its workforce by roughly 1,400 workers, or about 2%, mostly in its tech department, as it simplifies global operations. And Snap is planning to eliminate 16% of its workforce, or about 1,000 positions, as it aims to boost efficiency.

In other areas of IT, which include telecommunications and data processing, employment is now down 11%, or 342,000 jobs, from its most recent peak in November 2022. But AI isn't the only factor: inflation and economic uncertainty linked to the Iran conflict are giving some chief executives and tech leaders reason to pull back or pause their IT hiring, said Janco Chief Executive Victor Janulaitis.

The article also notes that postings for software developer jobs "are up 15% year-over-year on job-search platform Indeed, according to Hannah Calhoon, its vice president of AI." But employers do seem to be looking for experienced developers, which could pose a problem for recent college graduates.

Read more of this story at Slashdot.

Categories: Linux fréttir

Memory godboxes could offer relief from the RAMpocalypse

TheRegister - Sun, 2026-05-10 14:00
In modern datacenters, storage can live anywhere: local to the machine, remotely accessed over the network, and/or shared between systems. The next generation of servers will treat system memory in much the same way. Systems will still have some local DDR5, but the bulk of it will be remotely accessed from what some have taken to calling the memory godbox.

The ongoing DRAM shortage has created a perfect storm for the proliferation of these appliances, which not only allow memory to be pooled, but also allow data stored in that memory to be shared by multiple machines simultaneously. In effect, memory becomes a fungible resource. More importantly, your next round of servers will probably support the tech, if they don't already.

CXL finally has its moment to shine

The technology at the heart of these memory godboxes isn't new. Compute Express Link (CXL) has been slowly gaining traction since its introduction seven years ago. As a quick refresher, CXL defines a common, cache-coherent interface for connecting CPUs, memory, accelerators, and other peripherals. The technology comes in a few different flavors: CXL.mem, CXL.cache, and CXL.io, which, as a whole, have implications for disaggregated compute. Imagine a rack with a CPU node, GPU node, memory node, and storage node, which can talk to one another completely independently. That's the core idea behind CXL.

CXL piggybacks off the PCIe standard, which means in theory it should be broadly compatible, but up to this point it's primarily been used with memory devices. The 1.0 spec opened the door to memory expansion modules, which let you add more memory by slotting them into a CXL-compatible PCIe slot. To the operating system (assuming you're running Linux, that is), the extra memory is largely transparent, showing up as if it were attached to another CPU socket, just one without any additional compute.
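This is visible from userspace. A minimal sketch of how you might spot CXL expander memory on a Linux box, assuming only that each NUMA node's CPU list is available (on real hardware it comes from /sys/devices/system/node/nodeN/cpulist; the topology dict below is invented for illustration):

```python
# Sketch: spotting CXL-attached memory on Linux. A CXL memory expander
# shows up as a NUMA node that has memory but no CPUs, so the CPU-less
# nodes in the topology are the memory-only ones.

def memory_only_nodes(cpulists):
    """cpulists: {node_id: cpulist string}; an empty string means no CPUs."""
    return sorted(node for node, cpus in cpulists.items() if not cpus.strip())

# Hypothetical two-socket box with one CXL memory expander on node 2:
topology = {0: "0-31", 1: "32-63", 2: ""}
print(memory_only_nodes(topology))  # [2]
```

On a real system you would populate the dict by reading each node's cpulist file rather than hard-coding it.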
The 2.0 spec, which showed up in 2020, added basic support for switching, which meant memory could be pooled and then allocated to any number of connected systems. AMD and Intel's current crop of Epycs and Xeons already support these appliances. But while the memory can be partitioned and reallocated to different machines as needed, two machines can't work on the same data simultaneously. Unless you were memory-constrained, the added complexity of CXL 2.0 didn't offer much benefit over simply using higher-capacity DIMMs in the first place. At least, not until memory prices went through the roof.

Where things really get interesting is when the 3.0 spec arrives in AMD and Intel's next generation of Epycs and Xeons. In fact, from what we understand, Amazon's Graviton5 CPUs, which we looked at in December, already support the spec.

CXL 3.0 introduces two key capabilities that make it particularly interesting for memory appliances. The first is support for larger topologies: multiple CXL switches can be stitched together into a fabric. The second is support for memory sharing: rather than partitioning memory into slices only accessible to one machine at a time, memory can be shared between machines. In theory, this could allow two machines running the same set of workloads to use close to the memory footprint of one. It's a bit like deduplication for memory. In fact, we already do something similar within a single host in virtualized environments like KVM, but this works across machines.

There are security and performance implications to all of this. Thankfully, in CXL 3.1 and later, the consortium introduced confidential computing capabilities into the spec, allowing for isolation where necessary. On the performance end of things, CXL 3.0 moves to PCIe 6.0 as a baseline, which provides 8 GB/s per lane in each direction, or 16 GB/s bidirectional. Assuming 64 lanes of CXL per CPU, that works out to an additional 512 GB/s of bandwidth in each direction. So memory bandwidth shouldn't be too much of an issue for most applications.
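The back-of-the-envelope numbers check out. A quick sanity check of that bandwidth claim (the 64-lane budget is an assumption for illustration, not a spec requirement):

```python
# PCIe 6.0 per-lane throughput: roughly 8 GB/s in each direction,
# i.e. the 16 GB/s bidirectional figure often quoted.
GBPS_PER_LANE_PER_DIRECTION = 8
CXL_LANES_PER_CPU = 64  # assumed lane budget

per_direction = GBPS_PER_LANE_PER_DIRECTION * CXL_LANES_PER_CPU
print(per_direction)      # 512 GB/s of extra memory bandwidth per direction

# CXL 4.0 re-bases on PCIe 7.0 and doubles the per-lane rate:
print(per_direction * 2)  # 1024 GB/s per direction
```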
Latency, on the other hand, is a different story. CXL-attached memory is going to add some latency. However, as we've previously discussed, the latency isn't as bad as you're probably thinking: on the order of a NUMA hop, or about 170 to 250 nanoseconds round trip. Obviously, the farther the memory appliance is from the host CPU, the worse the latency is going to be.

Late last year, the CXL consortium ratified the 4.0 spec, which among other things doubles the bandwidth from 16 GB/s per lane to 32 GB/s by re-basing on PCIe 7.0. However, it'll be a while before we see appliances based on that spec.

Where's my memory godbox?

There are several companies developing hardware for these kinds of networked memory appliances. Panmnesia's CXL 3.2-compatible PanSwitch is one of the most sophisticated examples. The switch features 256 lanes of connectivity for CXL memory modules, devices, or CPUs to connect, pool, or share resources.

If you're okay with memory pooling and don't need the niceties of CXL 3.0, there are already several memory appliances available that are compatible with the latest generation of Xeon 6 and Epyc Turin processors. Liqid's composable memory platform, for example, can provide a pool of up to 100 TB of DDR5 to as many as 32 hosts. Meanwhile, UnifabriX Max systems provide CXL 1.1 or 2.0 connectivity to 16 or more systems, with support for CXL 3.2 already in the works. We suspect that as more CXL 3.0-compatible CPUs and GPUs hit the market, more of these memory godboxes will appear.

AI eats everything

Don't get too excited. While network-attached memory has the potential to reduce an enterprise's infrastructure spend, those same qualities make it attractive for the very thing driving the memory shortage in the first place. AI adoption has driven demand for DRAM off the charts. In addition to the HBM used by GPUs, DDR5 is being used for key-value (KV) cache offload during inference.
These KV caches store model state and can chew through significant amounts of memory, often more than the model itself, in multi-tenant serving scenarios. Rather than discard these caches and recompute them when the model state is needed again, it's more efficient to offload them to system memory and eventually flash storage. The problem with using flash storage is that it has finite write endurance: after a while, it wears out. Instead, CXL memory vendors are positioning the tech as a more resilient alternative. That's bad news for enterprises looking to these memory godboxes for salvation from the RAMpocalypse. ®
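The offload hierarchy described above is easy to sketch. A hypothetical, much-simplified tiered KV cache in which cold entries cascade from local DRAM through CXL memory to flash (tier names and capacities are invented for illustration; real inference servers use far more sophisticated policies):

```python
# Hypothetical sketch of KV-cache tiering: hot entries live in local DRAM,
# overflow spills to CXL-attached memory, and the coldest entries land on
# flash. Each tier is an LRU store; eviction cascades down the hierarchy.
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, capacities):
        # capacities: list of (tier_name, max_entries), fastest tier first
        self.tiers = [(name, cap, OrderedDict()) for name, cap in capacities]

    def put(self, key, value):
        self._insert(0, key, value)

    def _insert(self, level, key, value):
        name, cap, store = self.tiers[level]
        store[key] = value
        store.move_to_end(key)                 # mark as most recently used
        if len(store) > cap and level + 1 < len(self.tiers):
            cold_key, cold_val = store.popitem(last=False)  # evict coldest
            self._insert(level + 1, cold_key, cold_val)

    def locate(self, key):
        for name, _, store in self.tiers:
            if key in store:
                return name
        return None

cache = TieredKVCache([("dram", 2), ("cxl", 2), ("flash", 8)])
for session in ["a", "b", "c", "d", "e"]:
    cache.put(session, f"kv-state-{session}")
print(cache.locate("e"), cache.locate("a"))  # dram flash
```

The newest sessions stay in DRAM, the oldest gets pushed all the way down to flash, which is exactly the write pattern that makes flash endurance a concern.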
Categories: Linux fréttir

Both Fedora and Ubuntu will get AI support – soon

TheRegister - Sun, 2026-05-10 13:00
Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances.

An epic and still-growing thread in the Fedora forums states one of the goals for the next version: the Fedora AI Developer Desktop Objective. It is causing some discontent, and at least one Fedora contributor, SUSE's Fernando Mancera, has resigned. Fedora Project Lead Jef Spaleta, who took over the role from Matthew Miller a year ago, remains resolute, saying: "I have zero evidence in front of me that users are being driven away from Fedora because of AI."

As far as Red Hat's community distribution goes, while this may be controversial, it should not be a big shock. In October last year, The Register reported that the Fedora council approved a policy allowing AI-assisted contributions, and anyone following the IBM subsidiary's movements will already know that last June's RHEL 10 release includes access to an LLM-based online helper chatbot: we tried it out when the product was released. We also reported on the managers of Red Hat's Global Engineering department being notably keen on the use of AI just last month.

Since Red Hat has other offerings for slow-moving stable server OSes, and arguably because Debian, Ubuntu, and their many derivatives have the stable-desktop-distro space nicely covered already, Fedora has a strong focus on providing a distro for developers, and Spaleta's announcement makes this clear. The goal is:

"to build a thriving community around AI technologies by focusing on three key areas: equipping developers with the necessary platforms, libraries, and frameworks; ensuring users experience painless deployment and usage of AI applications; and establishing a space to showcase the work being done on Fedora, connecting developers with a wider audience."
He also spells out what it doesn't want to do:

"Non-goals: The system image will not be pre-configured with applications that inspect or monitor how users interact with the system or otherwise place user privacy at risk. Tools and applications included in the AI Desktop will not be pre-configured to connect to remote AI services. AI tools will not be added to Fedora's existing system images, Editions, etc, by the AI Desktop initiative."

In other words, tools for developers, not for end users, with a strong emphasis on models that run locally and which preserve the user's privacy. It's also worth pointing out that Fedora has had an AI-Assisted Contributions Policy in place for six months, and earlier this month, Fedora community architect Justin Wheeler explained in some detail Why the Fedora AI-Assisted Contributions Policy Matters for Open Source.

Our impression is that the Fedora team feels it needs to keep Fedora relevant to growing interest in LLM-bot-assisted tooling, and that it can address concerns from hardcore FOSS types by ensuring that this means local models, built according to FOSS-respecting terms and deployed in privacy-respecting ways.

Fedora is not alone in this, though. There are also ructions across the border in Ubuntuland. Right after the release of Canonical's new LTS version, Ubuntu 26.04 Resolute Raccoon, Canonical's veep of engineering Jon Seager laid out the future of AI in Ubuntu. We interviewed Seager last year during the 25.10 Ubuntu Summit, and back in January this year, he published his views on Developing with AI on Ubuntu. Now the plans are firming up.

Like Fedora, there's a strong focus on local models and confidential, privacy-first deployments, and on ensuring that the OS and the tools support GPU acceleration from the big hardware players in that space. However, unlike Red Hat, Canonical isn't pushing its developers towards these tools.
In what we see as a veiled jab, Seager's announcement says: "We are not setting shallow metrics on token usage, or percentages of code written with AI, but rather incentivising engineers to experiment and understand where AI tools add value."

Initially, the focus is on users instead: AI features in Ubuntu will come in two forms, first as a means of enhancing existing OS functionality with AI models in the background, and latterly in the form of "AI native" features and workflows for those who want them.

As Fernando Mancera's exit shows, an emphasis on what could be termed FOSS-friendly AI (open models, privacy-centric, local execution, and so on) is not enough to placate those who are really strongly averse to these tools. The Reg FOSS desk counts himself firmly in this camp. Back in January, we reported on the rise, fall, and resurrection of OpenSlopware, a list of FOSS projects which contain LLM-generated code, integrate LLMs, or even show the traces of the use of LLM agents. Soon, it seems inevitable that Fedora and Ubuntu will both feature there.

Resistance, though, is also rising. Stop Slopware tries to explain why and how to avoid it, and there's also The No-AI Software Directory for projects that have explicit LLM-free policies, whether they're FOSS or not.

Bootnote

It amuses us to note that both the Ubuntu and Fedora forums use the same software, called Discourse. (It's a sort of web forum as designed by people who have heard of mailing lists, but don't know how to use them and find the idea of bottom-posting confusing.) Some could interpret this shared adoption as a sign of underlying similarities between the two projects. ®
Categories: Linux fréttir

The EU Considers Restricting Use of US Cloud Platforms for Sensitive Government Data

Slashdot - Sun, 2026-05-10 11:34
CNBC reports:

The European Union is considering rules that would restrict its member governments' use of U.S. cloud providers to handle sensitive data, sources familiar with the talks told CNBC. The European Commission, the EU's executive branch, is expected to present its "Tech Sovereignty Package" on May 27, which will include a range of measures aimed at bolstering the bloc's strategic autonomy in key digital areas.

As part of preparations for that package, discussions are taking place within the Commission around limiting the exposure of sensitive public-sector data to cloud platforms provided by companies outside of the EU, two Commission officials, who asked to remain anonymous as they weren't authorized to discuss private talks, told CNBC...

"The core idea is defining sectors that have to be hosted on European cloud capacity," one of the officials said. They added that companies providing cloud solutions from third countries, including the U.S., could be impacted. Proposals would not prohibit overseas companies' cloud platforms from government contracts entirely, but would limit their use in processing sensitive data at public-sector organizations, depending on the level of sensitivity, they added. The officials said that talks are ongoing and yet to be finalized...

The officials told CNBC there are discussions around proposing that financial, judicial and health data processed by governments and public-sector organizations require high levels of sovereign cloud infrastructure.

Read more of this story at Slashdot.

Categories: Linux fréttir

HP stuffed a PC into a keyboard. We took it for a spin

TheRegister - Sun, 2026-05-10 09:30
The early history of personal computers is stacked with systems such as the Apple II and the Commodore 64 that had the components living inside a keyboard. But as technology evolved, the keyboard became a peripheral, and the PC itself was either in a separate box or the whole system was a laptop.

Now, HP has a new spin on this decades-old idea. It embeds a full-fledged AI PC inside a 101-key keyboard you can carry with you from the office to home. Unlike '80s microcomputers or hobbyist-oriented products like the Raspberry Pi 500, the EliteBoard G1a is squarely targeted at business. The system is part of HP's commercial lineup, alongside its EliteBook laptops, and, for better or worse, it comes with HP Wolf Security preinstalled. The company clearly hopes organizations will buy these in bulk. But to benefit from it, all money aside, you really have to prefer a mobile keyboard to a traditional laptop.

Who's it for?

The EliteBoard G1a is trying to create a new niche. When we talked with product managers at HP, they suggested IT departments would buy these computers for two types of workers. The first group is so-called "dual deskers": knowledge workers who have a desk with a monitor at work and another at home. The second group includes deep-pocketed call centers or environments where desk space is at a premium.

From time immemorial, dual deskers have carried laptops and closed their lids when they docked to a monitor at work. With the EliteBoard, they could simply schlep the keyboard, which weighs a mere 1.49 pounds, about half the weight of a lightweight laptop. To make this work in companies with managed systems, we have to assume that either the IT department would give out monitors to use at home or offer some reason (a subsidy? a mandate?) for employees to buy their own. The EliteBoard connects to monitors using its USB4 port, so its ideal monitor is one that has Thunderbolt or USB video connectivity built in.
Less-expensive and older monitors don't have this type of connectivity, but select configs of the EliteBoard come with an optional USB-to-HDMI adapter that you can use with other monitors, and it has a USB pass-through for power. That said, HP demonstrated the EliteBoard at numerous press events by showing how much desk space it saves by using a single USB cable to get power, video out, and connectivity to peripherals via the monitor. So if companies want employees to be able to take advantage of this scenario at home, that means shelling out another few hundred bucks for a modern monitor, or making employees do it.

Today, companies with limited desk space for a call center or another cramped work area could just buy a tiny desktop to sit behind the monitor or next to it. However, building all of the PC's guts into the keyboard makes a lot of sense for space savers, because a keyboard is something every PC needs and a desktop chassis is not. If a company wanted to, it could give each employee their own EliteBoard, have them plug it into a monitor during work time, and then have them stick it in a drawer when they go off shift and someone else comes on.

The problem for call centers is that the HP EliteBoard G1a is much more powerful and much more expensive than what they need. At press time, the G1a was priced at $1,499 for the lowest-end config. And most companies probably don't need employees to each have their own PC that they lock away after they punch out.

"The call center angle is probably the stronger pitch, but those buyers are shopping entry-to-mid-market. They want something cheaper and simpler than a mini desktop, not a Copilot+ PC with up to 64GB of RAM," said Kieren Jessop, a research manager with analyst firm Omdia.
"HP has built an impressive piece of engineering in search of a problem that most enterprises have already solved with a laptop — or will solve with a thin client."

Configurations

HP makes the EliteBoard G1a in a variety of configurations that vary by market. Companies can get it with various AMD Ryzen CPUs, up to 64GB of RAM, and an SSD of up to 2TB in capacity. It comes with either a detachable or embedded cord, and optionally with a 32 WHr battery that promises up to 3.5 hours of endurance.

Why would you need a battery on a product that demands to be used at a desk and plugged in? The most likely reason is to let the keyboard go into sleep mode when it's in your bag. Employees could also hook the EliteBoard G1a up to a portable monitor and use it unplugged that way, but then why not just buy them a laptop?

At press time, prices ranged from $1,499 to $3,423 in the US. The lowest-end config has a Ryzen AI 5 Pro 340, 16GB of RAM, an integrated cable, and a 256GB SSD. Fifty bucks more will get you the same configuration with a 512GB SSD, as per HP.com. The highest-end config listed comes with a Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD, and sells for only $1,999 at B&H but a whopping $3,423 at HP.com. Our review config, which sports 64GB of RAM, a Ryzen AI 7 Pro 350 CPU, and a 2TB SSD, has not been listed for sale in the US, and HP didn't answer when we asked how much it would cost. However, we'd assume that it would cost a lot more than $1,999.

Price vs a Laptop

If all you do is dock your PC at home and at work, you might think, "why pay for a laptop when I don't need a built-in screen?" But it's hard to make that argument when the laptop is actually less expensive. Right now, you can get an HP EliteBook 6 G1aN with the same AMD Ryzen AI 7 350 CPU, along with 24GB of RAM and a 512GB SSD, for just $1,299 – that's actually less than the cheapest EliteBoard.
A custom-configured HP EliteBook 8 G1a with the Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD is just $1,799. If you're comparing the total cost of ownership versus a laptop, also consider the price of a monitor if your users don't already have one. While you could use an adapter, the ideal use case involves a USB-C monitor that transmits data and power over a single wire. The cheapest HP-branded USB-C monitor I could find at press time was the HP E27k 4K monitor, which was selling for $504. However, I saw a Dell-branded USB-C monitor, the S2725DC, on sale for just $236 at Amazon. If you're an IT department and you're kitting out someone for home and office use, you might need to buy them two monitors.

Design

At 14.1 x 4.7 x 0.7 inches, the EliteBoard G1a is the size of a typical full-size keyboard, complete with numpad. It's a boring but office-friendly dark gray color with a very thin bezel around the keys. At first glance, there aren't many ways to know that this is more than just a keyboard. There's a power button / fingerprint reader located in the upper right corner of the keyboard, though you might easily mistake it for just another key until you press it and see the blue light turn on.

Turn the keyboard around and, on the back lip, you'll notice a thin vent for airflow. This computer definitely has a fan, and you can hear it quite prominently at times. There are also two USB-C ports, a USB4 40 Gbps port and a 10 Gbps port, unless you have the embedded cable, in which case you just have the 10 Gbps port. Clearly, the 40 Gbps port is the one you'll want to use for docking, but you can use the 10 Gbps port to connect the dongle for the included wireless mouse or other peripherals. There's also a security cable lock slot on the left side. So if you want to chain this to a desk, you can, but we'd argue that defeats the point of the machine.

But how well does it type?
Since this is a computer-in-a-keyboard, the most obvious question we need to answer is "how's the typing experience?" Pretty decent. On the bright side, the EliteBoard G1a has a generous 2 mm of travel, which is more than you'll find on most laptops, where even 1.5 mm is considered deep. The keys feel pretty snappy and are in the same feedback league as those on my Lenovo ThinkPad X1 Carbon, but the ThinkPad's keys have a more curved shape, which is better than the flat tops on the EliteBoard. If you're burning the midnight oil, there's a built-in backlight, which you can enable by hitting the F9 key. It has two brightness settings, so you can decide just how much you want it to shine.

The layout is pretty standard for a full-size keyboard with a numpad. However, I don't like how small the arrow keys are, and the Pg Up and Pg Dn keys are just tiny. There's no empty space around these keys, which I use a lot when editing documents, so it's far too easy to miss them. Even on most laptops, these keys are larger.

Another downer is the lack of flip-up feet on its bottom. I like to angle my keyboard up at a 15- to 30-degree angle, but this one is short and flat to the desk. To save my wrists, I always use a gel-filled wrist rest when I type and, without feet to elevate the keyboard, I'm typing down onto the keys because the board sits so much lower than the gel pad. This won't be as much of an issue for folks who don't use wrist rests.

In short, if you're used to laptop keyboards or the low-cost keyboards that come with most desktop computers, the EliteBoard G1a will probably seem like a nice step up. However, if you want the best possible typing experience, there's an entire ecosystem of mechanical keyboards out there with much deeper travel and more feedback. If you're not a gamer, I recommend a mechanical keyboard with either clicky or tactile switches.
Unless you go for a low-profile keyboard, you'll be getting between 3.6 and 4 mm of travel, so you won't bottom out as easily when typing. I prefer clicky switches like the Kailh Box White (my favorite) or Cherry MX Blue, but those make some noise, so if you like quiet, Cherry MX Brown switches will do the trick.

To see the difference between my daily-driver mechanical keyboard, an Akko 3098N with Kailh Box White switches, and the EliteBoard G1a, I performed the 10fastfingers.com typing test on both. On HP's keyboard, I managed a strong 96 wpm, which is at the lower end of typical for me, with a six percent error rate. On my daily driver, the numbers were a better 101 wpm with a two percent error rate. Your mileage will vary.

Speaker and Microphone

The EliteBoard G1a has both built-in bottom-facing speakers and a microphone array. In our tests, the speaker was more than loud enough, and it was clear enough for voice calls, though we wouldn't recommend listening to music on it for too long. The drums in AC/DC's Back in Black sounded a little tinny, though there was a clear separation of sound, with the vocals appearing to come from one side while the percussion came from another. The dual-array microphone was also passable, but not good enough for podcasts. When we talked to a coworker using the built-in mic, she said our voice was clearly audible but a little echoey.

In the box and preloaded

Depending on which config you get, your HP EliteBoard G1a may come with a variety of different accessories in the box. All versions come standard with an HP wireless 675M mouse that connects either by Bluetooth or by an included USB-C wireless 5-GHz dongle. It is not a particularly fancy mouse, but it has a couple of side buttons and a scroll wheel. I found myself using my Logitech MX Master 3 mouse instead, because it's ergonomically shaped and highly programmable.
My review unit also came with the optional soft canvas cover sleeve you can use to protect the EliteBoard G1a while you're carrying it around. I found this add-on to be about as useful as a laptop sleeve. It might offer some protection and padding for when you stick the EliteBoard G1a in an existing backpack, but it's not going to replace your briefcase or your backpack when you're commuting.

I also got the optional HDMI multiport hub, which is a must-have if you don't already have a Thunderbolt or USB4 docking station or a monitor with that kind of connectivity built in. The hub connects to the USB4 40 Gbps port on the EliteBoard and features two USB-C ports (one for power, one for connectivity), an HDMI-out cable for connecting to a monitor, an Ethernet port for wired networking, and an HDMI-in port for a second monitor.

There's an optional, slim 65W USB-C power adapter that's helpful if you aren't connecting to a monitor or docking station that supplies power. If you don't get one in the box, it's easy enough to find one for $15 to $30 on Amazon. Also, if your EliteBoard does not have an embedded cable (mine did not), you get a braided USB cable in the box. The less-expensive configs of the EliteBoard all have embedded cables, but we recommend getting a model without one because it's easier to carry around without a cable hanging off of it.

HP does not preload a lot of software onto the EliteBoard, but it does come with a three-year subscription to HP Wolf Security, which normally costs $36 a year for individual subscriptions. HP Wolf has a malware/virus scanner, a threat containment feature, a secure browser, OS resiliency (for recovering from corruption and doing a reinstall), and application persistence, which prevents unwanted changes to security software like HP Wolf itself.

Since it has an NPU (neural processing unit) capable of more than 45 trillion operations per second (TOPS), the EliteBoard G1a qualifies as one of Microsoft's Copilot+ PCs.
This means that it has some added local AI features that not every PC gets from Windows 11, including Cocreate image generation in Paint, Windows Studio Effects handled locally for your webcam, translated Live Captions from any audio input, and Recall, a controversial feature that takes screenshots of all your work to help you "remember" what you were doing at any given time. Fortunately, Recall is disabled by default.

Performance

Equipped with an AMD Ryzen AI 7 Pro 350 CPU, 64GB of RAM, and a 2TB SSD, our review configuration of the EliteBoard G1a handled everything I threw at it. I used the system on and off as my daily-driver PC for work over a period of several weeks, and it was always smooth and responsive, even with dozens of Chrome tabs open and Slack running across two 4K monitors connected via a Thunderbolt 3 docking station. I should note that, no matter what I was doing, the fan on the EliteBoard G1a was frequently running and often quite audible. It's no louder than most notebooks I've tested, but if you're expecting total quiet, look elsewhere.

My editorial workload is not nearly as demanding as some folks' day jobs, so, to see how the EliteBoard G1a stacks up, I ran it through a series of benchmarks and compared the results to those from two laptops I had access to: a Lenovo Yoga Slim 7x with a Qualcomm Snapdragon X Elite X1E-78-100 CPU, and a Lenovo ThinkPad X1 Carbon with an Intel "Meteor Lake" Core Ultra 7 165U processor.

The Ryzen AI 7 Pro in the EliteBoard debuted in 2025 with 8 cores, 16 threads, and a maximum boost clock of 5 GHz. It features built-in AMD Radeon 860M graphics and a Neural Processing Unit (NPU) capable of 50 TOPS for better local AI. Its DDR5 RAM runs at 5,600 MHz.

Released in 2024, the Snapdragon X Elite X1E-78-100 has 12 cores and 12 threads, with a boost clock that goes up to 3.4 GHz, along with an NPU that does 45 TOPS. It's an Arm processor, so the laptop that runs it uses Windows on Arm.
The Yoga Slim 7x laptop we tested had 16 GB of LPDDR5x RAM running at 8,448 MHz. The oldest of our test group, vintage 2023, the Intel Core Ultra 7 165U has 12 cores and 14 threads, but only two of those cores are performance cores that can boost up to 4.9 GHz; the others are a mix of efficient cores and low-power efficient cores that boost up to 3.8 and 2.1 GHz, respectively. The ThinkPad X1 Carbon we tested with it had 64GB of LPDDR5x RAM running at 6,400 MHz.

In our tests, the EliteBoard G1a always eclipsed the ThinkPad X1 Carbon, which is not a surprise considering its much older processor. However, the Snapdragon-powered Yoga Slim 7x outpaced it on some benchmarks.

Primesieve

This test counts the prime numbers under one trillion and returns a result in millions of primes per second. The benchmark is particularly heavy on SIMD instructions like AVX-512 or Arm's Neon and SVE vector extensions, making it a good proxy for some of the more workstation-centric tests we'll look at shortly. It runs across both single-threaded and multi-threaded workloads, with big performance boosts for parallel processing.

Using just a single thread, the EliteBoard edged out the competition with 415 million primes per second (MPS), compared to the Slim 7x's 352. However, the Slim 7x outperformed it when using multithreading, delivering 2,686 MPS to the EliteBoard's 2,145. One thing to note is that, while the EliteBoard has more threads, it has fewer actual cores. The X1 Carbon wasn't even in the same ballpark. This will become a theme across our test suite.

Blender

3D rendering is always a challenge and, to be honest, it's hard to imagine somebody buying an EliteBoard for this purpose. However, it's always worth noting what the system can do. We ran Blender, a very popular 3D modeling app, using three scenes: Monster, Junkshop, and Classroom.
As you can see, the Slim 7x and its 12-core Snapdragon processor were anywhere from 34 to 75 percent quicker, depending on the scene. Still, the EliteBoard turned in respectable scores on something you wouldn’t expect it to do.

Handbrake x265

Video transcoding is another resource-intensive task, and one that occurs in many scenarios, including game streaming, video editing, and even video conferencing. To test how the EliteBoard handled video transcoding, we used Handbrake to convert a 4K 60 fps video to 1080p using the x265 encoder at the medium preset with a constant quality of 18. Our results are measured in frames per second (fps). Again, the EliteBoard was far superior to the ThinkPad, but was a good 45 percent behind the Yoga Slim 7x. Still, this is solid performance that’s more than workable.

Llama.cpp

One local AI task you might want to try is running an open-source model as a chatbot on your PC rather than sourcing it from the cloud. This gives you more privacy than using OpenAI, Claude, or Copilot on the web, and it’s completely free. So we ran the GPT-OSS 20B open-weights model using Llama.cpp as our client and measured the time, in milliseconds, to generate the first token. Here we see that the Snapdragon processor and faster RAM of the Yoga Slim 7x gave it a definite advantage, taking 39 percent less time than the EliteBoard to get there. The EliteBoard also generated about half as many tokens per second. However, it beat the pants off the ThinkPad X1 Carbon, getting to the first token more than twice as quickly while generating 30 percent more tokens per second. It’s worth noting that these tests were run on the CPU cores and didn’t harness the chip’s integrated GPU or NPU.

Whisper.cpp

One common local AI workload a business person might use is transcription. Let’s say you had an audio file and you wanted to convert it into readable, editable text. You might use a tool based on Whisper, a popular free model from OpenAI.
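For a rough idea of what such a job looks like in practice, the sketch below builds a whisper.cpp command line from Python. The binary name and file paths are placeholders, and it assumes a locally built copy of whisper.cpp:

```python
import shutil
import subprocess

def whisper_cmd(model_path: str, audio_path: str) -> list[str]:
    """Build a whisper.cpp transcription command (binary name varies by build)."""
    return [
        "./whisper-cli",    # whisper.cpp's CLI; older builds name it "main"
        "-m", model_path,   # e.g. models/ggml-medium.en.bin (placeholder)
        "-f", audio_path,   # 16 kHz WAV input (placeholder)
        "-otxt",            # also write a plain-text transcript
    ]

cmd = whisper_cmd("models/ggml-medium.en.bin", "meeting.wav")

# Run only if the binary actually exists at that path.
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)
```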
For testing, we used Whisper.cpp, an implementation of Whisper written in C++, with the Whisper Medium EN model transcribing a 10-minute audio clip. Here, the EliteBoard transcribed the audio at 2.4x real-time speed, while the Yoga Slim 7x was faster at 3.4x. Those extra cores are doing a lot of heavy lifting here. That said, if you’re converting 10 minutes of audio in less than five minutes, that’s pretty good.

LLVM Compile

For those using the EliteBoard for programming, compile times matter. So we compiled the LLVM toolchain from source and measured the time. This isn’t a trivial compile job, and it therefore represents a worst-case scenario for developers considering the EliteBoard. Here it took a modest 19 minutes and 44 seconds, which was more than double the time it took the Yoga Slim 7x. On high-end desktop workstation hardware, this same workload can be completed in under five minutes, so if your day job regularly requires compiling large projects, you might want to spring for something more capable. Or perhaps not: “My code is compiling” is a pretty good excuse for taking a 20-minute break.

7-Zip

Compression and decompression are very taxing on a CPU and are very common tasks. So we fired up 7-Zip and measured its ability to do both in single-threaded and multi-threaded scenarios. With a single thread, the Slim 7x and the EliteBoard basically tie at compression, while HP’s computer holds the edge in decompression. However, when we move to multi-threaded scenarios, the Snapdragon X Elite’s 12 physical cores easily beat out the AMD Ryzen AI 7 PRO 350’s eight cores and 16 threads.

LibreOffice: ODT to PDF Conversion

We tested how long it takes LibreOffice to convert 50 image-heavy ODT files into PDFs. This workload is lightly threaded, so it favors higher clock speeds over more cores. The results bear this out: the EliteBoard, with its Ryzen AI 7’s higher-performing cores, beat out the Slim 7x by 22 percent.
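A batch conversion like this can be scripted with LibreOffice's headless mode. The sketch below builds the command; the input and output directory names are hypothetical:

```python
import glob
import shutil
import subprocess

def convert_to_pdf_cmd(odt_files: list[str], outdir: str = "pdf_out") -> list[str]:
    """Build a headless LibreOffice batch-conversion command."""
    return [
        "soffice",
        "--headless",           # no GUI
        "--convert-to", "pdf",  # target format
        "--outdir", outdir,     # where the PDFs land
        *odt_files,
    ]

files = sorted(glob.glob("docs/*.odt"))  # hypothetical input directory
cmd = convert_to_pdf_cmd(files or ["sample.odt"])

# Run only if LibreOffice is installed and there is something to convert.
if shutil.which("soffice") and files:
    subprocess.run(cmd, check=True)
```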
Despite its older processor, the ThinkPad actually manages to tie the Slim 7x in this test.

Repairability

For IT departments that do their own service, the EliteBoard G1a has plenty to offer. Its back panel is held on by just four screws and pops off easily. Underneath, you get full access to the motherboard and a number of easily removable components, including the DDR5 SODIMM RAM, the M.2 SSD, the WLAN card, the fan, the optional battery, and the speakers. You can even replace the keyboard itself and leave the computer portion intact.

Bottom line

The HP EliteBoard G1a delivers strong performance in a unique and compact form factor that saves desk space and reduces the weight you carry back and forth. If you don’t want a laptop but do want a portable computer, this is your best choice. It provides a better typing experience than most laptops and a more space-efficient design than most desktops. However, in the current marketplace, this device does not represent significant savings over a similarly configured laptop. Depending on which laptop you compare it against, you might save a few hundred dollars, but once you add the cost of the monitors you need to pair with it - if you need to purchase those - it’s a wash.

HP set out to make a unique product with the EliteBoard G1a, and it has succeeded in building a very competent and capable computer-in-a-keyboard. If you’re an IT decision maker, you’d buy this device for folks who work out of one or two fixed locations (home and office, or multiple offices) and never need to get online from the road or a conference room. Whether that’s a common scenario in your workplace will determine if this product is right for you or your fleet. ®
Categories: Linux fréttir

NYT: 'Meta's Embrace of AI Is Making Its Employees Miserable'

Slashdot - Sun, 2026-05-10 07:34
"Meta's embrace of AI is making its employees miserable," reports the New York Times. And "After Meta said late last month that it would start tracking employees' computer use, hundreds of workers spoke up." (One employee even told Meta's CTO in an internal post, "Your callousness to the concerns of your own employees is concerning." In an internal post last month, Meta told its U.S. employees that it was making a change that would affect tens of thousands of them. What employees typed into their computer, how they moved their mouse, where they clicked and what they saw on their screen would be tracked, Meta said. The goal, the company said, was to capture employee data so Meta's artificial intelligence models could learn "how people actually complete everyday tasks using computers." Many workers immediately revolted. In online comments, they blasted the tracking as a privacy violation, calling it antisocial and callous... [One engineering manager even asked "How do we opt out?"] "There is no option to opt-out on your corporate laptop," replied Andrew Bosworth, Meta's chief technology officer. Employees reacted by posting more than 100 angry and surprised emoji, according to the messages.... Meta is pushing its 78,000 employees to adopt AI tools and factoring their use of the technology in performance reviews. The company is also tracking employees' computer work to feed and train its AI models. And it is cutting jobs to offset its AI spending, saying last month that it would slash 10% of its workforce. That has led to anger and anxiety as employees await news of whether they are affected by the layoffs, which are slated to be carried out May 20, according to 11 current and former Meta employees. Some said they no longer saw Meta as a place for a long career. Others were looking for new jobs or trying to signal that they wanted to be laid off so they could receive severance pay, the current and former employees said. 
"It's incredibly demoralizing," an employee who does user research wrote in an internal post, which was reviewed by the Times... Meta also introduced internal dashboards to track employees' consumption of "tokens," a unit of AI use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many AI agents that others had to introduce agents to find agents, and agents to rate agents, two people said.

Read more of this story at Slashdot.

Categories: Linux fréttir

'Changing of the Guard'? AMD, Intel, and Micron Soar While Nvidia Lags

Slashdot - Sun, 2026-05-10 03:34
While Nvidia has dominated the "infrastructure boom" since 2022's launch of ChatGPT and "the generative AI craze," CNBC writes that "This week offered the starkest illustration yet of what Mizuho analyst Jordan Klein said could be a 'changing of the guard in AI.'" Chipmakers Advanced Micro Devices and Intel notched gains of about 25%, while memory maker Micron jumped more than 37% and fiber-optic cable maker Corning climbed about 18%. All four of those companies have more than doubled in value this year, with Intel leading the way, up well over 200%. Nvidia, meanwhile, is only slightly ahead of the Nasdaq in 2026, gaining 15% for the year, aided by an 8% rally this week. In spreading the wealth to a wider swath of hardware companies, investors are clearly betting that the bull market in AI has long legs and that data centers are going to need a wider array of advanced components for years to come. Memory has been the biggest theme of late due to a global shortage that's driven up prices and turned Micron, a 47-year-old company tucked in a sleepy corner of the semiconductor market, into one of the hottest trades over the past 12 months. Micron blew past an $800 billion market capitalization for the first time this week, and the stock is now up over 750% in the past year. CEO Sanjay Mehrotra told CNBC in March that key customers are only getting "50% to two-thirds of their requirements" because of supply issues. The memory market is largely dominated by Micron, along with Korea-based Samsung and SK Hynix, which are also both in the midst of historic rallies... Bank of America estimates the data center CPU market could more than double from $27 billion in 2025 to $60 billion in 2030. AMD's quarterly results this week underscored the emerging trend, as earnings, revenue and guidance sailed past estimates on strong data center growth.
The company has long led the CPU charge, and CEO Lisa Su said on the earnings call that AMD now expects 35% growth over the next three to five years in the server CPU market, up from a forecast of 18% growth that the company provided in November. The article cites two other big movers: Intel "is in the midst of a revival sparked by a major investment from the U.S. government last year. Intel's stock had its best month on record in April, more than doubling, and has continued notching massive gains, rising 33% in the early days of May." Nvidia still remains the world's most valuable company "and is expected to show revenue growth of 70% this fiscal year," the article points out — adding that companies like Corning are also benefiting from Nvidia partnerships. "Glass maker Corning, which celebrated its 175th anniversary this week, signed a massive deal with Nvidia on Wednesday that involves the development of three new U.S. factories dedicated entirely to optical technologies... likely a major step in Nvidia's move away from copper cables and towards fiber-optic cables as it builds out its rack-scale systems."

Read more of this story at Slashdot.

Categories: Linux fréttir

Open Source Registries Join Linux Foundation Working Group to Address Machine-Generated Traffic

Slashdot - Sun, 2026-05-10 01:34
Under the nonprofit Linux Foundation, "a new Sustaining Package Registries Working Group will seek to identify concrete funding, governance, and security practices," reports ZDNet, "to keep code flowing as download counts grow.... Because software builds, continuous integration pipelines, and AI systems hammer registries at machine speed rather than human speed, the sites can't keep up. "That growth has brought a surge in bot traffic, automated publishing, security reports, and outright abuse, exposing what the working group bluntly calls a 'sustainability gap'." Sonatype CTO Brian Fox, who oversees the Maven Central Java registry, estimates open-source registries saw 10 trillion downloads in 2025. And "The same pattern is appearing across ecosystems. More machine traffic. More automation. More scanning. More expectations around uptime, integrity, provenance, and policy enforcement. More cost. More support burden. More dependency on infrastructure that the industry still talks about as though it runs on goodwill and spare time." ZDNet reports that "To tackle that, Sonatype has teamed up with the Linux Foundation and other package registry leaders, including Alpha-Omega, Eclipse Foundation (OpenVSX), OpenJS Foundation, OpenSSF, Packagist, Python Software Foundation, Ruby Central (RubyGems), and the Rust Foundation (Crates)." The idea is to give operators a neutral forum to discuss money, governance, and shared operational burdens openly. Once that's dealt with, they'll coordinate how to explain those realities back to companies and organizations that have long assumed registries are "free." No, they're not. They never were. As the Linux Foundation pointed out, "Registries today run primarily on two things: (1) infrastructure donations and credits; and (2) heroic efforts from small paid teams (themselves funded by donations and grants) and unpaid volunteers that operate and maintain registry services. 
The bulk of donations and grants comes from a small set of donors and doesn't scale with demands on the registry." The working group is explicitly positioned as a venue where registry leaders and ecosystem stakeholders can align on "practical, community-minded" ways to sustain that infrastructure, rather than each operator improvising its own survival plan in isolation. ZDNet says the group will also coordinate security practices and information, and craft frameworks "that make it politically and legally possible to introduce sustainable funding models without fracturing communities." And they will also "align messaging and educational content so developers, companies, and policymakers finally understand what it costs to run these services."

Read more of this story at Slashdot.

Categories: Linux fréttir

Will Maryland's Utility Bills Increase $1.6B to Support Other States' Datacenters?

Slashdot - Sat, 2026-05-09 22:34
To upgrade its grid for data centers, PJM Interconnection (which serves 13 states) plans to spend $22 billion — and charge nearly $2 billion of that to customers in Maryland, argues Maryland's Office of People's Counsel. The money "will be recovered in rates for decades" and "drive up Maryland customer bills by $1.6 billion over the next ten years alone," they said Friday, announcing an official complaint filed with America's Federal Energy Regulatory Commission. Extra demand is expected from Ohio, Pennsylvania, and Illinois "where demands driven by data centers are projected to grow substantially by 2036," they explain. But that means that Maryland customers "are subsidizing data center-driven transmission buildout by virtue of geographic proximity..." Tom's Hardware explains: That means an extra $823 million for residential (approx. $345 per customer), $146 million for commercial (approx. $673 per customer), and $629 million for industrial customers (approx. $15,074 per customer)... "Maryland customers have neither caused the need for these billions in new transmission projects nor will they meaningfully benefit from them," [according to Maryland People's Counsel David S. Lapp].... This is one of the biggest reasons why many AI hyperscalers are facing pushback from the communities where they intend to place their data centers. At the moment, around 69 jurisdictions have passed some sort of moratorium on projects like these, and a survey has shown that nearly half of Americans do not want a data center in their neighborhood. Debates around these projects are passionate, with a few cases turning violent and even resulting in shootings (thankfully, without any casualties), especially as many feel that the construction of these power-hungry assets is threatening their quality of life. Thanks to long-time Slashdot reader noshellswill for sharing the news.

Read more of this story at Slashdot.

Categories: Linux fréttir

Rush Rescue Mission for NASA's $500M Space Telescope Passes Key Milestone

Slashdot - Sat, 2026-05-09 21:34
NASA's $500 million Neil Gehrels Swift space observatory was launched in 2004. But it's now "at risk of falling back through the atmosphere and burning up without intervention," reports Spaceflight Now. Fortunately, a mission to prevent that "just passed a notable prelaunch testing milestone." On Friday, NASA announced that the Link spacecraft, manufactured by Katalyst Space Technologies to intervene before Swift's fate is sealed, completed its slate of environmental testing at the agency's Goddard Space Flight Center in Greenbelt, Maryland... "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude," [said John Van Eepoel, Swift's mission director at NASA Goddard, in a NASA press release]. "Katalyst has gotten to this point in just eight months, and we're glad they were able to use NASA's facilities to test Link and draw on our expertise to help tackle questions that popped up along the way...." "Given how quickly Swift's orbit is decaying, we are in a race against the clock, but by leveraging commercial technologies that are already in development, we are meeting this challenge head-on," said Shawn Domagal-Goldman, acting director, Astrophysics Division, NASA Headquarters, at the time... Attempting an orbit boost is both more affordable than replacing Swift's capabilities with a new mission, and beneficial to the nation — expanding the use of satellite servicing to a new and broader class of spacecraft...." Swift is in an orbit inclined 20.6 degrees from the equator, which is why Katalyst selected Northrop Grumman's Pegasus XL air-launched rocket in November to fly the mission. "The versatility offered by Pegasus' unique air-launch capability provides customers with a space launch solution that can be rapidly deployed anywhere on Earth to reach any orbit," said Kurt Eberly, Director of Space Launch for Northrop Grumman. The mission is set to launch in June.

Read more of this story at Slashdot.

Categories: Linux fréttir

The Trump Phone Either Is Or Isn't Closer To Delivery

Slashdot - Sat, 2026-05-09 20:34
September 2025? January 2026? Delivery dates keep slipping for the Trump Organization's "Trump Phone" — a gold-coloured Android smartphone priced at $499 (£370). But in March the Verge spotted signs the phone was moving forward: FCC listings for a smartphone with the trade name "T1" show that it was tested late last year, and granted certification by the FCC in January... [T]he phone was submitted for testing by another company entirely: Smart Gadgets Global, LLC... Smart Gadgets Global's website promises "Top Quality Electronics created for 'YOUR' customer!" But in April the Trump phone revised its "Terms and Conditions" for preorders. The new language? A preorder deposit provides only a conditional opportunity if Trump Mobile later elects, in its sole discretion, to offer the Device for sale. A deposit is not a purchase, does not constitute acceptance of an order, does not create a contract for sale, does not transfer ownership or title interest, does not allocate or reserve specific inventory, and does not guarantee that a Device will be produced or made available for purchase.... Estimated ship dates, launch timelines, or anticipated production schedule are non-binding estimates only. Trump Mobile does not guarantee that: the Device will be commercially released... Trump Mobile will not be responsible for delay, modification, or failure to release a Device due to causes beyond its reasonable control, including but not limited to regulatory review, carrier certification delays, component shortages, labor disruptions, governmental orders, acts of God, transportation interruptions, or third-party supplier failures... If Trump Mobile cancels or discontinues the Device offering prior to sale, Trump Mobile will issue a full refund of the deposit amount paid... 
If Trump Mobile cancels, delays, or does not release the Device, your sole and exclusive remedy is a full refund of the deposit amount actually paid, and you waive any claim for equitable, injunctive, or specific performance relief relating to preorder priority or Device allocation. There was an unconfirmed report on social media that the updated Terms were also emailed to customers (cited by the International Business Times). And the new language also hedges that for the gold T1 phone, "Images, prototypes, beta demonstrations, and marketing renderings are illustrative only and may not reflect final production units...." But then eight days ago The Verge reported that phone "has just passed another milestone on its slow road to release," described as "a requirement for any phone launching in the US..." "The phone has received the little-known PTCRB certification, a first step toward being certified to work on major networks and be issued with IMEI numbers." [A]t least, I think it's been certified. What's actually been certified by the PTCRB is the SGG-06, a smartphone from Smart Gadgets Global, LLC, with support for 5G, 4G, 3G, and 2G networks.

Read more of this story at Slashdot.

Categories: Linux fréttir

Plant Seeds Do Something Incredible When the Sound of Rain Strikes

Slashdot - Sat, 2026-05-09 19:34
"Plant seeds can sense the vibrations generated by falling raindrops," reports ScienceAlert, "and respond by waking from their state of dormancy to welcome the water, new research shows.... to germinate in 'anticipation' of the coming deluge." The finding, discovered by MIT mechanical engineers Nicholas Makris and Cadine Navarro, offers the first direct evidence that seeds and seedlings can sense and respond to sounds in nature... "The energy of the rain sound is enough to accelerate a seed's growth," [explains Markis]. Plants don't have the same aural equipment we do to actually hear sounds, of course. But the study suggests that seeds respond to the same vibrations that can produce a sound experience in our human ears. Across a series of experiments, the researchers submerged nearly 8,000 rice seeds in shallow tubs of water, at a depth of around 3 centimeters (1 inch), and exposed some of them to falling water drops over periods of six days... A hydrophone recorded the acoustic vibrations produced by the drops, confirming that the experiment mimicked the vibrations produced by actual raindrops falling in nature — such as the driving downpours that can sometimes pelt Massachusetts' puddles, ponds, and wetlands... In their study, the researchers observed that seeds exposed to the falling drops germinated up to around 37% faster, compared with seeds that did not receive the simulated rainstorm treatment but were housed in otherwise identical conditions. More information in Scientific American and Scientific Reports.

Read more of this story at Slashdot.

Categories: Linux fréttir

Cisco Releases Open-Source 'DNA Test for AI Models'

Slashdot - Sat, 2026-05-09 18:34
Cisco has released an open-source tool "to trace the origins of AI models," reports SC World, "and compare model similarities for greater visibility into the AI supply chain." [Cisco's Model Provenance Kit] is a Python toolkit and command-line interface (CLI) that looks at signals such as metadata and weights to create a "fingerprint" for AI models that can then be compared to other model fingerprints to determine potential shared origins. "Think of Model Provenance Kit as a DNA test for AI models," Cisco researchers wrote. "[...] Much like a DNA test reveals biological origins, the Model Provenance Kit examines both metadata and the actual learned parameters of a model (like a unique genome that comprises a model), to assess whether models share a common origin and identify signs of modification." The tool aims to address gaps in visibility into the AI model supply chain. For example, many organizations utilize open-source models from repositories like HuggingFace, where models could potentially be uploaded with incomplete or deceptive documentation. The Model Provenance Kit provides a way for organizations to verify claims about a model's origins, such as claims that a model is trained from scratch, when in reality it may be copied from another model, Cisco said. This may put organizations at risk of using models with unknown biases, vulnerabilities or manipulations and make it more difficult to resolve any incidents that arise from these risks. Thanks to Slashdot reader spatwei for sharing the news.
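This isn't the toolkit's actual API, but the underlying idea (an exact fingerprint from hashed weights, plus a fuzzy comparison of the weight values to catch fine-tunes) can be sketched in a few lines. The function names and toy weight vectors below are illustrative only:

```python
import hashlib
import math

def fingerprint(weights: list[float]) -> str:
    """Exact-match fingerprint: hash a canonical serialization of the weights."""
    digest = hashlib.sha256()
    for w in weights:
        digest.update(repr(round(w, 8)).encode())
    return digest.hexdigest()

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Fuzzy comparison: values near 1.0 suggest a shared origin (e.g. a fine-tune)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

base = [0.12, -0.50, 0.33, 0.90]   # toy "model weights"
tuned = [0.11, -0.48, 0.35, 0.88]  # lightly perturbed, as fine-tuning might do

print(fingerprint(base) == fingerprint(tuned))  # False: not a byte-for-byte copy
print(cosine_similarity(base, tuned) > 0.99)    # True: strong shared-origin signal
```

A real provenance check would operate on full tensors and metadata rather than short lists, but the two-tier logic (exact hash, then similarity) is the same shape.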

Read more of this story at Slashdot.

Categories: Linux fréttir

Google tweaks Chrome AI privacy wording, insists processing stays on-device

TheRegister - Sat, 2026-05-09 17:57
Google has changed Chrome's disclosure language about how its on-device AI works, but that doesn't mean the company intends to capture on-device AI interactions. The Chrome menu modification, which isn't universally rolled out yet even in Chrome 148, was noted this week on Reddit. The "On-device AI" message in Chrome's System settings previously read, "To power features like scam detection, Chrome can use AI models that run directly on your device without sending your data to Google servers. When this is off, these features might not work." But the message changed recently – it lost the phrase "without sending your data to Google servers." That prompted privacy advocate Alexander Hanff to question whether the edit signaled an architectural change that would see local AI interactions processed by Google servers instead of remaining on-device. "Why was the sentence 'without sending your data to Google servers' removed from the on-device AI description in Chrome's Settings UI?" Hanff asked. "Was the previous text inaccurate? Has the architecture changed? Was the wording withdrawn on legal advice because Google was unwilling to defend it as a representation?" Asked about this, a Google spokesperson said, "This doesn’t reflect a change to how we handle on-device AI for Chrome. The data that is passed to the model is processed solely on device." It appears this situation deserves a more genteel rendering of Hanlon's Razor – "Never attribute to malice that which is adequately explained by stupidity." In this case, it's "Never attribute to malice that which is adequately explained by bad timing." Word of the menu modification surfaced as Chrome was rolling out the Prompt API, which is designed to provide web pages with a programmatic way to interact with a browser-resident AI model. The API's arrival and public discussion of it drew attention to the fact that Chrome has been silently downloading Google's 4GB Nano model onto users' devices. 
The coincidence of these events made it seem that Google was preparing to capture on-device prompts and responses, which would be a significant privacy retreat. In fact, Chrome has been letting Nano sleep on the couch for early adopters dating back two years when local AI was implemented in Chrome 126 as a preview program. While Google hasn't yet made model downloading and storage opt-in, the biz did earlier this year implement a way to deactivate and remove the space-hogging model. "We’ve offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model," a Google spokesperson explained, pointing to relevant help documentation. "It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud. While this requires some local space on the desktop to run, the model will automatically uninstall if the device is low on resources. In February, we began rolling out the ability for users to easily turn off and remove the model directly in Chrome settings. Once disabled, the model will no longer download or update." The edit to the "On-device AI" message occurred in early April. According to Google, Gemini Nano in Chrome processes all data on-device. But when websites interact with Gemini Nano in Chrome – via the Prompt API, for example – they can see the inputs and outputs of the model. In such cases, the data handling would fall under the privacy policy of the website interacting with the user's Nano instance. Google decided to change its "On-device AI" message to avoid confusion – and perhaps to preclude legal claims alleging policy violations – when the user is interacting with a Google site that calls out to the Nano model on-device, in support of some service it provides. In that scenario, the Google site would have access to the prompts it sends and responses it gets from the user's on-device model. 
That interaction would happen "without sending your data to Google servers," at least in the context of a user querying a model running in Google Cloud. But since the user's on-device Chrome-resident Nano model would send data to the Google site in response to that site's API calls, that data transmission might be interpreted as a violation of the local AI commitment language. Hence the edit. Google's decision to have Gemini Nano become a Chrome squatter is a novel way of doing things, given that co-opting people's computing resources has largely been the province of covert crypto-mining scripts. But perhaps after years of offering Gmail and Search at no monetary cost, Google feels entitled to a few gigabytes of Chrome users' local storage and occasional bursts of their on-device compute. ®
Categories: Linux fréttir

Social Media Sites Got Information from Ad Trackers on US State Health Insurance Sites

Slashdot - Sat, 2026-05-09 17:34
All 20 of America's state-run healthcare marketplace sites "include advertising trackers that share information with Big Tech companies," reports Gizmodo, citing a report from Bloomberg: Per the report, seven million Americans bought their health insurance through state exchanges in 2026, and many of them may have had personal information shared with companies, including Meta, TikTok, Snap, Google, Nextdoor, and LinkedIn, among others. Some of the data collected and shared with those companies included ZIP codes, a person's sex and citizenship status, and race. In addition to potentially sensitive biographical details about a person, the trackers also may reveal additional details about their life based on the sites they visit. For instance, Bloomberg found trackers on Medicaid-related web pages in Rhode Island, which could reveal information about a person's financial status and need for assistance. In Maryland, a Spanish-language page titled "Good News for Noncitizen Pregnant Marylanders" and a page designed to help DACA recipients navigate their healthcare options were found to be transmitting data to Big Tech firms... Per Bloomberg, several states have already removed some trackers from their exchange websites following the report. Thanks to Slashdot reader JoeyRox for sharing the news.

Read more of this story at Slashdot.

Categories: Linux fréttir

10 People Called Police to Report Bigfoot Sighting in Ohio

Slashdot - Sat, 2026-05-09 16:34
CNN reports on a "sudden surge of claimed sightings" of "unidentified figures averaging 8 feet tall in wooded areas" along Ohio's Mahoning River. "And it stopped just as quickly as it started," says Jeremiah Byron, host of the Bigfoot Society Podcast, which collected and mapped the reports.... Byron doesn't take every report at face value, making sure he talks to people directly before publicizing their claims. Once word got out about the reports in Ohio, so did the obvious fakes. "I started to get a lot of AI-generated reports in my email. It got to the point where I was probably getting about 1,000 emails a day," he says. But when Byron spoke by phone with people who made the initial reports, they convinced him they weren't making anything up. "It was obvious they weren't just wanting to get their name out there," says Byron. "They were just freaked out by what they experienced, and they didn't want anything else to do with it." [...] Local law enforcement in Ohio also seem to be enjoying the publicity. Portage County Sheriff Bruce D. Zuchowski made a series of gag posts purporting to show the arrest of Bigfoot and his detention by Immigration and Customs Enforcement, only for the creature to escape from custody at the Canadian border... Despite the levity, the sheriff's office really did get some calls from concerned residents, Zuchowski says. "Ten individual people were like, 'Yeah I was walking my dog at 4 a.m. and I saw this hairy figure and I smelled this musty odor and there was this big thing and all of a sudden it ran,'" the sheriff told CNN affiliate WOIO in March.

Read more of this story at Slashdot.

Categories: Linux fréttir

Newspaper Chain's Reporters Withhold Their Bylines to Protest 'AI-Assisted' Articles

Slashdot - Sat, 2026-05-09 15:34
A chain of 30 U.S. newspapers including the Sacramento Bee, the Miami Herald and the Idaho Statesman "has started to use a new AI tool that can summarize traditional articles and spit out different versions for different audiences," reports the New York Times. And the chain's reporters "are not happy about it." Journalists in many of the company's newsrooms are now withholding their bylines from articles created by the new tool, meaning that those articles will run with a generic credit rather than a reporter's name, as is customary. They are also labeled AI-assisted. "We don't want to put our bylines on stories we did not actually write even if they're based on our work," said Ariane Lange, an investigative reporter at the Sacramento Bee and the vice chair of the Sacramento Bee News Guild. "That in itself feels like a lie." The reporters' byline strike is one of the sharpest conflicts yet between journalists and their companies over the use of AI. Related debates are playing out in newsrooms across the country, as publishers experiment with new AI tools to streamline work that used to take hours, and some even use it to write full articles... [E]xecutives have promoted the tool internally as a way to increase the number of articles published and ultimately gain new subscribers... [Eric Nelson, the vice president of local news] said using reporters' bylines on the AI-generated articles was a way to show "authority" on Google so the search engine would rank the articles higher in the results. He also said the company was experimenting with feeding in reporters' notes to create articles. "Journalists who embrace and experiment with this tool are going to win," Nelson said in the meeting. "Journalists who are defiant will fall behind".... 
McClatchy's public AI policy states that the company uses AI tools to summarize articles to "help readers quickly understand the main points of a single story or catch up on multiple stories about a larger topic," and that editors review the output before publication.

Read more of this story at Slashdot.

Categories: Linux fréttir

Why Some US Schools Are Cutting Back On the Technology They Spent Billions On

Slashdot - Sat, 2026-05-09 14:34
America's school districts "spent billions on technology during the pandemic," reports the Washington Post. "But now some states are limiting in-school screen time because of concerns about its impact on children." Nationwide [U.S.] schools invested at least $15 billion and possibly as much as $35 billion from federal pandemic relief funds on laptops, learning software and other technology between 2020 and 2024, according to an estimate by the Edunomics Lab, an education think tank. By last school year, 88% of public schools reported in a federal survey they had given every child a laptop, tablet or similar device. Now, some states and school districts are walking back their technology use following pressure from parents who claim too much in-school screen time has zapped children's attention spans and left them worse off academically. At least a dozen states introduced or adopted policies this year that attempt to regulate screen time in schools — from prescribing limits to allowing families to opt out of virtual instruction... In Missouri, a bill that would require every school district in the state to come up with a screen time policy is making its way through the state legislature. "Ed tech is just big tech in a sweater vest," said Missouri state Rep. Tricia Byrnes (R), who introduced the legislation and blames what she described as the overuse of technology for middling test scores... Complicating the issue is research that shows students do not see any academic gains when provided with laptops. A meta-analysis of studies on reading comprehension suggests paper-based texts are better than digital-based reading... A body of research has established that excessive or unstructured screen time can have detrimental effects on children, including harming language development, weakening social skills and triggering anxiety and depression. 
But the effects of school-issued devices and in-school usage on children's development are less understood, said Tiffany Munzer, a developmental behavioral pediatrician and digital media researcher at the University of Michigan. Some studies report that high-quality digital tools can support students' learning goals, Munzer said. But "a lot of the apps that are marketed as educational ... are not actually educational and contain a lot of commercialized content."

Read more of this story at Slashdot.

Categories: Linux fréttir

macOS 27 threatens to bury Time Capsule, FOSS brings a shovel

TheRegister - Sat, 2026-05-09 12:25
The next major release of macOS looks likely to remove Apple Filing Protocol (AFP) support, stopping Time Capsules from working… but FOSS, uh, finds a way. The current version of macOS "Tahoe" 26.4 already has network Time Machine issues, especially for folks using Apple Time Capsules. It looks like macOS 27 may completely remove the network protocol they need. However, the Time Capsules run NetBSD under the hood, and that means that the FOSS world has been able to come up with a workaround. It's called TimeCapsuleSMB, and it aims to keep older Time Capsules usable with modern macOS. It's eight months since Apple released macOS 26, and the company's annual release schedule means that macOS 27 is looming. Although Cupertino hasn't told the world much about it yet, it is warning sysadmins to "prepare your network environment for stricter security requirements." Reading the bulletin, we found it rather clixby: while it firmly warns that security checks will become stricter, it doesn't spell out what products will change or how. Happily, there are elder Mac gurus out there who interpret Apple's sometimes Delphic utterances, and Howard Oakley is one of the greatest. In a post about networking changes coming in macOS 27, he translates that it will require TLS 1.2 or above. (The Register explained TLS back in 2002, and version 1.2 appeared about six years later.) However, he also warns that it could mean the end of AFP, which is basically AppleTalk-over-TCP/IP version 3.4. AppleTalk was the Mac network protocol for file sharing from System 6 onward. In 2013, OS X 10.9 "Mavericks" made Microsoft's SMB the default file-sharing protocol in place of AFP, and it looks like AFP now faces the ax: it was officially deprecated in macOS 15.5. To be fair, macOS 26 Macs started displaying a warning to Time Capsule users nearly a year ago. Apple introduced the first model of Time Capsule in 2008, and the fifth-generation version in 2013. 
The company discontinued the whole AirPort product line in 2018. All generations only support AFP and SMB version 1. That’s the original version that appeared with LAN Manager in 1987, and we reported on Samba dropping SMB1 back in 2022. The good news is that even if Apple kills its original file-sharing protocol next year, the FOSS community is on the case and won't let working kit die. The Time Capsule hardware is essentially a box containing a Wi-Fi access point and a hard disk, and an Arm chip with just enough software to share that HDD as network-attached storage. Apple didn't write this software from scratch: it picked up and customized NetBSD for the job. The first four generations of Time Capsule (flat square boxes) run NetBSD 4, and the fifth-gen devices – the tall tower-shaped models from 2013 onward – run NetBSD 6. That gave Microsoft's James Chang an opening. Since the devices run NetBSD, it's possible to compile a newer version of Samba, and copy it somewhere that the tiny embedded Arm computer can find it. Teaching such old kit a new trick is never that easy, though, and he faced a number of challenges, which he details in the design section of the project README. Among them are machines that only have about 900 KB of available disk space – less than 1 MB – and a tiny 16 MB RAMdisk. He settled on Samba 4.8, which dates back to 2018, the same year Apple discontinued the product line, but which includes the necessary Time Machine support, via a module named vfs_fruit. The TimeCapsuleSMB docs are worth a read. We found his descriptions of how he worked around the hardware's very significant limitations impressive. Notably, on the early models, you'll need to manually reload the software every time you reboot the Time Capsule. The final model can do this automatically. Don't fret at the thought of backing up to such an elderly spinning hard disk: iFixit has descriptions of how to replace the drive in both the early models and the later ones too. ®
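For readers curious what the vfs_fruit piece looks like in practice: Samba 4.8 was the first release whose vfs_fruit module offered a "fruit:time machine" option, which is what lets a macOS client treat an SMB share as a Time Machine destination. As a rough sketch only (the share name and path below are invented for illustration; consult the TimeCapsuleSMB README for the project's actual configuration), a Time Machine-capable share in smb.conf looks something like this:

```ini
; Hypothetical smb.conf fragment for a Time Machine share.
; Share name and path are illustrative, not from TimeCapsuleSMB.
[TimeMachine]
   path = /Volumes/backup
   read only = no
   ; fruit must be loaded before streams_xattr, per the Samba docs
   vfs objects = catia fruit streams_xattr
   ; advertise the share as a valid Time Machine target
   fruit:time machine = yes
```

The "fruit:time machine = yes" setting flags the share with the capabilities macOS checks for before offering it as a backup destination, which is why Chang needed at least Samba 4.8 rather than whatever older build the stock firmware shipped with.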
Categories: Linux fréttir

Humanoid Robot Becomes Buddhist Monk In South Korea

Slashdot - Sat, 2026-05-09 11:00
A four-foot humanoid robot named Gabi has become a monk at a Buddhist temple in Seoul, participating in a modified initiation ceremony where it pledged to respect life, obey humans, and act peacefully toward other robots and objects. "Robots are destined to collaborate with humans in every field in the future," Hong Min-suk, a manager at the Jogye Order, the largest sect of Buddhism in South Korea, tells the New York Times. "It will only be natural for them to be part of our festival." Smithsonian Magazine reports: For the temple, this marks the first time a robot has participated in the sugye initiation ceremony, when followers pledge their devotion to the Buddha and his teachings. Gabi -- a Buddhist name that refers to mercy, Yonhap News Agency reports -- was made by Unitree Robotics, a Chinese civilian robotics company. The model, G1, retails starting at $13,500. During the ceremony, Gabi agreed to five vows usually recited by human monks and slightly altered for the humanoid. The robot pledged to respect life, act with peace toward other robots and objects, listen to humans, refrain from acting or speaking in a deceptive manner and save energy. Gabi participated in a modified yeonbi purification ritual. While a human monk normally receives a small incense burn on the arm, instead Gabi received a lotus lantern festival sticker and a prayer bead necklace. The landmark event aligns with the promise made during a New Year's address by the Venerable Jinwoo, president of the Jogye Order of Korean Buddhism, to incorporate artificial intelligence into the Buddhist tradition. "We aim to fearlessly lead the A.I. era and redirect its achievements toward the path of attaining peace of mind and enlightenment," he said, per a statement.

Read more of this story at Slashdot.

Categories: Linux fréttir

Pages

Subscribe to www.netserv.is aggregator - Linux fréttir