TheRegister
HP stuffed a PC into a keyboard. We took it for a spin
The early history of personal computers is stacked with systems such as the Apple II and the Commodore 64 that had the components living inside a keyboard. But as technology evolved, the keyboard became a peripheral, and the PC itself was either in a separate box or the whole system was a laptop.

Now, HP has a new spin on this decades-old idea. It embeds a full-fledged AI PC inside a 101-key keyboard you can carry with you from the office to home.

Unlike ‘80s microcomputers or hobbyist-oriented products like the Raspberry Pi 500, the EliteBoard G1a is squarely targeted at business. The system is part of HP’s commercial lineup, alongside its EliteBook laptops, and, for better or worse, it comes with HP Wolf Security preinstalled. The company clearly hopes organizations will buy these in bulk. But to benefit from one, you really have to prefer a mobile keyboard to a traditional laptop, all money aside.

Who’s it for?

The EliteBoard G1a is trying to create a new niche. When we talked with product managers at HP, they suggested IT departments would buy these computers for two types of workers. The first group is so-called "dual deskers" - knowledge workers who have a desk with a monitor at work and another at home. The second group includes deep-pocketed call centers or environments where desk space is at a premium.

From time immemorial, dual deskers have carried laptops and closed their lids when they docked to a monitor at work. With the EliteBoard, they could simply schlep the keyboard, which weighs a mere 1.49 pounds – about half the weight of a lightweight laptop.

To make this situation work in companies with managed systems, we have to assume that either the IT department would give out monitors to use at home or offer some reason (a subsidy? a mandate?) for employees to buy their own. The EliteBoard connects to monitors using its USB4 port, so its ideal monitor is one that has Thunderbolt or USB video connectivity built in.
Less-expensive and older monitors don’t have this type of connectivity, but select configs of the EliteBoard come with an optional USB-to-HDMI adapter that you can use with other monitors, and it has a USB pass-through for power. That said, HP demonstrated the EliteBoard at numerous press events by showing how much desk space it saves when a single USB cable carries power, video out, and connectivity to peripherals via the monitor. So if companies want employees to take advantage of this scenario at home, that means shelling out another few hundred bucks for a modern monitor, or making employees do it.

Today, companies with limited desk space for a call center or another cramped work area could just buy a tiny desktop to sit behind the monitor or next to it. However, building all of the PC’s guts into the keyboard makes a lot of sense for space savers, because a keyboard is something every PC needs and a desktop chassis is not. If a company wanted to, it could give each employee their own EliteBoard, have them plug it into a monitor during work time, and then have them stick it in a drawer when they go off shift and someone else comes on.

The problem for call centers is that the HP EliteBoard G1a is much more powerful and much more expensive than what they need. At press time, the G1a was priced at $1,499 for the lowest-end config. And most companies probably don’t need employees to each have their own PC that they lock away after they punch out.

“The call center angle is probably the stronger pitch, but those buyers are shopping entry-to-mid-market. They want something cheaper and simpler than a mini desktop, not a Copilot+ PC with up to 64GB of RAM,” said Kieren Jessop, a research manager with analyst firm Omdia.
“HP has built an impressive piece of engineering in search of a problem that most enterprises have already solved with a laptop — or will solve with a thin client.”

Configurations

HP makes the EliteBoard G1a in a variety of configurations that vary by market. Companies can get it with various AMD Ryzen CPUs, up to 64GB of RAM, and an SSD up to 2TB in capacity. It comes with either a detachable or embedded cord, and optionally with a 32 WHr battery that promises up to 3.5 hours of endurance.

Why would you need a battery on a product that demands to be used at a desk and plugged in? The most likely reason is to let the keyboard go into sleep mode when it’s in your bag. Employees could also hook the EliteBoard G1a up to a portable monitor and use it unplugged that way, but then why not just buy them a laptop?

At press time, prices ranged from $1,499 to $3,423 in the US. The lowest-end config has a Ryzen AI 5 Pro 340, 16GB of RAM, an integrated cable, and a 256GB SSD. Fifty bucks more will get you the same configuration with a 512GB SSD, as per HP.com. The highest-end config listed comes with a Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD, and sells for only $1,999 at B&H but a whopping $3,423 at HP.com.

Our review config, which sports 64GB of RAM, a Ryzen AI 7 Pro 350 CPU, and a 2TB SSD, has not been listed for sale in the US, and HP didn't answer when we asked how much it would cost. However, we’d assume that it would cost a lot more than $1,999.

Price vs a Laptop

If all you do is dock your PC at home and at work, you might think, “why pay for a laptop when I don’t need a built-in screen?” But it’s hard to make that argument when the laptop is actually less expensive. Right now, you can get an HP EliteBook 6 G1aN with the same AMD Ryzen AI 7 350 CPU, along with 24GB of RAM and a 512GB SSD, for just $1,299 – that's actually less than the cheapest EliteBoard.
A custom-configured HP EliteBook 8 G1a with the Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD is just $1,799.

If you’re comparing the total cost of ownership versus a laptop, also consider the price of a monitor if your users don’t already have one. While you could use an adapter, the ideal use case involves a USB-C monitor that transmits data and power over a single wire. The cheapest HP-branded USB-C monitor I could find at press time was the HP E27k 4K monitor, which was selling for $504. However, I saw a Dell-branded USB-C monitor, the S2725DC, on sale for just $236 at Amazon. If you’re an IT department and you’re kitting out someone for home and office use, you might need to buy them two monitors.

Design

At 14.1 x 4.7 x 0.7 inches, the EliteBoard G1a is the size of a typical full-size keyboard, complete with numpad. It’s a boring but office-friendly dark gray color with a very thin bezel around the keys. At first glance, there aren’t many ways to know that this is more than just a keyboard. There’s a power button / fingerprint reader in the upper right corner of the keyboard, though you might easily mistake it for just another key until you press it and see the blue light turn on.

Turn the keyboard around and on the back lip you’ll notice a thin vent for airflow. This computer definitely has a fan, and you can hear it quite prominently at times. There are also two USB-C ports – a USB4 40 Gbps port and a 10 Gbps port – unless you have the embedded cable, in which case you just have the 10 Gbps port. Clearly, the 40 Gbps port is the one you’ll want to use for docking, but you can use the 10 Gbps port to connect the dongle for the included wireless mouse or other peripherals.

There’s also a security cable lock slot on the left side. So if you want to chain this to a desk, you can, but we’d argue that defeats the point of the machine.

But how well does it type?
Since this is a computer-in-a-keyboard, the most obvious question we need to answer is “how’s the typing experience?” Pretty decent.

On the bright side, the EliteBoard G1a has a generous 2 mm of travel, which is more than you’ll find on most laptops, where even 1.5 mm is considered deep. The keys feel pretty snappy and are in the same feedback league as those on my Lenovo ThinkPad X1 Carbon, though the ThinkPad’s keys have a more curved shape, which is better than the flat tops on the EliteBoard.

If you’re burning the midnight oil, there’s a built-in backlight, which you can enable by hitting the F9 key. It has two brightness settings so you can decide just how much you want it to shine through.

The layout is pretty standard for a full-size keyboard with a numpad. However, I don’t like how small the arrow keys are, and the Pg Up and Pg Dn keys are just tiny. There’s no empty space around these keys, which I use a lot when editing documents, so it’s far too easy to miss them. Even on most laptops, these keys are larger.

Another downer is the lack of flip-up feet on its bottom. I like to tilt my keyboard up at a 15 to 30 degree angle, but this one sits short and flat on the desk. To save my wrists, I always use a gel-filled wrist rest when I type and, without feet to elevate the keyboard, I’m typing down onto the keys because the board sits so much lower than the gel pad. This won’t be as much of an issue for folks who don’t use wrist rests.

In short, if you’re used to laptop keyboards or the low-cost keyboards that come with most desktop computers, the EliteBoard G1a will probably seem like a nice step up. However, if you want the best possible typing experience, there’s an entire ecosystem of mechanical keyboards out there with much deeper travel and more feedback; if you’re not a gamer, I recommend one with either clicky or tactile switches.
Unless you go for a low-profile keyboard, you’ll be getting between 3.6 and 4 mm of travel, so you won’t bottom out as easily when typing. I prefer clicky switches like the Kailh Box White (my favorite) or Cherry MX Blue, but those make some noise, so if you like quiet, Cherry MX Brown switches will do the trick.

To see the difference between my daily-driver mechanical keyboard, an Akko 3098N with Kailh Box White switches, and the EliteBoard G1a, I performed the 10fastfingers.com typing test on both. On HP’s keyboard, I managed 96 wpm, which is at the lower end of typical for me, with a six percent error rate. On my daily driver, the numbers were a better 101 wpm with a two percent error rate. Your mileage will vary.

Speaker and Microphone

The EliteBoard G1a has both built-in bottom-facing speakers and a microphone array. In our tests, the speaker was more than loud enough, and it was clear enough for voice calls, though we wouldn’t recommend listening to music on it for too long. The drums in AC/DC’s Back in Black sounded a little tinny, though there was clear separation of sound, with the vocals appearing to come from one side while the percussion came from another.

The dual-array microphone was also passable, but not good enough for podcasts. When we talked to a coworker using the built-in mic, she said our voice was clearly audible but a little echoey.

In the box and preloaded

Depending on which config you get, your HP EliteBoard G1a may come with a variety of different accessories in the box. All versions come standard with an HP 675M wireless mouse that connects either by Bluetooth or by an included USB-C wireless 5-GHz dongle. It is not a particularly fancy mouse, but it has a couple of side buttons and a scroll wheel. I found myself using my Logitech MX Master 3 mouse instead, because it’s ergonomically shaped and highly programmable.
My review unit also came with the optional soft canvas cover sleeve you can use to protect the EliteBoard G1a while you’re carrying it around. I found this add-on to be about as useful as a laptop sleeve. It might offer some protection and padding when you stick the EliteBoard G1a in an existing backpack, but it’s not going to replace your briefcase or your backpack when you’re commuting.

I also got the optional HDMI multiport hub, which is a must-have if you don’t already have a Thunderbolt or USB4 docking station or a monitor with that kind of connectivity built in. The hub connects to the USB4 40 Gbps port on the EliteBoard and features two USB-C ports (one for power, one for connectivity), an HDMI-out cable for connecting to a monitor, an Ethernet port for wired networking, and an HDMI-in port for a second monitor.

There’s an optional, slim 65W USB-C power adapter that’s helpful if you aren’t connecting to a monitor or docking station that supplies power. If you don’t get one in the box, it’s easy enough to find one for $15 to $30 on Amazon. Also, if your EliteBoard does not have an embedded cable — mine did not — you get a braided USB cable in the box. The less-expensive configs of the EliteBoard all have embedded cables, but we recommend getting a model without one because it’s easier to carry around without a cable hanging off of it.

HP does not preload a lot of software onto the EliteBoard, but it does come with a three-year subscription to HP Wolf Security, which normally costs $36 a year for individual subscriptions. HP Wolf has a malware/virus scanner, a threat containment feature, a secure browser, OS resiliency (for recovering from corruption and doing a reinstall), and application persistence, which prevents unwanted changes to security software like HP Wolf itself.

Since it has an NPU (neural processing unit) capable of more than 45 trillion operations per second (TOPS), the EliteBoard G1a qualifies as one of Microsoft’s Copilot+ PCs.
This means it has some added local AI features that not every PC gets from Windows 11, including Cocreate image generation in Paint, Windows Studio Effects handled locally for your webcam, translated Live Captions from any audio input, and Recall, a controversial feature that takes screenshots of all your work to help you “remember” what you were doing at any given time. Fortunately, Recall is disabled by default.

Performance

Equipped with an AMD Ryzen AI 7 Pro 350 CPU, 64GB of RAM, and a 2TB SSD, our review configuration of the EliteBoard G1a handled everything I threw at it. I used the system on and off as my daily driver PC for work over a period of several weeks and it was always smooth and responsive, even with dozens of Chrome tabs open and Slack running across two 4K monitors I had connected via a Thunderbolt 3 docking station.

I should note that, no matter what I was doing, the fan on the EliteBoard G1a was frequently running and often quite audible. It’s no louder than most notebooks I’ve tested, but if you’re expecting total quiet, look elsewhere.

My editorial workload is not nearly as demanding as some folks’ day jobs so, to see how the EliteBoard G1a stacks up, I ran it through a series of benchmarks and compared the results to those from two laptops I had access to: a Lenovo Yoga Slim 7x with a Qualcomm Snapdragon X Elite X1E-78-100 CPU, and a Lenovo ThinkPad X1 Carbon with an Intel “Meteor Lake” Core Ultra 7 165U processor.

The Ryzen AI 7 Pro in the EliteBoard debuted in 2025 with 8 cores, 16 threads, and a maximum boost clock of 5 GHz. It features built-in AMD Radeon 860M graphics and a neural processing unit (NPU) capable of 50 TOPS for better local AI. Its DDR5 RAM runs at 5,600 MHz.

Released in 2024, the Snapdragon X Elite X1E-78-100 has 12 cores and 12 threads with a boost clock that goes up to 3.4 GHz, along with an NPU that does 45 TOPS. It’s an Arm processor, so the laptop that runs it uses Windows on Arm.
The Yoga Slim 7x laptop that we tested had 16 GB of LPDDR5x RAM running at 8,448 MHz.

The oldest of our test group, vintage 2023, the Intel Core Ultra 7 165U has 12 cores and 14 threads, but only two of those cores are performance cores that can boost up to 4.9 GHz; the others are a mix of efficient cores and low-power efficient cores that boost up to 3.8 and 2.1 GHz respectively. The ThinkPad X1 Carbon we tested with it had 64GB of LPDDR5x RAM running at 6,400 MHz.

In our tests, the EliteBoard G1a always eclipsed the ThinkPad X1 Carbon, which is not a surprise considering the ThinkPad's much older processor. However, the Snapdragon-powered Yoga Slim 7x outpaced it on some benchmarks.

Primesieve

This test counts the prime numbers under one trillion and returns a result in millions of primes per second. The benchmark is particularly heavy on SIMD instructions like AVX-512 or Arm’s Neon and SVE vector extensions, making it a good proxy for some of the more workstation-centric tests we’ll look at shortly. It runs across both single-thread and multi-thread workloads, with big performance boosts for parallel processing.

Using just a single thread, the EliteBoard edged out the competition with 415 million primes per second (MPS), compared to the Slim 7x’s 352. However, the Slim 7x comfortably outperformed it when using multithreading, delivering 2,686 MPS to the EliteBoard’s 2,145. One thing to note is that, while the EliteBoard has more threads, it has fewer actual cores. The X1 Carbon wasn’t even in the same ballpark. This will become a theme across our test suite.

Blender

3D rendering is always a challenge and, to be honest, it’s hard to imagine somebody buying an EliteBoard for this purpose. However, it’s always worth noting what the system can do. We ran Blender, a very popular 3D modeling app, using three scenes: Monster, Junkshop, and Classroom.
Here, the Slim 7x and its 12-core Snapdragon processor were anywhere from 34 to 75 percent quicker, depending on the scene. Still, the EliteBoard turned in respectable scores on something you wouldn’t expect it to do.

Handbrake x265

Video transcoding is another resource-intensive task and one that occurs in many scenarios, including game streaming, video editing, and even video conferencing. To test how the EliteBoard handled video transcoding, we used Handbrake to convert a 4K 60 fps video to 1080p using the x265 encoder at the medium preset with a constant quality of 18. Our results are measured in frames per second (fps).

Again, the EliteBoard was far superior to the ThinkPad, but it was a good 45 percent behind the Yoga Slim 7x. Still, this is solid performance that’s more than workable.

Llama.cpp

One local AI task you might want to conduct is running an open source model as a chatbot on your PC rather than sourcing it from the cloud. This gives you more privacy than using OpenAI, Claude, or Copilot on the web, and it’s completely free. So we ran the GPT-OSS 20B open-weights model using Llama.cpp as our client and timed how many milliseconds it took to generate the first token.

Here we see that the Snapdragon processor and faster RAM in the Yoga Slim 7x gave it a definite advantage: it took 39 percent less time than the EliteBoard to get there. The EliteBoard also generated about half as many tokens per second. However, it beat the pants off the ThinkPad X1 Carbon, getting to the first token more than twice as quickly while generating 30 percent more tokens per second. It’s worth noting that these tests ran on the CPU cores and didn’t harness the chips’ integrated GPUs or NPUs.

Whisper.cpp

One common local AI workload a business person might use is transcription. Let’s say you had an audio file and you wanted to convert it into readable and editable text. You might use a tool based on Whisper, a popular free model from OpenAI.
For testing, we used Whisper.cpp, an implementation of Whisper written in C++, with the Whisper Medium EN model transcribing a 10-minute audio clip. Here, the EliteBoard transcribed the audio at 2.4x real-time speed, while the Yoga Slim 7x was faster at 3.4x. Those extra cores are doing a lot of heavy lifting. That said, if you’re converting 10 minutes of audio in less than five minutes, that’s pretty good.

LLVM Compile

For those using the EliteBoard for programming, compile times matter. So we compiled the LLVM toolchain from source and measured the time. This isn’t a trivial compile job, and it therefore represents a worst-case scenario for developers considering the EliteBoard.

Here it took 19 minutes and 44 seconds, which was more than double the time it took the Yoga Slim 7x. On high-end desktop workstation hardware, this same workload can be completed in under five minutes, so if your day job regularly requires compiling large projects, you might want to spring for something more capable. Or perhaps not: “my code is compiling” is a pretty good excuse for taking a 20-minute break.

7-Zip

Compression and decompression are very taxing on a CPU and are very common tasks. So we fired up 7-Zip and measured its ability to do both in single-threaded and multi-threaded scenarios.

With a single thread, the Slim 7x and the EliteBoard basically tie at compression, while HP’s computer holds the edge in decompression. However, when we move to multi-threaded scenarios, the Snapdragon X Elite’s 12 physical cores easily beat out the AMD Ryzen AI 7 Pro 350’s eight cores and 16 threads.

LibreOffice: ODT to PDF Conversion

We tested how long it takes LibreOffice to convert 50 image-heavy ODT files into PDFs. This workload is lightly threaded, so it favors higher clock speeds over more cores. The results bear this out, as the EliteBoard, with the Ryzen AI 7’s higher-performing cores, beat out the Slim 7x by 22 percent.
Despite its older processor, the ThinkPad actually manages to tie the Slim 7x in this test.

Repairability

For IT departments that do their own service, the EliteBoard G1a has plenty to offer. Its back surface is held on by just four screws and pops off easily. Underneath, you get full access to the motherboard and a number of easily removable components, including the DDR5 SODIMM RAM, the M.2 SSD, the WLAN card, the fan, the optional battery, and the speakers. You can even replace the keyboard itself and leave the computer part intact.

Bottom line

The HP EliteBoard G1a delivers strong performance in a unique and compact form factor that saves desk space and reduces the weight you carry back and forth. If you don’t want a laptop but do want a portable computer, this is your best choice. It provides a better typing experience than most laptops and a more space-efficient design than most desktops.

However, in the current marketplace, this device does not represent a significant savings over a similarly configured laptop. Depending on what laptop you choose to compare against, you might save a few hundred dollars, but when you add the cost of the monitors you need to pair with it - if you need to purchase those - it’s a wash.

HP has set out to make a unique product with the EliteBoard G1a and it has succeeded in building a very competent and capable computer-in-a-keyboard. If you’re an IT decision maker, you’d buy this device for folks who work out of one or two distinct locations (home and office, or multiple offices) and never need to get online from the road or from a conference room. Whether that’s a common scenario in your workplace will determine if this product is right for you or your fleet. ®
Categories: Linux fréttir
Google tweaks Chrome AI privacy wording, insists processing stays on-device
Google has changed Chrome's disclosure language about how its on-device AI works, but that doesn't mean the company intends to capture on-device AI interactions.

The Chrome menu modification, which isn't universally rolled out yet even in Chrome 148, was noted this week on Reddit. The "On-device AI" message in Chrome's System settings previously read: "To power features like scam detection, Chrome can use AI models that run directly on your device without sending your data to Google servers. When this is off, these features might not work."

But the message changed recently – it lost the phrase "without sending your data to Google servers."

That prompted privacy advocate Alexander Hanff to question whether the edit signaled an architectural change that would see local AI interactions processed by Google servers instead of remaining on-device.

"Why was the sentence 'without sending your data to Google servers' removed from the on-device AI description in Chrome's Settings UI?" Hanff asked. "Was the previous text inaccurate? Has the architecture changed? Was the wording withdrawn on legal advice because Google was unwilling to defend it as a representation?"

Asked about this, a Google spokesperson said: "This doesn’t reflect a change to how we handle on-device AI for Chrome. The data that is passed to the model is processed solely on device."

It appears this situation deserves a more genteel rendering of Hanlon's Razor – "Never attribute to malice that which is adequately explained by stupidity." In this case, it's "Never attribute to malice that which is adequately explained by bad timing."

Word of the menu modification surfaced as Chrome was rolling out the Prompt API, which is designed to provide web pages with a programmatic way to interact with a browser-resident AI model. The API's arrival and public discussion of it drew attention to the fact that Chrome has been silently downloading Google's 4GB Nano model onto users' devices.
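For context, here is roughly what that programmatic access looks like from a web page. Treat this as a hedged sketch rather than gospel: the API surface has shifted across Chrome releases, so the `LanguageModel` global and its method names below are assumptions based on the current Prompt API explainer, and `askLocalModel` is our own hypothetical wrapper.

```javascript
// Hypothetical sketch of a page calling Chrome's Prompt API, if present.
// `LanguageModel`, `create()`, `prompt()`, and `destroy()` follow the shape
// of the current explainer; older builds exposed the model differently.
async function askLocalModel(question, model = globalThis.LanguageModel) {
  if (!model) return null;              // no browser-resident model available
  const session = await model.create(); // may trigger the Nano model download
  try {
    // Inference runs on-device, but the result is visible to the calling page
    return await session.prompt(question);
  } finally {
    session.destroy();                  // release on-device resources
  }
}
```

Note that whatever the page sends through `prompt()` – and whatever comes back – is visible to that page, which is exactly the nuance Google's reworded disclosure is trying to cover.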
The coincidence of these events made it seem that Google was preparing to capture on-device prompts and responses, which would be a significant privacy retreat. In fact, Chrome has been letting Nano sleep on the couch for early adopters dating back two years, when local AI was implemented in Chrome 126 as a preview program. While Google hasn't yet made model downloading and storage opt-in, the biz did earlier this year implement a way to deactivate and remove the space-hogging model.

"We’ve offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model," a Google spokesperson explained, pointing to relevant help documentation. "It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud. While this requires some local space on the desktop to run, the model will automatically uninstall if the device is low on resources. In February, we began rolling out the ability for users to easily turn off and remove the model directly in Chrome settings. Once disabled, the model will no longer download or update."

The edit to the "On-device AI" message occurred in early April. According to Google, Gemini Nano in Chrome processes all data on-device. But when websites interact with Gemini Nano in Chrome – via the Prompt API, for example – they can see the inputs and outputs of the model. In such cases, the data handling falls under the privacy policy of the website interacting with the user's Nano instance.

Google decided to change its "On-device AI" message to avoid confusion – and perhaps to preclude legal claims alleging policy violations – when the user is interacting with a Google site that calls out to the Nano model on-device in support of some service it provides. In that scenario, the Google site would have access to the prompts it sends and the responses it gets from the user's on-device model.
That interaction would happen "without sending your data to Google servers," at least if you read that phrase as referring to a user querying a model running in Google Cloud. But since the user's on-device, Chrome-resident Nano model would send data to the Google site in response to that site's API calls, that data transmission might be interpreted as a violation of the local AI commitment language. Hence the edit.

Google's decision to have Gemini Nano become a Chrome squatter is a novel way of doing things, given that co-opting people's computing resources has largely been the province of covert crypto-mining scripts. But perhaps after years of offering Gmail and Search at no monetary cost, Google feels entitled to a few gigabytes of Chrome users' local storage and occasional bursts of their on-device compute. ®
macOS 27 threatens to bury Time Capsule, FOSS brings a shovel
The next major release of macOS looks likely to remove Apple Filing Protocol (AFP) support, stopping Time Capsules from working… but FOSS, uh, finds a way.

The current version of macOS "Tahoe," 26.4, already has network Time Machine issues, especially for folks using Apple Time Capsules, and it looks like macOS 27 may completely remove the network protocol they need. However, the Time Capsules run NetBSD under the hood, and that means the FOSS world has been able to come up with a workaround. It's called TimeCapsuleSMB, and it aims to keep older Time Capsules usable with modern macOS.

It's eight months since Apple released macOS 26, and the company's annual release schedule means that macOS 27 is looming. Although Cupertino hasn't told the world much about it yet, it is warning sysadmins to "prepare your network environment for stricter security requirements." Reading the bulletin, we found it rather clixby: while it firmly warns that security checks will become stricter, it doesn't spell out which products will change or how.

Happily, there are elder Mac gurus out there who interpret Apple's sometimes Delphic utterances, and Howard Oakley is one of the greatest. In a post about networking changes coming in macOS 27, he translates that it will require TLS 1.2 or above. (The Register explained TLS back in 2002, and version 1.2 appeared about six years later.) However, he also warns that it could mean the end of AFP, which is basically AppleTalk-over-TCP/IP version 3.4.

AppleTalk was the Mac network protocol for file sharing from System 6 onward. In 2013, OS X 10.9 "Mavericks" made Microsoft's SMB the default file-sharing protocol in place of AFP, and it looks like AFP now faces the ax: it was officially deprecated in macOS 15.5. To be fair, macOS 26 Macs started displaying a warning to Time Capsule users nearly a year ago.

Apple introduced the first model of Time Capsule in 2008, and the fifth-generation version in 2013.
The company discontinued the whole AirPort product line in 2018. All generations only support AFP and SMB version 1 – the original version of SMB that appeared with LAN Manager in 1987; we reported on Samba dropping SMB1 back in 2022.

The good news is that even if Apple kills its original file-sharing protocol next year, the FOSS community is on the case and won't let working kit die. The Time Capsule hardware is essentially a box containing a Wi-Fi access point, a hard disk, and an Arm chip with just enough software to share that HDD as network-attached storage. Apple didn't write this software from scratch: it picked up and customized NetBSD for the job. The first four generations of Time Capsule (flat square boxes) run NetBSD 4, and the fifth-gen devices – the tall tower-shaped models from 2013 onward – run NetBSD 6.

That gave Microsoft's James Chang an opening. Since the devices run NetBSD, it's possible to compile a newer version of Samba and copy it somewhere the tiny embedded Arm computer can find it. Teaching such old kit a new trick is never easy, though, and he faced a number of challenges, which he details in the design section of the project README. Among them are machines that have only about 900 KB of available disk space – less than 1 MB – and a tiny 16 MB RAMdisk.

He settled on Samba 4.8, which dates back to 2018 – the same year Apple discontinued the product line – but which includes the necessary Time Machine support via a module named vfs_fruit.

The TimeCapsuleSMB docs are worth a read; we found his descriptions of how he worked around the hardware's very significant limitations impressive. Notably, on the early models, you'll need to manually reload the software every time you reboot the Time Capsule. The final model can do this automatically.

Don't fret at the thought of backing up to such an elderly spinning hard disk: iFixit has descriptions of how to replace the drive in both the early models and the later ones too. ®
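For the curious, the Time Machine support that vfs_fruit brought to Samba 4.8 is switched on per share in smb.conf. A minimal sketch of the relevant stanza – the share name, path, and size cap here are illustrative placeholders, not settings taken from the TimeCapsuleSMB project:

```
[TimeMachine]
   path = /mnt/timecapsule
   writable = yes
   vfs objects = fruit streams_xattr
   fruit:time machine = yes
   fruit:time machine max size = 1T
```

The fruit module advertises the share to macOS as a Time Machine destination, while the size cap keeps a greedy backup set from eating the whole disk.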
London’s BT Tower to get rooftop swimming pool
Visitors to London’s iconic Telecom Tower might soon be able to go for a rooftop swim, according to plans revealed by the developer turning the building into a hotel.

The 177 meter (581 ft) high structure in Fitzrovia in London’s West End was sold off by BT Group in 2024 to US-based hotel owner-operator MCR Hotels for £275 million ($346 million). At the time, the firm said it wanted to preserve the Grade II listed building while converting it into a hostelry.

Now, MCR has announced a small number of public consultation events, held on May 11, 12, and 16, where those interested can view the emerging proposals for the site, meet the project team, and share feedback on the plans. Those proposals include public access to the top of the tower and its podium buildings for the first time in almost half a century.

The 34th floor was famously home to a revolving restaurant that gave diners a panoramic view of Britain’s capital as it slowly turned once every 22 minutes, but this closed in 1980. Also part of the proposals are a new publicly accessible square, plus retail shops and restaurants at ground level, and a rooftop swimming pool.

London is already home to a number of high-rise swimming venues. There is the vertigo-inducing Sky Pool, which spans two apartment buildings ten stories up at the Embassy Gardens development in the Nine Elms area of Wandsworth. You will find an infinity pool at the Shangri-La hotel on the 52nd floor of the Shard building near London Bridge, and there is also a pool on the roof of the Berkeley Hotel, overlooking Knightsbridge.

The BT Tower was originally known as the Post Office Tower when it was built in 1964, and its main purpose was to support microwave antennas used to beam telecom signals between London and the rest of the country. The tower will not be turned into a vertical hotel immediately.
BT said payment for the site is spread over six years to 2030, during which time the company will gradually remove all of its telecoms equipment from the building. As we reported previously, the BT Tower also famously fell victim to a giant kitten in an episode of the British 1970s TV comedy series The Goodies. ®
UK wants fresh fingerprints on £300M biometrics platform
The UK Home Office wants to talk to suppliers about its plans for two potential procurements for the Strategic Central and Bureau Platform (SCBP), its core biometrics system, worth up to £300 million. The department said the procurements could cover support, development, and ongoing modernization of SCBP after it shifted much of the platform to "more modern and widely adopted technology stacks." It said this could allow a broader range of suppliers to undertake support and development work, and split up the work ("potential disaggregation"), according to a preliminary market engagement notice. The notice quotes a total estimated value for the contracts of £296 million including VAT over up to 11 years from October 2027, although it adds that this is based on current annual charges – suggesting these are around £27 million – and should be seen as indicative. The Home Office is holding an event with TechUK on May 15 to start the discussion, with participants required to sign a non-disclosure agreement first. SCBP is part of the long-running Home Office Biometrics (HOB) program to bring together the government's collections of fingerprints, DNA profiles, and facial images. SCBP provides the core components of the Immigration and Asylum Biometrics System (IABS) used for passports, immigration and borders, and the corresponding Ident1 service used by law enforcement. The department's most recent assessment of the HOB program in December 2024 referred to a cost increase of £47.8 million, including £34 million of this covering Ident1 modernization "to deal with urgent obsolescence issues and security vulnerabilities" and £4.4 million for an upgrade to support Livescan, through which police officers collect fingerprints and facial images following arrests. The assessment said the overall cost of the HOB program from 2014-15 to 2034-35 then stood at £1.55 billion. 
According to Home Office permanent secretary Matthew Rycroft, benefits include searching crime marks (such as fingerprints left at crime scenes) against immigration databases, the police's mobile fingerprint identification service, and the ability to collaborate with other countries. ®
Akamai surges on big LLM deal as Cloudflare dims
This week was the best of times for Akamai and the worst of times for Cloudflare. On the same evening that content delivery network mainstay Cloudflare announced it was cutting about a fifth of its staff in a realignment around AI, its competitor Akamai announced a seven-year, $1.8 billion deal with a leading LLM provider that Bloomberg identified as Anthropic. Akamai CEO Tom Leighton said this was the largest deal in the company’s history and that it came after another large, unidentified frontier-model developer signed a $200 million deal last quarter. “These leaders in AI have chosen Akamai because their AI workloads need the scale, performance and reliability that our cloud platform provides,” he said during the company’s first quarter earnings call on Thursday. Akamai, which has 4,300 locations in 700 cities across 130 countries, won the deal against stiff competition from hyperscalers and neoclouds. Leighton said Akamai’s ability to manage and scale complex distributed systems, as well as its low latency, tipped the scales in its favor. Given the supply chain constraints in datacenter space, especially as they relate to memory costs and the infrastructure needed inside large datacenter buildouts, one analyst asked if Akamai planned any increase to its capital expenditures this year to pay for it. Akamai executive vice president and CFO Ed McGowan said that was not likely. “We’ve been able to get the supply chain ready. We anticipate receiving all the goods that we need to deliver these services over the next seven years within the next 12 months,” he said. “Now there’s always potential for slippage and delays, but we have mechanisms in our contracts to deal with, if, in say six months from now, prices were to go up. So we’ve taken that into consideration.” McGowan said it is a consumption-based contract over seven years, so as soon as Akamai ramps the necessary capacity, it will start taking revenue, which he expects to begin happening later this year.
Winning this deal and ones like it has been Akamai’s goal in the AI era, Leighton said. “This has been the strategy all along. So we’re very pleased to be executing against it,” he said. “The goal has been to be deploying a distributed inference platform, distributed compute platform that would be desired by enterprises across the spectrum … The platform is to a point where we can do that, and I think you'll see more of this going forward.” On the same day, across the country, Cloudflare was spelling out the bad news to its employees: it planned to cut the workforce by 1,100 people, roughly 20 percent. Cloudflare co-founders Matthew Prince and Michelle Zatlyn said it was not about cutting costs, but about building a company that meets the AI moment. “We have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere,” they wrote in a blog post. Cloudflare’s revenues grew 34 percent year over year to reach $639.8 million in the first quarter. It posted a net loss of $22.9 million. It expects to pay up to $150 million in severance and benefit payments related to the layoffs. While Akamai’s stock price surged 26 percent on Friday, Cloudflare’s dropped 23 percent. Even so, at over $69 billion, Cloudflare’s market cap is still more than three times Akamai’s.
GPT-5.5 may burn fewer tokens, but it always burns more cash
It's getting more expensive to use the latest models. OpenAI last month bumped the version number of its GPT model family to 5.5, and per-token prices rose too, in some cases doubling relative to the previous model. For 1 million tokens, GPT-5.5 is priced at $5 (input), $0.50 (cached input), and $30 (output). Its predecessor GPT-5.4 charges $2.50 (input), $0.25 (cached input), and $15 (output) per 1 million tokens. The AI biz claims that the cost increase is offset to some extent by token processing efficiency – delivering better results using fewer tokens. "While GPT‑5.5 is priced higher than GPT‑5.4, it is both more intelligent and much more token efficient," the company said during the rollout. But overall costs are still rising faster than the efficiency improvements can offset. According to an analysis conducted by OpenRouter, GPT-5.5 is anywhere from 50 percent more expensive to nearly twice as expensive, depending on prompt length. "Our analysis shows that GPT-5.5 actual costs increased 49 percent to 92 percent," OpenRouter said. "Longer prompts, over 10k tokens, saw costs offset by shorter completions. Shorter prompts, under 10k, experience a higher cost increase where completions did not get shorter." That range – 49 percent to 92 percent – factors in the model's token efficiency improvements, which are more relevant for longer prompts. According to OpenRouter's measurements, GPT-5.5 generates between 19 percent and 34 percent fewer completion tokens for longer prompts (10,000 tokens and up). If reports of OpenAI's projected $14 billion loss in 2026 prove accurate, prices will have to rise much more to balance its relentless spending. But this is a problem also faced by rival Anthropic, set to lose a reported $11 billion in 2026. Anthropic's Claude Opus 4.7 arrived without a visible list price change amid claims about an improved tokenizer. The result, according to OpenRouter, is potential savings for shorter prompts but larger bills for longer ones.
"Our study of real Opus 4.7 usage shows that actual costs increased 12–27 percent for prompts above 2K tokens when cache absorption is taken into account," the biz said. "Short prompts under 2K were the exception, where significantly shorter completions offset the tokenizer overhead entirely." Expect further price increases for premium models. ®
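The cost jump is easy to reproduce from the per-token prices quoted above. A minimal sketch; the prompt and completion sizes are illustrative assumptions, with the 25 percent completion-token reduction picked from inside OpenRouter's 19–34 percent range:

```python
# Per-1M-token prices quoted in the article (USD).
PRICES = {
    "gpt-5.4": {"input": 2.50, "cached_input": 0.25, "output": 15.00},
    "gpt-5.5": {"input": 5.00, "cached_input": 0.50, "output": 30.00},
}

def request_cost(model, input_tokens, output_tokens, cached_tokens=0):
    """Dollar cost of one API call at the quoted per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"]
            + cached_tokens * p["cached_input"]
            + output_tokens * p["output"]) / 1_000_000

# Illustrative long-prompt case: 10k prompt tokens, with GPT-5.5 assumed
# to emit 25% fewer completion tokens (inside OpenRouter's 19-34% range).
old = request_cost("gpt-5.4", 10_000, 2_000)   # $0.055
new = request_cost("gpt-5.5", 10_000, 1_500)   # $0.095
increase = new / old - 1                       # ~73%, within the 49-92% band
print(f"GPT-5.4: ${old:.3f}  GPT-5.5: ${new:.3f}  increase: {increase:.0%}")
```

Even with the token-efficiency gain priced in, the doubled rates dominate: the hypothetical call lands at roughly a 73 percent cost increase, squarely inside OpenRouter's measured band.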
Tech is now rolling out the old grievance grift
OPINION Twice this week we've been told by our would-be tech oligarchs that people are being paid to hate them. Alex Karp said 10 percent of the world professionally hates Palantir and Kevin O’Leary said paid agitators were protesting his Utah data center. This is the familiar language we hear in politics when someone needs a ready scapegoat to explain away unpopularity. We should pay attention to this argument as it transfers from politics to technology, because it reveals the alleged victims' intent. They have done, and are going to do, some very unpopular things and, when people push back, those complaints need to be quickly marginalized. Just like in a mob trial when the defense lawyer tells the court what the rat is getting from the government to squeal, calling people paid agitators or professional haters invites us to question their motives. Just when we are shocked by Karp's audacity, saying that he doesn't care if the Iran war is unpopular and that Palantir will support it, with the next breath, he tells us to check the pockets of the people picketing. Just when the public stirs and demands to know why a Shark Tank star wants to gobble up all the electricity and water in a state, he says the only green initiative they care about is in their wallet. It's also a dog whistle to those like-minded souls who hunger for enemies and grievance: even a simple billionaire – doing nothing wrong other than selling software that helps the government select targets to bomb – can be picked on by people who hate America. It lets their compatriots in Podcastistan know how to feel and where to stand if they want to be on the right side of the argument. Are you pro-Iran nuclear weapons blowing up Christian orphanages? Or are you going to be nice to Alex Karp? Are you in favor of being a boss and not even looking at the electricity bill, just sliding your titanium card across the meter like a baller?
Or are you going to whine that the data center uses more electricity than everything that is plugged into an electrical outlet in the state of Utah? It makes the argument simple, not for everyone, but for just enough. For the loudest, the angriest, and the most ... patriotic. As those voices rise, reason gets drowned out. The debate gets confused and, crucially, people move on. At this moment, the public's attention is fractured and emotions are spent. Focusing outrage on anything complex is next to impossible when there are so many unambiguous abuses of power. All you need to do is muddy the waters a bit and let them swim away. Anyone who sticks around? They're probably getting paid to do it. Getting paid is what this is all about, after all. In the last 90 days, Palantir got $858 million from working on unpopular wars with unpopular governments. I mean look, like they say, it was all done to protect the ideal of western liberal democracies, of course. Nothing says you understand the teachings of democracy better than ignoring how the public feels when the warlord is jingling his purse. Why should haters be the only ones getting paid? When we published Karp’s comments about 10 percent professionally hating Palantir, savvy readers at The Register showed they are on to the grift. “I mean, it's the wording itself — 10% 'professionally' hates Palantir," wrote commenter Pulled Tea. "Could be right! Bet you many of the people who are doing the hating are dedicated, high-level amateurs." ®
Worm rubs out competitor's malware, then takes control
There’s a mysterious framework worming its way through exposed cloud instances, removing all traces of TeamPCP infections, but it’s not benevolent by a long shot: Whoever is behind this bit of malware may be cleaning up after those who came before, but only so they can take their place. Discovered by security outfit SentinelOne’s SentinelLabs researchers and dubbed PCPJack for its habit of stealing previously compromised systems from TeamPCP, the worm was first spotted in late April by a Kubernetes-focused VirusTotal hunting rule. It stood out from known cloud hacktools, said SentinelLabs, because the first action it always takes is to eliminate tools associated with TeamPCP attacks. The script didn’t stop there, though. “We initially considered that this toolset could be a researcher removing TeamPCP’s infections,” SentinelLabs said. “Analysis of the later-stage payloads indicates otherwise.” “Analyzing this script led us to discover a full framework dedicated to cloud credential harvesting and propagating onto other systems, both internal and external to the victim’s environment,” SentinelLabs continued. In other words, this thing will harvest credentials from everywhere it can get its hands on, and then find new, unsecured cloud environment targets to spread itself to. TeamPCP came onto the scene late last year, and has since made a name for itself primarily through its successful compromise of the Trivy vulnerability scanner. That act spread credential-harvesting malware which attackers then used to pivot to more valuable targets, and became one of the most notable supply chain attacks in recent memory. Unlike TeamPCP’s campaign, which relied on the spread of compromised software by human actors, this one spreads of its own accord. Infections start when already-infected systems look for exposed services, including Docker, Kubernetes, Redis, MongoDB, and RayML, as well as exposed web applications.
Once it finds a vulnerable environment, it runs a shell script on the target system that sets up an environment to download additional payloads and searches for TeamPCP processes and artifacts to kill. That part of the infection downloads the worm itself, along with modules to enable lateral movement, to parse credentials and encrypt them for exfiltration, and to scan the web for new environments to infect. From there, the worm goes to work with the second module in its kit, which conducts the actual credential thefts. This portion of the infection targets environment variables, config files, SSH keys, Docker secrets, Kubernetes tokens, and credentials from a list of finance, enterprise, messaging, and cloud service targets so long that we recommend taking a look at it here, or just assuming whatever you’re using is probably being targeted. SentinelLabs noted that the lack of a cryptominer in the malware package is unusual, and said the particular services it targets suggest its goal is either to conduct its own spam campaigns and financial fraud with the stolen data, or to make the data it harvests available to those planning similar crimes. The worm's practice of removing TeamPCP files could be opportunistic, or could mean there’s drama going on in the cybercrime world. “We have no evidence to suggest whether this toolset represents someone associated with the group or familiar with their activities,” SentinelLabs noted. “However, the first toolset’s focus on disabling and replacing TeamPCP’s services implies a direct focus on the threat actor’s activities rather than pure cloud attack opportunism.” Because this is a worm relying on unsecured cloud and web app instances ripe for targeting, mitigation recommendations are pretty simple: Keep your cloud platforms secure, and ensure authentication is required even for instances of things like Docker and Kubernetes that aren’t exposed to the internet.
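Since the worm's initial access depends entirely on services answering without authentication, a quick first check is simply whether their default ports respond at all. A minimal sketch; the port list is our illustrative selection of well-known defaults for the services named above, not taken from SentinelLabs' report:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Illustrative default ports for services the worm scans; 2375 is
# Docker's unauthenticated plaintext API port, 10250 the kubelet API.
SERVICES = {"docker": 2375, "kubelet": 10250, "redis": 6379, "mongodb": 27017}

for name, port in SERVICES.items():
    if port_open("127.0.0.1", port):
        print(f"{name} (port {port}) is reachable - verify auth is required")
```

A reachable port is not proof of exposure, of course, but anything on this list answering from outside the host warrants an immediate look at its authentication settings.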
Disgraced US gov software contractor found guilty of database destruction
A Virginia man, Sohaib Akhter, faces decades in prison after a jury convicted him of involvement in a scheme to delete approximately 96 databases containing US government data. The case centers on events that began around two weeks before the twin brothers allegedly involved were fired from their jobs at a software supplier to the US government. Sohaib and Muneeb Akhter, both 34, allegedly worked together on February 1, 2025, to access the account of an unnamed individual who submitted a complaint through the Equal Employment Opportunity Commission’s public portal. According to the Justice Department, Muneeb asked Sohaib for the individual’s plaintext password. Prosecutors say Sohaib provided the credential, which Muneeb then used to gain unauthorized access to the account. Court documents do not say why the brothers wanted access to the account, but the pair were both fired on February 18, 2025, after the company, which provided software to at least 45 government agencies, learned that Sohaib had a prior felony conviction. The superseding indictment [PDF] goes on to describe the timeline of events leading up to the database manipulation. Within five minutes of being fired via remote meeting, the twins sought to inflict damage on their employer. At approximately 16:55, Sohaib tried to access the software supplier’s network but couldn’t, because his VPN connection had been severed and his Windows account deactivated while he was sitting in the firing meeting. However, Muneeb allegedly still had access and told his brother as much. A minute later, at approximately 16:56, officials say Muneeb issued commands preventing other users from reading or writing to the database, before issuing a command to delete it. Over the following 56 minutes, Muneeb allegedly deleted approximately 96 databases, the indictment states, which contained data related to Freedom of Information Act matters and sensitive investigative files belonging to federal departments and agencies.
One of the 96 was also described as “a DHS production database containing US government information,” hosted in the Eastern District of Virginia. After the deletions, Muneeb allegedly set about covering his tracks. According to the indictment, Muneeb queried an AI tool: “How do I clear system logs from SQL servers after deleting databases,” and later: “How do you clear all event and application logs from Microsoft Windows Server 2012.” The twins then discussed how to proceed. Sohaib allegedly stated aloud: “They’re gonna probably raid this place,” to which Muneeb replied, “I’ll clean this shit up.” Sohaib added: “We also gotta clean stuff up from the other house, man.” Per the timeline of events heard in court, Muneeb then set about copying EEOC files to a USB stick, around 1,805 of them per court documents, all while using a laptop issued by his former employer. Muneeb allegedly also stole IRS documents stored on virtual machines, including tax information and personally identifiable information belonging to at least 450 individuals. Over the following week, Muneeb unsuccessfully attempted to gain access to a DHS-owned laptop, and the twins sought the help of another unnamed individual to wipe their company-issued devices by reinstalling Windows. Finally, the court heard that Muneeb drove to Texas, transporting his personal laptop, mobile device, and a Personal Identity Verification card issued by a US government agency. They were both arrested on December 3, 2025. Muneeb Akhter has not yet been convicted. Further firearms charges Sohaib was in double trouble for not only computer fraud and password trafficking, but for possessing seven firearms, which police found in March 2025, roughly a month after his brother allegedly deleted the databases. 
After a search warrant was authorized, police found roughly 378 .30 caliber rounds of ammunition, as well as a selection of firearms, including M1 and M1A rifles, a Glenfield Model 60, a Ruger .22 automatic pistol, and a Colt Police .38 Special revolver, among others. Officials said Sohaib took steps to sell the guns after the search warrant was executed, threatening and intimidating his domestic partner into signing the transaction documents, since he, as a convicted felon who had served more than a year in prison from 2015, was not legally allowed to own any firearms. That earlier conviction came when Sohaib, then 23, was sentenced to two years in prison and three years of supervised release after pleading guilty to accessing sensitive data, including that belonging to co-workers, acquaintances, and a former employer, held on State Department systems while he was working as a contractor. The court heard at the time that he also devised a scheme, along with Muneeb and others, to maintain perpetual access to these systems by installing “an electronic collection device inside a State Department building.” This plan failed, however, as he broke the device while trying to install it behind a wall at a State Department facility in Washington, DC. Muneeb got 39 months in prison and three years of supervised release as a result of his role in the scheme. Sohaib’s sentencing is scheduled for September 9. Muneeb’s additional charges Muneeb, who is yet to be convicted, allegedly downloaded approximately 5,400 username and password combinations from the EEOC’s servers, storing them on multiple devices and in the cloud. In hundreds of cases, according to the indictment, Muneeb successfully accessed the corresponding email accounts without authorization, and created Python scripts to determine which combinations were valid when testing against the servers of an unidentified US hotel chain.
During this time, Muneeb allegedly tested the stolen username-password combinations against various companies, including other hotel chains, airlines, and financial services companies. In multiple cases where Muneeb successfully logged into these accounts, court documents state that he changed the email address associated with the account to one he controlled, keeping the victim’s name in the address. The typical format was [victim name]@wardensys.com or [victim name]@wardensystems.com. The domain belongs to a small, Virginia-based company called Warden Systems, which describes itself as an embedded systems and cybersecurity research company. The company’s Crunchbase profile lists Sohaib as vice president, and an X account bearing the name Muneeb Akhter lists itself as CEO at Warden Systems. Its website is no longer reachable, and it stopped posting to social media around 2014, a year before the pair were convicted of earlier felonies. Neither Sohaib nor Muneeb is explicitly connected to “Warden Systems” in court documents, although Muneeb is said to control both the wardensys.com and wardensystems.com domains. In at least one case involving the alleged stolen username-password combinations, prosecutors say Muneeb used one victim’s air miles balance to successfully book a flight. Muneeb faces a maximum prison sentence of 45 years, if convicted. ®
Iran war hits datacenter building supply chains, upping costs
The Iran conflict is adding to supply-chain disruption for datacenter construction projects, bumping up material costs and causing shortages due to the closure of the Strait of Hormuz. So says server hall project specialist BCS Consultancy, which claims construction firms are seeing increases of up to 20 percent in the cost of certain building materials, while in some cases, the quantity available for delivery has been reduced to a quarter of the required amount on order. The firm’s regional director Oskar Lampe says that oil-based building materials are becoming scarcer and more expensive, as about a fifth of the global oil supply flows through the Strait of Hormuz in the Middle East. Because producing materials such as steel, aluminum, and cement is very energy-intensive, the construction industry is starting to feel the effects of the blockade, he claims. “For datacenter construction, the key components of which consist of exactly these materials, this is a turning point,” he stated. That pressure predates the current conflict, according to IDC. Andrew Buss, senior research director at the analyst house, told The Register: “We’re hearing some reports of broader supply chain disruption and availability issues – particularly for things like high-voltage transformers and copper supply – around datacenter builds from even before the war in the Middle East and the resulting closure of the Straits of Hormuz. “So the closing of the Straits is certainly not helping, but this has been an issue for some time resulting in more frailty and susceptibility to disruption and therefore likely to have a disproportionate impact as a result of the closure.” Last month, IDC warned that IT equipment supplies are facing further volatility as the Iran war has strained global logistics through rising energy costs and disrupted freight routes. It isn’t just bit barn projects that are suffering, of course.
The wider construction industry is experiencing some of the steepest cost increases in nearly 30 years as the ongoing Iran crisis drives up the price of fuel and raw materials, according to The Guardian. These new effects come on top of existing challenges facing the datacenter construction industry, such as the availability of suitable land, planning permission, grid connections for power, skills shortages, and the cost of equipment. Segro, one of the UK's major commercial property developers, revealed a while back that it would invest "hundreds of millions and more" in building new server farms, but that it faced delays often running into years getting such projects wired up to the national grid. Lampe says that the current situation is unlikely to ease quickly, as it will take a while for disrupted transport routes, energy price inflation, and volatile raw material markets to recover, even if the Strait of Hormuz were to reopen tomorrow. He advises development teams to follow a few measures to try to minimize the impact on their project timelines, including submitting orders for long lead items early, building clear price escalation rules into contracts, and diversifying supply chains where possible. For example, delivery times can vary between 5 and 38 months for chillers, transformers, generators, and other critical plant equipment, even under normal conditions. “Those who only start the procurement process when the project plan dictates will order at a higher price and wait longer,” he notes. Dependence on a single supplier was a structural risk for builders even before this conflict, and it can seriously endanger projects; known alternatives are needed. “For several oil-based materials, technically equivalent, non-oil-based variants exist.
Potentially more expensive to procure, but available and in many cases already geared towards future sustainability requirements, which makes them the more sensible choice in the medium term anyway,” Lampe says. ®
Raspberry Pi wants Windows admins to Connect – or it might pull the plug
Administrators who want a Windows version of Raspberry Pi Connect need to register their interest, or else the Pi team might ditch the concept. Raspberry Pi Connect is a tool that lets admins remotely access a Raspberry Pi device from a web browser. It launched in 2024 as a free service for individuals and was later joined by Raspberry Pi Connect for Organizations, aimed at commercial customers with fleets of devices, costing $0.50 per device per month – cheap for a commercial remote access product. In response to queries on the subject, the Raspberry Pi team made a Windows version of the service available, albeit as a highly experimental demo not intended for production, at the end of April. Gordon Hollingworth, CTO of Software at Raspberry Pi, told The Register: "The Raspberry Pi Connect daemon implementation is currently closed source, but we intend to open source it eventually so it can be added to other architectures." Hollingworth noted that the Windows version was working in early beta form, saying: "We think it may be useful for our customers to control all their devices from one place. But we are still investigating the concept and may remove this capability if there's insufficient interest." For admins managing mixed fleets of devices, it's an interesting option, though the company would face stiff competition in the Windows market if it decides to proceed. Raspberry Pi has quietly added more enterprise-friendly features over time. Tags can now be applied to devices (for example, to show their location or purpose), and it is possible to require two-factor authentication for members of Connect for Organizations. The company's computers have long been a low-cost option for businesses considering thin clients, and its most recent crop of hardware releases, such as the computer-in-a-keyboard Pi 500 and Pi 500+ devices, could replace existing desktops.
Should the Windows version of Raspberry Pi Connect attract enough interest to progress beyond its early beta state, it will represent another inroad into the enterprise computing space. ®
'Dirty Frag' Linux flaw one-ups CopyFail with no patches and public root exploit
A fresh Linux privilege escalation bug dubbed "Dirty Frag" has dropped into the wild with no patches, no CVE, and a public exploit that hands attackers root access across major distributions. Security researcher Hyunwoo Kim disclosed the local privilege escalation flaw on Friday after what he said was a broken embargo forced the issue into the open. Kim described Dirty Frag as a "universal LPE" affecting "all major distributions" and warned that it delivers the same kind of immediate root access as the recent CopyFail mess – only this time, defenders do not even have patches to throw at the problem. "As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions," Kim said. "Because the responsible disclosure schedule and embargo have been broken, no patches exist for any distribution." Dirty Frag works by chaining together two separate Linux kernel flaws. One sits in the xfrm-ESP subsystem and dates back to a January 2017 kernel commit, according to Kim, while the second vulnerability affects RxRPC functionality introduced in 2023. Together, the two bugs allegedly let unprivileged local users overwrite protected files in memory and claw their way to root. A long list of distributions is in the firing line, according to Kim, including Ubuntu, Red Hat Enterprise Linux, CentOS Stream, Fedora, AlmaLinux, and openSUSE Tumbleweed. Separately, researchers appear to have independently reverse-engineered part of the bug chain from a publicly visible kernel fix commit before the embargo expired, adding to the disclosure mess already surrounding the flaw. One GitHub project titled "Copy Fail 2: Electric Boogaloo" claims to weaponize the ESP/xfrm side of the issue separately from Kim's full Dirty Frag chain. Kim said maintainers signed off on the disclosure of the flaw after somebody else dumped exploit details online first, collapsing the embargo before patches were finished.
So now the exploit is public, the fixes are not, and Linux admins get another long week. The disclosure comes as the industry is still dealing with the fallout from CopyFail, another Linux privilege escalation bug that recently landed in CISA's Known Exploited Vulnerabilities catalog after attackers started cashing in on it in the wild. But Dirty Frag makes the recent CopyFail chaos look relatively organized. There's still no CVE, no coordinated patch rollout, and not much in the way of mitigation. Kim published a temporary workaround that disables affected ESP and RxRPC modules before clearing the system page cache. Useful, perhaps, although "turn bits of the kernel off and hope for the best" is not usually the sort of guidance admins enjoy seeing. ®
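Kim's interim workaround amounts to unloading the affected kernel modules and dropping the page cache. Before doing anything that invasive, an admin can at least check whether those modules are loaded. A small sketch; the module names (esp4/esp6 for the xfrm-ESP path, rxrpc) are our assumption about which modules the workaround targets, so verify against the actual advisory:

```python
def loaded_modules(path="/proc/modules"):
    """Return the set of loaded kernel module names, or an empty set
    when /proc/modules is unavailable (non-Linux hosts, some containers)."""
    try:
        with open(path) as f:
            return {line.split()[0] for line in f}
    except OSError:
        return set()

# Module names are our assumption, not taken verbatim from the advisory.
SUSPECT = ("esp4", "esp6", "rxrpc")

status = {mod: mod in loaded_modules() for mod in SUSPECT}
for mod, is_loaded in status.items():
    print(f"{mod}: {'LOADED - candidate for modprobe -r' if is_loaded else 'not loaded'}")
```

Note that many distributions load these modules on demand, so "not loaded" now does not mean they cannot appear later; blacklisting is the more durable form of the workaround.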
Categories: Linux fréttir
Trump jumps from 'anything goes' to 'strict regulation' AI policy
OPINION When President Donald Trump returned to power, he cast himself as the anti-Biden on AI. First, he tore up Biden's Executive Order 14110, which had demanded "safe, secure, and trustworthy" AI. He then replaced it with his own "Removing Barriers to American Leadership in Artificial Intelligence" directive, ordering agencies to rescind or dilute rules seen as obstacles to innovation. In short, American AI vendors could do anything they wanted.

That was then. This is now.

While Trump has yet to issue a new AI Executive Order, we know his crew is forming an AI working group of tech execs and government officials to bring oversight to AI. Specifically, they're considering requiring all new "high-risk" AI frontier models to undergo a formal government review before they can be used. That's going to go over well.

What we do know is that National Economic Council Director Kevin Hassett has said: "We're studying possibly an executive order to give a clear roadmap to everybody about how this is gonna go, and how future AIs that also potentially create vulnerabilities should go through a process so that they're released into the wild after they've been proven safe – just like an FDA drug."

Considering that people who ignore evidence now regulate healthcare in the United States, that doesn't fill me with much confidence. Indeed, we now know the FDA blocked the publication of studies showing that COVID-19 and shingles vaccines were safe. Are these the kinds of people we want calling the shots on AI?

Be that as it may, the Trump yes-men are framing this shift as a response to escalating cybersecurity and national-security risks rather than as a broader embrace of EU-style AI regulation. Yes, they're looking at Anthropic's Mythos and its potential use by hackers. At the same time, they emphasize that they want to avoid "onerous" controls on everyday AI applications.
Frontier models that could supercharge cyberwarfare, bio-threats, or other strategic dangers are another matter. That's quite a change from last summer when Trump babbled: "We have to grow that [AI] baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules." Now he seems to think rules would be a good thing.

Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution, has suggested that Trump is returning to Biden's policy. Just don't tell him that; he'll have a fit.

While Trump and company are still contemplating exactly how they want to rule – sorry, regulate – AI, the Department of Commerce's Center for AI Standards and Innovation (CAISI) announced new agreements with Google DeepMind, Microsoft, and xAI. According to these new policy statements, CAISI will conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.

CAISI director Chris Fall said: "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications." How to do this? Who will do this? What will it look like? Good questions! Too bad we don't have any answers yet.

You may have noticed that Anthropic was not invited to this cozy policy get-together. Funny, that, since most observers think that Mythos was the model that broke the "do anything you want" AI camel's back in Trump's White House. That's because the months-long feud between the administration and Anthropic is still simmering. Trump's team moved to block federal agencies from using the company's tools, and Anthropic is now challenging that policy in court.

Recently, however, Trump's tone has softened. Trump told CNBC that Anthropic was "shaping up." If he can't get peace with Iran, maybe peace with Anthropic will please him.
On the other hand, we also know that the Trumpies are considering forbidding companies from "interfering" with the government's use of AI models. You hear that, Anthropic? You will toe the line!

Meanwhile, Gregory Falco, a Cornell assistant professor of mechanical and aerospace engineering, pointed out the obvious: "The federal government does not currently have the in-house technical expertise, infrastructure, or day-to-day insight needed to directly evaluate these systems on its own." Expertise is something Trump's cast of characters sorely lacks across any and all subjects.

"At the same time," Falco continued, "a purely voluntary model of self-governance is not enough." After all, foxes are notorious guardians of chicken houses.

What I think is going to happen is that AI vendors who play ball with Trump will end up "governing" AI alongside some Trump loyalists. It's going to be ugly. Some regulation is needed, but these are not the people who will do a good job of it.

I won't be surprised if one of Trump's goals is not so much to make AI safer as to ensure that the answers AI gives are the ones he and his regime want people to see. Today, for example, when I asked a variety of chatbots who lost the 2020 election, they all agreed Trump had lost. Funnily enough, when the Senate Judiciary Committee asked numerous Trump nominees for federal judgeships the same question, they universally refused to say he lost.

For better or worse, most Americans don't pay attention to legal news. What they do, however, is ask AI chatbots for answers. Foolish of them, considering how inaccurate they can be, but there it is. If Trump's allowed to call the shots, I've little doubt that the approved bots will follow in the footsteps of his obedient judges and give the answers he wants and not the truth. ®
Meta U-turns on encryption push for Instagram as DMs go plaintext
Meta has quietly pulled the plug on encrypted Instagram DMs, meaning private messages on one of the world's biggest social networks are no longer especially private. The change took effect today, according to a revised Meta post first published in 2022.

In a statement to The Register, Meta said the feature saw limited adoption and pointed users toward WhatsApp instead. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," the spokesperson said. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp."

It's quite the reversal for a corporation that spent years telling everyone that encryption was the future of online communications, even as governments pushed back against the company's wider rollout plans. Much of that pressure centered on child protection. Campaigners and agencies, including the NSPCC and the UK's National Crime Agency, argued wider encryption would make it harder to detect grooming, child abuse material, and other criminal activity taking place over private messaging services.

Privacy advocates, however, say Meta has just blown a hole in one of the few genuinely private corners of the platform. The Center for Democracy & Technology said it had urged Meta to reverse the decision, alongside members of the Global Encryption Coalition Steering Committee. "Without default encryption, millions of Instagram users are left exposed to surveillance, interception, and misuse of their private communications," the group said. "These risks fall hardest on people who rely on secure messaging for their safety, including journalists, human rights defenders, and survivors of abuse."

Swiss privacy outfit Proton also questioned what exactly happens to existing chats once encryption disappears.
Because properly implemented E2EE prevents platforms from reading message contents, Proton noted that Meta has not clarified whether previously encrypted conversations will remain inaccessible, get deleted, or become readable. "For Instagram, dropping E2EE is just an example of how little regard Meta has for the privacy and safety of its community," Proton said in a blog post.

Meta has become increasingly aggressive about monetizing and analyzing user interactions. Last year, the company confirmed that interactions with Meta AI tools, including those inside private conversations, could be used for ad targeting. The company has not publicly said whether ordinary Instagram messages could eventually feed into similar systems now that encryption is gone. ®
Vi clone written in BASIC proves old habits :wq hard
The veteran editor Vi turns 50 this year, and what better way to celebrate than to write a version in BASIC?

The code was created by Lee Tusman, who likes to be a little out of step with the latest IT industry fads. Not strictly a professional programmer, Tusman, whose background is in art, began looking at BASIC in 2025. Specifically, Yabasic, an open source BASIC interpreter for Unix and Windows.

"For a modern BASIC, it's quite fun to use," Tusman wrote. "I made my own cyber-hoss racing game, a command line game inspired by the UFO50 and Flash game Quibble Race. I also tinkered with the internals of the text version of The Oregon Trail, and built a clone, a simple version of Dope Wars economic simulation game."

All of which brought Tusman to code up a version of the veteran text editor Vi, using BASIC because… well… it was there. "I've been using Neovim (and before that, Vim) for years and years. I've never made a text editor before. But I decided it could be fun to try to implement my own." Inspired by tools such as Offpunk, a text-based browser, "I thought I could likely build an ULTRA simple editor with a minimum of Vim commands. How hard could it be?"

In this instance, not too hard at all. It only took a few hundred lines of Yabasic code to get a minimal blank page working before Tusman began adding simple commands. Before long, the editor had reached the point where it was possible to open a file, start a new one, and save. "This was satisfying as I was now able to open the actual code for my vi.bas program and poke around and edit it."

There's no wrapping in Tusman's editor – 80 characters is the limit – but fire up the code from the GitHub repository, and a reasonable simulacrum of the venerable editor, along with a lot of its sometimes esoteric shortcuts, springs to life.

The Register asked Tusman why he chose Vi. "I chose Vi because I already use it, and of course, once you're addicted to it, it's hard to want to use any other style of editor."
So what's missing? "Many things! But I'm purposefully not trying to rebuild a complete Vim. I just wanted something usable with as much functionality as I could build in as short and straightforward a program as I could write. Notably, most of it is 'if this key is pressed, do this.'"

As for future development, "I don't know how much I'd add," Tusman said. "I've only been using the program a week or so, but haven't found much I completely miss from Neovim. I'm speculating here, but maybe I'd optionally add back in line numbers, and I haven't found a way to prevent errors when the screen gets resized that works cross-platform."

In his post, Tusman notes that while the code won't win any prizes for its beauty, it is functional and can be tinkered with. It's also in the public domain, and so could be forked if there's a function that a BASIC wrangler can't do without.

A look at the source certainly brought back some memories for this hack, who cut his teeth on TI BASIC in the very early 1980s and hasn't gone near the language since uninstalling Visual BASIC 6 decades ago.

"It's not only the best Vi clone I've found written in a BASIC implementation," Tusman wrote. "I think it's the only one!" ®
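For readers wondering what "if this key is pressed, do this" looks like in practice, here is a toy modal dispatcher in Python – emphatically not Tusman's Yabasic code, and every name in it is invented for illustration:

```python
# Toy sketch of the key-dispatch loop at the heart of a minimal vi-like
# editor. Two modes, a handful of commands - nothing from vi.bas itself.
class TinyVi:
    def __init__(self, text=""):
        self.buf = list(text)
        self.cur = 0            # cursor index into buf
        self.mode = "normal"    # "normal" or "insert"

    def key(self, ch):
        if self.mode == "insert":
            if ch == "\x1b":    # Escape returns to normal mode
                self.mode = "normal"
            else:               # any other key is typed into the buffer
                self.buf.insert(self.cur, ch)
                self.cur += 1
        else:                   # normal mode: one branch per command key
            if ch == "i":
                self.mode = "insert"
            elif ch == "x" and self.cur < len(self.buf):
                del self.buf[self.cur]          # delete char under cursor
            elif ch == "h":
                self.cur = max(0, self.cur - 1)  # move left
            elif ch == "l":
                self.cur = min(len(self.buf), self.cur + 1)  # move right

    def text(self):
        return "".join(self.buf)
```

A real editor layers a screen, counts, and registers on top, but a mode flag plus a big key switch is the whole trick – which is why a few hundred lines of BASIC can get surprisingly far.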
UK abandons police database cloud move after £35M transformation stalls
The UK Home Office is bringing the Police National Database (PND) cloud migration in-house after a transformation program faced an additional £26 million in costs and an 18-month delay.

The PND shares information across all police forces, law enforcement agencies, and regulatory bodies. The crucial system was meant to shift to the cloud, but the procurement project was delayed by more than a year, as The Register reported.

In a letter to MPs, Home Office Permanent Secretary Gareth Davies said the cloud transition had been based on "delivery assumptions" that had proven incorrect. Davies said the Home Office had expected 80 percent of the code from the system, which went live in 2011, could be reused. In fact, only 20 percent was reusable. As a result, it would miss its June 2025 migration target without significant extra time and funding.

"With the support contract expiring in March 2026 and no further direct award available, the programme explored contingency options, but analysis concluded continuation was not value for money," Davies said in the written response to Parliament's Home Affairs Committee. "The programme decided it would exit the contract, bringing the service into Home Office control and in-house support."

The PND was proposed following the 2002 murders of two 10-year-old girls in Soham. The subsequent Bichard Inquiry identified serious weaknesses in police intelligence, including the inability of forces to access potentially important information held outside their own geographic jurisdictions. Those gaps contributed to poor information-sharing about Ian Huntley, who murdered the girls. CGI won the contract in 2009 and the system was launched in April 2011.

Elements of the current PND transformation program include a transition to cloud-native architecture, improved usability, and the replacement or updating of obsolete Oracle databases and middleware.
A transparency notice published in 2024 said that since 2016, investment in the system was limited to "keeping the lights on" because of the introduction of the National Law Enforcement Data Programme (NLEDP). NLEDP imagined the Police National Computer (PNC) would be combined with the PND, creating a single system.

"However, between 2016 and 2020 NLEDP faced some significant challenges that impacted progression and delivery," the notice said. "Upon various reviews of NLEDP the decision was made for a complete reset of the programme, with PND being removed from the scope of work."

"The PND transformation is being delivered to address the technological debt in PND which is causing a failing service."

According to Davies' letter, the PND program was set up in 2021 but did not commence until January 2024. "By May 2025, around £35.1m had been spent before the transformation was paused," it said. Running, sustaining, and maintaining the live service cost about £24 million a year, amounting to £111.5 million since FY2021/22. Total PND spend over FY2021/22 to FY2025/26 was £146.6 million.

Despite the money invested in the program, the Home Office and CGI were unable to agree a revised plan to move it forward. "Both the Home Office and the supplier worked closely together for many months to understand the depth of the challenges," the letter said. "We [the Home Office] ultimately put our trust in the supplier's expertise and track record in providing and maintaining PND since 23 June 2011. From July to December 2024, the Home Office held workshops with the supplier to agree a realistic revised Initial Implementation Plan… The two sides could not come to an agreement, however, in particular about the contracted scope, time required for testing and allocation of residual risk."

The Home Office said it reached a settlement with the supplier but did not disclose the terms.
Davies admitted that the cloud migration work did not result in any improvements to the PND because the project was incomplete, although "upgrades have been made to the live system to ensure its security and stability."

The Home Office now plans to move the PND from a CGI site to its datacenter, promising "robust governance drawing on prior transfer experience." It promised to mitigate disruption risks resulting from the "age and complexity of the legacy infrastructure," and to make the on-prem system more secure, stable, and available at a cost of £20.3 million.

"These upgrades are expected to extend service continuity by 5-10 years by tackling technical debt, improving resilience and capacity, and supporting enhanced analytics and safeguarding," the letter said. "The service remains stable, with customer-facing availability above 99 percent over the past six months, and the team proactively monitors servers and responds quickly to issues, including known legacy software risks. With the control in place with the addition of the stabilisation plans, the risk of major failure is anticipated to be low." ®
GameStop CEO's eBay account reinstated following takeover PR stunt
GameStop CEO Ryan Cohen has had his eBay account reinstated after the platform suspended him for selling personal items to help fund his takeover bid for the digital auction house.

Less than 12 hours after Cohen announced he was selling various memorabilia and vintage wares to fund the proposed $55.5 billion buyout offer made earlier in the week, he shared a screenshot of an email informing him that his PR stunt had landed him a platform ban.

"We wanted to let you know that your eBay account has been permanently suspended because of activity that we believe was putting the eBay community at risk," the email read. "We understand that this must be frustrating, but this decision was not made lightly and it's important that we keep our marketplace safe for everyone. For more information, see our article on how and why accounts can be suspended or review our User Agreement."

The Register asked eBay why it suspended and reinstated Cohen's account, but the company did not immediately respond.

Supporters of GameStop's bid for the auction site can show their enthusiasm via Cohen's page, where they can pick up rare games, tech, and other valuables. Highlights among the 36 listings include genuine GameStop storefront signs, which are currently going for just under $15,000 with bidding still open, and a Halo 2 Master Chief statue going for a similar amount. Cohen's original Apple iPhone is also up for grabs, with bidding now topping $9,100, and he has some baseball trading cards going for several thousand dollars too. He said that the winning bidder on each item will receive a hand-signed "Letter to eBay" as thanks for their support.

GameStop's bid

GameStop announced its offer to buy eBay on May 3 at $125 per share - a 46 percent premium over its February 4 closing price, the date GameStop first started buying eBay shares on its way to a 5 percent ownership stake.
Cohen said if the bid is successful, GameStop and eBay will operate as a combined company, and the CEO, who took over the gaming retail business in 2021, would pursue $2 billion in cost reductions in the first year. This would include slashing eBay's current $2.4 billion marketing budget in half, as well as reductions across product development and general administration.

While eBay's share price rallied following the announcement, GameStop stock fell by around 10 percent following an interview Cohen gave to CNBC. In it, he shied away from calling the buyout proposal "hostile," instead opting to call it "unsolicited," and failed to robustly answer questions about the structure of the deal. Asked about the value in the context of GameStop's $12 billion market cap, Cohen repeated the information previously stated in the initial announcement: the purchase will comprise half cash and half GameStop stock, and it had secured a $20 billion financing letter from TD Bank. He declined to elaborate further.

As already widely reported, investor Michael Burry of The Big Short fame dumped his GameStop shares after the company outlined its proposed deal structure, telling his Substack subscribers that it was over-leveraged.

eBay acknowledged GameStop's bid on Monday, and said it would discuss it at board level. "Until the Board has further carefully and thoroughly considered the proposal, the company does not intend to comment further at this time." ®
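As a back-of-envelope sanity check on the numbers above – the implied pre-bid price and share count below are derived by us from the reported figures, not disclosed by either company:

```python
# Derive eBay's implied February 4 close and the implied share count
# from the reported offer terms ($125/share, 46% premium, $55.5bn deal).
offer_per_share = 125.0
premium = 0.46
implied_feb4_close = offer_per_share / (1 + premium)

deal_value = 55.5e9
implied_share_count = deal_value / offer_per_share

print(f"implied Feb 4 close: ${implied_feb4_close:.2f}")      # ~$85.62
print(f"implied shares valued: {implied_share_count / 1e6:.0f}m")  # ~444m
```

The two reported figures are at least internally consistent – which is more than Cohen managed to demonstrate on CNBC.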
Hackers ate my homework: Educational SaaS Canvas down after cyberattack
Students around the world have an excuse to bunk off after hacking crew ShinyHunters did something nasty to educational SaaS Canvas.

Canvas is widely used by schools and universities to communicate with students, publish and store course material, and collect assignments. An outfit called Instructure develops the software and an entry on its Status Page dated May 2 features Chief Information Security Officer Steve Proud stating the org "recently experienced a cybersecurity incident perpetrated by a criminal threat actor."

"We are actively investigating this incident with the help of outside forensics experts. We are working quickly to understand the extent of the incident and actively taking steps to minimize its impact," he added.

Numerous posts report that attempts to log into Canvas earlier this week failed, but did produce a notice from an entity claiming to be the notorious hacking crew ShinyHunters, who claimed the outage was only possible due to lax patching. The crew also claimed to have stolen data from institutions that use Canvas and threatened to leak it unless a "settlement" is reached by May 12. Canvas has thousands of customers, meaning any confirmed breach could have wide impact.

As of Thursday evening US time, Instructure says its wares are now available "for most users" and won't offer further comment.

A student of The Register's acquaintance – OK, one of my kids – shared an email advising that his uni has prevented access to Canvas while it tries to understand the situation and the risk of data leakage. We've seen multiple universities posting notices about the incident that say more or less the same thing. Most also warn students of heightened phishing risk and urge caution. Several also advise that as they require students to lodge assignments in Canvas, students can assume they have an extension on deadlines. Your correspondent's offspring does not mind this one little bit.

This is an evolving story.
The Register will update it as more information becomes available. ®
Meta fights Ofcom over how many billions count as billions
Meta appears to have decided Britain's Online Safety Act would be much easier to swallow if Ofcom stopped counting all the money the social media giant makes everywhere else.

The Facebook and Instagram owner has launched a legal challenge against the UK comms regulator, arguing that the way Ofcom calculates fees and potential penalties under the Online Safety Act is fundamentally wrong because it relies on global turnover rather than UK-specific revenue. The law allows Ofcom to fine companies up to 10 percent of their qualifying worldwide revenue, or £18 million, whichever is higher. For Meta, which brought in about $201 billion last year, that means the numbers stop sounding like regulatory penalties and start sounding like national infrastructure projects.

Meta is now seeking a judicial review in the High Court over how Ofcom defines "qualifying worldwide revenue." The dispute boils down to three complaints. First, Meta argues that Ofcom should only consider UK revenue tied to regulated services, not the company's global income. Second, it objects to rules that treat multiple services under the same corporate umbrella as jointly liable, potentially exposing the wider organization to larger penalties. Third, it is challenging how Ofcom aggregates revenue across services rather than assessing them individually.

An Ofcom spokesperson told The Register: "Meta have initiated a judicial review in relation to online safety fees and penalties. Under the Online Safety Act, these are to be set with reference to a provider's 'Qualifying Worldwide Revenue', which we have defined based on a plain reading of the law.

"Disappointingly, Meta are objecting to the payment of fees, and any penalties that could be levied on companies in future, that are calculated on this basis. We will robustly defend our reasoning and decisions."

A Meta spokesperson told The Register: "We are committed to cooperating constructively with Ofcom as it enforces the Online Safety Act.
However, we and others in the tech industry believe its decisions on the methodology to calculate fees and potential fines are disproportionate. We believe fees and penalties should be based on the services being regulated in the countries they're being regulated in. This would still allow Ofcom to impose the largest fines in UK corporate history."

The case marks the latest flare-up between Silicon Valley and Britain over the Online Safety Act, which has already triggered complaints from US politicians, free speech campaigners, and tech firms unhappy about the scale of Ofcom's new powers. The regulator has not been shy about flexing them either. It has already threatened action against Elon Musk's X over sexually explicit AI-generated images linked to Grok and, in March, issued its first fine under the regime against 4chan.

Meta appears to have looked at where that enforcement road leads and decided now was the time to argue about the math. ®
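For a sense of scale, the Act's penalty ceiling is a one-line max(). The dollar-to-sterling rate below is an assumption of ours for illustration, not a figure from Ofcom:

```python
# Penalty ceiling under the Online Safety Act: the greater of 10 percent
# of qualifying worldwide revenue or GBP 18 million.
def max_osa_fine_gbp(worldwide_revenue_gbp: float) -> float:
    return max(0.10 * worldwide_revenue_gbp, 18_000_000)

# Meta's ~$201bn annual revenue, converted at an assumed $1.25 per pound
meta_revenue_gbp = 201e9 / 1.25
print(f"{max_osa_fine_gbp(meta_revenue_gbp) / 1e9:.2f}bn GBP")  # 16.08bn GBP
```

Smaller providers hit the £18 million floor; for Meta, the worldwide-revenue reading is what turns the ceiling into tens of billions – which is precisely the definition it is asking the High Court to narrow.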
