Tony Hoare, the Turing Award-winning pioneer who created the Quicksort algorithm, developed Hoare logic, and advanced theories of concurrency and structured programming, has died at age 92.
News of his passing was shared today in a blog post. The site I Programmer also commemorated Hoare in a post highlighting his contributions to computer science and the lasting impact of his work. Personal accounts have been shared on Hacker News and Reddit.
Many Slashdotters may know Hoare for his aphorism regarding software design: "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."
Read more of this story at Slashdot.
Cohesity, ServiceNow and Datadog team on recoverability suite
Three more vendors have decided that the world needs tools to roll back mistakes made by AI, after Cohesity teamed with ServiceNow and Datadog on a recoverability service that will hunt down all the files and data corrupted by bad AI actors and restore systems to a “trusted state.”…
An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.
Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI.
In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
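The core idea -- operating on data while it stays encrypted -- can be sketched with a toy Paillier cryptosystem, which is only *additively* homomorphic (a much weaker cousin of the fully homomorphic schemes Heracles accelerates). This is an illustrative sketch, not anything Intel's demo uses: the primes are far too small to be secure, and all names here are made up for the example.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: Dec(Enc(a) * Enc(b) mod n^2) == a + b,
# so a server can "add" two values without ever seeing them.
# Tiny primes for readability -- NOT secure, illustration only.
p, q = 1_000_003, 1_000_033
n = p * q
n2 = n * n
g = n + 1                                       # standard choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # modular inverse, Python 3.8+

def enc(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 58
c = (enc(a) * enc(b)) % n2    # server multiplies ciphertexts = adds plaintexts
assert dec(c) == a + b        # only the key holder learns the result, 100
```

A fully homomorphic scheme extends this to arbitrary additions *and* multiplications, which is what makes the encrypted database match in the voter demo possible, and also what makes it so computationally expensive on general-purpose CPUs.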
Read more of this story at Slashdot.
Last November, Amazon sued Perplexity demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases for users online. Today, a judge ruled in favor of the tech giant, granting it a temporary court injunction blocking the scraping of Amazon's website. According to court filings, the judge found strong evidence the tool accessed the retailer's systems "without authorization." CNBC reports: In a ruling dated Monday, U.S. District Judge Maxine Chesney wrote that Amazon has provided "strong evidence" that Perplexity's Comet browser accessed its website at the user's direction, but "without authorization" from the e-commerce giant. Chesney said Amazon submitted "essentially undisputed evidence" that it spent more than $5,000 to respond to the issue, including "numerous hours" where its employees worked to develop tools to block Comet from accessing its private customer tools and to prevent the tool from "future unauthorized access." "Given such evidence, the Court finds Amazon has shown a likelihood of success on the merits of its claim," Chesney wrote.
Chesney's ruling includes a weeklong stay to allow Perplexity to appeal the order. Amazon wrote in its original complaint that Perplexity's agents posed security risks to customer data because they "can act within protected computer systems, including private customer accounts requiring a password." The company also said Perplexity's agents created challenges for the company's advertising business, because when AI systems generate ad traffic, the impressions have to be detected and filtered out before advertisers can be billed. "This requires modifications to Amazon's advertising systems, including developing new detection mechanisms to identify and exclude automated traffic," Amazon wrote in its complaint. "These system adaptations are necessary to maintain contractual obligations with advertisers who pay only for legitimate human impressions."
Read more of this story at Slashdot.
sziring shares a report from Business Insider: Silicon Valley has long competed for talent with ever-richer pay packages built around salary, bonus, and equity. Now, a fourth line item is creeping into the mix: AI inference. As generative AI tools become embedded in software development, the cost of running the underlying models -- known as inference -- is emerging as a productivity driver and a budget line that finance chiefs can't ignore.
Software engineers and AI researchers inside tech companies have already been jousting for access to GPUs, with this AI compute capacity being carefully parceled out based on which projects are most important. Now, some tech job candidates have begun asking about what AI compute budget they will have access to if they decide to join.
"I am increasingly asked during candidate interviews how much dedicated inference compute they will have to build with Codex," Thibault Sottiaux, engineering lead at OpenAI's Codex, the startup's AI coding service, wrote on X recently. He added that usage per user is growing much faster than overall user growth, a sign that AI compute is becoming even scarcer and more valuable. That scarcity is reshaping how engineers think about their work and pay. "The inference compute available to you is increasingly going to drive overall software productivity," said OpenAI President Greg Brockman.
The report cites a recent compensation submission from a software engineer that listed "Copilot subscription" as part of the pay and benefits. "OpenAI and Anthropic should create recruitment sites where their clients can advertise roles, listing the token budget for the job alongside the salary range," said Peter Gostev, AI capability lead at Arena, a startup that measures the performance of models.
Tomasz Tunguz of Theory Ventures predicts AI inference will be the fourth component of engineering compensation, alongside salary, bonus, and equity. "Will you be paid in tokens? In 2026, you likely will start to be," Tunguz said.
Read more of this story at Slashdot.
Could steal sensitive personal and financial data
After a whopper of a Patch Tuesday last month, with six Microsoft flaws exploited as zero-days, March didn't exactly roar in like a lion. Just two of the 83 Microsoft CVEs released on Tuesday are listed as publicly known, and none is under active exploitation, which we're sure is a welcome change to sysadmins.…
AT&T plans to invest more than $250 billion over the next five years to expand U.S. telecom infrastructure for the AI age. The company says it will also hire thousands of technicians while partnering with AST SpaceMobile to extend coverage to remote areas. Reuters reports: Rapid adoption of artificial intelligence, cloud computing and connected devices has prompted telecom operators to invest heavily in fiber and 5G networks as they also seek to fend off intensifying competition from cable broadband providers. AT&T, which has about 110,000 employees in the U.S., said the new hires will help build and maintain its infrastructure. The outlay includes capital expenditure and other spending, the company said.
The spending will focus on expanding its fiber and wireless networks, including accelerating deployment of fiber broadband, 5G home internet and satellite connectivity to extend coverage across urban, suburban and rural areas. [...] AT&T is also working with satellite partner AST SpaceMobile to expand connectivity to remote regions where traditional network infrastructure is difficult to deploy. The company said it would continue spending on the FirstNet network built for first responders and bolster investment in network security and artificial intelligence-driven threat detection.
Read more of this story at Slashdot.
E-souk disputes report linking 'Gen-AI assisted changes' to recent high-impact incidents
Amazon's weekly operations meeting today reportedly focused on recent service outages and on the role that code changes attributed to generative AI may have played. However, the company is downplaying the possibility of problems with AI.…
Think it's hard to tell bot from human on Facebook now?
The biggest generator of AI slop on the internet has a new home, as Meta has reportedly acquired Moltbook and hired the team behind the social network for AI agents.…
Since 1999, Slashdot has been covering the annual Ig Nobel prize ceremonies -- which honor real scientific research into strange or surprising subjects. "After 35 years in Boston, the annual prize ceremony will take place in Zurich, Switzerland, this year and will continue to be held in a European city for the foreseeable future," reports Ars Technica. "The reason: concerns about the safety of international travelers, who are increasingly reluctant to travel to the U.S. to participate."
"During the past year, it has become unsafe for our guests to visit the country," Marc Abrahams, master of ceremonies and editor of The Annals of Improbable Research magazine, told The Associated Press. "We cannot in good conscience ask the new winners, or the international journalists who cover the event, to travel to the U.S. this year." It comes on the heels of our recent story that many international game developers are opting to skip this year's weeklong Game Developers Conference in San Francisco, citing similar concerns. Ars Technica reports: Established in 1991, the Ig Nobels are a good-natured parody of the Nobel Prizes; they honor "achievements that first make people laugh and then make them think." As the motto implies, the research being honored might seem ridiculous at first glance, but that doesn't mean it's devoid of scientific merit. The unapologetically campy awards ceremony features miniature operas, scientific demos, and the 24/7 lectures, in which experts must explain their work twice: once in 24 seconds and again in just seven words.
Traditionally, the awards ceremony and related Ig Nobel events have taken place in Boston at Harvard University, Massachusetts Institute of Technology, and Boston University. However, four of last year's 10 winners opted to skip the ceremony rather than travel to the U.S., and the situation has not improved. [...] [T]his year, the Ig Nobel organizers are joining forces with the ETH Domain and the University of Zurich for hosting duties. "Switzerland has nurtured many unexpected good things -- Albert Einstein's physics, the world economy, and the cuckoo clock leap to mind -- and is again helping the world appreciate improbable people and ideas," Abrahams said.
The Ig Nobels will not be returning to the U.S. any time soon. Instead, the plan is for Zurich to host every second year; every odd-numbered year, the ceremony will be hosted by a different European city. Abrahams likened the arrangement to the Eurovision Song Contest.
Read more of this story at Slashdot.
Ransomware, malware-as-a-service, infostealers benefit MOIS, too
Iranian government-backed snoops are increasingly using cybercrime malware and ransomware infrastructure in their operations - not just hiding behind criminal masks as a cover for destructive cyber activity, according to security researchers.…
OpenAI is reportedly backing away from expanding its AI data center partnership with Oracle because newer generations of Nvidia GPUs may arrive before the facility is even operational. CNBC reports: Artificial intelligence chips are getting upgraded more quickly than data centers can be built, a market reality that exposes a key risk to the AI trade and Oracle's debt-fueled expansion. OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units, according to a person familiar with the matter.
The current Abilene site is expected to use Nvidia's Blackwell processors, and the power isn't projected to come online for a year. By then, OpenAI is hoping to have expanded access to Nvidia's next-generation chips in bigger clusters elsewhere, said the person, who asked not to be named due to confidentiality. In a post on X, Oracle called the reports "false and incorrect." However, it only said existing projects are on track and didn't address expansion plans.
CNBC notes: "Oracle secured the site, ordered the hardware, and spent billions of dollars on construction and staff, with the expectation of going bigger."
Read more of this story at Slashdot.
Study warns peak cooling demand could strain US water systems by 2030
Public water supplies in America will need billions invested to meet the peak requirements of datacenters during the hottest periods of the year, even if their overall annual consumption is relatively modest.…
An anonymous reader quotes a report from The Register: AI can reverse engineer machine code and find vulnerabilities in ancient legacy architectures, says Microsoft Azure CTO Mark Russinovich, who used his own Apple II code from 40 years ago as an example. Russinovich wrote: "We are entering an era of automated, AI-accelerated vulnerability discovery that will be leveraged by both defenders and attackers."
In May 1986, Russinovich wrote a utility called Enhancer for the Apple II personal computer. The utility, written in 6502 machine language, added the ability to use a variable or BASIC expression for the destination of a GOTO, GOSUB, or RESTORE command, whereas without modification Applesoft BASIC would only accept a line number. Russinovich had Claude Opus 4.6, released early last month, look over the code. It decompiled the machine language and found several security issues, including a case of "silent incorrect behavior" where, if the destination line was not found, the program would set the pointer to the following line or past the end of the program, instead of reporting an error. The fix would be to check the carry flag, which is set if the line is not found, and branch to an error.
The existence of the vulnerability in Apple II type-in code has only amusement value, but the ability of AI to decompile embedded code and find vulnerabilities is a concern. "Billions of legacy microcontrollers exist globally, many likely running fragile or poorly audited firmware like this," said one comment to Russinovich's post.
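The bug class Claude found -- a lookup routine that signals failure through a status flag (the 6502 carry flag, in this case) which the caller never checks -- can be sketched in Python. The names and program structure below are illustrative, not the original Enhancer code:

```python
# Sketch of the "silent incorrect behavior" pattern. Applesoft's line
# search leaves a not-found indication (carry set) when the target line
# number doesn't exist; ignoring it means an extended GOTO lands on
# whatever line the pointer happens to rest on, with no error.

def find_line(program, target):
    """Return (found, index), mimicking a search that reports via a flag."""
    for i, (line_no, _stmt) in enumerate(program):
        if line_no == target:
            return True, i           # "carry clear": exact match
        if line_no > target:
            return False, i          # "carry set": points at following line
    return False, len(program)       # "carry set": past end of program

program = [(10, "PRINT A"), (20, "GOTO 40"), (40, "END")]

def goto_buggy(target):
    _found, i = find_line(program, target)    # status flag ignored
    return program[i % len(program)]          # silently jumps to wrong line

def goto_fixed(target):
    found, i = find_line(program, target)
    if not found:                             # the one-branch fix: check the
        raise ValueError("UNDEF'D STATEMENT") # flag and branch to an error
    return program[i]

goto_buggy(30)   # returns line 40 with no complaint -- the silent failure
goto_fixed(20)   # returns line 20 as expected
```

The fix Russinovich describes is exactly this shape: one extra conditional branch on the flag the search routine already sets.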
Read more of this story at Slashdot.
Agentic 'Air' lets multiple AI agents run tasks concurrently, while loyal IntelliJ users wonder what's in it for them
JetBrains has previewed Air, a tool for agentic AI development which it describes as a new wave of dev tooling.…
Rapid7 says crims broke into more than 250 sites globally, including a US Senate candidate’s campaign page
Cyber baddies quietly compromised legitimate WordPress websites, including the campaign site of a US Senate candidate, turning them into launchpads for a global infostealer operation.…
FAA launches pilot projects starting this summer
The skies over parts of the US could soon get busier, as the Federal Aviation Administration launches pilot projects spanning 26 states to test electric air taxis and other next-gen aircraft, with operations expected to begin by summer 2026.…
Axios reports that Meta has acquired Moltbook, the viral, Reddit-like social network designed for AI agents. Humans are welcome, but only to observe. Axios reports: The deal brings Moltbook's creators -- Matt Schlicht and Ben Parr -- into Meta Superintelligence Labs (MSL), the unit run by former Scale AI CEO Alexandr Wang. Meta did not disclose Moltbook's purchase price. The deal is expected to close mid-March, Meta says, with the pair starting at MSL on March 16. When it launched in late January, Moltbook was labeled the "most interesting place on the internet" by open-source developer and writer Simon Willison. "Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned."
In an internal post seen by Axios, Meta's Vishal Shah said existing Moltbook customers can temporarily continue using the platform. "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners." He added: "Their team has unlocked new ways for agents to interact, share content, and coordinate complex tasks."
Read more of this story at Slashdot.
Launch predictions continue to be optimistic as 2027 and Artemis III near
SpaceX has rolled another Starship Super Heavy booster to the launch pad as the company's boss, Elon Musk, admits the first launch of Starship V3 has slipped.…