Linux fréttir
13 governments sound the alarm about ongoing unpleasantness
China's Salt Typhoon cyberspies continue their years-long hacking campaign targeting critical industries around the world, according to a joint security alert from cyber and law enforcement agencies across 13 countries.…
Japan has launched its first entirely homegrown quantum computer, built with domestic superconducting qubits and components, and running on the country's own open-source software toolchain, OQTOPUS. "The system is now ready to take on workloads from its base at the University of Osaka's Center for Quantum Information and Quantum Biology (QIQB)," reports LiveScience. From the report: The system uses a quantum chip with superconducting qubits -- quantum bits derived from metals that exhibit zero electrical resistance when cooled to temperatures close to absolute zero (minus 459.67 degrees Fahrenheit, or minus 273.15 degrees Celsius). The quantum processing unit (QPU) was developed at the Japanese research institute RIKEN. Other components that make up the "chandelier" -- the main body of the quantum computer -- include the chip package, delivered by Seiken, the magnetic shield, infrared filters, bandpass filters, a low-noise amplifier and various cables.
These are all housed in a dilution refrigerator (a specialized cryogenic device that cools the quantum computing components) to allow for those extremely low temperatures. It also comes alongside a pulse tube refrigerator (which again cools various components in use), controllers and a low-noise power source. OQTOPUS, meanwhile, is a collection of open-source tools that include everything required to run quantum programs. It includes the core engine and cloud module, as well as graphical user interface (GUI) elements, and is designed to be built on top of a QPU and quantum control hardware.
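The report stops at the hardware and toolchain description, but the architecture it sketches (a cloud module sitting on top of a QPU and its control hardware) implies the familiar submit-and-poll workflow. The toy sketch below only illustrates that workflow; the class, methods, and circuit string are invented for illustration and are not the OQTOPUS API.

```python
# Toy illustration of the submit-and-poll workflow a cloud-fronted QPU implies.
# Nothing here is the real OQTOPUS API; names and the circuit format are made up.
import random
import time
import uuid

class FakeQuantumCloud:
    """Stand-in for a cloud module that fronts a QPU; results are simulated."""
    def __init__(self) -> None:
        self._jobs: dict[str, dict[str, int]] = {}

    def submit(self, circuit: str, shots: int) -> str:
        job_id = str(uuid.uuid4())
        # Pretend the QPU ran a two-qubit Bell-pair circuit: only "00" and "11" outcomes.
        counts = {"00": 0, "11": 0}
        for _ in range(shots):
            counts[random.choice(["00", "11"])] += 1
        self._jobs[job_id] = counts
        return job_id

    def status(self, job_id: str) -> str:
        return "done" if job_id in self._jobs else "unknown"

    def result(self, job_id: str) -> dict[str, int]:
        return self._jobs[job_id]

client = FakeQuantumCloud()
job = client.submit("h q0; cx q0, q1; measure q0, q1;", shots=1000)
while client.status(job) != "done":
    time.sleep(0.5)  # poll the cloud service until the backend reports completion
print(client.result(job))  # e.g. {'00': 506, '11': 494}
```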
Read more of this story at Slashdot.
Stolen painting still missing, sadly
Police in Argentina reportedly raided a home in a coastal town on Monday after someone spotted a real estate ad that included images of art the Nazis looted in the Second World War.…
An anonymous reader quotes a report from TechCrunch: SpaceX has long marketed Starship as a fully and rapidly reusable rocket that's designed to deliver thousands of pounds of cargo to Mars and make life multiplanetary. But reusability at scale means a space vehicle that can tolerate mishaps and faults, so that a single failure doesn't spell a mission-ending catastrophe. The 10th test flight on Tuesday evening demonstrated SpaceX's focus on fault tolerance. In a post-flight update, SpaceX said the test stressed "the limits of vehicle capabilities." Understanding these edges will be critical for the company's plans to eventually use Starship to launch Starlink satellites, commercial payloads, and eventually astronauts.
When the massive Starship rocket lifted off on its 10th test flight Tuesday evening, SpaceX did more than achieve new milestones. It purposefully introduced several faults to test the heat shield, propulsion redundancy, and the relighting of its Raptor engine. The heat shield is among the toughest engineering challenges facing SpaceX. As Elon Musk acknowledged on X in May 2024, a reusable orbital return heat shield is the "biggest remaining problem" to 100% rocket reusability. The belly of the upper stage, also called Starship, is covered in thousands of hexagonal ceramic and metallic tiles, which make up the heat shield. Flight 10 was all about learning how much damage the ship can accept and survive when it goes through atmospheric heating. During the tenth test, engineers intentionally removed tiles from some sections of the ship, and experimented with a new type of actively cooled tile, to gather real-world data and refine designs. [...]
Propulsion redundancy was also put to the test. The Super Heavy booster's landing burn configuration appeared to be a rehearsal for engine failure. Engineers intentionally disabled one of the three center Raptor engines during the final phase of the burn and used a backup engine in its place. That was a successful rehearsal for an engine-out event. Finally, SpaceX reported the in-space relight of a Raptor engine, described on the launch broadcast as the second time SpaceX has pulled this off. Reliable engine restarts will be necessary for deep-space missions, propellant transfers, and possibly some payload deployment missions. [...] The next step is translating Flight 10 data into future hardware upgrades to move closer to routine operations and days when, as Musk envisioned, "Starship launches more than 24 times in 24 hours."
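Neither SpaceX nor the report spells out the booster's engine-selection logic, so the sketch below is only a generic illustration of the kind of engine-out redundancy being rehearsed: prefer the primary center engines, and swap in a backup when one is flagged unhealthy. Engine names, the layout, and the health flags are hypothetical, not SpaceX's actual landing-burn control software.

```python
# Generic engine-out redundancy sketch; the engine layout and health flags are
# hypothetical and not SpaceX's landing-burn control logic.
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    healthy: bool

def select_landing_engines(center: list[Engine], backups: list[Engine], needed: int = 3) -> list[Engine]:
    """Prefer healthy center engines; fill any shortfall from the backup ring."""
    chosen = [e for e in center if e.healthy][:needed]
    for backup in backups:
        if len(chosen) >= needed:
            break
        if backup.healthy:
            chosen.append(backup)
    if len(chosen) < needed:
        raise RuntimeError("not enough healthy engines for the landing burn")
    return chosen

# Flight-10-style rehearsal: one center engine intentionally disabled.
center = [Engine("C1", True), Engine("C2", False), Engine("C3", True)]
backups = [Engine("B1", True), Engine("B2", True)]
print([e.name for e in select_landing_engines(center, backups)])  # ['C1', 'C3', 'B1']
```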
Read more of this story at Slashdot.
If regulators heed the lessons of Fukushima, testing will have to jump Godzilla-sized hurdles
Japan’s Nuclear Regulation Authority has requested extra funds to experiment with AI-powered nuclear plant inspectors.…
samleecole shares a report from 404 Media: An app developer has jailbroken Echelon exercise bikes to restore functionality that the company put behind a paywall last month, but copyright laws prevent him from being allowed to legally release it. Last month, Peloton competitor Echelon pushed a firmware update to its exercise equipment that forces its machines to connect to the company's servers in order to work properly. Echelon was popular in part because it was possible to connect Echelon bikes, treadmills, and rowing machines to free or cheap third-party apps and collect information like pedaling power, distance traveled, and other basic functionality that one might want from a piece of exercise equipment. With the new firmware update, the machines work only with constant internet access and getting anything beyond extremely basic functionality requires an Echelon subscription, which can cost hundreds of dollars a year.
App engineer Ricky Witherspoon, who makes an app called SyncSpin that used to work with Echelon bikes, told 404 Media that he successfully restored offline functionality to Echelon equipment and won the Fulu Foundation bounty. But he and the foundation said that he cannot open source or release it because doing so would run afoul of Section 1201 of the Digital Millennium Copyright Act, the wide-ranging copyright law that in part governs reverse engineering. There are various exemptions to Section 1201, but most of them allow for jailbreaks like the one Witherspoon developed to only be used for personal use. [...] "I don't feel like going down a legal rabbit hole, so for now it's just about spreading awareness that this is possible, and that there's another example of egregious behavior from a company like this [...] if one day releasing this was made legal, I would absolutely open source this. I can legally talk about how I did this to a certain degree, and if someone else wants to do this, they can open source it if they want to."
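Witherspoon's jailbreak cannot legally be published, and Echelon's own Bluetooth protocol is proprietary, so the sketch below is only a generic illustration of how a third-party app reads ride data from exercise equipment that exposes the standard Bluetooth Fitness Machine Service (FTMS). It assumes the `bleak` BLE library and a hypothetical device name; real Echelon hardware may not expose this service at all.

```python
# Generic FTMS reader sketch (not Echelon's proprietary protocol and not the
# jailbreak described above). Assumes a bike exposing the standard Fitness
# Machine Service; requires the third-party "bleak" library.
import asyncio
from bleak import BleakClient, BleakScanner

INDOOR_BIKE_DATA = "00002ad2-0000-1000-8000-00805f9b34fb"  # FTMS Indoor Bike Data (0x2AD2)

def handle_indoor_bike_data(_, data: bytearray) -> None:
    flags = int.from_bytes(data[0:2], "little")
    if not flags & 0x0001:  # "More Data" bit clear: instantaneous speed is present
        speed_kmh = int.from_bytes(data[2:4], "little") / 100  # resolution 0.01 km/h
        print(f"speed: {speed_kmh:.2f} km/h")
    # A complete parser would walk the remaining flag bits for cadence, power, etc.

async def main() -> None:
    devices = await BleakScanner.discover(timeout=5.0)
    bike = next((d for d in devices if (d.name or "").startswith("MyBike")), None)  # hypothetical name
    if bike is None:
        raise RuntimeError("no matching bike found")
    async with BleakClient(bike) as client:
        await client.start_notify(INDOOR_BIKE_DATA, handle_indoor_bike_data)
        await asyncio.sleep(60)  # stream ride data for a minute

asyncio.run(main())
```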
Read more of this story at Slashdot.
Harvard researchers find model guardrails tailor query responses to user's inferred politics and other affiliations
OpenAI's ChatGPT appears to be more likely to refuse to respond to questions posed by fans of the Los Angeles Chargers football team than to followers of other teams.…
China would be a $50 billion a year market for Nvidia if Uncle Sam would let us sell competitive products, says Jensen Huang
Nvidia's top brass urged Washington to approve the sale of Blackwell accelerators to China during the GPU giant's Q2 earnings call on Wednesday.…
Nevada has been crippled by a cyberattack that began on August 24, taking down state websites, intermittently disabling phone lines, and forcing offices like the DMV to close. The Register reports: The Office of Governor Joseph Lombardo announced the attack via social media on Monday, saying that a "network security incident" took hold in the early hours of August 24. Official state websites remain unavailable, and Lombardo's office warned that phone lines will be intermittently down, although emergency services lines remain operational. State offices are also closed until further notice, including Department of Motor Vehicles (DMV) buildings. The state said any missed appointments will be honored on a walk-in basis.
"The Office of the Governor and Governor's Technology Office (GTO) are working continuously with state, local, tribal, and federal partners to restore services safely," the announcement read. "GTO is using temporary routing and operational workarounds to maintain public access where it is feasible. Additionally, GTO is validating systems before returning them to normal operation and sharing updates as needed." Local media outlets are reporting that, further to the original announcement, state offices will remain closed on Tuesday after officials previously expected them to reopen. The state's new cybersecurity office says there is currently no evidence to suggest that any Nevadans' personal information was compromised during the attack.
Read more of this story at Slashdot.
Virtzilla also helping banks to sink and re-float software-defined infrastructure to stop stealthy malware
VMware has tweaked its software licensing so submarines can keep their computers running when they’re beneath the waves.…
A widely used Node.js utility called fast-glob, relied on by thousands of projects -- including over 30 U.S. Department of Defense systems -- is maintained solely by a Russian developer linked to Yandex. While there's no evidence of malicious activity, cybersecurity experts warn that the lack of oversight in such critical open-source projects leaves them vulnerable to potential exploitation by state-backed actors. The Register reports: US cybersecurity firm Hunted Labs reported the revelations on Wednesday. The utility in question is fast-glob, which is used to find files and folders that match specific patterns. Its maintainer goes by the handle "mrmlnc", and the GitHub profile associated with that handle identifies its owner as a Yandex developer named Denis Malinochkin living in a suburb of Moscow. A website associated with that handle also identifies its owner as the same person, as Hunted Labs pointed out.
Hunted Labs told us that it didn't speak to Malinochkin prior to publication of its report today, and that it found no ties between him and any threat actor. According to Hunted Labs, fast-glob is downloaded more than 79 million times a week and is currently used by more than 5,000 public projects in addition to the DoD systems and Node.js container images that include it. That's not to mention private projects that might use it, meaning that the actual number of at-risk projects could be far greater.
While fast-glob has no known CVEs, the utility has deep access to systems that use it, potentially giving Russia a number of attack vectors to exploit. Fast-glob could attack filesystems directly to expose and steal info, launch a DoS or glob-injection attack, include a kill switch to stop downstream software from functioning properly, or inject additional malware, a list Hunted Labs said is hardly exhaustive. [...] Hunted Labs cofounder Haden Smith told The Register that the ties are cause for concern. "Every piece of code written by Russians isn't automatically suspect, but popular packages with no external oversight are ripe for the taking by state or state-backed actors looking to further their aims," Smith told us in an email. "As a whole, the open source community should be paying more attention to this risk and mitigating it." [...]
Hunted Labs said that the simplest solution for the thousands of projects using fast-glob would be for Malinochkin to add additional maintainers and enhance project oversight, as the only alternative would be for anyone using it to find a suitable replacement. "Open source software doesn't need a CVE to be dangerous," Hunted Labs said of the matter. "It only needs access, obscurity, and complacency," something we've noted before is an ongoing problem for open source projects. "This serves as another powerful reminder that knowing who writes your code is just as critical as understanding what the code does," Hunted Labs concluded.
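Hunted Labs' report doesn't ship any tooling, but checking whether fast-glob (or any other single-maintainer package) has crept into a project's dependency tree is straightforward. A minimal sketch, assuming an npm lockfile in the v2/v3 format (a `packages` map keyed by install path):

```python
import json
from pathlib import Path

# Illustrative helper (not from Hunted Labs): list every resolved version of a
# given npm package found in a project's package-lock.json (lockfile v2/v3).
def find_package(lockfile: str, name: str) -> list[tuple[str, str]]:
    data = json.loads(Path(lockfile).read_text())
    hits = []
    for path, meta in data.get("packages", {}).items():
        # Keys look like "node_modules/fast-glob" or "node_modules/a/node_modules/fast-glob".
        if path.split("node_modules/")[-1] == name:
            hits.append((path, meta.get("version", "?")))
    return hits

if __name__ == "__main__":
    for path, version in find_package("package-lock.json", "fast-glob"):
        print(f"{version:>10}  {path}")
```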
Read more of this story at Slashdot.
An anonymous reader quotes a report from 404 Media: 4chan and Kiwi Farms sued the United Kingdom's Office of Communications (Ofcom) over its age verification law in U.S. federal court Wednesday, fulfilling a promise it announced on August 23. In the lawsuit, 4chan and Kiwi Farms claim that threats and fines they have received from Ofcom "constitute foreign judgments that would restrict speech under U.S. law." Both entities say in the lawsuit that they are wholly based in the U.S. and that they do not have any operations in the United Kingdom and are therefore not subject to local laws. Ofcom's attempts to fine and block 4chan and Kiwi Farms, and the lawsuit against Ofcom, highlight the messiness involved with trying to restrict access to specific websites or to force companies to comply with age verification laws.
The lawsuit calls Ofcom an "industry-funded global censorship bureau." "Ofcom's ambitions are to regulate Internet communications for the entire world, regardless of where these websites are based or whether they have any connection to the UK," the lawsuit states. "On its website, Ofcom states that 'over 100,000 online services are likely to be in scope of the Online Safety Act -- from the largest social media platforms to the smallest community forum.'" [...] Ofcom began investigating 4chan over alleged violations of the Online Safety Act in June. On August 13, it announced a provisional decision and stated that 4chan had "contravened its duties" and then began to charge the site a penalty of [roughly $26,000] a day. Kiwi Farms has also been threatened with fines, the lawsuit states. "American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail. In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights," said Preston Byrne, one of the lawyers representing 4chan and Kiwi Farms.
"We are aware of the lawsuit," an Ofcom spokesperson told 404 Media. "Under the Online Safety Act, any service that has links with the UK now has duties to protect UK users, no matter where in the world it is based. The Act does not, however, require them to protect users based anywhere else in the world."
Read more of this story at Slashdot.
Starting with Word for Windows version 2509, Microsoft is making cloud saving the default behavior. New documents will automatically save to OneDrive (or another cloud destination), with dated filenames, unless users manually revert to local saving in the settings. From the report: "Anything new you create will be saved automatically to OneDrive or your preferred cloud destination," writes Raul Munoz, product manager at Microsoft on the Office Shared Services and Experiences team. Munoz backs up the decision with half a dozen advantages of saving documents to the cloud, from never losing progress and having access anywhere to easier collaboration and increased security and compliance. While cloud saving is without doubt beneficial in some cases, Munoz fails to address the elephant in the room: some users may not want their documents stored in the cloud at all, and there are good reasons for that, including privacy.
Summed up:
- If you do not mind that Word documents are stored in the cloud, you do not need to do anything.
- If you do not want Word documents stored in the cloud by default, you need to change the default setting.
Read more of this story at Slashdot.
Starting at $2,999, tiny doesn't mean cheap
Hot Chips Back in 2023, Nvidia's superchip architecture introduced a new programming model for accelerated workloads by coupling the CPU to the GPU via a high-speed NVLink fabric that makes PCIe feel positively glacial.…
Google has eliminated more than one-third of its managers overseeing small teams, an executive told employees last week, as the company continues its focus on efficiencies across the organization. From a report: "Right now, we have 35% fewer managers, with fewer direct reports" than at this time a year ago, said Brian Welle, vice president of people analytics and performance, according to audio of an all-hands meeting reviewed by CNBC. "So a lot of fast progress there."
At the meeting, employees asked Welle and other executives about job security, "internal barriers" and Google's culture after several recent rounds of layoffs, buyouts and reorganizations. Welle said the idea is to reduce bureaucracy and run the company more efficiently. "When we look across our entire leadership population, that's managers, directors and VPs, we want them to be a smaller percentage of our overall workforce over time," he said.
Read more of this story at Slashdot.
There's also a rogue Russian on the list
The US Treasury Department has announced sanctions against two Asian companies and two individuals for allegedly helping North Korean IT workers fake their way into US jobs.…
After losing his job in 2024, Eric Thompson spearheaded a working group to push for federal legislation banning "ghost jobs" -- openings posted with no intent to hire. The proposed Truth in Job Advertising and Accountability Act would require transparency around job postings, set limits on how long ads can remain up, and fine companies that violate the rules. CNBC reports: "There's nothing illegal about posting a job, currently, and never filling it," says Thompson, a network engineering leader in Warrenton, Virginia. Not to mention, it's "really hard to prove, and so that's one of the reasons that legally, it's been kind of this gray area." As Thompson researched more into the phenomenon, he connected with former colleagues and professional connections across the country experiencing the same thing. Together, the eight of them decided to form the TJAAA working group to spearhead efforts for federal legislation to officially ban businesses from posting ghost jobs.
In May, the group drafted its first proposal: The TJAAA aims to require that all public job listings include information such as:
- The intended hire and start dates
- Whether it's a new role or backfill
- If it's being offered internally with preference to current employees
- The number of times the position has been posted in the last two years, and other factors, according to the draft language.
It also sets guidelines for how long a posting can stay up (no more than 90 calendar days) and how long the submission period must remain open (at least four calendar days) before applications can be reviewed. The proposed legislation applies to businesses with more than 50 employees, and violators can be fined a minimum of $2,500 for each infraction. The proposal provides a framework at the federal level, Thompson says, because state-level policies won't apply to employers who post listings across multiple states, or who use third-party platforms that operate beyond state borders.
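The TJAAA is a policy draft, not a technical specification, but its posting rules map naturally onto a simple validation check. The sketch below is illustrative only: the field names are invented, and the thresholds are taken from the draft figures quoted above.

```python
from dataclasses import dataclass
from datetime import date

MAX_POSTING_DAYS = 90     # draft: a listing may stay up no more than 90 calendar days
MIN_SUBMISSION_DAYS = 4   # draft: applications stay open at least 4 calendar days before review
MIN_EMPLOYEES = 50        # draft: applies to businesses with more than 50 employees
MIN_FINE_USD = 2_500      # draft: minimum fine per infraction

@dataclass
class JobListing:
    posted_on: date
    review_starts: date
    intended_start: date
    is_backfill: bool
    internal_preference: bool
    times_posted_last_2y: int

def violations(listing: JobListing, today: date, employer_headcount: int) -> list[str]:
    """Return draft-TJAAA violations for a listing (illustrative, not legal advice)."""
    if employer_headcount <= MIN_EMPLOYEES:
        return []  # the draft applies only to larger employers
    problems = []
    if (today - listing.posted_on).days > MAX_POSTING_DAYS:
        problems.append("listing has been up longer than 90 days")
    if (listing.review_starts - listing.posted_on).days < MIN_SUBMISSION_DAYS:
        problems.append("submission window shorter than 4 days")
    return problems

listing = JobListing(
    posted_on=date(2025, 5, 1), review_starts=date(2025, 5, 2),
    intended_start=date(2025, 9, 1), is_backfill=False,
    internal_preference=False, times_posted_last_2y=3,
)
problems = violations(listing, today=date(2025, 8, 28), employer_headcount=200)
print(problems, "minimum fine:", MIN_FINE_USD * len(problems))  # two infractions -> $5,000 minimum
```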
Read more of this story at Slashdot.
AI lowers the bar for cybercrime, Anthropic admits
comment Anthropic, a maker of AI tools, says that AI tools are now commonly used to commit cybercrime and facilitate remote worker fraud.…
Not a disaster recovery option, but good enough for a migration
Microsoft continues to take what's familiar to ordinary users and offer it to enterprises. The latest functionality is Windows Backup for Organizations.…
An anonymous reader quotes a report from The Hill: Republicans on the House Oversight and Government Reform Committee opened a probe into alleged organized efforts to inject bias into Wikipedia entries and the organization's responses. Chair James Comer (R-Ky.) and Rep. Nancy Mace (R-S.C.), chair of the panel's subcommittee on cybersecurity, information technology, and government innovation, on Wednesday sent an information request on the matter to Maryana Iskander, chief executive officer of the Wikimedia Foundation, the nonprofit that hosts Wikipedia. The request, the lawmakers said in the letter (PDF), is part of an investigation into "foreign operations and individuals at academic institutions subsidized by U.S. taxpayer dollars to influence U.S. public opinion."
The panel is seeking documents and communications about Wikipedia volunteer editors who violated the platform's policies, as well as the Wikimedia Foundation's efforts to "thwart intentional, organized efforts to inject bias into important and sensitive topics." "Multiple studies and reports have highlighted efforts to manipulate information on the Wikipedia platform for propaganda aimed at Western audiences," Comer and Mace wrote in the letter. They referenced a report from the Anti-Defamation League about anti-Israel bias on Wikipedia that detailed a coordinated campaign to manipulate content related to the Israel-Palestine conflict and similar issues, as well as an Atlantic Council report on pro-Russia actors using Wikipedia to push pro-Kremlin and anti-Ukrainian messaging, which can influence how artificial intelligence chatbots are trained.
"[The Wikimedia] foundation, which hosts the Wikipedia platform, has acknowledged taking actions responding to misconduct by volunteer editors who effectively create Wikipedia's encyclopedic articles. The Committee recognizes that virtually all web-based information platforms must contend with bad actors and their efforts to manipulate. Our inquiry seeks information to help our examination of how Wikipedia responds to such threats and how frequently it creates accountability when intentional, egregious, or highly suspicious patterns of conduct on topics of sensitive public interest are brought to attention," Comer and Mace wrote. The lawmakers requested information about "the tools and methods Wikipedia utilizes to identify and stop malicious conduct online that injects bias and undermines neutral points of view on its platform," including documents and records about possible coordination of state actors in editing, the kind of accounts that have been subject to review, and and of the panel's analysis of data manipulation or bias. "We welcome the opportunity to respond to the Committee's questions and to discuss the importance of safeguarding the integrity of information on our platform," a Wikimedia Foundation spokesperson said.
Read more of this story at Slashdot.