Linux fréttir

Do Electric Vehicles Fail at a Lower Rate Than Gas Cars In Extreme Cold?

Slashdot - Sun, 2024-01-21 02:34
In a country experiencing extreme cold — and where almost 1 in 4 cars are electric — a roadside assistance company says it's still gas-powered cars that account for the vast majority of starting problems. Electrek argues that while extreme cold may affect chargers, "it mainly gets attention because it's a new technology and it fails for different reasons than gasoline vehicles in the cold." Viking, a roadside assistance company (think AAA), says it responded to 34,000 assistance requests in the first 9 days of the year. Viking says that only 13% of the cases came from electric vehicles (via TV2 — translated from Norwegian): "13 percent of the cases with starting difficulties are electric cars, while the remaining 87 percent are fossil cars..." To be fair, this data doesn't adjust for the age of the vehicles. Older gas-powered cars fail at a higher rate than new ones, and electric vehicles are obviously much more recent on average. Thanks to long-time Slashdot reader Geoffrey.landis for sharing the article.
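A quick back-of-the-envelope check makes the per-vehicle comparison concrete. The 23% fleet share below is an assumption standing in for "almost 1 in 4"; the exact figure is not given above.

```python
# Back-of-envelope check: do EVs fail less often per vehicle?
# ev_fleet_share is an assumed stand-in for "almost 1 in 4 cars are electric".
ev_fleet_share = 0.23     # assumed fraction of the fleet that is electric
ev_failure_share = 0.13   # fraction of Viking's call-outs involving EVs

# Failure rate per vehicle, normalized by each group's share of the fleet.
ev_rate = ev_failure_share / ev_fleet_share
gas_rate = (1 - ev_failure_share) / (1 - ev_fleet_share)

print(f"EV failure rate is {ev_rate / gas_rate:.0%} of the gas-car rate")
```

Under that assumption, EVs generate starting-trouble calls at roughly half the per-vehicle rate of gas cars — though, as noted, vehicle age is a confounder.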

Read more of this story at Slashdot.

Categories: Linux fréttir

Ultra-Large Structure Discovered In Distant Space Challenges Cosmological Principle

Slashdot - Sat, 2024-01-20 23:34
"The discovery of a second ultra-large structure in the remote universe has further challenged some of the basic assumptions about cosmology," writes SciTechDaily: The Big Ring on the Sky is 9.2 billion light-years from Earth. It has a diameter of about 1.3 billion light-years, and a circumference of about four billion light-years. If we could step outside and see it directly, the diameter of the Big Ring would need about 15 full Moons to cover it. It is the second ultra-large structure discovered by University of Central Lancashire (UCLan) PhD student Alexia Lopez who, two years ago, also discovered the Giant Arc on the Sky. Remarkably, the Big Ring and the Giant Arc, which is 3.3 billion light-years across, are in the same cosmological neighborhood — they are seen at the same distance, at the same cosmic time, and are only 12 degrees apart on the sky. Alexia said: "Neither of these two ultra-large structures is easy to explain in our current understanding of the universe. And their ultra-large sizes, distinctive shapes, and cosmological proximity must surely be telling us something important — but what exactly? "One possibility is that the Big Ring could be related to Baryonic Acoustic Oscillations (BAOs). BAOs arise from oscillations in the early universe and today should appear, statistically at least, as spherical shells in the arrangement of galaxies. However, detailed analysis of the Big Ring revealed it is not really compatible with the BAO explanation: the Big Ring is too large and is not spherical." Other explanations might be needed, explanations that depart from what is generally considered to be the standard understanding in cosmology... And if the Big Ring and the Giant Arc together form a still larger structure then the challenge to the Cosmological Principle becomes even more compelling... Alexia said, "From current cosmological theories we didn't think structures on this scale were possible. 
" Possible explanations include a Conformal Cyclic Cosmology, or the effect of cosmic strings passing through... Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.

Categories: Linux fréttir

NPM Users Download 2.1B Deprecated Packages Weekly, Say Security Researchers

Slashdot - Sat, 2024-01-20 22:34
The cybersecurity site SC Media reports that NPM registry users "download deprecated packages an estimated 2.1 billion times weekly, according to a statistical analysis of the top 50,000 most-downloaded packages in the registry." Deprecated, archived and "orphaned" NPM packages can contain unpatched and/or unreported vulnerabilities that pose a risk to the projects that depend on them, warned the researchers from Aqua Security's Team Nautilus, who published their findings in a blog post on Sunday... In conjunction with their research, Aqua Nautilus has released an open-source tool that can help developers identify deprecated dependencies in their projects. Open-source software may stop receiving updates for a variety of reasons, and it is up to developers/maintainers to communicate this maintenance status to users. As the researchers pointed out, not all developers are transparent about potential risks to users who download or depend on their outdated NPM packages. Aqua Nautilus researchers kicked off their analysis after finding that one open-source software maintainer responded to a vulnerability report from Nautilus by archiving the vulnerable repository that same day. By archiving the repository without fixing the security flaw or assigning it a CVE, the owner leaves developers of dependent projects in the dark about the risks, the researchers said... Taking into consideration both deprecated packages and active packages that have a direct dependency on deprecated projects, the researchers found about 4,100 (8.2%) of the top 50,000 most-downloaded NPM packages fell under the category of "official" deprecation. However, adding archived repositories to the definition of "deprecated" increased the number of packages affected by deprecation and deprecated dependencies to 6,400 (12.8%)... 
Including packages with linked repositories that are shown as unavailable (404 error) on GitHub increases the deprecation rate to 15% (7,500 packages), according to the Nautilus analysis. Encompassing packages without any linked repository brings the final number of deprecated packages to 10,600, or 21.2% of the top 50,000. Team Nautilus estimated that under this broader understanding of package deprecation, about 2.1 billion downloads of deprecated packages are made on the NPM registry weekly.
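The "official" deprecation category such a tool checks first is machine-readable: the NPM registry records a `deprecated` message on each deprecated version's metadata. Here is a minimal sketch of that check against registry-style metadata; the dictionary below is a hand-made example, not real registry output.

```python
# Minimal sketch of detecting "officially" deprecated NPM dependencies.
# The npm registry marks a deprecated release with a "deprecated" string on
# the version object; fake_registry below is illustrative sample data only.
def find_deprecated(dependencies, registry):
    """Return names of direct dependencies whose latest version is deprecated."""
    flagged = []
    for name in dependencies:
        meta = registry.get(name, {})
        latest = meta.get("dist-tags", {}).get("latest")
        version = meta.get("versions", {}).get(latest, {})
        if "deprecated" in version:
            flagged.append(name)
    return flagged

fake_registry = {
    "left-pad": {
        "dist-tags": {"latest": "1.3.0"},
        "versions": {"1.3.0": {"deprecated": "use String.prototype.padStart()"}},
    },
    "lodash": {
        "dist-tags": {"latest": "4.17.21"},
        "versions": {"4.17.21": {}},
    },
}

print(find_deprecated(["left-pad", "lodash"], fake_registry))
```

The harder categories in the study — archived or missing GitHub repositories — require an extra lookup against the repository URL, which this sketch omits.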

Read more of this story at Slashdot.

Categories: Linux fréttir

Billy Mitchell and Twin Galaxies Settle Lawsuits On Donkey Kong World Records

Slashdot - Sat, 2024-01-20 21:34
"What happens when a loser who needs to win faces a winner who refuses to lose?" That was the tagline for the iconic 2007 documentary The King of Kong: A Fistful of Quarters, chronicling a middle-school teacher's attempts to take the Donkey Kong record from reigning world champion Billy Mitchell. "Billy Mitchell always has a plan," says Billy Mitchell in the movie (who is also shown answering his phone, "World Record Headquarters. Can I help you?") By 1985, 30-year-old Mitchell was already listed in the "Guinness Book of World Records" for having the world's highest scores for Pac-Man, Ms. Pac-Man, Donkey Kong, Donkey Kong, Jr., Centipede, and Burger Time. But then, NME reports... In 2018, a number of Mitchell's Donkey Kong high-scores were called into question by a fellow gamer, who supplied a string of evidence on the Twin Galaxies forums suggesting Mitchell had used an emulator to break the records, rather than the official, unmodified hardware that's typically required to keep things fair. [Twin Galaxies is Guiness World Records' official source for videogame scores.] Following "an independent investigation," Mitchell's hi-scores were removed from video game database Twin Galaxies as well as the Guinness Book Of Records, though the latter reversed the decision in 2020. Forensic analysts also accused him of cheating in 2022 but Mitchell has fought the accusations ever since. This week, 58-year-old Billy Mitchell posted an announcement on X. "Twin Galaxies has reinstated all of my world records from my videogame career... I am relieved and satisfied to reach this resolution after an almost six-year ordeal and look forward to pursuing my unfinished business elsewhere. Never Surrender, Billy Mitchell." X then wrote below the announcement, "Readers added context they thought people might want to know... Twin Galaxies has only reinstated Michell's scores on an archived leaderboard, where rules were different prior to TG being acquired in 2014. 
His score remains removed from the current leaderboard where he continues to be ineligible by today's rules." The statement from Twin Galaxies says they'd originally believed they'd seen "a demonstrated impossibility of original, unmodified Donkey Kong arcade hardware" in a recording of one of Billy's games. As punishment they'd then invalidated every record he'd ever set in his life. But now an engineer (qualified as an expert in federal courts) says aging components in the game board could've produced the same visual artifacts seen in the videotape of the disputed game. Consistent with Twin Galaxies' dedication to the meticulous documentation and preservation of video game score history, Twin Galaxies shall heretofore reinstate all of Mr. Mitchell's scores as part of the official historical database on Twin Galaxies' website. Additionally, upon closing of the matter, Twin Galaxies shall permanently archive and remove from online display the dispute thread... as well as all related statements and articles. NME adds: Twin Galaxies' lawyer David Tashroudian told Ars Technica that the company had all its "ducks in a row" for a legal battle with Mitchell but "there were going to be an inordinate amount of costs involved, and both parties were facing a lot of uncertainty at trial, and they wanted to get the matter settled on their own terms." And the New York Times points out that while Billy scored 1,062,800 in that long-ago game, "The vigorous long-running and sometimes bitter dispute was over marks that have long since been surpassed. The current record, as reported by Twin Galaxies, belongs to Robbie Lakeman. It's 1,272,800." Thanks to long-time Slashdot reader UnknowingFool for sharing the news.

Read more of this story at Slashdot.

Categories: Linux fréttir

Rust-Written Linux Scheduler Continues Showing Promising Results For Gaming

Slashdot - Sat, 2024-01-20 20:34
"A Canonical engineer has been experimenting with implementing a Linux scheduler within the Rust programming language..." Phoronix reported Monday, "that works via sched_ext for implementing a scheduler using eBPF that can be loaded during run-time." The project was started "just for fun" over Christmas, according to a post on X by Canonical-based Linux kernel engineer Andrea Righi, adding "I'm pretty shocked to see that it doesn't just work, but it can even outperform the default Linux scheduler (EEVDF) with certain workloads (i.e., gaming)." Phoronix notes the a YouTube video accompanying the tweet shows "a game with the scx_rustland scheduler outperforming the default Linux kernel scheduler while running a parallel kernel build in the background." "For sure the build takes longer," Righi acknowledged in a later post. "This scheduler doesn't magically makes everything run faster, it simply prioritizes more the interactive workloads vs CPU-intensive background jobs." Righi followed up by adding "And the whole point of this demo was to prove that, despite the overhead of running a scheduler in user-space, we can still achieve interesting performance, while having the advantages of being in user-space (ease of experimentation/testing, reboot-less updates, etc.)" Wednesday Righi added some improvements, posting that "Only 19 lines of code (comments included) for ~2x performance improvement on SMT isn't bad... and I spent my lunch break playing Counter Strike 2 to test this patch..." And work seems to be continuing, judging by a fresh post from Righi on Thursday. "I fixed virtme-ng to run inside Docker and used it to create a github CI workflow for sched-ext that clones the latest kernel, builds it and runs multiple VMs to test all the scx schedulers. And it does that in only ~20min. I'm pretty happy about virtme-ng now."

Read more of this story at Slashdot.

Categories: Linux fréttir

Revolutionary 'LEGO-Like' Photonic Chip Paves Way For Semiconductor Breakthroughs

Slashdot - Sat, 2024-01-20 19:34
"Researchers at the University of Sydney Nano Institute have developed a small silicon semiconductor chip that combines electronic and photonic (light-based) elements," reports SciTechDaily. "This innovation greatly enhances radio-frequency (RF) bandwidth and the ability to accurately control information flowing through the unit." Expanded bandwidth means more information can flow through the chip and the inclusion of photonics allows for advanced filter controls, creating a versatile new semiconductor device. Researchers expect the chip will have applications in advanced radar, satellite systems, wireless networks, and the roll-out of 6G and 7G telecommunications and also open the door to advanced sovereign manufacturing. It could also assist in the creation of high-tech value-add factories at places like Western Sydney's Aerotropolis precinct. The chip is built using an emerging technology in silicon photonics that allows the integration of diverse systems on semiconductors less than 5 millimeters wide. Pro-Vice-Chancellor (Research) Professor Ben Eggleton, who guides the research team, likened it to fitting together Lego building blocks, where new materials are integrated through advanced packaging of components, using electronic 'chiplets'.... Dr Alvaro Casas Bedoya, Associate Director for Photonic Integration in the School of Physics, who led the chip design, said the unique method of heterogeneous materials integration has been 10 years in the making. "The combined use of overseas semiconductor foundries to make the basic chip wafer with local research infrastructure and manufacturing has been vital in developing this photonic integrated circuit," he said. "This architecture means Australia could develop its own sovereign chip manufacturing without exclusively relying on international foundries for the value-add process...." 
The photonic circuit in the chip means a device with an impressive 15 gigahertz bandwidth of tunable frequencies with spectral resolution down to just 37 megahertz, which is less than a quarter of one percent of the total bandwidth.
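The "less than a quarter of one percent" figure follows directly from the two numbers quoted, as a quick check shows:

```python
# Sanity check of the resolution-to-bandwidth ratio quoted in the article.
bandwidth_hz = 15e9     # 15 GHz of tunable bandwidth
resolution_hz = 37e6    # 37 MHz spectral resolution

fraction = resolution_hz / bandwidth_hz
print(f"resolution is {fraction:.4%} of the total bandwidth")
```

That works out to roughly 0.25%, matching the claim.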

Read more of this story at Slashdot.

Categories: Linux fréttir

'For Truckers Driving EVs, There's No Going Back'

Slashdot - Sat, 2024-01-20 18:34
The Washington Post looks at "a small but growing group of commercial medium-to-heavy-duty truck drivers who use electric trucks." "These drivers — many of whom operate local or regional routes that don't require hundreds of miles on the road in a day — generally welcome the transition to electric, praising their new trucks' handling, acceleration, smoothness and quiet operation." "Everyone who has had an EV has no aspirations to go back to diesel at this point," said Khari Burton, who drives an electric Volvo VNR in the Los Angeles area for transport company IMC. "We talk about it and it's all positivity. I really enjoy the smoothness ... and just the quietness as well." Mike Roeth, the executive director of the North American Council for Freight Efficiency, said many drivers have reported that the new vehicles are easier on their bodies — thanks to less rocking of the cab, assisted steering, and the quieter motor. "Part of my hypothesis is that it will help truck driver retention," he said. "We're seeing people who would retire driving a diesel truck now working more years with an electric truck." Most of the electric trucks on the road today are doing local or regional routes, which are easier to manage with a truck that gets only up to 250 miles of range... Trucking advocates say electric has a long way to go before it can take on longer routes. "If you're running very local, very short mileage, there may be a vehicle that can do that type of route," said Mike Tunnell, the executive director of environmental affairs for the American Trucking Association. "But for the average haul of 400 miles, there's just nothing that's really practical today." There are other concerns, according to the article. "[S]ome companies and trucking associations worry this shift, spurred in part by a California law mandating a switch to electric or emissions-free trucks by 2042, is happening too fast. 
While electric trucks might work well in some cases, they argue, the upfront costs of the vehicles and their charging infrastructure are often too heavy a lift." But this is probably the key sentence in the article: For the United States to meet its climate goals, virtually all trucks must be zero-emissions by 2050. While trucks are only 4 percent of the vehicles on the road, they make up almost a quarter of the country's transportation emissions. The article cites estimates that right now there are 12.2 million trucks on America's highways — and barely more than 0.1% (13,000) are electric. "Around 10,000 of those trucks were just put on the road in 2023, up from 2,000 the year before." (And they add that Amazon alone has thousands of Rivian's electric delivery vans, operating in 1,800 cities.) But the article's overall message seems to be that when it comes to the trucks, "the drivers operating them say they love driving electric." And it includes comments from actual truckers: 49-year-old Frito-Lay trucker Gary LaBush: "I was like, 'What's going on?' There was no noise — and no fumes... it's just night and day." 66-year-old Marty Boots: "Diesel was like a college wrestler. And the electric is like a ballet dancer... You get back into diesel and it's like, 'What's wrong with this thing?' Why is it making so much noise? Why is it so hard to steer?"

Read more of this story at Slashdot.

Categories: Linux fréttir

James Webb Telescope Detects Earliest Known Black Hole

Slashdot - Sat, 2024-01-20 17:34
The Hubble Space Telescope's discovery of GN-z11 in 2016 marked it as the most distant galaxy known at that time, notable for its unexpected luminosity despite its ancient formation just 400 million years after the Big Bang. Now, in a paper published in Nature, astrophysicist Roberto Maiolino proposes that this brightness could be due to a supermassive black hole, challenging current understanding of early black hole formation and growth. NPR reports: This wasn't just any black hole. First -- assuming that the black hole started out small -- it could be devouring matter at a ferocious rate. And it would have needed to do so to reach its massive size. "This black hole is essentially eating the [equivalent of] an entire Sun every five years," says Maiolino. "It's actually much higher than we thought could be feasible for these black holes." Hence the word "vigorous" in the paper's title. Second, the black hole is 1.6 million times the mass of our Sun, and it was in place just 400 million years after the dawn of the universe. "It is essentially not possible to grow such a massive black hole so fast so early in the universe," Maiolino says. "Essentially, there is not enough time according to classical theories. So one has to invoke alternative scenarios." Here's scenario one -- rather than starting out small, perhaps supermassive black holes in the early universe were simply born big due to the collapse of vast clouds of primordial gas. Scenario two is that maybe early stars collapsed to form a sea of smaller black holes, which could have then merged or swallowed matter way faster than we thought, causing the resulting black hole to grow quickly. Or perhaps it's some combination of both. In addition, it's possible that this black hole is harming the growth of the galaxy GN-z11. That's because black holes radiate energy as they feed. At such a high rate of feasting, this energy could sweep away the gas of the host galaxy. 
And since stars are made from gas, it could quench star formation, slowly strangling the galaxy. Not to mention that without gas, the black hole wouldn't have anything to feed on and it too would die.
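The "not enough time" problem can be sketched with standard numbers not given in the article: Eddington-limited accretion grows a black hole by a factor of e roughly every 45 million years (the Salpeter time, assuming ~10% radiative efficiency), and a ~100-solar-mass stellar-remnant seed is a conventional starting point. Both of those are textbook assumptions, not figures from the paper.

```python
import math

# Rough sketch of the "not enough time" argument for GN-z11's black hole.
# The ~45 Myr e-folding (Salpeter) time and the 100-solar-mass seed are
# textbook assumptions, NOT figures from the article.
salpeter_time_myr = 45.0     # e-folding time for Eddington-limited growth
seed_mass = 100.0            # assumed stellar-remnant seed, in solar masses
final_mass = 1.6e6           # observed mass, from the article
age_available_myr = 400.0    # time since the Big Bang, from the article

e_folds = math.log(final_mass / seed_mass)
growth_time_myr = e_folds * salpeter_time_myr
print(f"needs ~{growth_time_myr:.0f} Myr of Eddington-limited growth, "
      f"but only {age_available_myr:.0f} Myr have elapsed")
```

Under these assumptions the required growth time exceeds the age of the universe at that epoch, which is why Maiolino invokes heavier seeds or faster-than-Eddington feeding.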

Read more of this story at Slashdot.

Categories: Linux fréttir

Ceph: a Journey To 1 TiB/s

Slashdot - Sat, 2024-01-20 16:34
It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE. And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production begins with Clyso assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness... Ultimately they decided to go with a Dell architecture we designed, which quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery.... The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. 
Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full... For over a week, we looked at everything from bios settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track. It's a long blog post, but here's where it ends up: Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states." Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... 
I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU." Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled." The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950GiB/s — then tried some more performance optimizations...
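The first two fixes boil down to host tuning that can be verified from the kernel command line. The sketch below checks for a few common boot parameters that cap C-states or disable the IOMMU; the exact flag names vary by platform and vendor, and these examples are illustrative rather than taken from the blog post. (Note that disabling the IOMMU has security and virtualization trade-offs.)

```python
# Illustrative check for the two system-level fixes described above.
# C-states and the IOMMU are controlled by kernel boot parameters; the flag
# names below are common examples (platform-dependent), not from the post.
def check_cmdline(cmdline):
    flags = cmdline.split()
    return {
        "c-states capped": "intel_idle.max_cstate=0" in flags
                           or "processor.max_cstate=1" in flags,
        "iommu disabled": "iommu=off" in flags
                          or "intel_iommu=off" in flags
                          or "amd_iommu=off" in flags,
    }

# On a live host you would read the real thing:
#   with open("/proc/cmdline") as f: cmdline = f.read()
example = "BOOT_IMAGE=/vmlinuz root=/dev/nvme0n1p2 intel_idle.max_cstate=0 iommu=off"
print(check_cmdline(example))
```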

Read more of this story at Slashdot.

Categories: Linux fréttir

S&P 500 Index Sets Record High, Thanks to 'AI-Driven Frenzy' and Tech Stocks

Slashdot - Sat, 2024-01-20 15:34
The S&P 500 index tracks 500 of the largest companies listed on U.S. stock exchanges, according to Wikipedia. And Friday that index "hit an all-time closing high," reports the Washington Post, "reflecting the staggering gains of a coterie of Big Tech firms against the backdrop of a surprisingly stable economy." The broad-based index closed at 4,839.81 — up more than 1 percent for the day — surpassing the previous closing record set in January of 2022. The stock market surged upward in the final quarter of 2023 as evidence gathered that the [U.S.] economy has not tipped into recession territory, despite the Federal Reserve's campaign to raise interest rates. At the same time analysts point to an AI-driven frenzy on Wall Street that rivals the dot-com boom of the late '90s, when investors sought to capitalize on the transformative gains brought by the early internet. A booming S&P 500 is a welcome sign for the millions of Americans who invest in the index through retirement accounts. Investors in 2022 had about $5.7 trillion in assets passively indexed to the S&P 500 and another $5.7 trillion in funds that use it as a benchmark comparison, according to S&P Global. Voters' feelings about the stock market and economy could affect the 2024 election... Tech companies, including a few names heavily associated with artificial intelligence work, led the S&P 500's gains. Seven of the largest tech stocks known as the "Magnificent Seven" — Apple, Microsoft, Alphabet, Amazon, Nvidia, Tesla and Meta — increased 75 percent on average in 2023 and represented 30 percent of the index's total market value at the end of 2023. "AI is the new dot-com," said Michael Farr of Farr, Miller and Washington. "It's the new magic that is going to change the world that we don't really understand yet. But we all understand it's very powerful." Those seven stocks made up around half of the S&P 500's growth last year. 
Nvidia, whose high-performance chips have become popular for AI uses, had the best year of the bunch, at one point gaining nearly $190 billion in value overnight, a 24 percent gain. In the last 12 months, the index has risen 21.83%. The article notes that "Although the rest of the market has lagged Big Tech, analysts say promising economic data from recent months has boosted optimism about the broader economy."

Read more of this story at Slashdot.

Categories: Linux fréttir

Can an AI Become Its Own CEO After Creating a Startup? Google DeepMind Co-Founder Thinks So

Slashdot - Sat, 2024-01-20 13:00
An anonymous reader quotes a report from Inc. Magazine: Google's DeepMind division has long led the way on all sorts of AI breakthroughs, grabbing headlines in 2016, when one of its systems beat a world champion at the strategy game Go, then seen as an unlikely feat. So when one of DeepMind's co-founders makes a pronouncement about the future of AI, it's worth listening, especially if you're a startup entrepreneur. AI might be coming for your job! Mustafa Suleyman, co-founder of DeepMind and now CEO of Inflection AI -- a small, California-based machine intelligence company -- recently suggested this possibility could be reality in a half-decade or so. At the World Economic Forum meeting at Davos this week, Suleyman said he thinks AI tech will soon reach the point where it could dream up a company, project-manage it, and successfully sell products. This still-imaginary AI-ntrepreneur will certainly be able to do so by 2030. He's also sure that these AI powers will be "widely available" for "very cheap" prices, potentially even as open-source systems, meaning some aspects of these super smart AIs would be free. Whether an AI entrepreneur could actually beat a human at the startup game is something we'll have to wait to find out, but the mere fact that Suleyman is saying an AI could carry out the role is stunning. It's also controversial, and likely tangled in a forest of thorny legal matters. For example, there's the tricky issue of whether an AI can own or patent intellectual property. A recent ruling in the U.K. argues that an AI definitively cannot be a patent holder. Underlining how much of all of this is theoretical, Suleyman's musings about AI entrepreneurs came from an answer to a question about whether AIs can pass the famous Turing test. This is sometimes considered a gold standard for AI: If a real artificial general intelligence (AGI) can fool a human into thinking that it too is a human. 
Cunningly, Suleyman twisted the question around, and said the traditional Turing test wasn't good enough. Instead, he argued a better test would be to see if an AGI could perform sophisticated tasks like acting as an entrepreneur. No matter how theoretical Suleyman's thinking is, it will unsettle critics who worry about the destructive potential of AI, and it may worry some in the venture capital world, too. How exactly would one invest in a startup with a founder that's just a pile of silicon chips? Even Suleyman said he thinks that this sort of innovation would cause a giant economic upset.

How artists can poison their pics with deadly Nightshade to deter AI scrapers

TheRegister - Sat, 2024-01-20 10:05
Models will need to swallow a lot of it, mind you

University of Chicago boffins this week released Nightshade 1.0, a tool built to punish unscrupulous makers of machine learning models who train their systems on data without getting permission first.…

SpaceX's 'Dragon' Capsule Carries Four Private Astronauts to the ISS for Axiom Space

Slashdot - Sat, 2024-01-20 10:00
"It's the third all-private astronaut mission to the International Space Station," writes NASA — and they're expected to start boarding within the next hour! Watch it all on the official stream of NASA TV. More details from Ars Technica: The four-man team lifted off from NASA's Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket Thursday, kicking off a 36-hour pursuit of the orbiting research laboratory. Docking is scheduled for Saturday morning. This two-week mission is managed by Houston-based Axiom Space, which is conducting private astronaut missions to the ISS as a stepping stone toward building a fully commercial space station in low-Earth orbit by the end of this decade. Axiom's third mission, called Ax-3, launched at 4:49 pm EST (21:49 UTC) Thursday. The four astronauts were strapped into their seats inside SpaceX's Dragon Freedom spacecraft atop the Falcon 9 rocket. This is the 12th time SpaceX has launched a human spaceflight mission, and could be the first of five Dragon crew missions this year. NASA reports that the crew "will spend about two weeks conducting microgravity research, educational outreach, and commercial activities aboard the space station." NASA Administrator Bill Nelson said "During their time aboard the International Space Station, the Ax-3 astronauts will carry out more than 30 scientific experiments that will help advance research in low-Earth orbit. As the first all-European commercial astronaut mission to the space station, the Ax-3 crew is proof that the possibility of space unites us all...." The Dragon spacecraft will dock autonomously to the forward port of the station's Harmony module as early as 4:19 a.m. [EST] Saturday. Hatches between Dragon and the station are expected to open after 6 a.m. [EST], allowing the Axiom crew to enter the complex for a welcoming ceremony and start their stay aboard the orbiting laboratory.... 
The Ax-3 astronauts are expected to depart the space station Saturday, February 3, pending weather, for a return to Earth and splashdown at a landing site off the coast of Florida.

Water Ice Buried At Mars' Equator Is Over 2 Miles Thick

Slashdot - Sat, 2024-01-20 07:00
Keith Cooper reports via Space.com: A European Space Agency (ESA) probe has found enough water to cover Mars in an ocean between 4.9 and 8.9 feet (1.5 and 2.7 meters) deep, buried in the form of dusty ice beneath the planet's equator. The finding was made by ESA's Mars Express mission, a veteran spacecraft that has been engaged in science operations around Mars for 20 years now. While it's not the first time that evidence for ice has been found near the Red Planet's equator, this new discovery is by far the largest amount of water ice detected there so far and appears to match previous discoveries of frozen water on Mars. "Excitingly, the radar signals match what we expect to see from layered ice and are similar to the signals we see from Mars' polar caps, which we know to be very ice rich," said lead researcher Thomas Watters of the Smithsonian Institution in the United States in an ESA statement. The deposits are thick, extending 3.7 km (2.3 miles) underground, and are topped by a crust of hardened ash and dry dust hundreds of meters thick. The ice is not a pure block but is heavily contaminated by dust. While its location near the equator is more easily accessible to future crewed missions, being buried so deep means that reaching the water ice would be difficult.
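As a rough sanity check on the quoted figures (a sketch only; the article gives just the equivalent ocean depths, and the mean Mars radius of about 3,389.5 km is an assumed value, not from the article), the depths translate into water volumes like so:

```python
import math

MARS_RADIUS_M = 3_389_500  # assumed mean Mars radius, ~3,389.5 km
surface_area_m2 = 4 * math.pi * MARS_RADIUS_M ** 2  # ~1.44e14 m^2

for depth_m in (1.5, 2.7):  # ESA's quoted equivalent ocean depths
    volume_km3 = surface_area_m2 * depth_m / 1e9  # m^3 -> km^3
    # roughly 2.2e5 km^3 at 1.5 m and 3.9e5 km^3 at 2.7 m
    print(f"A {depth_m} m global ocean holds about {volume_km3:,.0f} km^3 of water")
```

That is hundreds of thousands of cubic kilometers of ice, which helps explain why a dusty deposit up to 3.7 km thick under the Medusae Fossae region is significant even though it is hard to reach.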

US Government Opens 22 Million Acres of Federal Lands To Solar

Slashdot - Sat, 2024-01-20 03:30
An anonymous reader quotes a report from Electrek: The Biden administration has updated the roadmap for solar development to 22 million acres of federal lands in the US West. The Bureau of Land Management (BLM) and the Department of Energy's National Renewable Energy Laboratory have determined that 700,000 acres of federal lands will be needed for solar farms over the next 20 years, so BLM recommended 22 million acres to give "maximum flexibility" to help the US reach its net zero by 2035 power sector goal. The plan is an update of the Bureau of Land Management's 2012 Western Solar Plan, which originally identified areas for solar development in six states -- Arizona, California, Colorado, Nevada, New Mexico, and Utah. The updated roadmap refines the analysis in the original six states and expands to five more states -- Idaho, Montana, Oregon, Washington, and Wyoming. It also focuses on lands within 10 miles of existing or planned transmission lines and moves away from lands with sensitive resources. [...] BLM under the Biden administration has approved 47 clean energy projects and permitted 11,236 megawatts (MW) of wind, solar, and geothermal energy on public lands, enough to power more than 3.5 million homes. Ben Norris, vice president of regulatory affairs at the Solar Energy Industries Association (SEIA), said in response to BLM's announced Western Solar Plan updates: "The proposal ... identifies 200,000 acres of land near transmission infrastructure, helping to correct an important oversight and streamline solar development. Under the current policy, there are at least 80 million acres of federal lands open to oil and gas development, which is 100 times the amount of public land available for solar. BLM's proposal is a big step in the right direction and recognizes the key role solar plays in our energy economy."
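The "3.5 million homes" claim can be sanity-checked with a quick back-of-envelope calculation (a sketch only: it uses nameplate capacity and ignores the very different capacity factors of wind, solar, and geothermal):

```python
permitted_mw = 11_236      # MW of wind, solar, geothermal permitted (from the article)
homes = 3_500_000          # homes powered (from the article)

kw_per_home = permitted_mw * 1_000 / homes  # ~3.2 kW of installed capacity per home
print(f"~{kw_per_home:.1f} kW of nameplate capacity per home")
```

Roughly 3 kW of installed capacity per home is a plausible planning figure for average US residential demand once capacity factors are averaged in, so the claim is at least in the right ballpark.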

Microsoft Executive Emails Hacked By Russian Intelligence Group, Company Says

Slashdot - Sat, 2024-01-20 02:02
In a regulatory filing today, Microsoft said that a Russian intelligence group hacked into some of the company's top executives' email accounts. CNBC reports: Nobelium, the same group that breached government supplier SolarWinds in 2020, carried out the attack, which Microsoft detected last week, according to the company. The announcement comes after new U.S. requirements for disclosing cybersecurity incidents went into effect. A Microsoft spokesperson said that while the company does not believe the attack had a material impact, it still wanted to honor the spirit of the rules. In late November, the group accessed "a legacy non-production test tenant account," Microsoft's Security Response Center wrote in the blog post. After gaining access, the group "then used the account's permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents," the corporate unit wrote. The company's senior leadership team, including finance chief Amy Hood and president Brad Smith, regularly meets with CEO Satya Nadella. Microsoft said it has not found signs that Nobelium had accessed customer data, production systems or proprietary source code. The U.S. government and Microsoft consider Nobelium to be part of the Russian foreign intelligence service SVR. The hacking group was responsible for one of the most prolific breaches in U.S. history when it added malicious code to updates to SolarWinds' Orion software, which some U.S. government agencies were using. Microsoft itself was ensnared in the hack. Nobelium, also known as APT29 or Cozy Bear, is a sophisticated hacking group that has attempted to breach the systems of U.S. allies and the Department of Defense. Microsoft also uses the name Midnight Blizzard to identify Nobelium. 
It was also implicated alongside another Russian hacking group in the 2016 breach of the Democratic National Committee's systems.

Zuckerberg wants to build artificial general intelligence with 350K Nvidia H100 GPUs

TheRegister - Sat, 2024-01-20 01:58
Maybe the AGI can finish that Metaverse, haha – oh wait, they're serious

Facebook supremo Mark Zuckerberg is redirecting Meta-wide efforts to build artificial general intelligence and wants to secure a whopping 350,000 or more Nvidia H100 GPUs by the end of the year to make that happen.…

The Rabbit R1 Will Offer Up-To-Date Answers Powered By Perplexity's AI

Slashdot - Sat, 2024-01-20 01:25
Despite many questions going unanswered, a startup called Rabbit sold out of its pocket AI companion a day after its debut at CES 2024 last week. Now the company has finally shared more details about which large language model (LLM) will power the device. According to Engadget, the provider in question is Perplexity, "a San Francisco-based startup with ambitions to overtake Google in the AI space." From the report: Perplexity will be providing up-to-date search results via Rabbit's $199 orange brick -- without the need for any subscription. That said, the first 100,000 R1 buyers will receive one year of Perplexity Pro subscription -- normally costing $200 -- for free. This advanced service adds file upload support, a daily quota of over 300 complex queries and the ability to switch to other AI models (GPT-4, Claude 2.1 or Gemini), though these don't necessarily apply to the R1's use case.

Now OpenAI CEO Sam Altman wants billions for AI chip fabs

TheRegister - Sat, 2024-01-20 00:59
All those neural network weights aren't much without ample decent silicon

OpenAI CEO Sam Altman is reportedly seeking billions of dollars in capital to build out a network of AI chip fabs.…
