Linux fréttir
Brace yourselves Britain, PM Keir Starmer's challenged his teams: 'show me how they can use AI'
Britain's beefiest supercomputer, Isambard-AI, is set to become fully operational this summer, as the government steps up its strategy to push AI everywhere as a driver for economic recovery.…
Big expensive Moon rockets = good. Science = yeah, whatever
While US President Donald Trump and his former best pal, Elon Musk, were having a very public spat, the US Senate fired back with its response to NASA's proposed budget cuts. Big rockets = good. Science = still bad.…
A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed but at the time presented a privacy issue in which even hackers with relatively few resources could have brute forced their way to peoples' personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email.
[...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number.
Brutecat said in an email the brute forcing takes around one hour for a U.S. number, or 8 minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, which ends up with the target not being notified of the ownership switch. Using some custom code, which they detailed in their write-up, brutecat then barrages Google with guesses of the phone number until getting a hit.
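As a generic illustration (not brutecat's actual code, and with an entirely hypothetical oracle function standing in for whatever endpoint was abused), brute forcing a numeric value is just exhaustive enumeration over the digit space, which is why shorter national number formats fall faster:

```python
from itertools import product

def brute_force_number(prefix, suffix_len, oracle):
    """Enumerate every digit suffix after a known prefix, asking the
    oracle (a stand-in for some validation check that leaks yes/no)
    whether each candidate matches. Returns the first hit, or None."""
    for digits in product("0123456789", repeat=suffix_len):
        candidate = prefix + "".join(digits)
        if oracle(candidate):
            return candidate
    return None

# Toy oracle: pretend the secret number is known only to the check.
secret = "+15551234"
found = brute_force_number("+1555", 4, lambda n: n == secret)
print(found)  # +15551234
```

The search space grows tenfold per unknown digit, so the differing run times quoted above (an hour for a US number, minutes for a UK one) simply reflect how many digits remain unknown once country and area prefixes are fixed.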
Read more of this story at Slashdot.
An anonymous reader shares a report: Meta is in talks to make a multibillion-dollar investment into AI startup Scale AI, according to people familiar with the matter. The financing could exceed $10 billion in value, some of the people said, making it one of the largest private company funding events of all time.
[...] Scale AI, whose customers include Microsoft and OpenAI, provides data labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The startup was last valued at about $14 billion in 2024, in a funding round that included backing from Meta and Microsoft.
Read more of this story at Slashdot.
Christian Klein sees little benefit from trying to compete with the dominant hyperscalers
The leader of Europe's most valuable company says there is no point in the continent building datacenters to try to compete with US cloud hyperscalers which have already invested in the region.…
Apple researchers have found that state-of-the-art "reasoning" AI models like OpenAI's o3-mini, Gemini (with thinking mode enabled), Claude 3.7, and DeepSeek-R1 face complete performance collapse [PDF] beyond certain complexity thresholds when tested on controllable puzzle environments. The finding raises questions about the true reasoning capabilities of large language models.
The study, which examined models using Tower of Hanoi, checker jumping, river crossing, and blocks world puzzles rather than standard mathematical benchmarks, found three distinct performance regimes that contradict conventional assumptions about AI reasoning progress.
At low complexity levels, standard language models surprisingly outperformed their reasoning-enhanced counterparts while using fewer computational resources. At medium complexity, reasoning models demonstrated advantages, but both model types experienced complete accuracy collapse at high complexity levels. Most striking was the counterintuitive finding that reasoning models actually reduced their computational effort as problems became more difficult, despite operating well below their token generation limits.
Even when researchers provided explicit solution algorithms, requiring only step-by-step execution rather than creative problem-solving, the models' performance failed to improve significantly. The researchers noted fundamental inconsistencies in how models applied learned strategies across different problem scales, with some models successfully handling 100-move sequences in one puzzle type while failing after just five moves in simpler scenarios.
Read more of this story at Slashdot.
If gamers can have a slimline version of the OS, why not IT admins?
Microsoft just demonstrated it can put Windows 11 on a diet if it really wants to, with the announcement of PC gaming handhelds running a slimmed-down version of the operating system under the hood.…
Not to worry, nervous flyers, FAA vows to banish archaic systems... in a few years
The Federal Aviation Administration (FAA) has confirmed that the US air traffic control system still runs on somewhat antiquated bits of technology, including floppy disks and paper strips.…
Another tech biz to be Yanked from London Stock Exchange
Qualcomm has bid $2.4 billion to buy connectivity specialist Alphawave Semi. If approved, it will see yet another London Stock Exchange-listed tech biz put under the control of an overseas owner.…
Vox publishes a story about "the quiet revolutions that have prevented millions of cancer deaths..."
"The age-adjusted death rate in the US for cancer has declined by about a third since 1991, meaning people of a given age have about a third lower risk of dying from cancer than people of the same age more than three decades ago..."
The dramatic bend in the curve of cancer deaths didn't happen by accident — it's the compound interest of three revolutions. While anti-smoking policy has been the single biggest lifesaver, other interventions have helped reduce people's cancer risk. One of the biggest successes is the HPV vaccine. A study last year found that death rates of cervical cancer — which can be caused by HPV infections — in US women ages 20-39 had dropped 62 percent from 2012 to 2021, thanks largely to the spread of the vaccine. Other cancers have been linked to infections, and there is strong research indicating that vaccination can have positive effects on reducing cancer incidence.
The next revolution is better and earlier screening. It's generally true that the earlier cancer is caught, the better the chances of survival... According to one study, incidences of late-stage colorectal cancer in Americans over 50 declined by a third between 2000 and 2010 in large part because rates of colonoscopies almost tripled in that same time period. And newer screening methods, often employing AI or using blood-based tests, could make preliminary screening simpler, less invasive and therefore more readily available. If 20th-century screening was about finding physical evidence of something wrong — the lump in the breast — 21st-century screening aims to find cancer before symptoms even arise.
Most exciting of all are frontier developments in treating cancer... From drugs like lenalidomide and bortezomib in the 2000s, which helped double median myeloma survival, to the spread of monoclonal antibodies, real breakthroughs in treatments have meaningfully extended people's lives — not just by months, but years. Perhaps the most promising development is CAR-T therapy, a form of immunotherapy. Rather than attempting to kill the cancer directly, immunotherapies turn a patient's own T-cells into guided missiles. In a recent study of 97 patients with multiple myeloma, many of whom were facing hospice care, a third of those who received CAR-T therapy had no detectable cancer five years later. It was the kind of result that doctors rarely see.
The article begins with some recent quotes from Jon Gluck, who was told after a cancer diagnosis that he had as little as 18 months left to live — 22 years ago...
Read more of this story at Slashdot.
Big tech can't be bothered to fight crime. It can barely be bothered even to say so
Opinion A lot of our tech world is nightmarish, but sometimes this is literally true.…
SentinelOne discovered the campaign when the attackers tried to hit the security vendor's own servers
An IT services company, a European media group, and a South Asian government entity are among the more than 75 companies where China-linked groups have planted malware to access strategic networks should a conflict break out.…
The journey to mass production has been extraordinarily difficult – will it be worth it?
Feature Seagate says it has a clear way forward to 100 TB disk drives using 10 TB per platter technology, but HAMR tech is nearly 25 years old and full mass production is still not underway. What has been taking so long?…
Prospect union threatens to up campaign, raise dispute with CEO
Emotions are running high at BT over the Brit telco's refusal to "improve their derisory and insulting" pay offer to manager grade staff, according to John Ferrett, national secretary at union Prospect.…
The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines."
[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
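The "statistically informed guesses" framing can be made concrete with a toy sketch: a bigram model, a vastly simplified stand-in for an LLM, counts which word follows which in a corpus and then emits the most frequent successor. Real LLMs use neural networks over subword tokens rather than word counts, but the output mechanism is the same in kind: pick a likely next item, not a thought.

```python
from collections import Counter, defaultdict

# A tiny toy corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-frequency successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # cat
```

"the" is followed by "cat" twice but by "mat" and "fish" only once each, so "cat" wins; no notion of meaning is consulted at any point, which is the article's argument in miniature.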
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out:
The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."
Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....
The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.
Read more of this story at Slashdot.
If you like it to keep working, don’t put a ring on it
Who, Me? Reg readers are so dedicated it seems some of you are married to the job, although you also admit that no relationship is perfect when you send stories to Who, Me? It's the column in which we share your tales of making massive mistakes and somehow staying together with your employer afterwards.…
1.19MHz eight-bit CPU trounced modern GPUs – can you do better with your retro-tech?
The Atari 2600 gaming console came into the world in 1977 with an eight-bit processor that ran at 1.19MHz, and just 128 bytes of RAM – but that’s apparently enough power to beat ChatGPT at chess.…
"Replanting forests can help cool the planet even more than some scientists once believed, especially in the tropics," according to a recent announcement from the University of California, Riverside.
In a new modeling study published in Communications Earth & Environment, researchers at the University of California, Riverside, showed that restoring forests to their preindustrial extent could lower global average temperatures by 0.34 degrees Celsius. That is roughly one-quarter of the warming the Earth has already experienced. The study is based on an increase in tree area of about 12 million square kilometers, which is 135% of the area of the United States, and similar to estimates of the global tree restoration potential of 1 trillion trees. It is believed the planet has lost nearly half of its trees (about 3 trillion) since the onset of industrialized society.
The Washington Post noted that the researchers factored in how tree emissions interacted with molecules in the atmosphere, "encouraging cloud production, reflecting sunlight and cooling Earth's surface."
In a news release, the researchers acknowledge that full reforestation is not feasible... "Reforestation is not a silver bullet," Bob Allen, a professor of climatology at the University of California at Riverside and the paper's lead author, said in a news release. "It's a powerful strategy, but it has to be paired with serious emissions reductions."
Read more of this story at Slashdot.
PLUS: Hitachi turns greybeards into AI agents; Tiananmen anniversary censorship; AWS in Taiwan; and more!
China’s space agency has revealed its Tianwen 2 probe has unfurled a ‘solar wing’.…
A new study "adds a whole extra level of detail to our understanding of caffeine's impact on the brain during sleep," reports ScienceAlert:
Caffeine was shown to increase brain signal complexity, and shift the brain closer to a state of 'criticality', in tests run by researchers from the University of Montreal in Canada. This criticality refers to the brain being balanced between structure and flexibility, thought to be the most efficient state for processing information, learning, and making decisions. However, this state might prevent restful sleep, the researchers suggest. The caffeine isn't just keeping us alert, but actually changing how the brain is operating. What's more, they found younger adults aged 20 to 27 were more greatly affected in this way...
When it comes to the different reactions across different ages, the researchers suggest that changes in the brain as we age might be responsible. Adenosine molecules gradually build up in the brain during the day, leading to a greater feeling of fatigue as bedtime approaches. Caffeine works by blocking the receptors that adenosine interacts with, giving us a temporary jolt of energy. Adenosine receptors are more abundant in younger brains, which may explain why younger people seem to be more sensitive to caffeine's powers. That includes both the positive energizing effects, and the negative effects of keeping the brain too active overnight.
Read more of this story at Slashdot.