AI search biz insists its content capture and summarization is okay because someone asked for it
AI search biz Perplexity claims that Cloudflare has mischaracterized its site crawlers as malicious bots and that the content delivery network made technical errors in its analysis of Perplexity's operations.…
Nearly one-third of all retracted papers at PLoS ONE can be traced back to just 45 researchers who served as editors at the journal, an analysis of its publication records has found. Nature: The study, published in Proceedings of the National Academy of Sciences (PNAS), found that 45 editors handled only 1.3% of all articles published by PLoS ONE from 2006 to 2023, but that the papers they accepted accounted for more than 30% of the 702 retractions that the journal issued by early 2024.
Twenty-five of these editors also authored papers in PLoS ONE that were later retracted. The PNAS authors did not disclose the names of any of the 45 editors. But, by independently analysing publicly available data from PLoS ONE and the Retraction Watch database, Nature's news team has identified five of the editors who handled the highest number of papers that were subsequently retracted by the journal. Together, those editors accepted about 15% of PLoS ONE's retracted papers up to 14 July.
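A quick back-of-the-envelope check, sketched below using only the percentages and retraction count quoted above (not the underlying PNAS data), shows how disproportionate those editors' records were.

    # Rough arithmetic from the figures quoted above; purely illustrative.
    share_of_articles = 0.013     # fraction of PLoS ONE articles the 45 editors handled
    share_of_retractions = 0.30   # fraction of the journal's retractions among their papers
    total_retractions = 702

    retracted_via_editors = share_of_retractions * total_retractions   # ~211 papers
    enrichment = share_of_retractions / share_of_articles              # ~23x the baseline rate

    print(f"roughly {retracted_via_editors:.0f} retracted papers passed through these editors")
    print(f"their acceptances were retracted at about {enrichment:.0f}x the journal-wide rate")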
Read more of this story at Slashdot.
OpenAI has released two open-weight language models, marking the startup's first such release since GPT-2 in 2019. The models, gpt-oss-120b and gpt-oss-20b, can run locally on consumer devices and be fine-tuned for specific purposes. Both models use chain-of-thought reasoning approaches first deployed in OpenAI's o1 model and can browse the web, execute code, and function as AI agents.
The smaller 20-billion-parameter model runs on consumer devices with 16 GB of memory, while the larger gpt-oss-120b requires about 80 GB. OpenAI said the 120-billion-parameter model performs similarly to the company's proprietary o3 and o4-mini models. The models are available free on Hugging Face under the Apache 2.0 license, following safety testing that delayed a release first announced in March.
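For readers who want to try the smaller model locally, here is a minimal sketch using the Hugging Face Transformers pipeline; the repository id openai/gpt-oss-20b and the 16 GB figure are taken from the release as summarized above, so check the actual model card before relying on them.

    # Minimal sketch of running the 20B open-weight model locally with
    # Hugging Face Transformers; the repo id is assumed from the release notes.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="openai/gpt-oss-20b",
        device_map="auto",        # spread weights across available GPU/CPU memory
        torch_dtype="auto",
    )

    output = generator(
        "Summarize chain-of-thought reasoning in two sentences.",
        max_new_tokens=120,
    )
    print(output[0]["generated_text"])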
Read more of this story at Slashdot.
Tablets have also been growing for six straight quarters, says Canalys
Chromebook demand surged in the second quarter of 2025, Canalys reported Tuesday, but that doesn't necessarily mean permanent growth is on the horizon. …
The Government Accountability Office has issued reports criticizing the Department of Homeland Security, Environmental Protection Agency, and General Services Administration for failing to implement critical IT and cybersecurity recommendations.
DHS leads with 43 unresolved recommendations dating to 2018, including seven priority matters. The EPA has 11 outstanding items, including failures to submit FedRAMP documentation and conduct organization-wide cybersecurity risk assessments. GSA has four pending recommendations.
All three agencies failed to properly log cybersecurity events and to conduct required annual IT portfolio reviews. DHS's HART biometric program remains behind schedule without proper cost accounting or privacy controls, with all nine recommendations from 2023 still open.
Read more of this story at Slashdot.
Psst, wanna steal someone's biometrics?
Black Hat: Critical security flaws in Broadcom chips used in more than 100 models of Dell computers could allow attackers to take over tens of millions of users' devices, steal passwords, and access sensitive data, including fingerprint information, according to Cisco Talos.…
Wikipedia editors have adopted a policy enabling administrators to delete AI-generated articles without the standard week-long discussion period. Articles containing telltale LLM responses like "Here is your Wikipedia article on" or "Up to my last training update" now qualify for immediate removal.
Articles with fabricated citations -- nonexistent papers or unrelated sources such as beetle research cited in computer science articles -- also meet deletion criteria.
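The policy leans on surface artifacts of chatbot output, so a crude check is easy to illustrate; the sketch below uses the telltale phrases quoted above plus one hypothetical extra, and is not Wikipedia's actual tooling.

    # Illustrative phrase check only; not Wikipedia's real detection tooling.
    LLM_TELLTALES = [
        "here is your wikipedia article on",
        "up to my last training update",
        "as a large language model",   # hypothetical additional example
    ]

    def looks_ai_generated(article_text: str) -> bool:
        """Flag text containing an obvious leftover chatbot phrase."""
        lowered = article_text.lower()
        return any(phrase in lowered for phrase in LLM_TELLTALES)

    print(looks_ai_generated("Here is your Wikipedia article on the history of tea."))  # True
    print(looks_ai_generated("Tea is a drink made from Camellia sinensis leaves."))     # False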
Read more of this story at Slashdot.
CIOs at the EPA, DHS, and GSA are called out for failure to implement critical cybersecurity recommendations
The Government Accountability Office (GAO) scolded a trio of federal agencies on Monday because their CIOs haven't implemented IT-related recommendations designed to safeguard national cybersecurity. …
Palantir chief executive Alex Karp has told analysts and investors that the company treats Harvard, Princeton and Yale graduates the same as those without college degrees, calling employment at the data analytics firm "a new credential independent of class and background."
During the earnings call Monday where Palantir reported its first billion-dollar revenue quarter, Karp said university graduates come to the company after being "engaged in platitudes" and claimed workers without college degrees sometimes create more value with Palantir products than degree holders do. The company launched its Meritocracy Fellowship this spring to recruit talent outside traditional university pathways.
Read more of this story at Slashdot.
Some pinpointed software nasties but were suspicious of printer drivers too
Researchers from the Universities of Guelph and Waterloo have discovered exactly how users decide whether an application is legitimate or malware before installing it – and the good news is they're better than you might expect, at least when primed to expect malware.…
An anonymous reader shares a report: Microsoft has published a new video that appears to be the first in an upcoming series dubbed "Windows 2030 Vision," in which the company outlines its vision for the future of Windows over the next five years. It makes curious references to some potentially major changes on the horizon driven by AI.
This first episode features David Weston, Microsoft's Corporate Vice President of Enterprise & Security, who opens the video by saying "the world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS."
Right out of the gate, it sounds like he's teasing the potential for a radical new desktop UX made possible by agentic AI. Weston later continues, "I truly believe the future version of Windows and other Microsoft operating systems will interact in a multimodal way. The computer will be able to see what we see, hear what we hear, and we can talk to it and ask it to do much more sophisticated things."
Read more of this story at Slashdot.
Tools vendor targets 'creators who've never coded' but devs will be wary
IDE and developer tools vendor JetBrains has released a private preview of Kineto, an AI-driven no-code platform for creators and small businesses.…
AI meeting transcription software is inadvertently sharing private conversations with all meeting participants through automated summaries. WSJ found a series of mishaps that the people involved confirmed on the record.
Digital marketing agency owner Tiffany Lewis discovered her "Nigerian prince" joke about a potential client was included in the summary sent to that same client. Nashville branding firm Studio Delger received meeting notes documenting their discussion about "getting sandwich ingredients from Publix" and not liking soup when their client failed to appear. Communications agency coordinator Andrea Serra found her personal frustrations about a neighborhood Whole Foods and a kitchen mishap while making sweet potato recipes included in official meeting recaps distributed to colleagues.
Read more of this story at Slashdot.
New version season is near, and some of the big names are dropping x86-32 – but not this one
NetBSD 11 is taking shape and the code branch for the new release has been created.…
An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.
The news follows a July 30 Fast Company article which reported that "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The dataset of around 100,000 conversations gives a better sense of the scale of the problem and highlights some of the potential privacy risks of using any sharing feature in AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.
Read more of this story at Slashdot.
The U.S. Coast Guard determined that the implosion of the Titan submersible, which killed five people while traveling to the wreckage of the Titanic, was a preventable disaster caused by OceanGate Expeditions' failure to meet safety and engineering standards. WSJ: A 335-page report [PDF] detailing a two-year inquiry by the U.S. Coast Guard's Marine Board of Investigation found the company that owned and operated the Titan failed to follow maintenance and inspection protocols for the deep-sea submersible.
OceanGate avoided regulatory review and managed the submersible outside of standard protocols "by strategically creating and exploiting regulatory confusion and oversight challenges," the report said. The Coast Guard opened its highest-level investigation into the event in June 2023, shortly after the implosion occurred. "There is a need for stronger oversight and clear options for operators who are exploring new concepts outside of the existing regulatory framework," Jason Neubauer, the chair of the Coast Guard Marine Board of Investigation for the Titan submersible, said in a statement.
Read more of this story at Slashdot.
Wiz Research details flaws in Python backend that expose AI models and enable remote code execution
Security researchers have lifted the lid on a chain of high-severity vulnerabilities that could lead to remote code execution (RCE) on Nvidia's Triton Inference Server.…
An anonymous reader shares a report: In a landmark move, Illinois state lawmakers have passed a bill banning AI from acting as a standalone therapist and placing firm guardrails on how mental health professionals can use AI to support care. Governor JB Pritzker signed the bill into law on Aug. 1.
The legislation, dubbed the Wellness and Oversight for Psychological Resources Act, was introduced by Rep. Bob Morgan and makes one thing clear: only licensed professionals can deliver therapeutic or psychotherapeutic services to another human being. [...] Under the new state law, mental health providers are barred from using AI to independently make therapeutic decisions, interact directly with clients, or create treatment plans -- unless a licensed professional has reviewed and approved the output. The law also closes a loophole that allowed unlicensed persons to advertise themselves as "therapists."
Read more of this story at Slashdot.
Stricken spacecraft gives scientists the silent treatment
NASA has called it quits on attempts to contact its Lunar Trailblazer probe, notching up a failure in its low-cost, high-risk science program.…
For years, whistle-blowers have warned that fake results are sneaking into the scientific literature at an increasing pace. A new statistical analysis backs up the concern. From a report: A team of researchers found evidence of shady organizations churning out fake or low-quality studies on an industrial scale. And their output is rising fast, threatening the integrity of many fields.
"If these trends are not stopped, science is going to be destroyed," said LuÃs A. Nunes Amaral, a data scientist at Northwestern University and an author of the study, which was published in the Proceedings of the National Academy of Sciences on Monday. Science has made huge advances over the past few centuries only because new generations of scientists could read about the accomplishments of previous ones. Each time a new paper is published, other scientists can explore the findings and think about how to make their own discoveries. Fake scientific papers produced by commercial "paper mills" are doubling every year and a half, according to the report. Northwestern University researchers examined over one million papers and identified networks of fraudulent studies sold to scientists seeking to pad their publication records. The team estimates the actual scope of fraud may be 100 times greater than currently detected cases. Paper mills charge hundreds to thousands of dollars for fake authorship and often target specific research fields like microRNA cancer studies.
Read more of this story at Slashdot.