news aggregator
DuckDuckGo recently asked its users how they felt about AI in search. The answer has come back loud and clear: more than 90% of the 175,354 people who voted said they don't want it.
The privacy-focused search engine has since set up two versions of its tool: noai.duckduckgo.com for the AI-averse and yesai.duckduckgo.com for the curious. Users can also tweak settings on the main site to disable AI summaries, AI-generated images, and the Duck.ai chatbot individually.
Read more of this story at Slashdot.
Parent company Cognizant hit with multiple lawsuits
Thousands more Oregonians will soon receive data breach letters in the continued fallout from the TriZetto data breach, in which someone hacked the insurance verification provider and gained access to its healthcare provider customers across multiple US states.…
An anonymous reader shares a report: A hacking of the Nobel organization's computer systems is the most likely cause of last year's leak of Nobel Peace Prize laureate Maria Corina Machado's name, according to the results of an investigation [non-paywalled source]. An individual or a state actor may have illegally gained access in a cyber breach, the Norwegian Nobel Institute said on Friday after concluding an internal investigation assisted by security authorities.
The leak had triggered an unusual betting surge on Machado at the Polymarket platform hours before she was unveiled as the award recipient in October. The Venezuelan opposition leader hadn't previously been considered a favorite for the 2025 prize.
"We still think that the digital domain is the main suspect," said Kristian Berg Harpviken, director of the Oslo-based institute, an administrative arm of the Nobel Committee that awards the prize. The institute has decided against filing for a police investigation given "the absence of a clear theory," he said in an interview in Oslo.
Read more of this story at Slashdot.
Fewer humans, more bots - just in time for filing season
Tax season 2026 could be an interesting one as the IRS seeks to replace the staff it sent to the unemployment line with AI. Bots could handle tasks ranging from reviewing an org's request for tax-exempt status to processing amended individual filings.…
A new study [PDF] examining the United States between 1850 and 1920 found that expanded market access -- driven largely by railroad expansion -- made Americans more trusting of strangers and more outward-looking, but weakened family-based care for the vulnerable.
Researchers Max Posch of the University of Exeter and Itzchak Tzachi Raz of Hebrew University compared places and people gaining different levels of commercial connectivity. In better-connected regions, Americans became more likely to marry outside their local communities, and parents more likely to pick nationally common names for children. Trust toward others rose, as measured through language in local newspapers.
The researchers used multiple tests to rule out the possibility that these shifts simply reflected places getting richer. The cultural changes were concentrated among migrants in trade-exposed industries; workers in construction and entertainment showed no effect. But market access also meant orphans, the disabled, and the elderly became less likely to be cared for by relatives at home.
Read more of this story at Slashdot.
Apple's call-screening feature, introduced in iOS 26 last year, was designed to combat the more than 2 billion robocalls placed to Americans every month, but as WSJ is reporting, it is now creating friction for the rich and powerful who find themselves subjected to automated interrogation when dialing from unrecognized numbers.
The feature uses an automated voice to ask unknown callers for their names and reasons for calling, transcribes the responses, and lets recipients decide whether to answer -- essentially giving everyone a pocket-sized executive assistant.
Venture capitalist Bradley Tusk said his first reaction when encountering call screening is irritation, though he understands the necessity given the spam problem. Ben Schaechter, who runs cloud-cost management company Vantage, said the feature "dramatically changed my life" after his personal number ended up in founding paperwork and attracted endless sales calls.
Read more of this story at Slashdot.
The western US saw the most activity overall
Cloud storage firm Backblaze says that a sharp rise in AI-driven data traffic to neocloud operators may signal a shift from internet-style traffic patterns to large, high-bandwidth flows characteristic of large-scale model training and inference work.…
The UK government paid consulting firm PwC $5.65 million to build its new AI Skills Hub, a site meant to help 10 million workers gain AI skills by 2030 that functions largely as a bookmarking service, directing users to external training courses that already existed before the contract was awarded.
The hub links to platforms like Salesforce's free Trailhead learning system rather than offering original educational content. PwC has acknowledged the site does not fully meet accessibility standards. The platform also contains factual errors in its course on AI and intellectual property, which references "fair use" -- a legal doctrine specific to the U.S. -- rather than the UK's "fair dealing" framework.
Read more of this story at Slashdot.
An anonymous reader shares a report: Amazon is in talks to invest up to $50 billion in OpenAI, according to people familiar with the matter, in what would be a giant bet on the hot AI startup. The ChatGPT maker is seeking up to $100 billion in new capital from investors, a round that could value it at as much as $830 billion, The Wall Street Journal previously reported.
Andy Jassy, Amazon's chief executive, is leading the negotiations with OpenAI CEO Sam Altman, according to some of the people. The exact shape of a deal, should one be reached, could still change, the people said. Investing tens of billions of dollars in OpenAI could make Amazon the biggest contributor in the AI company's ongoing fundraising round. SoftBank is in talks to invest up to $30 billion more in OpenAI as part of the round, adding to the Japanese conglomerate's already large stake in the startup.
Read more of this story at Slashdot.
Big Red promises 'new era' as long-frustrated contributors weigh whether to believe it
Oracle is taking steps to "repair" its relationship with the MySQL community, according to sources, by moving "commercial-only" features into the database application's Community Edition and prioritizing developer needs.…
An anonymous reader shares a report: Microsoft's PowerToys team is contemplating building a top menu bar for Windows 11, much like Linux, macOS, or older versions of Windows. The menu bar, or Command Palette Dock as Microsoft calls it, would be a new optional UI that provides quick access to tools, monitoring of system resources, and much more.
Microsoft has provided concept images of what it's looking to build, and is soliciting feedback on whether Windows users would use a PowerToy like this. "The dock is designed to be highly configurable," explains Niels Laute, a senior product manager at Microsoft. "It can be positioned on the top, left, right, or bottom edge of the screen, and extensions can be pinned to three distinct regions of the dock: start, center, and end."
Read more of this story at Slashdot.
AI vision systems can be very literal readers
Indirect prompt injection occurs when a bot takes input data and interprets it as a command. We've seen this problem numerous times when AI bots were fed prompts via web pages or PDFs they read. Now, academics have shown that self-driving cars and autonomous drones will follow illicit instructions that have been written onto road signs.…
Mike Swanson, commenting on modern software's intrusive, attention-seeking behavior: What if your car worked like so many apps? You're driving somewhere important...maybe running a little bit late. A few minutes into the drive, your car pulls over to the side of the road and asks:
"How are you enjoying your drive so far?"
Annoyed by the interruption, and even more behind schedule, you dismiss the prompt and merge back into traffic.
A minute later it does it again.
"Did you know I have a new feature? Tap here to learn more."
It blocks your speedometer with an overlay tutorial about the turn signal. It highlights the wiper controls and refuses to go away until you demonstrate mastery.
Ridiculous, of course.
And yet, this is how a lot of modern software behaves. Not because it's broken, but because we've normalized an interruption model that would be unacceptable almost anywhere else.
Read more of this story at Slashdot.
Analyst predicts massive spend on domestic AI stacks
Countries intent on digital sovereignty will need to invest at least 1 percent of their entire gross domestic product (GDP) into AI infrastructure by 2029, according to analyst biz Gartner.…
An anonymous reader quotes a report from Variety: In the future, studios that use synthetic actors in place of humans might have to pay a royalty into a union fund. That's one of the ideas kicking around as SAG-AFTRA prepares to sit down with the studios on Feb. 9. Artificial intelligence was central to the 2023 actors strike, and it's only gotten more urgent since. Social media is awash in slop, while user-made videos of Leia and Elsa are soon to debut on Disney+. And then there's Tilly Norwood -- the digital creation that crystallized AI fears last fall. Though SAG-AFTRA won some AI protections in the strike, it can't stop Tilly and her ilk from taking actors' jobs. As negotiations with studios begin early ahead of the June contract deadline, AI remains the most existential concern. Actors are also pushing to revisit streaming residuals, arguing that current "success bonuses" fall far short of the rerun-based income that once sustained middle-class careers. They also note the strain caused from long streaming hiatuses, exclusivity clauses, and self-taped auditions.
Read more of this story at Slashdot.
GPT-4o gets second death sentence after last year's reprieve, but this time barely anyone's bothered
OpenAI is sunsetting some of its ChatGPT models next month, a move it knows "will feel frustrating for some users."…
Stock management also important, says Mitchell Hashimoto
HashiCorp co-founder Mitchell Hashimoto took to X this week to unveil the secret of workplace success: stay off your phone, sweep the floor, and clean the machines after that.…
Just because you're paranoid about digital sovereignty doesn't mean they're not after you
Opinion I'm an eighth-generation American, and let me tell you, I wouldn't trust my data, secrets, or services to a US company these days for love or money. Under our current government, we're simply not trustworthy.…
Spot's new cleanup gig involves gamma rays, alpha particles, and considerably less PPE than fleshy colleagues
Bark!Bark!Bark! Sellafield Ltd is to use Boston Dynamics' Spot robot dogs in "routine, business-as-usual operations" amid the ongoing cleanup and decommissioning of the notorious UK nuclear site.…
Longtime Slashdot reader schwit1 shares a report from CBS News: A former Google engineer has been found guilty on multiple federal charges for stealing the tech giant's trade secrets on artificial intelligence to benefit Chinese companies he secretly worked for, federal prosecutors said. According to the U.S. Attorney's Office for the Northern District of California, a jury on Thursday convicted Linwei Ding on seven counts of economic espionage and seven counts of theft of trade secrets, following an 11-day trial. The 38-year-old, also known as Leon Ding, was hired by Google in 2019 and was a resident of Newark.
According to evidence presented at trial, Ding stole more than 2,000 pages of confidential information containing Google AI trade secrets between May 2022 and April 2023. He uploaded the information to his personal Google Cloud account. Around the same time, Ding secretly affiliated himself with two Chinese-based technology companies. Around June 2022, prosecutors said Ding was in discussions to be the chief technology officer for an early-stage tech company. Several months later, he was in the process of founding his own AI and machine learning company in China, acting as the company's CEO. Prosecutors said Ding told investors that he could build an AI supercomputer by copying and modifying Google's technology.
In late 2023, prosecutors said Ding downloaded the trade secrets to his own personal computer before resigning from Google. According to the superseding indictment, Google uncovered the uploads after finding out that Ding presented himself as CEO of one of the companies during an Beijing investor conference. Around the same time, Ding told his manager he was leaving the company and booked a one-way flight to Beijing. "Silicon Valley is at the forefront of artificial intelligence innovation, pioneering transformative work that drives economic growth and strengthens our national security. The jury delivered a clear message today that the theft of this valuable technology will not go unpunished," U.S. Attorney Craig Missakian said in a statement.
Read more of this story at Slashdot.
Pages
|