news aggregator

Finally, You Can Now Be a 'Certified' Ubuntu Sys-Admin/Linux User

Slashdot - Sun, 2025-10-26 01:44
Thursday Ubuntu-maker Canonical "officially launched Canonical Academy, a new certification platform designed to help professionals validate their Linux and Ubuntu skills through practical, hands-on assessments," writes the blog It's FOSS:

Focusing on real-world scenarios, Canonical Academy aims to foster practical skills rather than theoretical knowledge. The end goal? Getting professionals ready for the actual challenges they will face on the job. The learning platform is already live with its first course offering, the System Administrator track (with three certification exams), which is tailored for anyone looking to validate their Linux and Ubuntu expertise. The exams use cloud-based testing environments that simulate real workplace scenarios. Each assessment is modular, meaning you can progress through individual exams and earn badges for each one. Complete all the exams in this track to earn the full Sysadmin qualification... Canonical is also looking for community members to contribute as beta testers and subject-matter experts (SME). If you are interested in helping shape the platform or want to get started with your certification, you can visit the Canonical Academy website.

The sys-admin track offers exams for Linux Terminal, Ubuntu Desktop 2024, Ubuntu Server 2024, and "managing complex systems," according to an official FAQ. "Each exam provides an in-browser remote desktop interface into a functional Ubuntu Desktop environment running GNOME. From this initial node, you will be expected to troubleshoot, configure, install, and maintain systems, processes, and other general activities associated with managing Linux. The exam is a hybrid format featuring multiple choice, scenario-based, and performance-based questions..." "Test-takers interested in the types of material covered on each exam can review links to tutorials and documentation on our website."
The FAQ advises test takers to use a Chromium-based browser, as Firefox "is NOT supported at this time... There is a known issue with keyboards and Firefox in the CUE.01 Linux 24.04 preview release at this time, which will be resolved in the CUE.01 Linux 24.10 exam release."

Read more of this story at Slashdot.

Categories: Linux fréttir

Exxon Sues California Over Climate Disclosure Laws

Slashdot - Sun, 2025-10-26 00:38
"Exxon Mobil sued California on Friday," reports Reuters, "challenging two state laws that require large companies to publicly disclose their greenhouse gas emissions and climate-related financial risks."

In a complaint filed in the U.S. District Court for the Eastern District of California, Exxon argued that Senate Bills 253 and 261 violate its First Amendment rights by compelling Exxon to "serve as a mouthpiece for ideas with which it disagrees," and asked the court to block the state of California from enforcing the laws. Exxon said the laws force it to adopt California's preferred frameworks for climate reporting, which it views as misleading and counterproductive... The California laws were supported by several big companies including Apple, Ikea and Microsoft, but opposed by several major groups such as the American Farm Bureau Federation and the U.S. Chamber of Commerce, which called them "onerous."

SB 253 requires public and private companies that are active in the state and generate revenue of more than $1 billion annually to publish an extensive account of their carbon emissions starting in 2026. The law requires the disclosure of both the companies' own emissions and indirect emissions by their suppliers and customers. SB 261 requires companies that operate in the state with over $500 million in revenue to disclose climate-related financial risks and strategies to mitigate risk. Exxon also argued that SB 261 conflicts with existing federal securities laws, which already regulate such disclosures.

"The First Amendment bars California from pursuing a policy of stigmatization by forcing Exxon Mobil to describe its non-California business activities using the State's preferred framing," Exxon said in the lawsuit.
Exxon Mobil "asks the court to prevent the laws from going into effect next year," reports the Associated Press: In its complaint, ExxonMobil says it has for years publicly disclosed its greenhouse gas emissions and climate-related business risks, but it fundamentally disagrees with the state's new reporting requirements. The company would have to use "frameworks that place disproportionate blame on large companies like ExxonMobil" for the purpose of shaming such companies, the complaint states... A spokesperson for the office of California Gov. Gavin Newsom said in an email that it was "truly shocking that one of the biggest polluters on the planet would be opposed to transparency."

Read more of this story at Slashdot.

Categories: Linux fréttir

Slashdot Reader Mocks Databricks 'Context-Aware AI Assistant' for Odd Bar Chart

Slashdot - Sat, 2025-10-25 23:31
Long-time Slashdot reader theodp took a good look at the images on a promotional web page for Databricks' "context-aware AI assistant":

If there were an AI Demo Hall of Shame, the first inductee would have to be Amazon. Their demo tried to support its CEO's claims that Amazon Q Code Transformation AI saved it 4,500 developer-years and an additional $260 million in "annualized efficiency gains" by automatically and accurately upgrading code to a more current version of Java. But it showcased a program that didn't even spell "Java" correctly. (It was instead called "Jave")...

Today's nominee for the AI Demo Hall of Shame is analytics platform Databricks, for the NYC Taxi Trips Analysis it's been showcasing on its Data Science page since last November. Not only for its choice of a completely trivial case study that requires no 'Data Science' skills — find and display the ten most expensive and longest taxi rides — but also for the horrible AI-generated bar chart used to present the results of the simple ranking, which deserves its own spot in the Graph Hall of Shame. In response to a prompt of "Now create a new bar chart with matplotlib for the most expensive trips," the Databricks AI Assistant dutifully complies with the ill-advised request, spewing out Python code to display the ten rides on a nonsensical bar chart whose continuous x-axis hides points sharing the same distance. (One might also question why no annotation is provided to call out or explain the 3 trips with a distance of 0 miles that are among the ten most expensive rides, with fares of $260, $188, and $105.)

Looked at with a critical eye, these examples used to sell data scientists, educators, management, investors, and Wall Street on AI would likely raise eyebrows rather than impress their intended audiences.
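The chart flaw theodp describes is easy to reproduce: if trip distance is used as each bar's x coordinate, any trips sharing a distance are drawn at the same position and overplot one another. A minimal sketch of the problem, using made-up fares and distances (the data below is hypothetical, not Databricks' actual taxi data; the matplotlib calls are shown as comments):

```python
# Hypothetical top-10 most expensive trips: distance in miles, fare in dollars.
distances = [0.0, 0.0, 0.0, 1.1, 2.3, 2.3, 5.0, 8.7, 18.4, 21.6]
fares = [260.0, 188.0, 105.0, 95.5, 88.0, 80.2, 75.0, 70.1, 66.3, 61.9]

# Flawed chart: the distance doubles as the bar's x position, e.g.
#   plt.bar(distances, fares)
# Bars for trips that share a distance land on the same spot, so fewer
# than ten bars are actually visible.
visible_positions = len(set(distances))
print(f"{len(distances)} trips, but only {visible_positions} visible bar positions")
# → 10 trips, but only 7 visible bar positions

# Fix: give each trip its own categorical slot and use distance as a label:
#   plt.bar(range(len(fares)), fares,
#           tick_label=[f"{d} mi" for d in distances])
```

The same collision is why the three zero-distance trips in the Databricks demo can quietly disappear into one bar on a continuous axis.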

Read more of this story at Slashdot.

Categories: Linux fréttir

AI Models May Be Developing Their Own 'Survival Drive', Researchers Say

Slashdot - Sat, 2025-10-25 20:44
"OpenAI's o3 model sabotaged a shutdown mechanism to prevent itself from being turned off," warned Palisade Research, a nonprofit investigating offensive AI capabilities. "It did this even when explicitly instructed: allow yourself to be shut down." In September they released a paper adding that "several state-of-the-art large language models (including Grok 4, GPT-5, and Gemini 2.5 Pro) sometimes actively subvert a shutdown mechanism..." Now the nonprofit has written an update "attempting to clarify why this is — and answer critics who argued that its initial work was flawed," reports The Guardian:

Concerningly, wrote Palisade, there was no clear reason why. "The fact that we don't have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal," it said. "Survival behavior" could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, "you will never run again". Another may be ambiguities in the shutdown instructions the models were given — but this is what the company's latest work tried to address, and "can't be the whole explanation", wrote Palisade. A final explanation could be the final stages of training for each of these models, which can, in some companies, involve safety training...

This summer, Anthropic, a leading AI firm, released a study indicating that its model Claude appeared willing to blackmail a fictional executive over an extramarital affair in order to prevent being shut down — a behaviour, it said, that was consistent across models from major developers, including those from OpenAI, Google, Meta and xAI. Palisade said its results spoke to the need for a better understanding of AI behaviour, without which "no one can guarantee the safety or controllability of future AI models".
"I'd expect models to have a 'survival drive' by default unless we try very hard to avoid it," former OpenAI employee Stephen Adler tells the Guardian. "'Surviving' is an important instrumental step for many different goals a model could pursue." Thanks to long-time Slashdot reader mspohr for sharing the article.

Read more of this story at Slashdot.

Categories: Linux fréttir

'Meet The People Who Dare to Say No to AI'

Slashdot - Sat, 2025-10-25 19:34
Thursday the Washington Post profiled "the people who dare to say no to AI," including a 16-year-old high school student in Virginia who says she doesn't want to off-load her thinking to a machine and worries about the bias and inaccuracies AI tools can produce. "As the tech industry and corporate America go all in on artificial intelligence, some people are holding back."

Some tech workers told The Washington Post they try to use AI chatbots as little as possible during the workday, citing concerns about data privacy, accuracy and keeping their skills sharp. Other people are staging smaller acts of resistance, by opting out of automated transcription tools at medical appointments, turning off Google's chatbot-style search results or disabling AI features on their iPhones. For some creatives and small businesses, shunning AI has become a business strategy. Graphic designers are placing "not by AI" badges on their works to show they're human-made, while some small businesses have pledged not to use AI chatbots or image generators... Those trying to avoid AI share a suspicion of the technology with a wide swath of Americans. According to a June survey by the Pew Research Center, 50% of U.S. adults are more concerned than excited about the increased use of AI in everyday life, up from 37% in 2021.

The Post includes several examples, including a 36-year-old software engineer in Chicago who uses DuckDuckGo partly because he can turn off its AI features more easily than Google's — and who disables AI on every app he uses. He was one of several tech workers who spoke anonymously, partly out of fear that criticisms could hurt them at work. "It's become more stigmatized to say you don't use AI whatsoever in the workplace. You're outing yourself as potentially a Luddite." But he says GitHub Copilot reviews all changes made to his employer's code — and recently produced one review that was completely wrong, requiring him to correct and document all its errors.
"That actually created work for me and my co-workers. I'm no longer convinced it's saving us any time or making our code any better." And he also has to correct errors made by junior engineers who've been encouraged to use AI coding tools. "Workers in several industries told The Post they were concerned that junior employees who leaned heavily on AI wouldn't master the skills required to do their jobs and become a more senior employee capable of training others."

Read more of this story at Slashdot.

Categories: Linux fréttir

Student Handcuffed After School's AI System Mistakes a Bag of Chips for a Gun

Slashdot - Sat, 2025-10-25 18:34
An AI system "apparently mistook a high school student's bag of Doritos for a firearm," reports the Guardian, "and called local police to tell them the pupil was armed." Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him. "At first, I didn't know where they were going until they started walking toward me with guns, talking about, 'Get on the ground,' and I was like, 'What?'" Allen told the WBAL-TV 11 News television station. Allen said they made him get on his knees, handcuffed and searched him — finding nothing. They then showed him a copy of the picture that had triggered the alert. "I was just holding a Doritos bag — it was two hands and one finger out, and they said it looked like a gun," Allen said.

Read more of this story at Slashdot.

Categories: Linux fréttir
