Linux News
A county transit police detective fed a poor-quality image to an AI-powered facial recognition program, reports the Washington Post, leading to the arrest of "Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses." He was unable to post the required $75,000 cash bond, and "jailed for a crime he says he didn't commit, it would take Gatlin more than two years to clear his name."
A Washington Post investigation into police use of facial recognition software found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence... The Post reviewed documents from 23 police departments where detailed records about facial recognition use are available and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime — in most cases contradicting their own internal policies requiring officers to corroborate all leads found through AI. Some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts, The Post found. One police report referred to an uncorroborated AI result as a "100% match." Another said police used the software to "immediately and unquestionably" identify a suspected thief.
Gatlin is one of at least eight people wrongfully arrested in the United States after being identified through facial recognition... All of the cases were eventually dismissed. Police probably could have eliminated most of the people as suspects before their arrest through basic police work, such as checking alibis, comparing tattoos, or, in one case, following DNA and fingerprint evidence left at the scene.
Some statistics from the article about the eight wrongfully-arrested people:
In six cases police failed to check alibis
In two cases police ignored evidence that contradicted their theory
In five cases police failed to collect key pieces of evidence
In three cases police ignored suspects' physical characteristics
In six cases police relied on problematic witness statements
The article provides two examples of police departments forced to pay $300,000 settlements after wrongful arrests caused by AI mismatches. But "In interviews with The Post, all eight people known to have been wrongly arrested said the experience had left permanent scars: lost jobs, damaged relationships, missed payments on car and home loans. Some said they had to send their children to counseling to work through the trauma of watching their mother or father get arrested on the front lawn.
"Most said they also developed a fear of police."
America's Federal Trade Commission has been "raising antitrust concerns" about the country's largest pharmacy benefit managers for years, reports NBC News.
The latest? America's three largest drug middlemen "inflated the costs of numerous life-saving medications by billions of dollars over the past few years, the FTC said in a report Tuesday."
The top pharmacy benefit managers (PBMs) — CVS Health's Caremark Rx, Cigna's Express Scripts and UnitedHealth Group's OptumRx — generated roughly $7.3 billion through price hikes over about five years starting in 2017, the FTC said. The "excess" price hikes affected generic drugs used to treat heart disease, HIV and cancer, among other conditions, with some increases exceeding 1,000% of the national average costs of acquiring the medications, the commission said. The FTC also said these so-called Big Three health care companies — which it estimates administer 80% of all prescriptions in the U.S. — are inflating drug prices "at an alarming rate, which means there is an urgent need for policymakers to address it...."
Some of the steepest drug markups were "hundreds and thousands of percent," according to Tuesday's report, which highlights just how profitable specialty drugs have become for the three leading PBMs. Cancer drugs alone made up nearly half of the $7.3 billion, the commission wrote, with multiple sclerosis medications accounting for another 25%. Dispensing highly marked-up specialty drugs was a massive income stream for the companies in 2021, the FTC found. Out of tens of thousands of drugs dispensed, the top 10 specialty generics alone made up nearly 11% of the companies' pharmacy-related operating income that year, the agency estimated. Across the 51 drugs the agency analyzed, the Big Three's price-markup revenue surged from $522 million in 2017 to $2.1 billion in 2021, the report said.
"The FTC found that 22 percent of specialty drugs dispensed by PBM-affiliated pharmacies were marked up by more than 1,000 percent," reports The Hill, "while 41 percent were marked up between 100 and 1,000 percent. Among those drugs marked up by more than 1,000 percent, half of them were marked up by more than 2,000 percent."
And the nonprofit progressive news site Common Dreams shares some examples from the FTC's 60-page report:
"For the pulmonary hypertension drug tadalafil (generic Adcirca), for example, pharmacies purchased the drug at an average of $27 in 2022, yet the Big Three PBMs marked up the drug by $2,079 and paid their affiliated pharmacies $2,106, on average, for a 30-day supply of the medication on commercial claims," the publication notes. That's a staggering average markup of 7,736%... The new analysis follows a July 2024 report that revealed Big Three PBM-affiliated pharmacies received 68% of the dispensing revenue generated by specialty drugs in 2023, a 14% increase from 2016...
Responding to the FTC report, Emma Freer, senior policy analyst for healthcare at the American Economic Liberties Project — a corporate accountability and antitrust advocacy group — said in a statement Tuesday that "the FTC's second interim report lays bare the blatant profiteering by PBM giants, which are marking up lifesaving drugs like cancer, HIV, and multiple sclerosis treatments by thousands of percent and forcing patients to pay the price."
"The next great space telescope will study distant galaxies and faraway planets from an orbital outpost about a million miles from Earth," writes the Washington Post. "But first it has to be put together, piece by piece, in a cavernous chamber at the NASA Goddard Space Flight Center in Greenbelt, Maryland."
One long-time NASA worker calls it "the largest clean room in the free world," and the Post notes everyone wears white gowns and surgical masks "to keep hardware from being contaminated by humans. No dust allowed. No stray hairs. One wall is entirely covered by HEPA filters."
The place is known as the Clean Room, or sometimes the High Bay. It is 125 feet long, 100 feet wide, 90 feet high, with almost as much volume as the Capitol Rotunda. NASA boasts that in the Clean Room you could put nearly 30 tractor-trailers side by side on the floor and stack them 10 high... About two dozen workers clustered around towering pieces of hardware, some twice or three times the height of a typical person. When stacked and integrated, these components will form the Nancy Grace Roman Space Telescope.
The assembly of the telescope ramped up this fall, with 600 workers aiming to get everything integrated and tested by late 2026. NASA has committed to launching the telescope no later than May 2027. The telescope will be roughly the size of the Hubble Space Telescope, but not quite as long (a "stubby Hubble," some call it). What the astronomy community and the general public will receive in exchange for the considerable taxpayer investment of nearly $4 billion is an instrument that can do what other telescopes can't.
It will have a sprawling field of view, about 100 times that of the Hubble or Webb space telescopes. And it will be able to pivot quickly across the night sky to new targets and download tremendous amounts of data that will be instantly available to the researchers. A primary goal of the Roman is to understand "dark energy," the mysterious driver of the accelerating expansion of space. But it will also attempt to study the atmospheres of exoplanets — worlds orbiting distant stars...
The main element, informally referred to as "the telescope" but officially called the "optical telescope assembly," showed up this fall. It was originally built as a spy satellite for the National Reconnaissance Office. That's right: It was built to look down at Earth, rather than at the rest of the universe. The NRO decided more than a decade ago that it didn't need it, and gave it, along with another, identical spy satellite, to NASA. Roman's wide-angle view of deep space, its maneuverability, and its ability to download massive amounts of data make it well suited as a dark energy telescope. And it will also study the effects of dark matter, which comprises about 25 percent of the universe but remains a ghostly presence.
"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...)
Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.
As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
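ELIZA's apparent intelligence came from simple keyword matching and canned response templates rather than any understanding of language. The following is a minimal Python sketch of that pattern-and-template idea; the original was written in MAD-SLIP, and the rules below are invented purely for illustration.

```python
import random
import re

# A few invented keyword -> response-template rules in the spirit of the
# DOCTOR script; "{0}" is filled with the (reflected) text the rule captures.
RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\b(.*) are all alike\b", re.I),
     ["In what way?", "Can you think of a specific example?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your family."]),
]

# Simple pronoun "reflection" so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            captured = reflect(match.group(1)) if match.groups() else ""
            return random.choice(templates).format(captured)
    return "Please tell me your problem."  # default opener / fallback

if __name__ == "__main__":
    print(respond("Men are all alike"))  # -> "In what way?" (or the alternative)
```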
Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."
Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.
I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."
Thanks to long-time Slashdot reader MattSparkes for sharing the news.
From the New York Post:
Generation Z's recent foray into the corporate world has been an eye-popping escapade plagued by their "annoying" workplace habits and helicopter parents accompanying them on interviews. Now, newcomers to the 9-to-5 grind are inflicting a fresh new level of hell onto the workforce with a trending act of defiance known as "career catfishing."
That means "a successful candidate accepted a job and then never showed up," writes Fortune, citing a survey of 1,000 U.K. employees conducted by CV Genius.
The New York Post notes researchers "found that a staggering 34% of 20-somethings skip Day 1 of work, sans communicating with their new employer, as a demonstration of autonomy."
After drudging through the ever-exasperating job hunting process — which often includes submitting dozens of lengthy applications, suffering through endless rounds of interviews and anxiously awaiting updates from sluggish hiring managers — the Z's are apparently "catfishing" jobs to prove that they, rather than their prospective employers, have all the power.
But the rebellious babes aren't the only ones pulling fast ones on new bosses. A surprising 24% of millennials, staffers ranging in age from 28 to 43, have taken a shine to career catfishing, too, per the findings. However, only 11% of Gen Xers, hirelings ages 44 to 59, and 7% of baby boomers, personnel over age 60, have joined in on the office treachery. Unlike their older colleagues, Gen Zs are apparently more concerned about prioritizing their personal needs and goals than kowtowing to the demands of corporate culture.
Fortune agrees that "Gen Z applicants aren't alone in going no- and low-contact during the recruiting process. Some 74% of employers now admit that ghosting is a facet of the hiring landscape, according to a 2023 Indeed survey of thousands of job seekers and employers..."
That being said, simply not showing up to work could prove unsustainable in the long run. Like many young workers before them, Gen Zers have garnered a poor reputation with employers. Hiring managers have labeled them as the most difficult generation to work with, according to a Resume Genius report.
The report found employees also admitted to practicing "quiet vacationing" (taking time off without telling your boss) and "coffee badging" (grabbing coffee in the office before returning home)...
Security feature widens out to more Windows 11 users, including those at home
Microsoft is trying a new way of enabling Administrator Protection in Windows 11. The latest Windows Insider Canary build adds a setting that removes the requirement for IT admins to activate the feature.…
An anonymous reader quotes a report from The Verge: Bumble founder and executive chair Whitney Wolfe Herd, who stepped down as CEO at the beginning of 2024, is returning to the post in mid-March. Former Slack CEO Lidiane Jones, who succeeded Herd, has resigned for "personal reasons" and will remain in the role until Wolfe Herd takes over. "As I step into the role of CEO, I'm energized and fully committed to Bumble's success, our mission of creating meaningful, equitable relationships, and our opportunity ahead," Wolfe Herd says in a statement. "We have exciting innovation ahead for Bumble in this bold new chapter." Bumble's share price has dropped by half since the app introduced a redesign and feature in April that let men send the first message in response to prewritten questions. "Bumble gained popularity in part because it was set up for women to message their matches first," notes The Verge.
"In Bumble's most recent earnings report, it said that the number of paying users had increased from 3.8 million to 4.3 million over the last year, however, average revenue per paying user dropped from $23.42 to $21.17, and its total revenue dropped slightly."
Gaia makes its final science observation
The European Space Agency's (ESA) Milky Way mapper Gaia has completed the sky-scanning phase of its mission, racking up more than three trillion observations over the past decade.…
Nintendo released Donkey Kong Country Returns HD earlier this week, with fans noticing that the original team members at Retro are not individually credited in the updated version. "Instead, the credits state that it was 'based on the work' of Retro Studios, while the team at Forever Entertainment gets its credits for working on the remaster," reports GameSpot. In a statement issued to Eurogamer, a Nintendo spokesperson said: "We believe in giving proper credit for anyone involved in making or contributing to a game's creation, and value the contributions that all staff make during the development process." From the report: That statement doesn't really address why the original team's names were excluded from the credits, and this has happened before. In 2023, the Retro Studios developers behind Metroid Prime were left out of the credits for Metroid Prime Remastered. Similarly, external translators voiced their frustrations last year because Nintendo didn't credit them for their work either.
This story has been largely overshadowed by the reveal of Switch 2 earlier this week. It seems likely that the Donkey Kong Country franchise will be revisited on that system as well. However, it's not among the games rumored for Switch 2. In the meantime, the bizarre Donkey Kong Country animated TV series is still available to watch on Prime Video.
With added manga and snark. What's not to like?
Opinion: Windows 1 and 2 flopped almost as badly as OS/2 did. How did Microsoft stage one of the greatest comebacks ever with Windows 3?…
Smithsonian Magazine reports: A homeowner on Prince Edward Island in Canada has had a very unusual near-death experience: A meteorite landed exactly where he'd been standing roughly two minutes earlier. What's more, his home security camera caught the impact on video -- capturing a rare clip that might be the first known recording of both the visual and audio of a meteorite striking the planet. The shocking event took place in July 2024 and was announced in a statement by the University of Alberta on Monday.
"It sounded like a loud, crashing, gunshot bang," the homeowner, Joe Velaidum, tells the Canadian Press' Lyndsay Armstrong. Velaidum wasn't home to hear the sound in person, however. Last summer, he and his partner Laura Kelly noticed strange, star-shaped, grey debris in front of their house after returning from a walk with their dogs. They checked their security camera footage, and that's when they saw and heard it: a small rock plummeting through the sky and smashing into their walkway. It landed so quickly that the space rock itself is only visible in two of the video's frames.
OpenAI has developed a language model designed for engineering proteins, including the proteins that convert regular cells into stem cells. It marks the company's first venture into biological data and demonstrates AI's potential for unexpected scientific discoveries. An anonymous reader quotes a report from MIT Technology Review:
Last week, OpenAI CEO Sam Altman said he was "confident" his company knows how to build an AGI, adding that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own." The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance. Altman personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. [...]
OpenAI's new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model's suggestions to change two of the Yamanaka factors to be more than 50 times as effective -- at least according to some preliminary measures. [...] The model does not work the same way as Google's AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that's a lot of data, it's just a fraction of what OpenAI's flagship chatbots were trained on, making GPT-4b an example of a "small language model" that works with a focused data set.
Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the "few-shot" method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to. Although genetic engineers have ways to direct evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since it's built from hundreds of amino acids, and each amino acid comes in 20 possible varieties). OpenAI's model, however, often spits out suggestions in which a third of the amino acids in the proteins are changed. "We threw this model into the lab immediately and we got real-world results," says Retro's CEO, Joe Betts-Lacroix. He says the model's ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.
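The few-shot tactic itself is generic: the prompt stacks a handful of worked examples ahead of the case the model should complete. Below is a rough Python sketch of that prompt structure only; GPT-4b micro is not publicly available, so the sequences are made-up placeholders and query_model is a hypothetical stand-in for whatever interface Retro's researchers actually used.

```python
# Hypothetical illustration of few-shot prompting for protein redesign.
# The sequences are invented placeholders and query_model() is a stand-in
# for whatever private interface was used with GPT-4b micro.

FEW_SHOT_EXAMPLES = [
    # (original fragment, redesigned fragment) -- placeholder data only
    ("MSKGEELFTGVV", "MSKGEDLFTGIV"),
    ("GQKLVIVGDGAC", "GQKLVIVGEGAC"),
]

def build_prompt(examples, target_sequence):
    """Assemble a few-shot prompt: worked examples first, then the new case."""
    parts = ["Redesign each protein fragment to increase reprogramming activity."]
    for original, redesigned in examples:
        parts.append(f"Original: {original}\nRedesigned: {redesigned}")
    parts.append(f"Original: {target_sequence}\nRedesigned:")
    return "\n\n".join(parts)

def query_model(prompt: str) -> str:
    """Placeholder for the actual (non-public) model call."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt(FEW_SHOT_EXAMPLES, "MTEYKLVVVGAG")
    print(prompt)  # inspect the assembled few-shot prompt
    # suggestion = query_model(prompt)  # would return a proposed redesign
```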
Cyber agency too 'far off mission,' says incoming boss Kristi Noem
America's lead cybersecurity agency on Friday made one final scream into the impending truth void about election security and the role CISA plays in maintaining it.…
Sales of electric vehicles and hybrids reached 20% of new car sales in the U.S. last year, with Tesla maintaining dominance in the EV market despite a slight decline in market share. CNBC reports: Auto data firm Motor Intelligence reports more than 3.2 million "electrified" vehicles were sold last year: 1.9 million hybrid vehicles, including plug-in models, and 1.3 million all-electric models. Traditional vehicles with gas or diesel internal combustion engines still made up the majority of sales, but declined to 79.8%, falling under 80% for the first time in modern automotive history, according to the data.
Regarding sales of pure EVs, Tesla continued to dominate, but Cox Automotive estimated its annual sales fell and its market share dropped to about 49%, down from 55% in 2023. The Tesla Model Y and Model 3 were estimated to be the bestselling EVs in 2024. Following Tesla in EV sales was Hyundai Motor, including Kia, at 9.3% of EV market share; General Motors at 8.7%; and then Ford Motor at 7.5%, according to Motor Intelligence. BMW rounded out the top five at 4.1%. The EV market in the U.S. is highly competitive: Of the 68 mainstream EV models tracked by Cox's Kelley Blue Book, 24 models posted year-over-year sales increases; 17 models were all new to the market; and 27 decreased in volume.
"The FDIC sued 17 former executives and directors of Silicon Valley Bank on Thursday, seeking to recover billions of dollars for alleged gross negligence and breaches of fiduciary duty," reports Reuters. The move comes almost two years after Silicon Valley Bank's March 2023 collapse, which shocked financial markets and ended up benefiting big players like JPMorgan Chase. From the report: In a complaint filed in San Francisco federal court, the FDIC, in its capacity the bank's receiver, said the defendants ignored fundamental standards of prudent banking and the bank's own risk policies in letting the bank take on excessive risks to boost short-term profit and its stock price. The FDIC faulted the bank's overreliance on unhedged, interest rate-sensitive long-term government bonds such as US Treasuries and mortgage-backed securities, as rates looked set to -- and eventually did -- rise. It also objected to the payment of a "grossly imprudent" $294 million dividend to its parent that drained needed capital "at a time of financial distress and management weakness" in December 2022, less than three months before its demise.
"SVB represents a case of egregious mismanagement of interest-rate and liquidity risks by the bank's former officers and directors," the complaint said. The defendants include former Chief Executive Gregory Becker, former Chief Financial Officer Daniel Beck, four other former executives and 11 former directors.
Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda-Time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.
For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.
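In other words, the toolkit sits inside a human-gated loop: candidate sites are found mechanically, the model proposes an edit, and nothing moves forward unless the generated code passes unit tests and an engineer signs off. The Python sketch below outlines that loop as described in the paragraph above; the helper functions are hypothetical toy stand-ins, not Google's internal tools.

```python
# Hypothetical outline of the human-gated migration loop described above.
# The helpers are toy stand-ins marking where code search, the LLM toolkit,
# the test runner, and human review would plug in -- not Google's tools.

from dataclasses import dataclass

@dataclass
class Patch:
    location: str
    diff: str
    tests_passed: bool = False

def find_candidate_ids(codebase_path: str) -> list[str]:
    # Stand-in for code search / Kythe / custom scripts locating 32-bit IDs.
    return [f"{codebase_path}/id_util.cc:42", f"{codebase_path}/CampaignId.java:17"]

def propose_patch(location: str) -> Patch:
    # Stand-in for the LLM-based migration toolkit generating a candidate edit.
    return Patch(location=location, diff=f"int32 -> int64 at {location}")

def run_unit_tests(patch: Patch) -> bool:
    # Stand-in for the verification step that gates generated changes.
    return True

def migrate(codebase_path: str) -> list[Patch]:
    approved = []
    for location in find_candidate_ids(codebase_path):
        patch = propose_patch(location)
        patch.tests_passed = run_unit_tests(patch)
        if not patch.tests_passed:
            continue  # changes that fail verification never reach a human
        # The driving engineer reviews (and may correct) the change before it
        # goes to the owners of the affected code for normal code review.
        approved.append(patch)
    return approved

if __name__ == "__main__":
    for patch in migrate("ads"):
        print(patch.location, "->", patch.diff)
```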
"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
Longtime Slashdot reader tlhIngan writes: In what is perhaps the greatest irony ever, the operators of RedNote (known as Xiaohongshu) have decided to "wall off" US TikTok refugees fleeing to its service as the TikTok ban looms. The reason? The Chinese Communist Party (CCP) wants to prevent American influence from spreading to Chinese citizens. The ban is expected to be in place next week, while many believe the influx of Americans will be temporary, just a reaction to the TikTok ban that pushed users toward another Chinese app. Many Chinese users are not happy with the influx, saying it has "ruined" their ability to connect with "Chinese culture, Chinese values and Chinese news."
Third-party supplier blamed as folks left unable to access funds
Capital One is still battling to fix whatever brought down its systems on Wednesday, which has left people unable to access their money.…
The U.S. Department of the Treasury's OFAC has sanctioned Yin Kecheng and Sichuan Juxinhe Network Technology Co. for their roles in a recent Treasury breach and espionage operations targeting U.S. telecommunications. BleepingComputer reports: "Yin Kecheng has been a cyber actor for over a decade and is affiliated with the People's Republic of China Ministry of State Security (MSS)," reads the Treasury's announcement. "Yin Kecheng was associated with the recent compromise of the Department of the Treasury's Departmental Offices network," says the agency.
OFAC also announced sanctions against Sichuan Juxinhe Network Technology Co., a Chinese cybersecurity firm believed to be directly involved with the Salt Typhoon state hacker group. Salt Typhoon was recently linked to several breaches on major U.S. telecommunications and internet service providers to spy on confidential communications of high-profile targets. "Sichuan Juxinhe Network Technology Co., LTD. (Sichuan Juxinhe) had direct involvement in the exploitation of these U.S. telecommunication and internet service provider companies," the U.S. Treasury explains, adding that "the MSS has maintained strong ties with multiple computer network exploitation companies, including Sichuan Juxinhe." [...]
The sanctions imposed on Kecheng and the Chinese cybersecurity firm under Executive Order (E.O.) 13694 block all property and financial assets located in the United States or in the possession of U.S. entities, including banks, businesses, and individuals. Additionally, U.S. entities are prohibited from conducting any transactions with the sanctioned entities without OFAC's explicit authorization. It's worth noting that these sanctions come after OFAC sanctioned Beijing-based cybersecurity company Integrity Tech for its involvement in cyberattacks attributed to the Chinese state-sponsored Flax Typhoon hacking group. U.S. Treasury's announcement reiterates that the U.S. Department of State offers, through its Rewards for Justice program, up to $10,000,000 for information leading to uncovering the identity of hackers who have targeted the U.S. government or critical infrastructure in the country.
Plus: Uncle Sam is cross with this one Chinese biz over Salt Typhoon mega-snooping
Decades-old legislation requiring American telcos to lock down their systems to prevent foreign snoops from intercepting communications isn't mere decoration on the pages of law books – it actually means carriers need to secure their networks, the FCC has huffed.…