Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Rubaiya Hafiz

Pages: 1 ... 5 6 [7]
91
Technology News / Google Maps adds new features for Bangladesh
« on: July 31, 2019, 11:24:55 AM »
Google, with its global-to-local approach, has introduced three new features for Bangladeshi users. Some of the features were made available to local users recently, but the formal announcement came today, when Google's regional officials showcased them at a press conference. Google also said it has added more than 50,000 km of roads, more than 8 million buildings, and more than 600,000 points of interest to Google Maps across Bangladesh since January 2018.

Of the three features, the most prominent is probably a new, dedicated navigation mode for motorcycle riders. With the surge of ridesharing on two-wheelers, Bangladesh saw 200% growth in two-wheelers, according to data from the Bangladesh Road Transport Authority, and it is this crowd Google had in mind when it introduced the mode. Previously, riders had to mentally estimate their arrival times from a combination of walking and driving routes; the new feature delivers more accurate estimated arrival times based on machine learning models that reflect motorbike speeds. It also supports pre-trip planning with directions backed by Google Street View imagery. Another update rolling out today is turn-by-turn voice navigation in the local language, so you can now expect Google Maps to tell you about key roads and places in Bangla.

Lastly, Google Maps is introducing a 'Stay safer' option with 'Set off-route alerts'. After searching for your destination and getting directions, you can enable this option so that if your vehicle deviates more than 0.5 km from the route Google Maps suggested, your phone will buzz and you can tap in to see the deviation. You can also share your live trip with friends and family directly from that screen so they know you are off route and can keep track of your journey.
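As a purely illustrative aside, the deviation check behind such an alert is simple geometry: compare the current position with the suggested route and warn when the gap exceeds the threshold. The sketch below is our own minimal Python illustration of that idea, assuming a route sampled as latitude/longitude points and a haversine distance; it is not Google's actual implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def off_route(current, route_points, threshold_km=0.5):
    """True if the current position is more than threshold_km from every sampled route point."""
    lat, lon = current
    nearest = min(haversine_km(lat, lon, rlat, rlon) for rlat, rlon in route_points)
    return nearest > threshold_km

# Example: a short route through Dhaka, sampled as a few points
route = [(23.7808, 90.4000), (23.7810, 90.4100), (23.7815, 90.4200)]
print(off_route((23.7810, 90.4205), route))   # False: only a few dozen metres off
print(off_route((23.7810, 90.4400), route))   # True: roughly 2 km from the route
```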

92
Latest Technology / The 5G health hazard that isn't
« on: July 31, 2019, 11:23:03 AM »
In 2000, the Broward County Public Schools in Florida received an alarming report. Like many affluent school districts at the time, Broward was considering laptops and wireless networks for its classrooms and 250,000 students. Were there any health risks to worry about?
The district asked Bill P Curry, a consultant and physicist, to study the matter. The technology, he reported back, was “likely to be a serious health hazard.” He summarised his most troubling evidence in a large graph labelled “Microwave Absorption in Brain Tissue (Grey Matter).”

The chart showed the dose of radiation received by the brain as rising from left to right, with the increasing frequency of the wireless signal. The slope was gentle at first, but when the line reached the wireless frequencies associated with computer networking, it shot straight up, indicating a dangerous level of exposure.

“This graph shows why I am concerned,” Curry wrote. The body of his report detailed how the radio waves could sow brain cancer, a terrifying disease that kills most of its victims.

Over the years, Curry’s warning spread far, resonating with educators, consumers and entire cities as the frequencies of cellphones, cell towers and wireless local networks rose. To no small degree, the blossoming anxiety over the professed health risks of 5G technology can be traced to a single scientist and a single chart.

Except that Curry and his graph got it wrong.

According to experts on the biological effects of electromagnetic radiation, radio waves become safer at higher frequencies, not more dangerous. (Extremely high-frequency energies, such as X-rays, behave differently and do pose a health risk.)

In his research, Curry looked at studies on how radio waves affect tissues isolated in the lab and misinterpreted the results as applying to cells deep inside the human body. His analysis failed to recognise the protective effect of human skin. At higher radio frequencies, the skin acts as a barrier, shielding the internal organs, including the brain, from exposure. Human skin blocks the even higher frequencies of sunlight.
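For readers who want the physics intuition behind that shielding effect, the qualitative trend can be illustrated with the classical skin-depth relation for a lossy medium. This is our own illustrative simplification (biological tissue is a lossy dielectric rather than a simple conductor, so the exact dependence differs), not a formula from Curry's report or from the article:

```latex
% Skin depth: the depth at which the field amplitude falls to 1/e of its surface value.
% It shrinks as the frequency f rises, so higher-frequency waves deposit their
% energy closer to the surface (the skin) instead of in deep tissue.
\delta = \sqrt{\frac{2}{\mu\,\sigma\,\omega}} \;\propto\; \frac{1}{\sqrt{f}},
\qquad \omega = 2\pi f
```

In other words, as the frequency increases, the penetration depth decreases, which is why millimetre-wave signals are largely absorbed near the skin's surface rather than reaching the brain.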

“It doesn’t penetrate,” said Christopher M Collins, a professor of radiology at New York University who studies the effect of high-frequency electromagnetic waves on humans. Curry’s graph, he added, failed to take into account “the shielding effect.”

Dr Marvin C Ziskin, an emeritus professor of medical physics at Temple University School of Medicine, agreed. For decades, Ziskin explored whether such high frequencies could sow illness. Many experiments, he said, support the safety of high-frequency waves.

Despite the benign assessment of the medical establishment, Curry’s flawed reports were amplified by alarmist websites, prompted articles linking cellphones to brain cancer and served as evidence in lawsuits urging the removal of wireless classroom technology. In time, echoes of his reports fed Russian news sites noted for stoking misinformation about 5G technology. What began as a simple graph became a case study in how bad science can take root and flourish.

“I still think there are health effects,” Curry said in an interview. “The federal government needs to look at it more closely.”

An authoritative mistake

Curry was not the first to endorse the idea that advances in wireless technology could harbor unforeseen risks. In 1978, Paul Brodeur, an investigative journalist, published “The Zapping of America,” which drew on suggestive but often ambiguous evidence to argue that the growing use of high frequencies could endanger human health.

In contrast, Curry’s voice was authoritative. He became a private consultant in the 1990s after federal budget cuts brought his research career to an end. He had degrees in physics (1959 and 1965) and electrical engineering (1990). His credentials and decades of experience at federal and industrial laboratories, including the Lawrence Livermore National Laboratory, seemed to make him a very strong candidate to conduct the Broward study.

“He was a very bright guy,” recalled Gary Brown, an expert in the district’s technology unit who worked with Curry to prepare the reports. But Curry lacked biological expertise. He could solve atomic and electromagnetic puzzles with ease, but he had little or no formal training in the intricacies of biomedical research.

In 2000, Curry, writing on letterhead from his home office in the Chicago suburbs, sent the Broward district two reports, the first in February 2000 and the second in September of that year. The latter study went to the superintendent, the school board and the district’s head of safety and risk management.

The frequency graph in the second report was far more detailed. Its rising line bore annotations noting the precise locations for the wireless-network dose and, far lower down, for radio, television and cellphone signals.

Overall, Curry’s reports cast the emerging topic as crucial for public health. He warned that children were especially vulnerable to the cancer risk of wireless technology. “Their brains are developing,” he noted in his first report.

Curry belonged to a national group of wireless critics, and his two reports for the Broward district soon began to circulate widely among industry foes. One reached Dr David O Carpenter, who for decades had clashed with the science establishment on the health risks of radio waves.

Carpenter’s credentials were impressive. He graduated magna cum laude from Harvard in 1959 and cum laude from its medical school in 1964. From 1985 to 1997, he served as dean of the School of Public Health at the State University of New York in Albany and in 2001 became director of its Institute for Health and the Environment, where he still works. His resume lists hundreds of journal reports, jobs, grants, awards, advisory boards, books and legal declarations.

Carpenter stirred global controversy in the 1980s by asserting that high-voltage power lines could cause leukemia in nearby children. He appeared as an authority in Brodeur’s 1989 book, “Currents of Death.” But federal researchers failed to find solid evidence to support the warnings.

In late 2011, Carpenter introduced Curry’s graph in a lawsuit that sought to force the Portland, Oregon, public schools to abandon their wireless computer networks. The suit had been filed by a worried parent.

As an expert witness, Carpenter said in a legal declaration Dec 20, 2011, that the graph showed how the brain’s absorption of radio-wave energy “increases exponentially” as wireless frequencies rise, calling it evidence of grave student danger. The graph “illustrates the problem with the drive of the wireless industry toward ever higher frequencies,” he said.

In response to such arguments, the industry noted that it obeys government safety rules. The judge in the Portland case said the court had no jurisdiction over federal regulatory matters and dismissed the lawsuit.

Despite the setback, Carpenter’s 2011 declaration, which included Curry’s graph, kept drawing attention. In 2012, he introduced it as part of his testimony to a Michigan state board assessing wireless dangers, and it soon began circulating online among wireless critics.

And he saw a new danger. Between 2010 and 2012, the frequencies of the newest generation of cellphones, 4G, rose past those typical of the day’s wireless networks. Carpenter now had a much larger and seemingly more urgent target, especially since cellphones were often held snugly against the head.

“There is now much more evidence of risks to health, affecting billions of people,” he said in introducing a 1,400-page report on wireless dangers that he edited with an aide. “The status quo is not acceptable.”

His BioInitiative Report, released in late 2012, gained worldwide notice. But mainstream science rejected its conclusions. Two Oxford University researchers described it as “scientifically discredited.”

A ‘fact’ is born

Unbowed, Carpenter worked hard to revise established science. In 2012, he became editor-in-chief of Reviews on Environmental Health, a quarterly journal. He published several authors who filed alarmist reports, as well as his own.

“The rapid increase in the use of cellphones increases risk of cancer, male infertility, and neurobehavioral abnormalities,” Carpenter wrote in 2013.

In subsequent years, as the frequencies of wireless devices continued to rise, an associated risk of brain cancer was repeated uncritically, often without attribution to Curry or Carpenter. Instead, it came to be regarded by activists as an established fact of modern science.

“The higher the frequency, the more dangerous,” according to Radiation Health Risks, a website, in reference to signals from 5G towers. The idea was echoed by a similar website, 5G Exposed — “Higher frequencies are more dangerous to health” — on a page entitled “Scientific Discussion.” Overall, the site bristled with brain-cancer warnings.

Recently, Carpenter told RT America, a Russian television network, that the newest cellphones represented a dire health threat. “The rollout of 5G is very frightening,” he said. “Nobody is going to be able to escape the radiation.”

In recent months, the network has run a series of segments critical of 5G technology. “The higher the frequency, the more dangerous it is to living organisms,” an RT reporter told viewers in March. The show described children as particularly vulnerable.

The new cellphones are to employ a range of radio frequencies up to dozens of times higher than those Curry identified two decades ago as endangering student health. But mainstream scientists continue to see no evidence of harm from cellphone radio waves.

“If phones are linked to cancer, we’d expect to see a marked uptick,” David Robert Grimes, a cancer researcher at the University of Oxford, wrote recently in The Guardian. “Yet we do not.”

In a recent interview, Carpenter defended his high-frequency view. “You have all this evidence that cellphone radiation penetrates the brain,” he said. But he conceded after some discussion that the increasingly high frequencies could in fact have a difficult time entering the human body: “There’s some legitimacy to that point of view.”

He noted that, in cities, 5G service requires the placement of many antenna towers because walls, buildings, rain, leaves and other objects can block the high-frequency signals. “That’s why they put the towers so close together,” he said. “The waves don’t penetrate.” If human skin also blocks 5G signals, Carpenter acknowledged, “maybe it’s not that big a deal.”

Curry, now 82, was less forthcoming. In an interview, he said he no longer follows the wireless industry and disavowed any knowledge of having made a scientific error.

“They can say whatever they want,” Curry said of his detractors. “I’ll leave it to the young in the business and let them figure it out.”

 

c.2019 New York Times News Service

93
Faced with a rising backlash over the spread of disinformation in the aftermath of the 2016 elections, Facebook last year came up with a seemingly straightforward solution: It created an online library of all the advertisements on the social network.
Transparency, it decided, was the best disinfectant.

Ads would stay in the library for seven years, letting ordinary users see who was pushing what messages and how much they were paying to do it. Facebook gave researchers and journalists deeper access, allowing them to extract information directly from the library so they could create their own databases and tools to analyse the ads — and ferret out disinformation that had slipped past the social network’s safeguards.

“We know we can’t protect elections alone,” Facebook said when it unveiled the latest version of its Ad Library in March. “We’re committed to creating a new standard of transparency and authenticity for advertising.”

But instead of setting a new standard, Facebook appears to have fallen short. While ordinary users can look up individual ads without a problem, access to the library’s data is so plagued by bugs and technical constraints that it is effectively useless as a way to comprehensively track political advertising, according to independent researchers and two previously unreported studies on the archive’s reliability, one by the French government and the other by researchers at Mozilla, maker of the Firefox web browser.

The problems raise new questions about Facebook’s commitment to battling disinformation and reflect the struggles of big tech firms and governments across the world to counter it.

US officials are already grappling with Russian attempts to interfere in the 2020 presidential race and are powerless to stop American tricksters from joining the fray because they are protected by the First Amendment. In Europe, an ambitious effort to build an early warning system fell flat during European Parliament elections in May, producing no alerts, despite Russian disinformation campaigns that officials said were designed to sway public opinion and depress voter turnout.

For Facebook, in particular, it is an especially challenging moment: The company was ordered to pay a record $5 billion fine by the Federal Trade Commission on Wednesday for privacy violations, and it agreed to better police how it handles its users’ data. The measures, though, will do little to help the company with its disinformation problem.

Mozilla researchers, who provided their report to The New York Times, had originally set out to track political advertising ahead of the European elections using the application program interface, or API, that Facebook set up to provide access to the library’s data. They instead ended up documenting problems with Facebook’s library after managing to download the information they needed on only two days in a six-week span because of bugs and technical issues, all of which they reported to Facebook.
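For context on what such research involves: the Ad Library data is exposed through a Graph API endpoint queried over HTTP like any other. The sketch below shows the general shape of such a query in Python. The endpoint path, API version and field names reflect Facebook's public Ad Library API documentation as we recall it from 2019 and should be treated as assumptions to verify against current docs; this is not code from the Mozilla study, and a valid access token from an identity-verified account is required.

```python
import requests

# Illustrative query against the Facebook Ad Library API ("ads_archive" endpoint).
# API version, parameter and field names are assumptions to check against the
# current documentation before use.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
URL = "https://graph.facebook.com/v3.3/ads_archive"

params = {
    "search_terms": "election",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['US']",
    "fields": "page_name,ad_creative_body,spend,impressions",
    "limit": 100,
    "access_token": ACCESS_TOKEN,
}

resp = requests.get(URL, params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"))
```

Bulk studies like Mozilla's repeat such requests many times while following pagination cursors, which is where bugs and technical constraints in the API become most visible.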

94
Mobile Apps / FaceApp is the future
« on: July 31, 2019, 11:17:20 AM »
My favourite thing about John Herrman (c. 2069) is that he’s still alive. My least favourite thing about him is that he can’t talk. He knows what’s coming. He probably also has some ideas about what really mattered, in hindsight, and I’d probably agree with his ideas, because he is me. There are many things I would like to ask this flatteringly aged version of myself, who is smiling, despite whatever. Who dies when? How bad does, you know, it get? Got a sports almanac sitting around?
I would also like to ask him about FaceApp, his indifferent creator. Aside from being a decent icebreaker, it would be a way into something 2019 me is quite curious about: The group of millennium-era technologies known colloquially as the internet, and where they are taking us.

It’s hard enough to tell, looking back over just a few weeks, how we got here with FaceApp. After going viral for the second or third time in the United States, the novelty app became, in the space of a few days, an avatar for deceptive user agreements, a player in an insinuated global conspiracy and a cautionary tale about how little it takes to convince tens of millions of people to give up their likenesses for processing. Sen. Chuck Schumer of New York, a Democrat, said in a letter that he would like the FBI and FTC to investigate, writing that it is “deeply troubling” that Americans’ personal data had been transferred to a “hostile foreign power.” This week, Sen. Rick Scott of Florida, a Republican, introduced a bill that would require app stores to list software’s country of origin, citing FaceApp.

It’s not hard to understand why people fell for the app. Its facial filters are often good enough to be plausible, and novel enough to be surprising, creating the impression that this particular artificial intelligence knows something we don’t. It’s a cartoonish but illustrative example of automation, in that it takes something rare and specialised — age-advanced portraits — and makes them available to everyone at no upfront cost. The app also set up the perfect lightly self-deprecating joke for the countless attractive celebrities who took part in the FaceApp Challenge. (The challenge’s directive being: post your FaceApp.)

The FaceApp backlash is more complicated. Users of the app agreed to grant the company a license to their photos that is “perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid,” which is indeed quite expansive. Just as the app went viral, so did some of the fine print.

Of course, the agreement was boilerplate. Assumptions about the motives of Russian developers were just that; while anyone operating an app that asks for the permissions granted to FaceApp has comprehensive access to users’ devices, close traffic analysis found that it was, for now, doing what it claimed: collecting millions of uploaded user photos, analysing them and then serving them back older, younger, gender-swapped, smiling or styled.

After a few days of hazy panic came some attempts to restore clarity: There are many apps we already use that are at least as invasive as FaceApp, and that use facial recognition, and that are already known to have breached the trust of their users. Worries about FaceApp were, in other words, disproportionate to its significance. “Think FaceApp Is Scary? Wait Till You Hear About Facebook,” read a headline in Wired. The situation was also, in part, our fault: “You downloaded FaceApp. Here’s what you just did to your privacy,” another ominously stated in The Washington Post.

It’s true that tens of millions of people downloaded the app. And we do live in a world in which egregious privacy violations are the norm. But reflexive calls to calm down, or see the bigger picture, were part of the slow, complicated process that made them normal.

The fact that a frictionless process enjoyed between friends, motivated by reasons both silly and poignant, entails granting a legal license to media containing your likeness is plainly absurd. How did this happen? Just because privacy violations are routine doesn’t mean they make sense, or that anyone actually asked for them. No wonder our embrace and rejection of FaceApp seems confused.

“We have this underlying infection, and every once in a while it breaks out, and suddenly you have a spike of fever,” said Shoshana Zuboff, author of “The Age of Surveillance Capitalism.”

“You treat the symptom, or you don’t,” she said, “and you stay oblivious to the underlying cause.”

“The default economic model for almost every app in existence is to not only take what you give it, but to take its privileged position on your phone, in your computer, to secretly take much more than you’ve given, and to use that much more in ways that are ultimately monetisable,” Zuboff added. It’s not what these companies have done, or haven’t done, or what they might do in the future. It’s what they’re able to do, and how little say we have in it.

In a generation, large-scale facial recognition has gone from the sort of thing that might be abused by governments to a technology that widely is. It’s also both legally and practically possible for any company that gives people a way to have fun for a few minutes on their smartphones.

Discussion about the dangers of an app like FaceApp has revolved around competing possible future violations: users’ images being sold as stock photos, or used in an ad; a massive data set being sold to a company with different ambitions; a hack. But the real violation is right there in the concept, and in the name.

FaceApp, in order to do the innocent thing that it advertises, must collect data so personal that its frequent surrender and seizure could soon result in the end of anonymous free movement on Earth. This is what the app economy, often a synonym for the new economy, demands. You can make the most innocent assumptions about FaceApp and its creators and still arrive at the conclusion that it should not exist, and yet here it is, the perfect smartphone toy, with nearly a million reviews in the App Store, and a rating of 4.7/5 stars. Sound crazy? Try it again this way: How did you make that photo? Oh my God. FaceApp? I’ve got to download that. “Allow access to your photos”? OK, “Allow.” “Enable access to the camera”? OK, “Enable.”

If the old man in my FaceApp could talk, he might scream: “FaceApp is the singularity! FaceApp is the end of the world.” More likely, though, he wouldn’t remember FaceApp at all. If anything, he’d probably say that FaceApp was just a fun app whose makers, after a bad year or two, maybe began to wonder just how much the things they already had access to were worth, and to whom, and who, like us, didn’t really feel the need to think about just how completely we had been habituated to not think about how, whether an app failed and disappeared or became incredibly successful, the data we had given it years ago would be put to uses beyond our control or imagination.

The story of FaceApp has been short, confused and speculative, but of course it has. The digital world that produced it is configured to make us feel like we’re losing our minds. Its story, as easy as it is to dismiss, and as conspicuously frivolous as it is, is not excused, but rather given power by how well it rhymes with the much longer stories of Facebook and Google. It was Facebook to which we were uploading photos, for reasons silly and poignant, 15 years ago. And it was Facebook that started asking us to tag them, and then started tagging them itself. It was Google that started as one thing and became many things, each bigger than the first, carrying with it whatever data we gave it, and whatever permissions we granted to its unrecognisable former selves. And while Facebook’s betrayals, both prosecutable and more general, are both more established and far larger than anything a gimmick app like FaceApp could aspire to, Facebook was a gimmick site too, for a while.

The Facebook backlash has been intense, confused and frequently criticized as misguided, as it backtracks up seemingly countless branches of a diagnostic tree. This backlash, like the FaceApp backlash, is locked within the industrial and legal framework that produced these apps in the first place. One that devised, as punishment for Facebook’s handling of user data, a $5 billion fine that the company can easily pay through the further handling of user data. One that takes for granted that industry has a right to use poorly understood forms of personal data into perpetuity. And one that doesn’t dare to entertain the possibility that it has created something that, in its popular form, simply should not exist.

95
The average performance of the lowest income students in the United States lags about three to four years behind that of the highest income students — an achievement gap that has remained constant for more than four decades, a new study finds.

An analysis of standardized tests given to more than 2.7 million middle and high school students over almost 50 years suggests that federal education programs aimed at closing that gap are falling short, researchers report online March 18 in a working paper from the National Bureau of Economic Research. Lower achievement in high school leads to lower earning potential throughout adulthood, says coauthor Eric Hanushek, an education economist at Stanford University. “The next generation is going to look a lot like this generation. Kids from poor families will become poor themselves.”

Whether the problem is worsening, however, is up for debate. A widely cited 2011 study, also out of Stanford, showed the achievement gap widening between children born in the mid-1970s and those born in the early 2000s. But Hanushek says his work suggests the gap is holding steady rather than worsening as previously believed.

He and colleagues looked at results from four different programs conducted nationwide at various intervals from 1971 to 2015 to test teenagers in math, reading and science. A total of 98 exams were used in the programs, testing 13–15 year olds as well as 17 year olds.

To categorize students by family income level, the researchers relied on demographic surveys given alongside the standardized tests that included information on parents’ education levels and other lifestyle indicators. For example, a dishwasher in the 1950s was seen as a wealth indicator. More recent signs of wealth include whether a student has a separate bedroom or a personal computer.

Test scores for 17-year-old students in the bottom 10th income percentile were far lower than those in the top 10th percentile — suggesting the poorest students’ learning was about three or four years behind that of the richest, the authors report.

Meanwhile, the overall test scores themselves didn’t shift for 17 year olds during the study period. They did improve slightly for 13–15 year olds, though the lowest-income students still scored much lower than the highest-income students. That suggests federal programs for younger students, such as the Head Start preschool program for needy families and the No Child Left Behind initiative setting academic standards and testing programs for grades 3 to 8, have been helpful, Hanushek says. Programs for older students are sorely needed, he says.

The 2011 study also shows the poorest students about three to six years behind their wealthier peers in terms of learning. But that study, conducted by Stanford education sociologist Sean Reardon, suggests that the achievement gap has been growing wider for decades. The 2011 study looked at 12 exams administered from 1960 to 2007, and found that the gap in test scores between the poorest students and the wealthiest grew by 40 percent from the 1970s to the early 2000s. Reardon suggested parents with means were increasingly investing in their children’s education, exacerbating the divide.

The differing results between the new study and that conducted in 2011 come down to the fact that the researchers analyzed results from different tests and family income assessments, says education sociologist Anna Chmielewski at the University of Toronto, who was not involved in either of the studies.

Hanushek and Reardon agree that the income-related achievement gap is alarming. “That shouldn’t be obscured by academic quibbling,” Reardon says.     

96
Wow, it's nice... thank you for sharing.

97
At times, MyScript Calculator 2 feels a bit like magic. It lets you write out a calculation by hand, so you’re not reliant on calculator buttons, then it turns it into neat text and solves it for you. In our tests – with our exceedingly messy handwriting – it knew what we were writing every time.

It goes way beyond the basics too, with support for brackets, logarithms, constants, roots, trigonometry, and more.

It also lets you write calculations over multiple lines, scribble out mistakes (or hit the undo button), and drag and drop elements of the calculation to move them around, updating the answer as you do so. It saves previous calculations so you can always return to them, and lets you share your sums with other apps.

100
Thank you so much for sharing such an innovative idea.

101
http://www.bbc.com/news/technology-43949552
A cancer research team hopes to build a network of more than 100,000 UK smartphones to help process data while their owners sleep.

Phone owners can get involved by downloading an app and donating some of their wi-fi or data plan.

The handsets will need to be switched on and charged for six hours per night.

The aim is that they will form a huge network that processes data about different drug combinations, using an algorithm built by the team.

It would take over 100 years for a single desktop computer to process the amount of data involved, the researchers said.

"Cancer research progress is slowed by a lack of access to supercomputing," said Dr Kirill Veselkov, from the faculty of medicine at Imperial College.

"It's needed to complete analysis - but it's limited and costly.

"This is a great example of a citizen science project - with members of the public directly involved."

The project, called DRUGS (Drug Repositioning Using Grids of Smartphones), is a collaboration with the Vodafone Foundation and is expected to run for two years.

Dr Veselkov's team is searching for new combinations of existing drugs that can be tailored to cancer patients' individual genetic make-up. Each patient's make-up is unique.

"Let's say there are 10,000 drugs, with different combinations - that's a trillion possibilities," said Dr Veselkov.

"If you want to crunch these possibilities, it could take over 300 years. Harnessing the power of 100,000 phones, you can do the same in two or three months."

The DreamLab app the project will run on was originally developed in partnership with the Garvan Institute in Australia for a similar purpose in 2015.

Another scheme, The World Community Grid, run by IBM, uses the computing power of volunteered devices to research many different illnesses.

Data can be sent to the phone via either wi-fi or mobile data, with a cap of 500MB per month.

No data is taken from the device or any apps installed on it.

"People forget that the processing capability of a modern high-end smartphone is as powerful as a computer from only a few years ago," said analyst Ben Wood from CCS Insight.

"It makes great sense to try and harness this resource, particularly when a phone is sitting doing nothing overnight other than being charged for the next day's use."

103
It's good news for all of us that Microsoft has announced a new feature that is claimed to increase battery life on Windows 10 devices. Dubbed 'Power Throttling', the feature has been included in the latest Windows 10 Insider Preview build 16176. The company says that 'Power Throttling' makes a Windows 10 computer's CPU switch to an energy-efficient operating mode automatically, thus reducing overall power consumption.
"You may remember some of our January power experiments we mentioned in Build 15002's release notes. Power Throttling was one of those experiments, and showed up to 11% savings in CPU power consumption for some of the most strenuous use cases," said Bill Karagounis, director of program management, Windows Insider Program & OS Fundamentals, Microsoft.
The company has clarified that 'Power Throttling' is limited to devices with Intel's 6th-gen (and later) Core processors for now. Microsoft is working on expanding support to other processors over the next few months.
