Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - anwar.swe

Pages: [1] 2 3
What are we to make of the revelations published over the weekend, in the Observer and the Times, that Cambridge Analytica, the data-analytics and messaging company financed, in part, by the conservative billionaire Robert Mercer, used tens of millions of ill-gotten Facebook profiles to create algorithms aimed at “breaking” American democracy?

First, that these were not really revelations at all. Reporters from the Guardian, The New Yorker, The New York Review of Books, Das Magazin, and the Intercept have been reporting these facts for years. We knew as early as December, 2015, for instance, that Facebook data obtained without users’ knowledge was being exploited by Cambridge Analytica on behalf of Senator Ted Cruz, who at that time was Mercer’s preferred candidate in the Republican Presidential primaries. Later, when the Mercer family, along with Steve Bannon, came to support Donald Trump, it was no secret that they brought with them Cambridge Analytica, a firm that boasted of being able to parse and influence the electorate through “psychographic” algorithms derived from that data. After Trump won, Alexander Nix, the head of Cambridge Analytica, crowed that the company’s psychographic algorithms had carried the day. (He later retracted that, then reaffirmed it.)

The millions of Facebook accounts in question—as the reporter Mattathias Schwartz pointed out last March—were mostly culled from the friend networks of people who clicked on a cute personality quiz on the site. A significant number of the initial test-takers, starting in around 2014, were paid freelancers recruited through Amazon’s Mechanical Turk marketplace by a British research company called GSR (Global Science Research). They and others who took the quiz likely did not know that they were giving GSR permission to access their Facebook friends’ profiles. If Facebook officials were not aware of this at the time, when GSR sold this data to Cambridge Analytica, they certainly knew it by January, 2017, when the Swiss researchers Hannes Grassegger and Mikael Krogerus published “The Data That Turned the World Upside Down,” a detailed account of how C.A.’s psychological modelling was used by the Trump campaign. (The Guardian recently quoted a former Facebook employee in charge of data security explaining that he “always assumed there was something of a black market” for data obtained by third-party companies such as GSR, and that when he brought this up to his bosses he was discouraged from investigating too deeply. “Do you really want to see what you’ll find?” he says a Facebook executive told him.)


Consultants working for Donald Trump's presidential campaign exploited the personal Facebook data of millions.

That's the key message of March 17, 2018, stories by The New York Times and the UK's Guardian and Observer newspapers, as well as of statements from Facebook. The stories and statements indicate the social networking giant was duped by researchers, who reportedly gained access to the data of more than 50 million Facebook users, which was then misused for political ads during the 2016 US presidential election.

Until now, most of what you've heard about Facebook and the 2016 election has been focused on meddling by Russian operatives. Those efforts are being investigated by the FBI and the US Senate.

Data consultancy Cambridge Analytica represents a different problem. The UK-based company reportedly acquired data about millions of Facebook users in a way that violated the social network's policies. It then tapped that information to build psychographic profiles of users and their friends, which were utilized for targeted political ads in the UK's Brexit referendum campaign, as well as by Trump's team during the 2016 US election.

Facebook says it told Cambridge Analytica to delete the data, but also that reports suggest the info wasn't destroyed. Cambridge Analytica says it complies with the social network's rules, only receives data "obtained legally and fairly," and did wipe out the data Facebook is worried about.


Facebook is changing the way it shares data with third-party applications, Mark Zuckerberg announced Wednesday in his first public statement since the Observer reported that the personal data of about 50 million Americans had been harvested and improperly shared with a political consultancy.

The Facebook CEO broke his five-day silence on the scandal that has enveloped his company this week in a Facebook post acknowledging that the policies that allowed the misuse of data were “a breach of trust between Facebook and the people who share their data with us and expect us to protect it”.

“We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you,” Zuckerberg wrote. He noted that the company has already changed some of the rules that enabled the breach, but added: “We also made mistakes, there’s more to do, and we need to step up and do it”.

Facebook’s chief operating officer, Sheryl Sandberg, shared Zuckerberg’s post and added her own comment: “We know that this was a major violation of people’s trust, and I deeply regret that we didn’t do enough to deal with it.”

Zuckerberg also spoke to a handful of media outlets on Wednesday, including a televised interview with CNN in which he apologized for the “breach of trust”, saying: “I’m really sorry that this happened.” In similar conversations with the New York Times, Wired, and tech website Recode, Zuckerberg expressed qualified openness to testifying before Congress and said that he was not entirely opposed to Facebook being subject to more regulation.

Teaching & Research Forum / Amazon Web Services
« on: December 23, 2017, 05:30:16 PM »
Amazon is big. In its last financial quarter, it sold $32bn (£25.6bn) worth of stuff worldwide, including $6bn of media, $10bn of sales outside North America, and $23bn of electronics “and other general merchandise”. That “other” category encompasses everything from crucifixes to sex toys, board games to plyboard, and mousemats printed with the faces of obscure TV and radio personalities.

It has also diversified beyond its simple shopping business: the company will sell you something to be delivered in less than one hour, food from restaurants, and even digital content to be watched on your TV or listened to on your phone. And, of course, it has a hardware business which many other companies would kill for, producing ebook readers and tablets, and single-handedly creating the product category of “smart speaker” with the Echo.

But there’s another chunk of Amazon that you’re less likely to know about. It’s responsible for a full tenth of the company’s revenues, yet its “operating income” – the amount of money it leaves in Amazon’s coffers once expenses are accounted for – dwarfs any other sector, pulling in $861m compared to the $255m Amazon makes in North American sales and the $541m it loses internationally.

The division is Amazon Web Services, or AWS, the section of the company that sells cloud computing services to both the outside world and to Amazon itself. You can buy storage space to hold a huge database, bandwidth to host a website, or processing power to run complex software remotely. It lets companies and individuals avoid the hassle of buying and running their own hardware, while also letting them pay for only what they actually use.
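The pay-for-what-you-use model described above can be sketched as a simple cost estimator. The rates below are illustrative assumptions for this sketch, not Amazon's actual AWS pricing, and the function name is hypothetical:

```python
# Hypothetical pay-per-use billing sketch. The per-unit rates are
# illustrative assumptions, not real AWS prices.
STORAGE_PER_GB_MONTH = 0.023   # assumed USD per GB-month stored
BANDWIDTH_PER_GB = 0.09        # assumed USD per GB transferred out
COMPUTE_PER_HOUR = 0.10        # assumed USD per instance-hour

def monthly_bill(storage_gb, egress_gb, compute_hours):
    """Estimate a monthly bill: you pay only for the resources you used."""
    return round(
        storage_gb * STORAGE_PER_GB_MONTH
        + egress_gb * BANDWIDTH_PER_GB
        + compute_hours * COMPUTE_PER_HOUR,
        2,
    )

# A small site: 50 GB stored, 100 GB served, one instance running 720 hours.
print(monthly_bill(50, 100, 720))  # → 82.15
```

The point of the model is visible in the function signature: there is no fixed hardware cost term, so idle capacity costs nothing.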

It began as almost a point of principle for Amazon founder, Jeff Bezos, before evolving to become the single most profitable part of the entire company. Now, AWS is moving into the third stage of its life, providing the underpinning for Amazon’s own quest to dominate not just our shopping, but our homes themselves.

Teaching & Research Forum / iCloud Photo Library
« on: December 23, 2017, 05:25:45 PM »
When you turn on iCloud Photo Library, all the photos and videos you take with your iPhone or iPad are automatically uploaded, so you can access them from your iPhone, iPad, iPod touch, Mac, and PC, and on iCloud.com. By storing all your photos and videos safely in iCloud, you’ll have more space on your iPhone to take even more great shots. And iCloud Photo Sharing makes it easy to show off just the photos and videos you want to just the people you want to see them.

iCloud Photo Library helps you make the most of the space available by automatically storing the original full-resolution photos and videos in iCloud and leaving behind lightweight versions that are perfectly sized for each device. You’ll always have access to everything on your device, even if you’re offline. And thanks to next-generation photo and video compression technology, photos and videos with the same quality as before now take up half the space.
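The "perfectly sized for each device" behavior amounts to an aspect-ratio-preserving downscale of each original. This is a minimal sketch of that idea; the function name and screen dimensions are illustrative assumptions, not Apple's implementation:

```python
def device_optimized_size(width, height, max_w, max_h):
    """Scale (width, height) down to fit a device's screen while
    preserving aspect ratio; images already smaller are left as-is."""
    scale = min(max_w / width, max_h / height, 1.0)
    return (round(width * scale), round(height * scale))

# A 4032x3024 photo downsized for a hypothetical 1334x750 display:
print(device_optimized_size(4032, 3024, 1334, 750))  # → (1000, 750)
```

The `min(..., 1.0)` clamp is the design choice that matters: the device copy is only ever shrunk, never upscaled, since the full-resolution original stays in iCloud.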

Teaching & Research Forum / Google Cloud Services
« on: December 23, 2017, 05:24:33 PM »
By choosing Google Cloud Platform, you can build on the same future-proof infrastructure that allows Google to return billions of search results in milliseconds, serve 6 billion hours of YouTube video per month, and provide storage for 1 billion Gmail users. Our infrastructure is protected by more than 700 top experts in information, application, and network security. Google data centers are the most energy-efficient and environmentally friendly in the world.

Cloud Platform provides fast and consistent performance across the range of computing, storage and application services. With powerful processing, access to the memory you need and high IOPS, your application will deliver consistent performance to your users. You enjoy the benefits of reduced latency and avoid noisy-neighbor problems.

Teaching & Research Forum / IBM Cloud
« on: December 23, 2017, 05:22:23 PM »
The IBM Cloud has been built to help you solve problems and advance opportunities in a world flush with data. Whether it’s data you possess, data outside your firewall, or data that’s coming, the IBM Cloud helps you protect it, move it, integrate it and unlock intelligence from it — giving you what it takes to prevail in a competitive market.

IBM Cloud compute services flexibly drive the end-user experiences that you and your customers want. Whether you need to tune an application with specific OS access requirements, provide a stateless API that services high request volumes and delivers special services, or implement microservices that quickly and automatically adapt to use, IBM Cloud has you covered.
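A "stateless API" of the kind mentioned above is one whose response depends only on the incoming request, never on state stored in the server process, which is what lets any instance behind a load balancer serve any call. A minimal sketch, with a hypothetical handler name and request shape:

```python
import json

def handle_request(request: dict) -> dict:
    """A stateless handler: the response is a pure function of the
    request, so identical requests get identical responses on any
    instance, and instances can be added or removed freely."""
    name = request.get("name", "world")
    body = {"greeting": f"Hello, {name}!"}
    return {"status": 200, "body": json.dumps(body)}

# Identical requests always yield identical responses:
print(handle_request({"name": "IBM"}))
```

Because no counter, session, or cache lives inside the handler, scaling to high request volumes is just a matter of running more copies of it.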

Eight years after launching its self-driving “moon shot,” Waymo, a.k.a. Google’s driverless car company, is having its Neil Armstrong moment.

The company is now running its autonomous minivans around Phoenix with no human inside to grab the wheel if things go bad, CEO John Krafcik announced Tuesday. And in just a few months, it will invite passengers to climb aboard the world’s first driverless ride-hailing service.

This launch brings up a host of unanswered questions about the details and practical elements of such a service, but what’s already clear is Waymo is taking one of the final steps on the long road toward taking the human driver out of the picture and finally cashing in on the profits and safety benefits that come with the transition to robot chauffeurs.

“Fully self-driving cars are here,” Krafcik said at Web Summit in Lisbon, where he announced the move.

Waymo took its first driverless spin on public roads in October 2015, when it was still officially part of Google. (In December 2016, it launched as a stand-alone company under the umbrella of Alphabet, Google’s parent company.) Steve Mahan, a blind man, took a solo, 10-minute ride around Austin, Texas in the company’s “pod car,” the funky one without a steering wheel or pedals (Waymo retired those cars this summer in favor of its minivans).

The difference here, Krafcik says, is that the cars prowling Phoenix sans humans aren’t part of a demo. “What you’re seeing now marks the start of a new phase for Waymo,” he said in Lisbon.

Software Engineering / The Google self-driving car project is now Waymo
« on: December 03, 2017, 07:28:28 PM »
Waymo is a self-driving technology company with a mission to make it safe and easy for people and things to move around.
From our beginnings as the Google self-driving car project, we’ve been working to make our roads safer and increase mobility for the millions of people who cannot drive. Our ultimate goal is to help millions of people get safely from door to door at the push of a button.

Japan’s big-betting holding firm SoftBank is buying Boston Dynamics, one of the most highly regarded robotics labs in the world, from Google’s parent company Alphabet, for an undisclosed price.

Google acquired Boston Dynamics in 2013 under the leadership of Andy Rubin, the co-inventor of Android, who was leading a wave of acquisitions of robotics companies under the search giant.

Boston Dynamics’ robots routinely make headlines, including a high-profile demo at this year’s TED conference. The company, led by CEO Marc Raibert, has made a robotic cheetah that can run 28 miles per hour, a robotic dog that it recently used to deliver packages to doorsteps in Boston, and most recently a massive legged and wheeled robot that can clear hurdles and walk down stairs.

The firm has been hailed by other roboticists for its ability to blend hardware and artificial intelligence to make machines capable of dynamic, agile movements. Its most recent wheeled robot, Handle, can manipulate objects that are comparable to its own weight, and its four-legged, animal-like robots can maneuver over different types of terrain.

One industry source said earlier this year that Boston Dynamics’ most recent machine “changes the whole ballgame.”

SoftBank is also buying Schaft, a Japan-based robotics firm that unveiled a bipedal walking robot last year, from Alphabet. A source close to the matter said Schaft, which was part of Google, never fully integrated into the company and operated as a sort of separate entity taking a different approach to robotics than the rest of Google.

Science and Information / Boston Dynamics
« on: December 03, 2017, 07:26:31 PM »
Boston Dynamics is an American engineering and robotics design company best known for the development of BigDog, a quadruped robot designed for the U.S. military with funding from the Defense Advanced Research Projects Agency (DARPA), and DI-Guy, software for realistic human simulation. Early in the company's history, it worked with the American Systems Corporation under a contract from the Naval Air Warfare Center Training Systems Division (NAWCTSD) to replace naval training videos for aircraft launch operations with interactive 3D computer simulations featuring DI-Guy characters. The company is a pioneer in the field of robotics and one of the most advanced in its domain.

Marc Raibert is the company's president and project manager. He spun the company off from the Massachusetts Institute of Technology in 1992.

On 13 December 2013, the company was acquired by Google X (later X, a subsidiary of Alphabet Inc.) for an undisclosed price, and was managed by Andy Rubin until his departure from Google in 2014. Immediately before the acquisition, Boston Dynamics transferred its DI-Guy software product line to VT MÄK, a simulation software vendor based in Cambridge, Massachusetts.

On 8 June 2017, Alphabet Inc. announced the sale of the company to Japan's SoftBank Group for an undisclosed sum.

Software Engineering / What is HCI research?
« on: December 03, 2017, 07:22:56 PM »
To me, research in HCI involves both understanding how humans interact with computers and creating better ways for humans to interact with computers. A more expansive view makes HCI also about understanding how humans use computers to interact with other humans, and then creating better ways for humans to interact with other humans via computers.

By “computer” I mean any sort of computational device (e.g., smartphones, smartwatches, tablets, Internet-of-things) – not just a traditional desktop or laptop computer.

HCI matters because over 3 billion people around the world now directly interface with computers of some sort, via either traditional computers or mobile devices. To have computer science as a field study computers only for their own sake, without taking humans into account, is to ignore a core reason why computers were invented: to serve humans.

As cheesy as it sounds, I believe that humans and computers should be viewed together as a human-computer system to make the most of both sides' strengths. Do we want to keep working toward a future where we're replaced by machines running fully automated algorithms, or one where we work symbiotically with machines? I'd much prefer the latter. Of course, full automation is preferable for many problems where it's too slow or tedious for humans to intervene, but I still often prefer for humans to be in control, albeit with machine assistance.

Software Engineering / Research on HCI
« on: December 03, 2017, 07:22:15 PM »

Our research includes innovation in user-interface software tools, studies of computer-supported cooperative work and tools to support it, gesture recognition, data visualization, intelligent agents, human-robot interaction, visual interface design, intelligent tutoring systems, cognitive models, and understanding and building platforms that maximize the positive organizational and social impact of technology.

A quarter century after the department was opened, the Human-Computer Interaction Institute continues this tradition through our multidisciplinary research and education initiatives. The HCII broadly designs, builds, and studies new tools and technologies to support human activity and organization, in order to create theory for the field and artifacts for the real world. Our research includes empirical and analytic studies of behavior among groups and individuals to inform the design and evaluation of new technologies. Students from other departments at CMU also find a rich source of research opportunities in the HCII, and the HCII has a rich history of research and development in partnership with industry.

Software Engineering / What is Human-Computer Interaction (HCI)?
« on: December 03, 2017, 07:20:57 PM »
Human-Computer Interaction (HCI) is a field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers. It encompasses multiple disciplines, such as computer science, cognitive science, and human-factors engineering. While initially concerned with computers, HCI has since expanded to cover almost all forms of information technology design.

HCI emerged in the 1980s. It was the crucial instrument in popularizing the idea that the interaction between a computer and the user should resemble a human-to-human, open-ended dialogue. It initially focused on using knowledge in cognitive and computer sciences to improve the usability of computers (i.e., concentrating on how easy computers are to learn and use). However, since then—and thanks to the advent of technologies such as the Internet and the smartphone—it has steadily encompassed more fields (including information visualization, social computing, etc.). The relevance of HCI in the 21st century is particularly apparent in the breakthrough of new modes of interactivity, namely voice user interfaces (VUIs).

In many ways, HCI was the forerunner that would grow to become what we now call “User Experience (UX) Design.” Despite that, some differences persist between HCI and UX design. Practitioners of HCI tend to be more academically focused, and are involved in scientific research and developing empirical understandings of users. UX designers, on the other hand, tend to be industry-focused, and most UX designers are involved in building a product or service—for example, a smartphone app or a website. Regardless of this difference, the practical considerations for products that UX designers concern themselves with have direct links to the findings of HCI specialists about the mindsets of users. Due to this, there is little point in separating these realms to any great extent.

Software Engineering / Human Computer Interaction - brief intro
« on: December 03, 2017, 07:20:38 PM »
Human-computer interaction (HCI) is an area of research and practice that emerged in the early 1980s, initially as a specialty area in computer science embracing cognitive science and human factors engineering. HCI has expanded rapidly and steadily for three decades, attracting professionals from many other disciplines and incorporating diverse concepts and approaches. To a considerable extent, HCI now aggregates a collection of semi-autonomous fields of research and practice in human-centered informatics. However, the continuing synthesis of disparate conceptions and approaches to science and practice in HCI has produced a dramatic example of how different epistemologies and paradigms can be reconciled and integrated in a vibrant and productive intellectual project.

Until the late 1970s, the only humans who interacted with computers were information technology professionals and dedicated hobbyists. This changed disruptively with the emergence of personal computing in the later 1970s. Personal computing, including both personal software (productivity applications, such as text editors and spreadsheets, and interactive computer games) and personal computer platforms (operating systems, programming languages, and hardware), made everyone in the world a potential computer user, and vividly highlighted the deficiencies of computers with respect to usability for those who wanted to use computers as tools.
