Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Abdus Sattar

16
How the latest tech and some healthy activism can curb fake news
The term “fake news” has become ubiquitous over the past two years. The Cambridge English dictionary defines it as “false stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke”.

As part of a global push to curb the spread of deliberate misinformation, researchers are trying to understand what drives people to share fake news and how its endorsement can propagate through a social network.

But humans are complex social animals, and technology misses the richness of human learning and interactions.

That’s why we decided to take a different approach in our research. We used the latest techniques from artificial intelligence to study how support for – or opposition to – a piece of fake news can spread within a social network. We believe our model is more realistic than previous approaches because individuals in our model learn endogenously from their interactions with the environment rather than just following prescribed rules. This novel approach allowed us to learn a number of new things about how fake news spreads.

The main takeaway from our research is that when it comes to preventing the spread of fake news, privacy is key. It is important to keep your personal data to yourself and be cautious when providing information to large social media websites or search engines.

The most recent wave of technological innovations has brought us the data-centric web 2.0 and with it a number of fundamental challenges to user privacy and the integrity of news shared in social networks. But as our research shows, there’s reason to be optimistic that technology, paired with a healthy dose of individual activism, might also provide solutions to the scourge of fake news.

Modelling human behaviour
Existing literature models the spread of fake news in a social network in one of two ways.

In the first instance, you could model what happens when people observe what their neighbours do and then use this information in a complicated calculation to optimally update their beliefs about the world.

The second approach assumes that people follow a simple majority rule: everyone does what most of their neighbours do.

But both approaches have their shortcomings. They cannot mimic what happens when someone’s mind is changed after several conversations or interactions.

Our research differed. We modelled humans as agents who develop their own strategies for updating their views on a piece of news, given their neighbours’ actions. We then introduced an adversary that tried to spread fake news, and compared how effective the adversary was when he had knowledge of the strength of other agents’ beliefs and when he didn’t.
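To make the contrast concrete, here is a minimal, hedged sketch (my own toy illustration in Python, not the authors’ model) of the prescribed-rule baseline described above: agents that simply copy the majority of their neighbours, with one adversary node that always endorses the fake story.

import random

# Toy majority-rule baseline (illustrative only): each agent endorses the
# story if and only if most of its neighbours currently endorse it, while a
# single adversary node endorses it unconditionally.
random.seed(0)
N = 30
neighbours = {i: random.sample([j for j in range(N) if j != i], 4) for i in range(N)}

ADVERSARY = 0
endorse = {i: i == ADVERSARY for i in range(N)}

for _ in range(20):
    updated = {}
    for i in range(N):
        if i == ADVERSARY:
            updated[i] = True
            continue
        votes = sum(endorse[j] for j in neighbours[i])
        updated[i] = votes > len(neighbours[i]) / 2   # prescribed majority rule
    endorse = updated

print(sum(endorse.values()), "of", N, "agents endorse the fake story")

A learning agent, by contrast, would adjust how much weight it gives each neighbour based on the feedback it receives, rather than applying the same fixed rule at every step.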

So in a real world example, an adversary determined to spread fake news might first read your Facebook profile and see what you believe, then tailor his disinformation to try and match your beliefs to increase the likelihood that you share the fake news he sent to you.

We learnt a few new things about how fake news is spread. For example, we show that providing feedback about news that’s been shared makes it easier for people to detect fake news.

Our work also suggests that artificially injecting a certain amount of fake news into a social network can train users to better spot fake news.

Crucially, we can also use models like ours to come up with strategies on how to curb the spread of fake news.

There are three things we have learned from this research about what everyone can do to stop fake news.

Fighting fake news
Because humans learn from their neighbours, who learn from their neighbours, and so on, everybody who detects and flags fake news helps prevent its spread across the network. When we modelled how the spread of fake news can be prevented, we found the single best way was to allow users to provide feedback to their friends about a piece of news they shared.

Beyond pointing out fake news, you can also praise a friend when they share a well researched and balanced piece of quality journalism. Importantly, this praise can happen even when you disagree with the conclusion or political point of view expressed in the article. Studies in human psychology and reinforcement learning show that people adapt their behaviour in response to negative and positive feedback – particularly when this feedback comes from within their social circle.

The second big lesson was: keep your data to yourself.

The web 2.0 was built on the premise that companies offer free services in exchange for users’ data. Billions followed the siren’s call, turning Facebook, Google, Twitter, and LinkedIn into multi-billion dollar behemoths. But as these companies grew, more and more data was collected. Some estimate that as much as 90% of all the world’s data was created in the past few years alone.

Do not give your personal information away easily or freely. Whenever possible, use tools that are fully encrypted and that collect as little information about you as possible. For most applications, from search engines to messaging apps, there is a more secure, more privacy-focused alternative.

Social media sites don’t yet have privacy-focused alternatives. Luckily the emergence of blockchain has provided a new technology that could solve the privacy-profitability paradox. Instead of having to trust Facebook to keep your data secure, you can now put it on a decentralised blockchain that was designed to operate as a trustless environment.

Source: https://theconversation.com/how-the-latest-tech-and-some-healthy-activism-can-curb-fake-news-98319

17
Key differences between Python 2 and 3: How to navigate change
 August 22, 2018  Vinodh Kumar

As every programming language evolves, there are big changes between each major release. In this article, Vinodh Kumar explains some of the big differences between Python 2 and Python 3 with examples to help illustrate how the language has changed.

This tutorial will cover the following topics:

Expressions
Print options
Unequal operations
Range
Automated migration
Performance issues
Some major housekeeping changes
Having problems?

1. Expressions
Expressions represent something, like a number, a string, or an instance of a class; any value is an expression. Anything that does something is a statement: any assignment to a variable or function call is a statement. Any value contained in that statement is an expression.

This is what you’d type to read a value in Python 2:

x = raw_input("enter some values")
But in Python 3, you’d have to type this:

x = input("enter some values")
Whatever we enter is assigned to the variable x as a string in both versions. Python 2, however, also has input(), which evaluates what you type: entering 2*6 with Python 2’s input() gives 12, the evaluated value.

In Python 3, input() always returns a string, so the same entry comes back as the string '2*6'.

How, then, can we get the evaluated expression in Python 3? We use the built-in function eval: wrapping the input in eval turns the entered text into its evaluated value.

x = eval(input("enter some values"))   # entering 2*6 now gives 12
Detailed expression examples:

Here’s what it would look like in Python 2:

name = raw_input("What is your name? ")
print "Hello, %s." % name
And the output: the script prompts for a name and then prints the greeting, e.g. Hello, Sam.

Here’s how it would look in Python 3:

name = input("What is your name? ")
print("Hello, %s." % name)
And the output is the same greeting, e.g. Hello, Sam.

As you can see, the logic is identical; only the input function (raw_input vs. input) and the print syntax change.

2. Print options
In Python 2, print is a statement and does not need parentheses. In Python 3, print is a function and the values need to be written in parentheses.

Python 2

Input:

print "hello world"

Output:

hello world

Python 3

Input:

print("hello world")

Output:

hello world

3. Unequal operations
Let’s move on to the third difference. In Python 2, a not-equal comparison can be written with the angle-bracket pair <>. In Python 3 that spelling is gone: only the != operator, an exclamation mark followed by an equals sign, shows that two values are not equal.

Python 2 – the <> operator can be used for not equal
Python 3 – only the != operator is used for not equal

Python 2

Input:

print 1 <> 1.0

Output:

False

Python 3

Input:

print(1 != 1.0)

Output:

False

4. Range
Now, let’s turn to ranges. What is a range?

A range is used to generate a list of numbers, which is generally used to iterate over with for loops.

In Python 2, writing x = range(10) assigns a list to x. Checking type(x) returns list, and printing x gives the list of objects [0, 1, 2, 3, 4, 5, 6, 7, 8, 9].

In Python 3, writing x = range(5) assigns a range object to x: type(x) returns range. The numbers are only produced when you iterate over the object or convert it, for example with list().

Python 2

Input:

print range(0,10,1)

Output:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Python 3

Input:

print(list(range(10)))

Output:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

5. Automated migration
So, how do we automate migrating code from Python 2 to Python 3?

Here, we can test with a simple program that adds two numbers in Python.

Python 2

Input:

n1 = 1
n2 = 2
add = float(n1) + float(n2)
print 'sum of {0} and {1} is {2}'.format(n1, n2, add)

Output:

sum of 1 and 2 is 3.0
Now, using the 2to3 migration tool, we can convert the above code.

Input:

n1 = 1
n2 = 2
add = float(n1) + float(n2)
print('sum of {0} and {1} is {2}'.format(n1, n2, add))

Output:

sum of 1 and 2 is 3.0
So here we see the code converted to Python 3 by running 2to3 on the command line.

Python provides its own tool, called 2to3, which runs a set of fixer scripts to translate your Python 2 code into Python 3 (for example, 2to3 -w add.py rewrites add.py in place). While it’s not perfect, it does an amazing job overall. After converting any code, you should go in and manually fix up any remaining problems.

6. Performance
Most of the performance issues have been fixed in this upgrade! When comparing benchmarks between the two versions, the differences are almost negligible.

7. Some major housekeeping changes
Python 2

print is a statement; parentheses are optional.
Strings are byte strings by default; prefix with u to make a Unicode string.
Division of integers always returns an integer: 5/2 = 2.
raw_input() reads a string.
input() evaluates the data read.
Generators advance with generator.next().
Python 3

print is a function; parentheses are compulsory.
Strings are Unicode by default.
Division of integers returns a float: 5/2 = 2.5 (use // for floor division).
raw_input() is not available.
input() always reads a string.
Generators advance with next(generator).
The 2to3 utility helps port Python 2 code to Python 3.
Dictionary .keys() and .values() return views, not lists.
Comparison operators can no longer be used between unrelated types; e.g. None < None raises a TypeError instead of returning False.
The percent (%) string-formatting operator is discouraged; use the .format() method or concatenation instead.
Several of these differences are shown in the short sketch below.
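A minimal Python 3 sketch, written for this summary rather than taken from the original article, that demonstrates a few of the differences listed above:

print(5 / 2)                 # 2.5: integer division now returns a float
print(5 // 2)                # 2: use // for the old floor behaviour

greeting = "héllo"           # strings are Unicode by default, no u'' prefix needed
print(greeting, len(greeting))

d = {"a": 1, "b": 2}
print(type(d.keys()))        # <class 'dict_keys'>: a view, not a list
print(list(d.keys()))        # materialise a list explicitly when needed

gen = (n * n for n in range(3))
print(next(gen))             # 0: generators advance with next(gen), not gen.next()

try:
    None < None              # ordering unrelated types raises TypeError in Python 3
except TypeError as error:
    print("TypeError:", error)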
8. Having problems?
You may encounter an error here and there if you have been working in Python 2.x for some time. That’s fine! Just search for the problem online; it’s almost certain that someone else has run into it too when migrating.

Source: http://snip.ly/mg8vuc/#https://jaxenter.com/machine-learning-travel-industry-147766.html

18
Guide to Securing Your Mobile App against Cyber Attacks
by Tripwire Guest Authors on August 23, 2018

Thanks to the advent of technology, the number of mobile phone users is increasing day by day. You’ll be shocked to hear that by 2019, this number will cross the 5 billion mark! While mobile phones may have made our lives easier, they have also opened up opportunities for many cybercriminals, who are adapting and using new methods to profit from this rapidly growing number of potential victims.

What’s more, apps account for nearly 90% of usage on mobile phones and tablets, making them the number one target for cyber attacks. People are using apps to access everything from online banking to shopping and even controlling home devices.

User data is like a goldmine for cybercriminals, as they can access anything from credit card details to email passwords and user contact lists. Users have also been scammed into downloading malicious adware, and at times, they unknowingly subscribe to fraud paid services.

This is why a lapse in any mobile app’s security is a daunting scenario for app owners and developers. According to a study, more than 60% of companies reported that an insecure mobile app caused a data breach, and 44% of them did not take any immediate action to secure their app against further potential cyber attacks.

So, if you are an app owner or developer, start working with frameworks and tools that provide both ease of use and security to your users. Think about the ways you can avoid the security challenges mentioned above and protect your app from cybercriminals.

To make your tasks easier, I have listed some of the mobile app security best practices that will benefit you as an owner and also provide your users with a safe and secure online experience.

1. Security by design
The first step towards securing any mobile app is to start by designing a threat model from the very beginning. Think like a hacker and identify every shortfall of your app’s design. Only then will it be possible to implement effective security strategies. You can also hire a professional security team to play the fake bad guys. It is a great way to test the security of your app as they throw different vulnerabilities at you.

Furthermore, if you are a growing eCommerce business and want to develop an online shopping app that processes sensitive information such as financial transactions and credit card credentials, consider the consequences of a security breach. Ask yourself: in what ways can user privacy be compromised, and how can you prevent it from happening?

Keeping safety as the number one concern from the very beginning will keep you focused on the right security measures for your app.

2. Mobile device management
Online security starts with the device that the consumer is using to access your app. Each mobile operating system requires a different approach for its security, whether it is an iOS or an Android system. Developers must understand that the data stored on any device can drive potential security threats.

This is why you should consider encryption methods like 256-bit Advanced Encryption Standard to keep data safe in the form of files, databases, and other data sources. Also, when you are formulating the mobile app security strategy, keep the encryption key management in mind.
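As an illustration only (not a prescription from the article), here is a short Python sketch of encrypting and decrypting a blob of app data with 256-bit AES in GCM mode using the cryptography package; in a real app the key would come from the platform keystore rather than being generated inline, and the payload shown is a made-up placeholder.

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# 256-bit key; in production this should live in a secure keystore
# (e.g. the device keychain), never hard-coded or written to plain files.
key = AESGCM.generate_key(bit_length=256)

def encrypt_blob(plaintext, key):
    # AES-256-GCM with a random 96-bit nonce prepended to the ciphertext.
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_blob(blob, key):
    aesgcm = AESGCM(key)
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

secret = b'{"card_token": "placeholder"}'
blob = encrypt_blob(secret, key)
assert decrypt_blob(blob, key) == secret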

Apple, for its part, has strict policy enforcement practices. As an app owner, you can block installation of your app on any device whose security seems compromised.

One of the most effective ways to manage iOS devices is to use a mobile device management (MDM) or enterprise mobility management (EMM) product. Many vendors, such as MobileIron, MaaS360, and Good Technology, offer services in this regard. Apart from this, you can use the Microsoft Exchange ActiveSync protocol as a policy management tool if you are looking for a cheaper and easier-to-use option.

Android phones, on the other hand, are a bit trickier to manage. Since they are relatively cheap compared to iOS devices, they often become a source of security breaches. In the enterprise, you should only be using Android for Work (A4W). This version of Android encrypts the device and separates personal and professional apps into two categories.

With the combination of the right devices, updated mobile operating systems and MDM, you can provide first level security for your mobile app.

3. App wrapping
App wrapping is a methodology that segments your app from the rest of the device by enclosing it in a secure, managed environment. You will automatically get this option if you are working with an MDM provider. Just set a few parameters, and you can segment your apps without writing any code.

4. Strong user authentication
One of the most crucial components of mobile app security is to implement strong user authentication and authorization. You never know who is accessing your app. A seemingly simple question, “Who are you?,” can help secure your device against malware and hackers.

User authentication must cover all aspects of user privacy, identity and session management, and device security. Try to enforce two-factor authentication (2FA) or multi-factor authentication (MFA). You can bring technologies like the OpenID Connect protocol or the OAuth 2.0 authorization framework on board.
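To make the 2FA point concrete, here is a hedged sketch (an assumption-laden example, not something the article specifies) of verifying a time-based one-time password with the pyotp library; the user name, issuer and token-handling flow are placeholders.

import pyotp

# Per-user secret, generated once at enrolment and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The provisioning URI is what an authenticator app scans as a QR code.
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleShopApp"))

def second_factor_ok(submitted_code):
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# After the password check succeeds, verify the one-time code as well.
print(second_factor_ok(totp.now()))   # True for a freshly generated code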

5. Hardening the OS
Another way to secure mobile apps is by hardening the operating system. There is a wide variety of methods in which you can do it. From day one, Apple has done a great job in enforcing security within its operating system. You can use these tools for iOS security:

Read the quarterly reviews of Apple’s security guide.
Check out the latest code samples at Apple’s developer site.
Analyze static code using a commercial tool.
6. Apply security to APIs
Make sure that you use APIs to manage all app data and business logic. APIs are essential tools in the mobile world, and they are the crown jewels of any enterprise. All data, whether in transit or at rest, should be secured.

For data in transit, you can use SSL/TLS with 256-bit encryption. For data at rest, you should secure the origin of the data as well as the device itself.

Remember, each API should have app-level authentication. Make sure you validate who is using the service, and keep sensitive data in memory only, where it can easily be wiped.
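As a rough sketch of app-level API authentication (illustrative only; the endpoint, token store and token value are invented placeholders, and a production service would validate OAuth 2.0 access tokens or signed JWTs instead), here is a minimal Flask endpoint that rejects requests without a valid bearer token:

from flask import Flask, request, abort, jsonify

app = Flask(__name__)

# Placeholder token store for the sketch; a real service would verify
# OAuth 2.0 access tokens or signed JWTs against an identity provider.
VALID_TOKENS = {"s3cr3t-app-token"}

def require_token(view):
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        token = auth[len("Bearer "):] if auth.startswith("Bearer ") else ""
        if token not in VALID_TOKENS:
            abort(401)   # unauthenticated callers never reach the data
        return view(*args, **kwargs)
    wrapper.__name__ = view.__name__
    return wrapper

@app.route("/api/orders")
@require_token
def orders():
    # Serve this only over HTTPS in any real deployment.
    return jsonify([{"id": 1, "status": "confirmed"}])

if __name__ == "__main__":
    app.run()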

Conclusion
When it comes to addressing your mobile application’s security, assume that every mobile device accessing the app is insecure and that hackers can easily capture the data flowing to and from your app. That doesn’t mean you’re overly paranoid.

These assumptions will help you stay on top of your security game, and you will always look out for new ways to harden the security of your mobile app against the most common security failures.

There are many other practices with which you can toughen up the security of your app, but these 6 tips will give you a basic framework that can be applied to any business, irrespective of its size. Which strategies do you use to protect your mobile app against cyber attacks?

Source: https://www.tripwire.com/state-of-security/security-data-protection/guide-securing-mobile-app-cyber-attacks/

19
How machine learning is changing the travel industry
August 24, 2018  Wilco van Duinkerken

Machine learning’s growth continues as it permeates into unrelated industries. Travel booking might not seem like a good fit at first, but Wilco van Duinkerken of trivago explains how ML is innovating the way you find and book your next holiday.

Everyone’s heard how machine learning has huge potential, how it could upend existing systems and change the world. But that only tells us so much — to really understand the potential of machine learning, you’ve got to focus on applications and outcomes. Travel isn’t the first industry that comes to mind when you think about machine learning. However, there are impressive innovations coming out of the travel sector with a foundation in machine learning and AI technologies, which should be an inspiration to other sectors in the future.

Why travel?
The travel industry has changed a lot thanks to the internet. Where we once went to brick-and-mortar travel agents to book a holiday, we now book our flights and accommodation online. Or so you’d think; as recently as 2016, only 33% of people actually booked their hotels online. This is a stunning figure when you consider how much of our work and personal lives have been digitized.

Travel is a deeply personal choice. Where you choose to go on holiday, where you stay, and even which airline you fly with are all choices that say something about you and your personal preferences. For many, browsing a list of hotels in a web browser has never been as good an experience as speaking with a real person in a travel agency or on the phone.

Making improvements to user experience and offering enhanced personalization are two key ways of improving customers’ online travel buying experience. Machine learning presents an exciting opportunity to accelerate this change.

Bringing hotel search to life with personalization
Today’s online consumers are producing unprecedented amounts of data. This ‘data exhaust’ is increasingly being used in innovative new ways to provide personalized services for customers. Companies like Amazon and Netflix have already shown how effective product personalization can be in driving engagement and return visits, and the travel sector is moving in the same direction.

The goal is always to offer the traveler the best possible experience. At a company like trivago, this means optimizing the number of steps (i.e. clicks) it takes for a customer to get to what they’re looking for. Machine learning technologies can achieve this by helping to personalize what the customer sees. Natural language processing can be used on the hotel side to analyze hotel descriptions and customer reviews, as well as isolate the most popular features and key points of feedback. This data can then be fed into a database where it can be matched with existing customer preference data.

The information that hotels input to our platform is only part of what can be used to personalize results. Images accompanying listings can also be analyzed using neural networks, a subset of machine learning. For hotels that don’t have the time or the technical know-how to input all the relevant data, analysis of the images that accompany the listing can also yield valuable data around amenities, ambience, and scenery, all of which can be matched with user preferences to develop a more tailored results page.
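As a hedged illustration of that idea (my own sketch; the file name and model choice are assumptions, not a description of trivago’s system), a pretrained convolutional network can turn a listing photo into a feature vector that downstream code can compare against a user’s preference profile:

import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-18 with its classification head removed, so it outputs
# a 512-dimensional embedding instead of ImageNet class scores.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "hotel_photo.jpg" is a placeholder for an image attached to a listing.
image = preprocess(Image.open("hotel_photo.jpg")).unsqueeze(0)
with torch.no_grad():
    embedding = model(image)      # shape: (1, 512)

# Embeddings like this can then be compared (e.g. by cosine similarity)
# with embeddings of photos the user has previously engaged with.
print(embedding.shape)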

Let’s look at an example of these technologies in action: say a customer wants to see hotels with family-friendly pools. Presenting the customer with hotels that have pools is relatively straightforward, but pools that are specifically family-friendly? That’s much more challenging. To start narrowing down the list, natural language processing can be deployed on hundreds of user reviews, measuring the proximity of words like “clean”, “quiet”, “family” or “safety”. But often the words we’re looking for are not posted immediately next to each other, so it becomes important to understand syntactic relationships and how terms relate to each other. This is something that can only be done through advanced semantic technologies and specialized databases.
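A toy sketch of the proximity idea (written for this post, not trivago’s pipeline; the reviews, signal words and window size are made up) counts how often family- and safety-related words appear near the word “pool” in review text:

import re
from collections import Counter

REVIEWS = [
    "The pool was clean and quiet, perfect for our family.",
    "Great pool area, but the bar next to it got loud at night.",
    "Kids loved the shallow pool; staff kept an eye on safety.",
]

TARGET = "pool"
SIGNALS = {"clean", "quiet", "family", "kids", "safety", "shallow"}
WINDOW = 8   # how many tokens on each side of "pool" to inspect

def signal_counts(reviews):
    counts = Counter()
    for review in reviews:
        tokens = re.findall(r"[a-z]+", review.lower())
        for position, token in enumerate(tokens):
            if token != TARGET:
                continue
            nearby = tokens[max(0, position - WINDOW): position + WINDOW + 1]
            counts.update(word for word in nearby if word in SIGNALS)
    return counts

print(signal_counts(REVIEWS))
# Counter({'clean': 1, 'quiet': 1, 'family': 1, 'kids': 1, 'shallow': 1, 'safety': 1})

A production system would replace the fixed window with real syntactic parsing, as the article notes, but the underlying signal is the same: which descriptive words cluster around the feature a customer cares about.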

The end goal is to make the experience of searching for travel products more a search for an exciting experience, rather than a technical process of selecting features and on/off toggles. Machine learning is critical in helping platforms like trivago isolate the most unique and attractive aspects of a hotel and suggesting those experiences to customers who have already signaled their interest.

Build teams made for machine learning
The conversation around machine learning often focuses on raw computing power, but not enough attention is paid to the significant ways in which we need to change our working patterns. Things that were manual processes not very long ago are now automated. Machine learning systems can generate sophisticated suggestions that were previously not available to teams.

This presents unique challenges and requires new specializations within teams to make the most out of machine learning. It’s not enough to keep working the way you’re used to, whether that’s Agile or Scrum. It’s important to distribute your machine learning resource throughout your teams, making sure there is a shared understanding across product teams of how that resource will be used and what the goals of your machine learning implementation are.

Look after your data
Finally, a word on user data. Machine learning needs user data; there’s no getting around it. Your customers’ data is currency: it’s valuable and it should be respected. It’s important to be upfront and transparent with customers about what data they are providing and what it will be used for. If you present your customers with a clear and transparent choice over what to share and what they stand to gain if they do share, then they will be more open-minded.

Source: http://snip.ly/mg8vuc/#https://jaxenter.com/machine-learning-travel-industry-147766.html

20
AI And Automation In The Cloud -- Seizing The Moment For Digital Transformation
Kristof Kloeckner
Aug 23, 2018, 9:00 am


CTO and General Manager, Technology, Innovation and Automation for IBM Global Technology Services (retired).



There can be little doubt that digital disruption is upon us on a large scale. Entire industries and their business models, as well as society at large are being transformed by the impact of digital technologies. To understand risks and opportunities, we need to look at how the following three forces are coming together to turn the winds of change into a perfect storm.

• An abundance of data providing raw material for insights

• Artificial intelligence (AI) and analytics turning data into actionable insights, enabling automation and augmenting human intelligence

• Cloud as a fundamental change to service delivery and service consumption, as well as an enabler of communities

Let’s look at data and AI first. Data has often been called the new oil, and more recently, AI is being compared to the new electricity. These are apt comparisons that describe how data is turned into insights that power business processes. We can distinguish three broad areas of application:

• Gaining insights from data to optimize the execution of business processes, be it through automation or through improved human decision making

• Improving the knowledge life cycle by curating knowledge and providing advisory tools, enabling subject matter experts to do their jobs better, faster and with greater consistency

• Enabling better, more human-like user interfaces to services, especially through natural language conversations. A major part of this is divining human intent and directing appropriate responses, including enabling self-service

These three areas are connected and can support each other. If handled badly, they compound problems. Currently, automation is most successful when applied to low-level tasks that need to be executed at speed and with consistency, like incident response and problem resolution in IT services. At this level, automation improves overall service levels while also allowing humans to focus on more complex situations. Advisory tools, on the other hand, clearly augment human intelligence, but their underlying algorithms warrant scrutiny to avoid bias. There is an ongoing important debate over how to reconcile the need for transparency and accountability with the desire to achieve a competitive advantage, and an increasing research focus on AI that explains itself. Finally, user interface technology like chatbots is making great strides, but in many situations, people still prefer to talk to humans, and chatbots need to know when to hand over to a person.

AI is clearly playing a strong role in the evolution of the nature of work, and in the relationships between enterprises, their employees and their customers and clients. Success will ultimately depend on making all affected groups stakeholders in the transformation. For instance, expert communities need to see benefits for themselves in contributing their knowledge, and to experience opportunities for professional growth. Clients need to be sure of their ownership of their data. Enterprises need to acquire or build the skills (like data science) to successfully implement AI and to carefully manage expectations. Society needs to see the ethical questions of applying AI addressed, which is increasingly reflected in research agendas.

It is very encouraging to see communities and organizations springing up that address aspects of these challenges (like OpenAI), foster shared progress (for instance, through Kaggle’s competitions) or provide practical advice (like Andrew Ng’s Machine Learning Yearning). In fact, much of the acceleration of technological advances is due to the wide sharing of code and best practices on the cloud and through digital communities. The availability of AI services and developer toolkits on commercial clouds like Google, Microsoft, IBM or Amazon will also speed up adoption. However, the ease of acquiring core technology puts an added responsibility on the implementers of AI solutions to follow best practices and be mindful of potential pitfalls. Otherwise, naïve and faulty implementations will lead to a backlash against the use of AI and imperil its benefits.

Let’s now have a look at the role of the cloud in digital transformation. We have already seen an indication of the role of clouds as platforms for the sharing of technology and best practices in the example of AI communities and AI cloud services. In fact, even though platform as a service is still relatively small compared to infrastructure as a service and software as a service, it increasingly drives the competitive landscape among the major cloud providers, as the speed of innovation becomes as important as savings, if not more so. Interestingly, data services are at the forefront of this dynamic.

The ability to quickly compose services from existing underlying core services is a major attractor and differentiator for clouds. New capabilities (like data or AI services) are available first on clouds, and clouds drive standardization of what services are being adopted and how. This standardization enables automation, and automation is again a prerequisite for manageability at scale and industrial strength service delivery. It is therefore not surprising that major cloud providers are emphasizing their management and automation capabilities, including managed cloud services.

Enterprise use of clouds is now mainstream, which includes a variety of deployment models and multicloud deployments in hybrid clouds. Needs for cloud integration and transportability of cloud solutions give rise to rapid adoption of container-based approaches. IBM’s Kubernetes-based Cloud Private and Google’s announcement of an on-premise version of the Google Kubernetes Engine are examples of this.

Standardization and automation in the cloud are also a great enabler for an accelerated service delivery life cycle through DevOps, including DevOps for AI. This will, in turn, drive faster business cycles.

We have seen that cloud is the underpinning for delivering the data and AI services that fuel digital transformation, and increasingly, cloud growth itself is fueled by data, AI and the ability to optimize business processes. This is a self-reinforcing process whose speed will be determined by the availability of skills in data science and DevOps, but also the ability of all stakeholders to come to an agreement about the intended benefits and the management of risks of the digital transformation.

Source: http://snip.ly/1v4mpy/#https://www.forbes.com/sites/forbestechcouncil/2018/08/23/ai-and-automation-in-the-cloud-seizing-the-moment-for-digital-transformation/#1fa14c514c99

21
Artificial intelligence model “learns” from patient data to make cancer treatment less toxic
Machine-learning system determines the fewest, smallest doses that could still shrink brain tumors.

Rob Matheson | MIT News Office
August 9, 2018

MIT researchers are employing novel machine-learning techniques to improve the quality of life for patients by reducing toxic chemotherapy and radiotherapy dosing for glioblastoma, the most aggressive form of brain cancer.

Glioblastoma is a malignant tumor that appears in the brain or spinal cord, and prognosis for adults is no more than five years. Patients must endure a combination of radiation therapy and multiple drugs taken every month. Medical professionals generally administer maximum safe drug doses to shrink the tumor as much as possible. But these strong pharmaceuticals still cause debilitating side effects in patients.

In a paper being presented next week at the 2018 Machine Learning for Healthcare conference at Stanford University, MIT Media Lab researchers detail a model that could make dosing regimens less toxic but still effective. Powered by a “self-learning” machine-learning technique, the model looks at treatment regimens currently in use, and iteratively adjusts the doses. Eventually, it finds an optimal treatment plan, with the lowest possible potency and frequency of doses that should still reduce tumor sizes to a degree comparable to that of traditional regimens.

In simulated trials of 50 patients, the machine-learning model designed treatment cycles that reduced the potency to a quarter or half of nearly all the doses while maintaining the same tumor-shrinking potential. Many times, it skipped doses altogether, scheduling administrations only twice a year instead of monthly.

“We kept the goal, where we have to help patients by reducing tumor sizes but, at the same time, we want to make sure the quality of life — the dosing toxicity — doesn’t lead to overwhelming sickness and harmful side effects,” says Pratik Shah, a principal investigator at the Media Lab who supervised this research.

The paper’s first author is Media Lab researcher Gregory Yauney.

Rewarding good choices

The researchers’ model uses a technique called reinforcement learning (RL), a method inspired by behavioral psychology, in which a model learns to favor certain behavior that leads to a desired outcome.

The technique comprises artificially intelligent “agents” that complete “actions” in an unpredictable, complex environment to reach a desired “outcome.” Whenever it completes an action, the agent receives a “reward” or “penalty,” depending on whether the action works toward the outcome. Then, the agent adjusts its actions accordingly to achieve that outcome.

Rewards and penalties are basically positive and negative numbers, say +1 or -1. Their values vary by the action taken, calculated by probability of succeeding or failing at the outcome, among other factors. The agent is essentially trying to numerically optimize all actions, based on reward and penalty values, to get to a maximum outcome score for a given task.

The approach was used to train DeepMind’s program AlphaGo, which in 2016 made headlines for beating one of the world’s best human players in the game Go. It’s also used to train driverless cars in maneuvers such as merging into traffic or parking, where the vehicle will practice over and over, adjusting its course, until it gets it right.

The researchers adapted an RL model for glioblastoma treatments that use a combination of the drugs temozolomide (TMZ) and procarbazine, lomustine, and vincristine (PVC), administered over weeks or months.

The model’s agent combs through traditionally administered regimens. These regimens are based on protocols that have been used clinically for decades, grounded in animal testing and various clinical trials. Oncologists use these established protocols to predict how large a dose to give patients based on their weight.

As the model explores the regimen, at each planned dosing interval — say, once a month — it decides on one of several actions. It can, first, either initiate or withhold a dose. If it does administer, it then decides if the entire dose, or only a portion, is necessary. At each action, it pings another clinical model — often used to predict a tumor’s change in size in response to treatments — to see if the action shrinks the mean tumor diameter. If it does, the model receives a reward.

However, the researchers also had to make sure the model doesn’t just dish out a maximum number and potency of doses. Whenever the model chooses to administer all full doses, therefore, it gets penalized, so it instead chooses fewer, smaller doses. “If all we want to do is reduce the mean tumor diameter, and let it take whatever actions it wants, it will administer drugs irresponsibly,” Shah says. “Instead, we said, ‘We need to reduce the harmful actions it takes to get to that outcome.’”

This represents an “unorthodox RL model, described in the paper for the first time,” Shah says, that weighs potential negative consequences of actions (doses) against an outcome (tumor reduction). Traditional RL models work toward a single outcome, such as winning a game, and take any and all actions that maximize that outcome. On the other hand, the researchers’ model, at each action, has flexibility to find a dose that doesn’t necessarily solely maximize tumor reduction, but that strikes a perfect balance between maximum tumor reduction and low toxicity. This technique, he adds, has various medical and clinical trial applications, where actions for treating patients must be regulated to prevent harmful side effects.
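As a rough, self-contained illustration of that trade-off (a toy Q-learning exercise with invented dynamics and parameters, not the Media Lab model), the sketch below learns dose choices where the reward is tumor shrinkage minus a penalty proportional to the dose given:

import random

# Toy Q-learning sketch: states are coarse tumor-size buckets, actions are
# dose levels, and the reward trades shrinkage off against a toxicity penalty.
ACTIONS = [0.0, 0.25, 0.5, 1.0]           # fraction of the full dose
SIZES = list(range(11))                   # tumor-size buckets 0..10
ALPHA, GAMMA, EPSILON, PENALTY = 0.1, 0.9, 0.1, 2.0

Q = {(size, dose): 0.0 for size in SIZES for dose in ACTIONS}

def step(size, dose):
    # Invented dynamics: bigger doses shrink the tumor more, with noise and
    # a small amount of regrowth between dosing intervals.
    shrink = 2 * dose + random.gauss(0, 0.3)
    new_size = max(0, min(10, round(size - shrink + 0.3)))
    reward = (size - new_size) - PENALTY * dose   # shrinkage minus toxicity
    return new_size, reward

for episode in range(5000):
    size = 10
    for interval in range(24):            # planned dosing intervals
        if random.random() < EPSILON:
            dose = random.choice(ACTIONS)
        else:
            dose = max(ACTIONS, key=lambda a: Q[(size, a)])
        new_size, reward = step(size, dose)
        best_next = max(Q[(new_size, a)] for a in ACTIONS)
        Q[(size, dose)] += ALPHA * (reward + GAMMA * best_next - Q[(size, dose)])
        size = new_size
        if size == 0:
            break

# Learned policy: the preferred dose for each tumor-size bucket.
print({size: max(ACTIONS, key=lambda a: Q[(size, a)]) for size in SIZES})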

Optimal regimens

The researchers trained the model on 50 simulated patients, randomly selected from a large database of glioblastoma patients who had previously undergone traditional treatments. For each patient, the model conducted about 20,000 trial-and-error test runs. Once training was complete, the model learned parameters for optimal regimens. When given new patients, the model used those parameters to formulate new regimens based on various constraints the researchers provided.

The researchers then tested the model on 50 new simulated patients and compared the results to those of a conventional regimen using both TMZ and PVC. When given no dosage penalty, the model designed nearly identical regimens to human experts. Given small and large dosing penalties, however, it substantially cut the doses’ frequency and potency, while reducing tumor sizes.

The researchers also designed the model to treat each patient individually, as well as in a single cohort, and achieved similar results (medical data for each patient was available to the researchers). Traditionally, the same dosing regimen is applied to groups of patients, but differences in tumor size, medical histories, genetic profiles, and biomarkers can all change how a patient responds to treatment. These variables are not considered during traditional clinical trial designs and other treatments, often leading to poor responses to therapy in large populations, Shah says.

“We said [to the model], ‘Do you have to administer the same dose for all the patients?’ And it said, ‘No. I can give a quarter dose to this person, half to this person, and maybe we skip a dose for this person.’ That was the most exciting part of this work, where we are able to generate precision medicine-based treatments by conducting one-person trials using unorthodox machine-learning architectures,” Shah says.

The model offers a major improvement over the conventional “eye-balling” method of administering doses, observing how patients respond, and adjusting accordingly, says Nicholas J. Schork, a professor and director of human biology at the J. Craig Venter Institute, and an expert in clinical trial design. “[Humans don’t] have the in-depth perception that a machine looking at tons of data has, so the human process is slow, tedious, and inexact,” he says. “Here, you’re just letting a computer look for patterns in the data, which would take forever for a human to sift through, and use those patterns to find optimal doses.”

Schork adds that this work may particularly interest the U.S. Food and Drug Administration, which is now seeking ways to leverage data and artificial intelligence to develop health technologies. Regulations still need be established, he says, “but I don’t doubt, in a short amount of time, the FDA will figure out how to vet these [technologies] appropriately, so they can be used in everyday clinical programs.”

22
Disrupting Education with Digital Technologies
Written by Namratha C K on August 8, 2018 in App Development, Technology, Web Design

Education, over the years, has gone through various changes. However, we have witnessed some of the most significant developments in the education sector in the past decade, with smart technologies transforming education into newer, more effective ways of learning. One of the most important components of this smart education ecosystem is mobile application development. Since their invention, mobile applications have prepared the ground for various other technologies and signalled that mobile education was ready for widespread acceptance. Starting from mobile applications, the education sector has come far in using digital spaces and technologies to create comprehensive courses and teaching methods that have proved more effective and efficient, irrespective of batch size. So, let’s take a look at how different digital technologies and platforms are being used to impart better and more equal education to every student:

Mobile Apps
Today, children learn to operate a smartphone before they even learn the alphabet. They love using mobile phones because an interactive, audio- and video-enabled device is simply more interesting to them. This is exactly why students like learning from mobile apps, and notebooks are slowly being ousted. Moreover, mobile apps allow students to learn freely, anywhere, anytime. Apps like Khan Academy have exceeded what the world expected from mobile education by providing ‘free education for all’ through easily understandable courses. In school management systems, mobile apps have become quite a handy tool, managing everything from elementary tasks like attendance to tracking a student’s overall performance, and even sending out daily, quarterly and annual reports, with customized access for parents.

Virtual Reality (VR)
The majority of innovators and leaders in the field of education know how important it is to help a student understand the fundamental concepts of what they learn. While there may be other techniques, virtual reality is the most effective because it shows exactly what lies at the core of a concept. Helping students understand the structure of atoms and compounds, or even how a mathematical equation works, can be achieved with a VR set and a compatible phone. Among the three types of VR, fully immersive VR is currently being used in many schools to help children understand and absorb concepts more effectively than ever.

Augmented Reality (AR)
Augmented reality does something similar to virtual reality; the difference between the two is that AR allows students to interact with their surroundings or even create a 3D model of anything, without the external gear that VR requires. What is remarkable about using AR in education is that it hands control over to students to create what they are learning, without the need for an instructor or teacher.

Artificial Intelligence
The applications of artificial intelligence in digital education have shown us that there can be one-on-one interaction between a student and a teacher without the need for real classroom sessions. Although classroom courses are still quite relevant in our education system, artificial intelligence creates personalized study material, tailored tests and progress reports, and identifies weak areas that a particular student needs to work on. Integrated with smart apps, the basic idea of artificial intelligence is to recognize that every student is unique and needs a different method of learning to assimilate concepts. For teachers, on the other hand, AI can handle simple tasks like grading and preparing notes.

Social Media
Social media has reached almost every corner of the world, presenting us with a perfect platform to connect with people and the topics we want to know more about. Given the huge number of people these platforms connect, including students and teenagers, social media platforms like Facebook and YouTube are being used by institutional players to bring comprehensive, quality content to students of all ages and categories. Most mobile app development companies are also asked by clients to create social media channels, given the traffic that reaches these channels every day.

Education is not something that can come from a single type of source, as there is knowledge in anything and everything. We just need to develop methods to extract knowledge from these sources, which is now happening with the rise of digital technologies.

Source: https://www.letsdnd.com/disrupting-education-with-digital-technologies/

23
Ideation for Design - Preparing for the Design Race

Ideation is easy to define. It’s the process by which you generate, develop and then communicate new ideas. Ideas can take many forms: verbal, visual, concrete or abstract. The principle is simple: create a process by which you can innovate, develop and actualize new products. Ideation is critical to both UX designers and learning experience designers.

As Pablo Picasso, the artist, said about his creations; “I begin with an idea, and then it becomes something else.”



Ideation does not need to be beautiful to be effective. Creating ideas is the main point rather than graphic design as you can see here.

There are many types of new ideas, and they commonly arise in the following patterns:

Problem to solution. Find a problem, find a solution – this is, perhaps, the most common form of ideation.
Derivation – where you take an existing idea and then change it (hopefully for the better)
Symbiotic – where you take a group of ideas and combine them to form a single coherent idea
Revolutionary – where you take an existing principle and smash it and derive a totally new perspective
Serendipitous discovery (or accidental discovery) – when an idea turns up when you are in pursuit of something else (penicillin would be a good example of serendipitous discovery)
Targeted innovation – an iterative process where the solution is theorized but the path to it is poorly understood. Repeated attempts are used to create the pathway.
Artistic innovation – a form of ideation which completely disregards “what is practical” and innovates without constraint
Computer aided innovation – where computers are used to probe for solutions and to conduct research
All of these processes can be used by the designer in search of ideas for a project. However, in many cases they are not practical (revolutionary ideation, for example, is generally a once-or-twice-in-a-lifetime Eureka moment rather than a repeatable process) or are ruled out by budget or time constraints (such as targeted innovation or computer-aided innovation).

Thus the designer will seek more practical and prosaic approaches when it comes to ideation including brainstorming, mind mapping, etc.

Ideation on Paper
Almost all ideation techniques can be deployed on paper. Brainstorming and mind mapping, for example, are simply the same process but visualized in different ways.

Thus, in this article, we will examine brainstorming as the key tool for ideation but other tools may be considered on projects to bring about similar results.



Ideation on paper. This is for a blog’s content but the same principles apply for any kind of ideation. Get it down on sticky notes and then organize ruthlessly.

Rules for Initial Ideation
When you are at the start of the ideation process you want to generate ideas in their multitudes. The idea is to follow a few simple rules, as a team, to deliver lots of ideas. These ideas, once the exercise is complete, can then be examined for practical considerations. The rules are as follows:

Prepare the space. Put up posters with user personas, the problem in hand, and any design models or processes that will be used on the project. The more context provided, the easier it should be to come up with ideas.
While initial ideation takes place – there are NO BAD IDEAS – the exercise is to create ideas, not judge them.
Unrelated ideas can be parked for another discussion. They should, however, be written down.
Volume is important; don’t waste time examining any particular idea in depth – just write it down and move on.
Don’t be afraid to use lots of space. Write ideas on sticky notes and then plaster them on everything in the room. This can help participants connect seemingly unrelated ideas and enhance them.
No distractions. Turn off phones, laptops, etc. Lock the door or put a sign outside saying “Do not disturb.” You can’t create ideas when you’re constantly interrupted.
Where possible be specific. Draw ideas if you can’t articulate them in writing. Make sure you include as much data as possible to make an idea useful.
Once the rules are understood, grab your team and get creative. It can help to do a 10-minute warm-up on an unrelated topic to get people thinking before you tackle the problem in hand. Don’t take more than two hours for initial ideation.



Laying down rules at the start of an ideation session will help keep things on track throughout. Don’t be afraid to call people’s attention to the rules if they begin to bend or break them.

Structuring Your Ideas
Once you’ve got some ideas coming, it’s a good idea to group them around specific areas. Some common idea areas include:

Pain Points
Opportunities
Process Steps
Personas
Metaphors
When You Get Stuck
There are also some simple techniques to get the creative juices flowing when the ideas process gets stuck.

Breaking the law. List all the known project constraints and see if you can break them.
Comparisons. Take a single phrase that encapsulates the problem and see if you can find real-world examples of it.
Be poetic. Try to turn the problem into a poem or haiku. Thinking about the word structures can deliver new ideas.
Keep asking “how and why?” - These words make us think and create.
Use laddering. Move problems from the abstract to the concrete or vice-versa to consider them from another perspective.
Steal ideas. If you get stuck on a particular concept, look to other industries and see how they’ve handled something similar. Of course, in the end you should be emulating in your design, not copying.
Invert the problem. Act like you want to do the exact opposite of what you’ve set out to do – how would you do that instead?


Even simple inversions can make us think very differently. Here the inversion of color changes the picture dramatically.

Review and Filter
Once you have a large number of ideas; you then need to review and filter these down to something more manageable. It is at this stage that ideas can be discarded as “bad”, kept as “good” or modified into something more useful. It’s best to carry out this exercise a little while after the initial ideation phase so that people have a chance to reflect on the ideation as well as become less personally attached to the original ideas.

The Take Away
Creating ideas is often best done in groups – though all the techniques above can be carried out by an individual too. The trick is to just create, and to keep doing so for an extended period of time. You can worry about what works and what doesn’t later. Ideation is one of the most fun things a designer can do, but it can also be frustrating if you try to do it by yourself, sat in front of a piece of paper.

24
A PhD should be about improving society, not chasing academic kudos
Julian Kirchherr
Thu 9 Aug 2018 10.53 BST

Too much research is aimed at insular academic circles rather than the real world. Let’s fix this broken system.

When you look at the stats, it’s hard not to conclude that the current PhD system is fundamentally broken. Mental health issues are rife: approximately one-third of PhD students are at risk of having or developing a psychiatric disorder like depression. The high level of dropouts is similarly worrying – and possibly another symptom of the same problem. Research suggests that on average 50% of PhD students leave graduate school without finishing – with numbers higher at some institutions.

What’s more, aspiring scientists who manage to finish usually take much longer than originally planned. For instance, a PhD in Germany is supposed to take three years, according to university regulations, but most students need five years to complete one. In the US, meanwhile, the average completion time for a PhD in education sciences surpasses 13 years. The result is that in most countries, PhD students usually don’t graduate until they are well into their 30s.

Although 80% of science students start their PhD with the intention of pursuing a career in science, their enthusiasm typically wanes to the point that just 55% plan to continue in academia when nearing graduation. In any case, most are unlikely to be able to continue. One study found that for every 200 people who complete a PhD, only seven will get a permanent academic post and only one will become a professor.

Many academics enter science to change the world for the better. Yet it can often feel like contemporary academia is more about chasing citations. Most academic work is shared only with a particular scientific community, rather than policymakers or businesses, which makes it entirely disconnected from practice.

Take my example. I research how to mitigate the social impact of hydropower dams. My core paper on this topic has been cited three times so far. I read in the promotions guidelines at my university that if I want to be promoted from assistant to associate professor I need to accumulate significant citations. As a result, I have now published a paper in which I reviewed 114 definitions of a current academic buzzword, circular economy, to propose the 115th definition of this term.

In academic terms, this paper is a hit: it’s been cited 39 times since its publication. It is in the top 3% of all research outputs ever tracked by Altmetric, a tool measuring a paper’s influence among academics on social media. People I’ve never met before come up to me at conferences to congratulate me. But I’m not celebrating: this paper symbolises everything that’s broken in the academy. Academics love definitions, not solutions.

I wish the academy would incentivise scholars to improve society, not chase citations. I want us to reimagine a PhD that is designed not to win kudos within the academic community, but rather aimed at discovering something new that will be useful for practitioners and have real social impact.

This new PhD would see students go out into the field and talk to practitioners from day one of their research, rather than spending the first year (or more) reading obscure academic literature.

Students would then co-create the content of their theses with their supervisor as well as practitioners in their field of research.

Instead of labouring over every sentence of a 100,000-word dissertation locked away in an office, PhD students would share a concise 2,000-word draft with those practitioners to collect targeted feedback. They would finish their PhD when they have made a difference in the real world.

It’s time to disrupt the current PhD system to make it better for early-career researchers. We need to move away from a self-referential culture in which academics talk only to their peers. Confucius said one of the core principles of the academy should be as follows: “The essence of knowledge is, having it, to apply it”. Reminding ourselves of this may help to fix the broken PhD machine.

Source: https://www.theguardian.com/higher-education-network/2018/aug/09/a-phd-should-be-about-improving-society-not-chasing-academic-kudos?CMP=share_btn_fb

25
4 Ways UX Research Can Be Used To Improve An Old App


According to recent research by Gartner, 20 percent of companies will be letting go of their apps come 2019.

Does that sound shocking to you?

It certainly does when you consider that thousands of apps are added to the app stores each month… but not so much when you realize that the average app is no longer used within a month of initially being downloaded.

When you first began to bring your mobile app idea to life, you went through the rigmarole of making a plan, speaking with stakeholders, observing your users, conducting interviews, testing your product and analyzing your findings.

(Or at least, you should have.)

What I just described is only some of what’s involved in conducting thorough UX research prior to the completion of your app. UX research is the process of observing, understanding and analyzing your users and how they interact with your product. And it’s very important for knowing which direction to take in your app’s development and design.

What some companies don’t realize is that, even after the research is done and the app is built and released, your work is not over.

It takes time and money to market your product, fix bugs and provide regular updates and customer support.

Businesses face this reality every day, and many eventually have to come to terms with the fact that their mobile app just does not make sense long-term.

That’s what I’m here to discuss in this piece. I want to talk about taking an app you already have and making it even better.

So how does UX research come into play for an existing app, you ask? Well, pretty much the same way it did the first time around. Therefore, if you went through the process then, you have a leg up now.

The difference is that this time around you want to conduct a checkup on your app and make the necessary adjustments to boost its performance and reach. A user’s needs and preferences are in a constant state of evolution, and your app should evolve with them. Embracing this as a regular activity is how you stay competitive, re-engage your users and improve your product.

Let’s get to it. Here are four steps to improving an existing app using UX research:




Just as a business conducts quarterly reviews of its employees, you should consider conducting quarterly reviews of your app.

You need to know how things are holding up from a business perspective first and foremost, and how your app is contributing to an overall organizational strategy.

Do you remember why you made this app in the first place? What are the primary metrics you’re looking to influence? What role does this app play in driving your revenue? What is the purpose of redesigning and rebuilding this app? Do you want to inspire more engagement and loyalty from your users? Do you want to increase revenue? Do you want to decrease churn? Do you want to reignite buzz surrounding your brand?

Here’s a reminder for you: A goal only works when you work at it. If you’re reading this post, you likely know that, but you probably also know that it’s easy for goals to veer off track if you don’t occasionally reel them in.

A good goal includes direction and a clear vision, but you can only ensure you’re staying on track if you revisit that goal periodically. Simply put, you need to check in.

Reflecting on your goals helps to reconfirm the why of your app and of your particular sales/marketing approach.

UX research can help you with this. If you’re striving to increase retention, UX research can help you identify areas of confusion and frustration within the app. From that insight, you can redesign the app to deliver a cleaner and more intuitive experience.

By speaking with existing users, analyzing data and re-testing your product, you can develop a better understanding of what has been working and what needs to be changed. And you have to be objective and practical about it.

That leads us to the next step…

Reconfirm Your Target Audience


How well do you know your users and their behaviors?

What do their actions within your app tell you? Do you even know what those actions are?

This is the sort of information you should be gathering in your UX checkup.

Reach out to and interview existing users to uncover patterns and commonalities that can offer further insight into how you can improve your app. You’re looking for specific information, such as what devices they spend the most time on, what devices they use your app on, what they primarily use the app for, and how often they open certain features. You want to know as much as you can about your typical user so that you can cater to them in any subsequent updates.

The best way to approach this step is to have an open mind. You may have released your app thinking it would solve one problem, but your users may be telling you that it helps them with completely unrelated tasks.


One way to uncover what they want and how they use your app is to set up one-on-one interviews. In this type of meeting, you’re able to focus your attention on each subject individually and really get a sense of their beliefs and feelings about your product. You can ask specific questions to ascertain the thoughts and feelings of a typical user, watch how they interact with your app, identify pain points, and then apply those findings to your app update. You can also pick up on some telling speech and body cues that you may not get using other approaches.

But be careful when listening to your customers and what they are telling you, because they may not exactly mean what they say…

I’ll explain further in this next section.


High competition, low adoption rates, and below-par engagement rates can all contribute to a company’s decision to ultimately scrap its app.

Maybe for your business, scrapping your app is a bit too rash, but how about some of its features?

This is where testing and monitoring can reveal major insights.

First, there’s moderated and unmoderated usability testing. Both allow you to observe how a participant uses your app, but the difference lies in the extent to which you’re able to interact with the participant. Moderated testing takes place either in a testing environment with a moderator on-site, or remotely, with the user in their natural environment and the moderator calling or logging in from a separate location. Unmoderated testing, as you probably guessed, means that you cannot interact with the participant at all during the study.

You have to be careful though.

To speak to that warning I mentioned in the previous section, we humans are susceptible to biases and errors, and this can lead to inaccurate UX research findings. Biases can be subconscious or conscious, but either way they can and will occur on the part of all parties involved in your study.

One way to mitigate this data flaw is to let your users show you, rather than tell you, how they use your app. This can be done with an analytics monitoring tool like Appsee that can track and record the movements and actions of your users within the app. This is a more reliable way to gain the insights you’re looking for, and the data you get can answer questions you may not even have thought to ask.

Analytics monitoring leaves little room for biased remarks and opinions from a participant who knows they’re part of UX research. It also leaves little room for misinterpretation on the part of the researcher who is looking for specific answers.

With more accurate data, you can make better adjustments to your app.

After assessing your users’ behaviours in real-time, you should be able to determine which features of your app are most important, which ones are confusing or frustrating your users, and what new features could be added to make your app even better.
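To make this step concrete, here is a minimal Python sketch of the kind of analysis you might run on a hypothetical CSV export of in-app events (one row per event, with user_id, screen and event columns). The file name and column names are illustrative assumptions, not the export format of Appsee or any specific tool.

import csv
from collections import Counter, defaultdict

# Hypothetical export of in-app events: user_id, screen, event
feature_usage = Counter()       # how often each screen/feature is opened
unfinished = defaultdict(set)   # users who started a flow but never completed it

with open("app_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["event"] == "screen_view":
            feature_usage[row["screen"]] += 1
        elif row["event"] == "flow_started":
            unfinished[row["screen"]].add(row["user_id"])
        elif row["event"] == "flow_completed":
            unfinished[row["screen"]].discard(row["user_id"])

print("Most-used features:", feature_usage.most_common(5))
for screen, users in unfinished.items():
    print(f"{screen}: {len(users)} users started but never finished")

Features that many users start but never finish are the first candidates for redesign; features almost nobody opens are candidates for simplification or removal.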


While the point of this UX checkup is to ensure that all aspects of your mobile app still work as intended and that you’re giving your customers what they REALLY want, it’s important to throw a little fun into the mix!

Your app review is a true discovery session, so use this opportunity to get creative. What more can be done to make your product the best there is?

You may have heard of Coca-Cola’s 70-20-10 rule that the company applies to their marketing practices. The rule states that 70 percent of their efforts are to go into low-risk activities, 20 percent of their time and money are to be spent on medium-risk activities, and 10 percent is reserved for high-risk, innovative practices.

This same mentality can be applied to anything, especially your app. People use your app to complete daily tasks, to receive special offers, or for entertainment. If you’re solving one of those problems, great!

But how can you take that to the next level and really wow your audience?

An example of an app that took a chance with a high-risk, innovative experience is Instagram. When it launched a cleaner, more agile and modern look in 2016, it was met with plenty of backlash. The shift in design between the original and the new experience was quite significant.

But it worked.

Three months after this redesign, Instagram rolled out Instagram Stories, which changed its entire business; today, Instagram has more than 800 million monthly users.

As you think about your own high-risk experiences, I’ll leave it to you to determine exactly what you should try. (Hint: It will require more planning!)

Conclusion
So there you have it: four easy steps you can follow to give your app the attention it deserves! Just as a car requires maintenance checks to ensure all parts are in working order, your mobile app needs a checkup too—and most likely a UX tuneup.

Take this UX research process as seriously as you did when you first started your app build. Have a plan in place, execute it and analyze the results.

If you make UX research a regular practice, you’ll begin to understand why some apps make it while others fall into the abyss.


Read more at https://www.business2community.com/mobile-apps/4-ways-ux-research-can-be-used-to-improve-an-old-app-02094584

26
Teaching & Research Forum / Machine learning methods (infographic)
« on: July 25, 2018, 10:48:29 AM »
Machine learning methods
by Alan Morrison and Anand Rao

What’s the right algorithm for the task? Our visual primer shows the most common ones in use and the business problems they solve.

Artificial intelligence (AI) and machine learning are a hot topic in the enterprise, with company leaders having high hopes for how they can be used to improve and automate business processes. In fact, some 54% of organizations are making substantial investments in AI today, and that number jumps to 63% in three years, according to our 2017 Global Digital IQ Survey.

So how will AI solve business problems, like helping you figure out why you’re losing customers or assessing the risk of a credit applicant? It depends on a number of factors, especially the data you are working with and the type of training that will be required. Learn about the most common algorithms and their use cases below.
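As a concrete illustration of how one of the most common algorithms maps onto the churn problem mentioned above, here is a minimal Python sketch using logistic regression. The data file and column names are hypothetical stand-ins, not taken from the infographic.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical customer table: usage/billing features plus a 'churned' label
df = pd.read_csv("customers.csv")
X = df[["monthly_usage", "support_tickets", "tenure_months", "monthly_bill"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score held-out customers by predicted churn risk
probs = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", roc_auc_score(y_test, probs))

A tree-based model or gradient boosting could be swapped in the same way; which algorithm wins usually depends on the data, which is exactly the point of comparing them.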

For an introduction to machine learning, see the first infographic in this series. To get a glimpse into the future of machine learning and how humans will play a critical role in its development, see our machine learning evolution infographic.

More details in: http://usblogs.pwc.com/emerging-technology/machine-learning-methods-infographic/#20278



27
Gaming addiction: The ‘one-more level’ conundrum
Gaming disorder is now classified as a ‘mental health condition’ by WHO. Should you panic if you are a gamer?

On a humid Thursday afternoon, the NxGT Gaming Lounge in Delhi University’s north campus area is engulfed in digital din. There is a mix of commentary and stadium crowd cheers from FIFA 18, of bodies being smashed against the ring in WWE 2K18, and of bullets and weapon recoil from Far Cry 5, Call Of Duty: Black Ops III and Counter-Strike.

The lighting at the 24x7 pay-and-play arcade is dim, and the air-conditioning is on full blast to keep the multiple PlayStation 4s and PCs cool. On any weekday, you will find plenty of gamers seated on plush black sofas and ergonomic gaming chairs. They remain fixated on the big TV screens in front of them, their fingers moving swiftly on the joysticks. They get up only to stretch their legs or use the toilet.

In a far corner, Ivansh Jayaswal, 26, is plotting his next move on Dota 2, a popular multiplayer battle-arena video game. Like most others there, he seems unperturbed that the World Health Organization (WHO) has classified “gaming disorder” as a mental health condition. “I heard about it, but I didn’t think much. I guess most parents now will be strict towards the gaming time their children get. I’m too old for this,” he says, giving us a 5-minute crash course on the game and how one can earn using in-game rewards from Dota. “You can make a good amount of money from this.” One example is The International, an annual Dota 2 eSports tournament organized by Valve, the game’s developer. Its eighth edition is expected to have a prize pool of approximately $20 million (around ₹137.5 crore).

Jayaswal, who works with a telecom start-up, visits the lounge two-three times a week, spending 6-7 hours on the PC. “I’ve been playing for the past 10-12 years now. It’s a stress-buster for me,” he says.

Science and sensibility

At first glance, Jayaswal can be mistaken for someone with a gaming disorder. His eyes remained glued to the screen even when he was talking to us. But can he be called an addict? According to WHO’s International Classification of Diseases (ICD), gaming disorder is a “pattern of gaming behaviour (‘digital-gaming’ or ‘video-gaming’) characterized by impaired control over gaming, increasing priority given to gaming over other activities to the extent that gaming takes precedence over other interests and daily activities, and continuation or escalation of gaming despite the occurrence of negative consequences.” The ICD, a Reuters report says, now covers 55,000 injuries, diseases and causes of death.

Medical experts say that while WHO’s move opens up a window for research on the neuropsychological effects of gaming, it is important to remember that there is a difference between gaming and gaming disorder. “We need to be cautious. You can be a gamer and you might not be addicted to it all. What this diagnosis has done is that it has put a label (on addiction). This is a category of disorder which did not exist. Now we have an official diagnosis for such patients. But we need to keep this distinction in mind: Those who need help must be provided help, but those who don’t need help should not be stigmatized just because they are gaming or using gaming in a healthy way,” says Y.P.S. Balhara, associate professor, psychiatry, and in charge of the Behavioural Addictions Clinic at the All India Institute of Medical Sciences (Aiims), Delhi.

The clinic, started in 2016, sees cases ranging from excessive use of technology and social media to addiction to TV series. Gaming addiction, Dr Balhara says, has emerged as the second most common trait. One case involved an 18-year-old Delhiite who was gaming for up to 14 hours daily. He would not leave his room, and this started affecting his studies. When the parents intervened, he became violent. That’s when they took him to the clinic. “This is how the problem manifests. Once it reaches such levels, it’s difficult to manage,” adds Dr Balhara.

This can adversely affect studies, work and sleeping habits. The last because a large part of the multiplayer gaming audience is based in Europe, and Indian gamers often change their sleep routines to match timings with other gamers. “Sometimes, the participant is not communicating with offline friends or parents, but with his or her online team players. This makes them say that their interaction level has not gone down, when it has actually shifted from offline to online,” says Manoj Sharma, professor of clinical psychology and coordinator for the SHUT (Service for Healthy Use of Technology) Clinic at the National Institute of Mental Health and Neurosciences (Nimhans), Bengaluru, over the phone.

Gamers Inc.

Globally, the digital gaming industry is one of the fastest growing sectors. India alone is home to a digital gaming market that is expected to be worth billions of dollars by 2021. Some industry experts believe that while WHO’s classification will not affect the sector, it will help raise awareness among parents about their children’s digital lives. “There are millions of gamers on this planet who lead normal lives despite playing games for a few hours daily, especially in countries like Korea and China. The gaming industry would largely be unaffected, although with better tools now in smartphones to monitor kids’ screen time, even parents have a way of managing their child’s gaming behaviour,” says Rajan Navani, vice-chairman and managing director, JetSynthesys, a Pune-based digital entertainment and gaming company, on email.

“Compulsive gaming, where you can’t survive without it, and you need it to function properly, is the issue, not gaming itself,” says Ishaan Arya, 27, a freelance gaming specialist from Bengaluru who spends 6-7 hours a day gaming and live-streaming his gameplay.

“One important thing about video games is the huge social aspect. People are constantly interacting with others and meeting new people. They actually meet these people who turn into close friends and confidants…. But doing anything too much can be bad and that applies to gaming too,” adds Arya.

It goes without saying that gaming need not turn into an addiction. Take 28-year-old Felix Arvid Ulf Kjellberg, a YouTube sensation known by his online name PewDiePie. Five years ago, Kjellberg, a Swedish game commentator, became the most subscribed user on YouTube, with more than 60 million subscribers. “He is one of the world’s most famous streamers. I think he plays for about 12-14 hours a day. But the fact that he monetizes it makes it his job and absolves him from being an addict,” says Aditya Deshbandhu, an independent researcher from the University of Hyderabad, who focuses on new media studies, digital culture and the emerging field of video-game studies.

Research at the Center for Computer Games Research at the IT University (ITU) of Copenhagen, Denmark, suggests that the complexity surrounding gaming addiction may have something to do with the methodology used.

In recent interview studies with supposed “video-game addicts”, researchers found that they did not experience the severe negative consequences typically associated with addiction. The reason they are “wrongly labelled” as addicts is that current questionnaires designed to measure video-game addiction are based on substance-addiction ones. “It is obviously problematic for adults to use heroin or alcohol to alleviate negative moods. However, playing video games with your friends to achieve the same ends is not necessarily bad; it may be just what is needed. Questions such as ‘do you use video games to make you feel better?’, according to our research, tend to overestimate the negative consequences of video games,” says Rune Kristian Lundedal Nielsen, a psychologist and assistant professor at ITU, on email.

Source: https://www.livemint.com/Technology/uVcfmh0mFO2DLiTRq3MAxN/Gaming-addiction-The-onemore-level-conundrum.html

28
AI applications that are human-centred, unbiased, fair
Responsible artificial intelligence prioritizes ethics above anything in development of AI applications

Even fairly cautious predictions suggest that artificial intelligence, or AI, will reshape our workforces, redesign business processes and give rise to new services in a way that we are only starting to imagine. At Accenture, we believe the societal impact of AI will be huge, but if deployed responsibly, it will be overwhelmingly beneficial, too.

Responsible AI
The core of responsible AI is a recognition by businesses, governments and technology leaders that with the benefits of AI comes a duty of care to manage any adverse consequences. Responsible AI prioritizes ethics above anything else in the development of AI applications. Economic growth is, of course, an end goal of AI, but it must be pursued in a way that empowers humans and ensures communities thrive. To meet this objective, transparency is critical and AI must comply fully with all relevant regulations in the locations where it is deployed.

The negative consequences of mismanaging the transition to our AI-enabled future are hard to overstate. Even in our relatively immature era of AI deployment, we have seen how badly things can go wrong when AI applications are not sufficiently trained or tested: self-driving cars cause accidents, AI bots develop gender and racial bias and insensitivity, and rogue algorithms create chaos.

Putting AI to the test
How can we ensure that AI decision-making is valid, safe, reliable and, above all, ethical? One of the key methods will be establishing a robust framework for teaching and training AI applications. Just as parents need to educate their children on societal norms, values and behaviours, so, too, will AI applications need to be trained and validated to ensure they are aligned with our values and societal goals—the applications will not know these things implicitly. Businesses and their technology partners, therefore, need to build a robust test framework, bespoke to AI applications, to guarantee decision-making is transparent, explainable, fair and non-discriminatory.

It is important not to underestimate the challenges of building such a test regime. AI software is made up of many components, some of which change over time. Software engineers need to consider, for example, how best to process and verify the vast amounts of structured and unstructured data that fuel AI applications. Additionally, engineers will need to select which AI or machine learning algorithm to use, and then evaluate the accuracy and performance of the learning models to ensure ethical and unbiased decisioning and regulatory compliance. Engineers will also need to build new test and monitoring processes that account for the data-dependent nature of AI systems.

Teach-and-test framework
One way to simplify this task is to divide it into two. In this approach, engineers will first carry out a “teach” stage: processes to train the system to produce outputs by learning from training data. This stage tests the performance of various algorithms, allowing engineers to select the best-performing model to be deployed in production. Next is the “test” phase. Here, engineers check the accuracy and performance of the system by validating outputs both on test and production data.
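The two stages described above correspond closely to standard model-selection practice. Below is a minimal Python sketch of that idea, offered as an assumption-laden illustration rather than Accenture’s actual framework: the “teach” stage compares candidate algorithms by cross-validation on training data, and the “test” stage validates the chosen model on data it has never seen.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Stand-in data; in practice this would be the application's own labelled data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# "Teach" stage: train candidate models and pick the best by cross-validation
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
cv_scores = {name: cross_val_score(m, X_train, y_train, cv=5).mean()
             for name, m in candidates.items()}
best_name = max(cv_scores, key=cv_scores.get)
best_model = candidates[best_name].fit(X_train, y_train)

# "Test" stage: validate the selected model on held-out data before deployment
print(best_name, "held-out accuracy:",
      accuracy_score(y_test, best_model.predict(X_test)))

In a production setting the “test” stage would also monitor the deployed model on live data, since the data-dependent behaviour described above can drift after release.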

What’s important across the teach-and-test framework is that businesses and their technology partners should not only assess whether the AI application is effective in meeting a given business goal (enhancing customer experiences, for example), but also that the data used to train the system is unbiased, high-quality and accurate. Importantly, the framework also needs to include a mechanism to test the ability of an AI system to explain its decisions logically.

At Accenture, for example, we developed our own teach-and-test framework and applied it when we built a sentiment analysis solution. The framework was critical in allowing us to develop unbiased training data, which guaranteed the application had the right balance of different sentiments across the social media, news and other data sources it would be expected to analyse. Significantly, the framework also allowed us to accelerate the model training time by an astounding 50%.
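A small but concrete part of “developing unbiased training data” is simply checking that no sentiment class or data source dominates the training set. The sketch below assumes labelled examples carrying a sentiment label and a source field; the fields and data are hypothetical, since the actual Accenture tooling is not public.

from collections import Counter

# Hypothetical labelled training examples: (text, sentiment, source)
training_data = [
    ("great product", "positive", "social_media"),
    ("terrible support experience", "negative", "news"),
    ("delivery was on time", "neutral", "reviews"),
    # ... thousands more in a real data set
]

total = len(training_data)
by_sentiment = Counter(sentiment for _, sentiment, _ in training_data)
by_source = Counter(source for _, _, source in training_data)

for label, count in by_sentiment.items():
    print(f"sentiment {label}: {count / total:.1%} of examples")
for source, count in by_source.items():
    print(f"source {source}: {count / total:.1%} of examples")
# A heavily skewed distribution here is a signal to rebalance before training

Rebalancing before training is far cheaper than discovering, after deployment, that the model only understands the tone of one data source.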

The most important work that lies ahead of us now is to ensure that the AI revolution works for everyone and not just for a few. Accenture believes in building AI applications that are human-centred, unbiased and fair. And the best way to do that is to apply a rigorous teach-and-test framework while developing new applications, so that ethical outcomes are as certain as good business outcomes.

Source: https://www.livemint.com/AI/5ixVrSb5hAn66jAloTt4cJ/AI-applications-that-are-humancentred-unbiased-fair.html

29
AI Has Given Organizations The Tools to Make Intelligent Decisions
AUTHOR: MALAVIKA SACCHDEVA - JULY 2, 2018

Artificial Intelligence is an important component of the end-to-end digital revolution. There is tremendous interest across organizations in how AI can be used effectively and in the role it can play in their operations. The availability of huge compute power at lower cost, coupled with large application data sets, has enabled and propelled the use of AI. Currently, businesses have the unique opportunity of being at the helm of this shift within their industry, provided they align with the imperatives of the digital world.

According to Abhay Pendse, Chief Architect, Corporate CTO, Persistent Systems, organizations have introduced Machine Learning and AI in their back-office operations. Large-scale processing of handwritten and typed documents using OCR and machine learning algorithms has sped up the digitization of back-office work. These data sets are now being used to generate actionable insights and predictions using AI. AI analysis and insights are also being used to make the automation of repetitive tasks more intelligent and effective (Intelligent Automation).

Manufacturing industries are using AI to predict maintenance cycles and manage inventory effectively, directly improving productivity and capacity.

Organizations are also using AI effectively to detect, identify, isolate and stop cyber-security threats. Insurance companies are using AI models for fraud detection, resulting in operational improvements and significant cost savings.

In the healthcare space, AI is used to process X-rays and other medical images to detect anomalies, giving doctors vital data points and supporting more accurate diagnoses. Some life-sciences companies are using AI on huge data sets to speed up the discovery of new drugs, which can eventually benefit millions of people.

“AI has given organizations the tools to make intelligent decisions and the ability to predict based on data analysis. This ability given by AI will continue to improve and drive organizational efficiencies to new levels” he added.

According to Dr. Vijay Srinivas Agneeswaran, Senior Director, Technology, SapientRazorfish, AI has helped improve operational/marketing efficiencies for organizations:

- Organizations are able to reach consumers better and give them what they want, meaning marketing efficiency has improved. This is purely due to AI-based recommendation and targeting systems.

- Operational efficiency is also improving, whether through risk analytics or, in certain cases, optimization approaches driven by AI/ML.

Challenges faced by companies using AI in India:

It is critical to understand that AI practices are already being implemented in today’s business environment via analytics software, automation services and even mobile assistance. “The real challenge is keeping up with the global market. India is lagging behind in the development of AI as compared to other nations like the US and China. Similar to other products of globalization, AI in India is also a side product of globalization which is becoming widely available,” said Ritesh Gandotra.

Nowadays, organizations are exploring the possibilities of AI for their businesses, but at the same time they remain skeptical. Sahil Chopra, CEO and Founder, iCubeswire, said that there is no reason for them to refuse to bring it into their business, but the challenge arises when it has to be strategically integrated and seamlessly incorporated into the work process. Organizations may not be aware of the ideal way to utilize AI due to insufficient information. Understanding how to use AI is not very difficult, but it certainly isn’t easy either. Another challenge that hampers AI integration could be self-doubt or a lack of the trust required for its incorporation. Many organizations still bank on traditional methods of dealing with customers and do not wish to explore new arenas, even when the chances of success are higher than those of failure.

According to Abhay Pendse, one of the most important challenges faced by companies in India is the unavailability of people with the right data science skills. With only a small number of good data scientists available to do AI work, companies need to work with universities in India to develop skilled data scientists, as well as develop in-house training programs to train employees on core data science skills.

All of these challenges can only be addressed by developing a company-wide digital transformation strategy. The strategy can start with the stakeholders who keep the business alive: the customers. “Also, being consistent and transparent is the key. The organisations must keep employees informed and involved through the whole process of digital transformation. Empower them and paint them a future they can all work towards. The organizational structure should be fluid too, because the new frontier of technology, data and the customer experience will require it to be so,” said Ritesh Gandotra.

Sahil Chopra aptly concluded by saying that there is no set blueprint for overcoming these challenges; one has to conduct thorough R&D on AI, which is indeed a forward-looking approach, and align its usage with the brand and its end goal.

Source: https://www.dqindia.com/ai-given-organizations-tools-make-intelligent-decisions/

30
Public Health / Effective Health Benefits of Guanabana Fruit
« on: June 25, 2018, 10:16:28 AM »
Effective Health Benefits of Guanabana Fruit
William Cotter
Published on June 23, 2018
Soursop…? What a funny name! Well, this beautiful fruit with loads of nutritional value is also called Guanabana.

This awesome fruit can be hard to locate, but do your homework and call around to your neighborhood organic grocery store to see if they stock it or know where it can be purchased, as it comes from very tropical regions.

Learning about this exotic fruit can be helpful in your plan to change your diet or broaden your horizons on different foods to eat, which can be extremely helpful to your palate!

Learn about the:

Anti-Bacterial Properties
The Benefits as They Relate to the Treatment of Cancer
The High Carbohydrate Content
Sedative Effects
Back Pain Treatment
and even Uric Acid Treatments!
Yes! All from one fruit!

Source: https://www.linkedin.com/pulse/effective-health-benefits-guanabana-fruit-william-cotter/?lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BCTQ94HOeS8apwvmjPSwqbg%3D%3D

Pages: 1 [2] 3 4 5