Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - farzanaSadia

1
Just the mention of AI and the brain invokes pictures of Terminator machines destroying the world. Thankfully, the present picture is significantly more positive. So, let’s explore how AI is helping our planet and at last benefiting humankind. In this blog on Artificial Intelligence applications, I’ll be discussing how AI has impacted various fields like marketing, finance, banking and so on.

If you’re new to AI make sure to check out this blog on what is AI.

The various domains which I’ll be covering in this blog are:

AI in Marketing
AI in Banking
AI in Finance
AI in Agriculture
AI in HealthCare
Artificial Intelligence Applications: Marketing
Marketing is a way to sugar coat your products to attract more customers. We, humans, are pretty good at sugar coating, but what if an algorithm or a bot is built solely for the purpose of marketing a brand or a company? It would do a pretty awesome job!

In the early 2000s, if we searched an online store for a product without knowing its exact name, finding it could be a nightmare. But now when we search for an item on any e-commerce store, we get all possible results related to it. It’s like these search engines read our minds! In a matter of seconds, we get a list of all relevant items. An example of this is finding the right movies on Netflix.


One reason why we’re all obsessed with Netflix is that it provides highly accurate predictive technology based on customers’ reactions to films. It examines millions of records to suggest shows and films that you might like based on your previous actions and choices of films. As the data set grows, this technology is getting smarter and smarter every day.

With the growing advancement of AI, in the near future it may be possible for consumers on the web to buy products by snapping a photo of them. Companies like CamFind and their competitors are already experimenting with this.


Artificial Intelligence Applications: Banking
AI in banking is growing faster than you might think! A lot of banks have already adopted AI-based systems to provide customer support and to detect anomalies and credit card fraud. An example of this is HDFC Bank.

HDFC Bank has developed an AI-based chatbot called EVA (Electronic Virtual Assistant), built by Bengaluru-based Senseforth AI Research.

Since its launch, Eva has addressed over 3 million customer queries, interacted with over half a million unique users, and held over a million conversations. Eva can collect knowledge from thousands of sources and provide simple answers in less than 0.4 seconds.


The use of AI for fraud prevention is not a new concept. In fact, AI solutions can be used to enhance security across a number of business sectors, including retail and finance.

By tracing card usage and endpoint access, security specialists are more effectively preventing fraud. Organizations rely on AI to trace those steps by analyzing the behaviors of transactions.

Companies such as MasterCard and RBS WorldPay have relied on AI and Deep Learning to detect fraudulent transaction patterns and prevent card fraud for years now. This has saved millions of dollars.

Artificial Intelligence Applications: Finance
Financial ventures have long relied on computers and data scientists to determine future patterns in the market. Trading depends mainly on the ability to predict the future accurately.

Machines are great at this because they can crunch a huge amount of data in a short span. Machines can also learn to observe patterns in past data and predict how these patterns might repeat in the future.

In the age of ultra-high-frequency trading, financial organizations are turning to AI to improve their stock trading performance and boost profit.


One such organization is Japan’s leading brokerage house, Nomura Securities. The company has been relentlessly pursuing one goal: to analyze the insights of experienced stock traders with the help of computers. After years of research, Nomura is set to introduce a new stock trading system.

The new system stores a vast amount of price and trading data in its computer. By tapping into this reservoir of information, it will make assessments. For example, it may determine that current market conditions are similar to those of two weeks ago and predict how share prices will change a few minutes down the line. This will help traders make better decisions based on the predicted market prices.

Artificial Intelligence Applications: Agriculture
Here’s an alarming fact: the world will need to produce 50 percent more food by 2050 because we’re literally eating up everything! The only way this is possible is if we use our resources more carefully. That said, AI can help farmers get more from the land while using resources more sustainably.

Issues such as climate change, population growth and food security concerns have pushed the industry into seeking more innovative approaches to improve crop yield.

Organizations are using automation and robotics to help farmers find more efficient ways to protect their crops from weeds.


Blue River Technology has developed a robot called See & Spray which uses computer vision techniques like object detection to monitor cotton fields and precisely spray herbicide onto weeds. Precision spraying can help prevent herbicide resistance.

Apart from this, a Berlin-based agricultural tech start-up called PEAT has developed an application called Plantix that identifies potential defects and nutrient deficiencies in the soil from images.

The image recognition app identifies possible defects through images captured by the user’s smartphone camera. Users are then provided with soil restoration techniques, tips and other possible solutions. The company claims that its software can achieve pattern detection with an estimated accuracy of up to 95%.

Artificial Intelligence Applications: Health Care
When it comes to saving our lives, a lot of organizations and medical care centers are relying on AI. There are many examples of how AI in healthcare has helped patients all over the world.

An organization called Cambio Health Care has developed a clinical decision support system for stroke prevention that can warn the physician when a patient is at risk of having a stroke.


Another such example is Coala Life, a company with a digital device that can detect cardiac disease.

Similarly, Aifloo is developing a system for keeping track of how people are doing in nursing homes, home care, etc. The best thing about AI in healthcare is that you don’t even need to develop a new medication. Just by using an existing medication in the right way, you can also save lives.


I’d like to conclude by asking you: how do you think AI will benefit us in the future?



2
What is machine learning?

In simple terms, machine learning is the facet of computer science in which professionals give computers the ability to learn a task without being specifically programmed to do so. This is accomplished through teaching computers how to find patterns in data on their own. Given data, a machine learning algorithm can recognize patterns and learn from data to make predictions about new data, all through the use of clever statistics.

In short, if you have data and a pattern in the data, your machine can learn.

As in much of engineering, however, there is obviously more to machine learning than that simple explanation when it comes to execution and delivery. Within the field, there are three types of machine learning algorithms: supervised learning, unsupervised learning and reinforcement learning. According to business analytics software and services company SAS, supervised and unsupervised are currently the most popular learning methods. They differ as follows:

Supervised learning: In this method, the algorithms are trained by entering an input and a desired outcome to create labeled examples. The machine is able to find errors by comparing the actual outcome with the outcome that it knows should be correct based on the information originally entered. An example, according to SAS, would be an algorithm for identifying credit card fraud. The machine can spot unusual charges by comparing them to the expected transactions.
Unsupervised learning: As opposed to supervised learning, unsupervised learning does not have “right” answers – or historical labels – to compare the information to. Rather, the algorithm must look at the information provided and draw its own conclusions. SAS reported that this method is helpful for finding attributes by which to sort groups, such as identifying what consumers can be targeted by the same marketing campaign.
Reinforcement learning: While not as popular as the previous two methods, reinforcement learning is an important part of the field. As opposed to supervised and unsupervised learning, this algorithm learns through trial and error, ultimately learning how to choose the option that will result in the greatest reward. According to SAS, this method is common in robotics, navigation and even gaming.
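To make the distinction concrete, here is a minimal scikit-learn sketch of the first two learning styles on synthetic data; it is only illustrative and is not tied to the SAS examples above.

```python
# Minimal sketch of supervised vs. unsupervised learning using scikit-learn.
# The data here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: features X come with known labels y ("right answers").
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: the same features with no labels at all;
# the algorithm must group the points on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [(clusters == c).sum() for c in (0, 1)])
```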
Machine learning is growing in popularity and importance in large part because companies and government agencies have large quantities of data that need to be sorted, analyzed and leveraged to ensure maximum results – and ideally a boosted return on investment. The data that is used in these algorithms can include everything from customer spreadsheets, past buyer information, murder rates, loan information, census information, survey information, diabetes rates, website visiting rates and much more. Machine learning can not only reveal trends about this information, but can also give insight toward predicting things about future behavior, such as who is likely to pay back their loans or what customer base a specific marketing campaign should target.

While machine learning is related to the broader field of artificial intelligence, these terms are not synonyms. AI is a branch of computer science that is primarily focused on creating machines that are capable of intelligent thought. However, this is hard to accomplish without the contributions of machine learning.

“AI is basically the intelligence – how we make machines intelligent, while machine learning is the implementation of the compute methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter,” Nidhi Chappell, head of machine learning at Intel, told Wired Magazine. “So the enabler for AI is machine learning.”

Applications of machine learning

Though the idea of a machine making decisions on its own and thinking independently may sound almost like a work of fiction, machine learning is actually more common than many people may expect. The general public can find elements of it in many areas of daily life. For instance, when people finish binge-watching a favorite television series, Netflix may suggest a new series that they might enjoy based on previous programs that were watched. This is an example of machine learning in a very practical application. Netflix uses an algorithm that can find common themes in a person’s previous preferences – such as a tendency toward dark comedies – and then extrapolates those preferences to find other television series or films that will likely suit that particular taste in entertainment. The same is true of similar services provided by websites, such as the way that Amazon.com recommends items that consumers might be interested in based on their browsing history and previous purchases.
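The "find common themes and extrapolate" step described above can be approximated with a simple content-based filter. The sketch below is purely illustrative, with invented titles and genre tags; it is not Netflix's or Amazon's actual system.

```python
# Toy content-based recommender: score unseen titles by how closely their
# genre tags match what the viewer has already watched.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "Dark Comedy A": "dark comedy satire",
    "Dark Comedy B": "dark comedy crime",
    "Nature Doc": "documentary nature wildlife",
    "Space Opera": "scifi action adventure",
}
watched = ["Dark Comedy A"]

titles = list(catalog)
vectors = CountVectorizer().fit_transform(catalog.values())  # one row of tag counts per title

# Average the rows of everything already watched to form a "taste profile".
profile = np.asarray(vectors[[titles.index(t) for t in watched]].mean(axis=0))

# Rank the unwatched titles by similarity to that profile.
scores = cosine_similarity(profile, vectors)[0]
recommendations = sorted(
    (t for t in titles if t not in watched),
    key=lambda t: scores[titles.index(t)],
    reverse=True,
)
print(recommendations)  # "Dark Comedy B" should come out on top
```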

While these are very useful applications of machine learning for the average person, the field is much more than shopping and entertainment. These algorithms are used in public safety, agriculture, wearable medical devices and even self-driving cars.

Computer vision is also an important application of this field. This area of study works to enable computers to act as the human vision system does, which in simple terms is to gather information from images and translate it to understanding. For example, using computer vision, a drone flying over a field could identify portions of the crop that look diseased and alert the farmer to the problem.

Machine learning and engineering

In the growing field of machine learning, engineers play an important role. Professionals with a background in electrical engineering or software engineering are usually equipped with the knowledge and skill set needed to contribute to this new field in a meaningful way.

To be successful in the field of machine learning, you should develop the following qualifications and skills:

Probability and statistics.
Applied math and algorithms.
Coding languages.
Advanced signal processing techniques.
This role is not to be confused with that of a data analyst. Though the positions are similar, the goal of a data analyst is typically to gather and evaluate information, a process completed by humans, to create usable results. Machine learning engineers typically focus more on giving computers the ability to process that information on their own. However, in the field of machine learning the responsibilities of the positions can overlap.

Career prospects in machine learning

As machine learning grows, so do the opportunities in the field for qualified professionals who want to build a career working with these algorithms. Companies as diverse as Nike, Amazon.com, IBM, Facebook and Spotify hire machine learning engineers. Though responsibilities vary, engineers in these positions usually are focused on implementing algorithms that work with company data to solve specific problems. Based on job postings on the career website Indeed, other responsibilities can include:

Developing strategies and best practices.
Building algorithms and learning systems.
Generating new code for models and improving existing code.
Building data and machine learning platforms.
Though the field is relatively new, that does not mean that compensation is low. Professionals in these positions can earn an extremely competitive salary. According to the job and salary website Glassdoor, the average salary for machine learning engineers in the U.S. is $114,826.

Beginning a career in the field

To begin a career in machine learning, start by reading. This is a rapidly advancing field, so staying up-to-date on the latest developments is critical. Monitor technology news sources and check out free books online. The more you can learn on your own in advance, the better you will be positioned to advance in a formal position once you begin working in the field.

Next, consider furthering your education. If you are interested in pursuing a career in machine learning, consider enrolling in UC Riverside’s online Master of Science in Electrical Engineering degree program to take the next step in entering this growing field through courses on topics such as power systems analysis, power system steady state and smart grids.

Engineers interested in a career in machine learning could also consider completing a Master of Science in Engineering with an emphasis in Data Science at UCR. By choosing to concentrate in data science, you will acquire the skills and knowledge you need to excel in the management of mass quantities of information in machine learning. Topics covered in the program include data mining techniques, systems engineering, statistical computing and a specific course on machine learning.

Sources

https://www.glassdoor.com/Salaries/machine-learning-engineer-salary-SRCH_KO0,25.htm

5 Skills You Need to Become a Machine Learning Engineer

http://bigdata-madesimple.com/7-key-skills-required-for-machine-learning-jobs/

https://www.quora.com/What-skills-are-needed-for-machine-learning-jobs

http://www.sas.com/it_it/insights/analytics/machine-learning.html

https://www.britannica.com/technology/machine-learning

http://www.wired.co.uk/article/machine-learning-ai-explained

http://www.sas.com/en_us/insights/analytics/machine-learning.html#

3
Top 10 Applications of Machine Learning in Pharma and Medicine
The rapidly growing number of applications of machine learning in healthcare gives us a glimpse of a future where data, analysis, and innovation work hand-in-hand to help countless patients without them ever realizing it. Soon, it will be quite common to find ML-based applications embedded with real-time patient data from different healthcare systems in multiple countries, increasing the efficacy of new treatment options that were unavailable before.

Here are the top 10 applications of machine learning in healthcare:

1. Identifying Diseases and Diagnosis
One of the chief ML applications in healthcare is the identification and diagnosis of diseases and ailments which are otherwise considered hard to diagnose. This can include anything from cancers which are tough to catch during the initial stages to other genetic diseases. IBM Watson Genomics is a prime example of how integrating cognitive computing with genome-based tumor sequencing can help in making a fast diagnosis. Berg, the biopharma company, is leveraging AI to develop therapeutic treatments in areas such as oncology. P1vital's PReDicT (Predicting Response to Depression Treatment) aims to develop a commercially feasible way to diagnose and provide treatment in routine clinical conditions.

2. Drug Discovery and Manufacturing
One of the primary clinical applications of machine learning lies in the early-stage drug discovery process. This also includes R&D technologies such as next-generation sequencing and precision medicine, which can help in finding alternative paths for therapy of multifactorial diseases. Currently, the machine learning techniques involved use unsupervised learning, which can identify patterns in data without providing any predictions. Project Hanover, developed by Microsoft, is using ML-based technologies for multiple initiatives, including developing AI-based technology for cancer treatment and personalizing drug combinations for AML (Acute Myeloid Leukemia).

3. Medical Imaging Diagnosis
Machine learning and deep learning are both responsible for the breakthrough technology called computer vision. This has found acceptance in the InnerEye initiative developed by Microsoft, which works on diagnostic tools for image analysis. As machine learning becomes more accessible and grows in its explanatory capacity, expect to see more data sources from varied medical imagery become a part of this AI-driven diagnostic process.

4. Personalized Medicine
Personalized treatment can not only be more effective by pairing individual health data with predictive analytics, but the area is also ripe for further research and better disease assessment. Currently, physicians are limited to choosing from a specific set of diagnoses or estimating the risk to the patient based on their symptomatic history and available genetic information. But machine learning in medicine is making great strides, and IBM Watson Oncology is at the forefront of this movement by leveraging patient medical history to help generate multiple treatment options. In the coming years, we will see more devices and biosensors with sophisticated health measurement capabilities hit the market, allowing more data to become readily available for such cutting-edge ML-based healthcare technologies.

5. Machine Learning-based Behavioral Modification
Behavioral modification is an important part of preventive medicine, and ever since the proliferation of machine learning in healthcare, countless startups have been cropping up in the fields of cancer prevention and identification, patient treatment, etc. Somatix is a B2B2C data analytics company which has released an ML-based app to recognize the gestures we make in our daily lives, allowing us to understand our unconscious behavior and make necessary changes.

6. Smart Health Records
Maintaining up-to-date health records is an exhaustive process, and while technology has played its part in easing the data entry process, the truth is that even now a majority of the processes take a lot of time to complete. The main role of machine learning in healthcare here is to ease processes to save time, effort, and money. Document classification methods using support vector machines and ML-based optical character recognition techniques are slowly gathering steam, such as Google's Cloud Vision API and MATLAB's machine learning-based handwriting recognition technology. MIT is today at the cutting edge of developing the next generation of intelligent, smart health records, which will incorporate ML-based tools from the ground up to help with diagnosis, clinical treatment suggestions, etc.

7. Clinical Trial and Research
Machine learning has several potential applications in the field of clinical trials and research. As anybody in the pharma industry would tell you, clinical trials cost a lot of time and money and can take years to complete in many cases. Applying ML-based predictive analytics to identify potential clinical trial candidates can help researchers draw a pool from a wide variety of data points, such as previous doctor visits, social media, etc. Machine learning has also found use in ensuring real-time monitoring and data access for trial participants, finding the best sample size to be tested, and leveraging the power of electronic records to reduce data-based errors.

8. Crowdsourced Data Collection
Crowdsourcing is all the rage in the medical field nowadays, allowing researchers and practitioners to access a vast amount of information uploaded by people with their consent. This live health data has great ramifications for the way medicine will be perceived down the line. Apple's ResearchKit allows users to access interactive apps which apply ML-based facial recognition to try and treat Asperger's and Parkinson's disease. IBM recently partnered with Medtronic to decipher, accumulate, and make available diabetes and insulin data in real time based on crowdsourced information. With the advancements being made in IoT, the healthcare industry is still discovering new ways to use this data to tackle tough-to-diagnose cases and help in the overall improvement of diagnosis and medication.

9. Better Radiotherapy
One of the most sought-after applications of machine learning in healthcare is in the field of radiology. Medical image analysis has many discrete variables which can arise at any particular moment. There are many lesions, cancer foci, etc. which cannot simply be modeled using complex equations. Since ML-based algorithms learn from the multitude of different samples available on hand, it becomes easier to diagnose and find the relevant variables. One of the most popular uses of machine learning in medical image analysis is the classification of objects such as lesions into categories such as normal or abnormal, lesion or non-lesion, etc. Google's DeepMind Health is actively helping researchers at UCLH develop algorithms which can detect the difference between healthy and cancerous tissue and improve radiation treatment accordingly.

10. Outbreak Prediction
AI-based technologies and machine learning are today also being put to use in monitoring and predicting epidemics around the world. Today, scientists have access to a large amount of data collected from satellites, real-time social media updates, website information, etc. Artificial neural networks help to collate this information and predict everything from malaria outbreaks to severe chronic infectious diseases. Predicting these outbreaks is especially helpful in third-world countries, which lack crucial medical infrastructure and educational systems. A prime example of this is ProMED-mail, an Internet-based reporting platform that monitors evolving and emerging diseases and provides outbreak reports in real time.

4
Have you ever used your credit card at a new store or location only to have it declined? Has a sale ever been blocked because you charged a higher amount than usual?

Consumers’ credit cards are declined surprisingly often in legitimate transactions. One cause is that fraud-detecting technologies used by a consumer’s bank have incorrectly flagged the sale as suspicious. Now MIT researchers have employed a new machine-learning technique to drastically reduce these false positives, saving banks money and easing customer frustration.

Using machine learning to detect financial fraud dates back to the early 1990s and has advanced over the years. Researchers train models to extract behavioral patterns from past transactions, called “features,” that signal fraud. When you swipe your card, the card pings the model and, if the features match fraud behavior, the sale gets blocked.

Behind the scenes, however, data scientists must dream up those features, which mostly center on blanket rules for amount and location. If any given customer spends more than, say, $2,000 on one purchase, or makes numerous purchases in the same day, they may be flagged. But because consumer spending habits vary, even within individual accounts, these models are sometimes inaccurate: a 2015 report from Javelin Strategy and Research estimates that only one in five fraud predictions is correct and that the errors can cost a bank $118 billion in lost revenue, as declined customers then refrain from using that credit card.

The MIT researchers have developed an “automated feature engineering” approach that  extracts more than 200 detailed features for each individual transaction — say, if a user was present during purchases, and the average amount spent on certain days at certain vendors. By doing so, it can better pinpoint when a specific card holder’s spending habits deviate from the norm.
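For a sense of what such behavioral features look like, here is a hand-rolled pandas sketch of two of them; the column names and values are invented, and this is not the researchers' automated DFS implementation.

```python
# Hand-rolled sketch of two behavioural features similar in spirit to those
# the automated approach derives. Columns and values are invented.
import pandas as pd

tx = pd.DataFrame({
    "card_id":  [1, 1, 1, 2, 2],
    "amount":   [4.5, 80.0, 5.2, 300.0, 12.0],
    "merchant": ["coffee", "grocery", "coffee", "electronics", "coffee"],
    "time":     pd.to_datetime([
        "2018-09-07 08:10", "2018-09-08 17:00", "2018-09-14 08:05",
        "2018-09-07 19:30", "2018-09-10 09:00",
    ]),
})
tx["weekday"] = tx["time"].dt.day_name()

# Feature 1: average amount this card spends at each merchant type on each weekday,
# e.g. "how much is usually spent in a coffee shop on a Friday".
avg_spend = (tx.groupby(["card_id", "merchant", "weekday"])["amount"]
               .mean()
               .rename("avg_amount_per_card_merchant_weekday"))

# Feature 2: how far each transaction deviates from the card's own average.
card_mean = tx.groupby("card_id")["amount"].transform("mean")
tx["deviation_from_card_mean"] = tx["amount"] - card_mean

print(avg_spend)
print(tx[["card_id", "amount", "deviation_from_card_mean"]])
```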

Tested on a dataset of 1.8 million transactions from a large bank, the model reduced false positive predictions by 54 percent over traditional models, which the researchers estimate could have saved the bank 190,000 euros (around $220,000) in lost revenue.

“The big challenge in this industry is false positives,” says Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems (LIDS) and co-author of a paper describing the model, which was presented at the recent European Conference for Machine Learning. “We can say there’s a direct connection between feature engineering and [reducing] false positives. … That’s the most impactful thing to improve accuracy of these machine-learning models.”

Paper co-authors include: lead author Roy Wedge '15, a former researcher in the Data to AI Lab at LIDS; James Max Kanter ’15, SM ’15; and Sergio Iglesias Perez of Banco Bilbao Vizcaya Argentaria.

Extracting “deep” features

Three years ago, Veeramachaneni and Kanter developed Deep Feature Synthesis (DFS), an automated approach that extracts highly detailed features from any data, and decided to apply it to financial transactions.

Enterprises will sometimes host competitions where they provide a limited dataset along with a prediction problem such as fraud. Data scientists develop prediction models, and a cash prize goes to the most accurate model. The researchers entered one such competition and achieved top scores with DFS.

However, they realized the approach could reach its full potential if trained on several sources of raw data. “If you look at what data companies release, it’s a tiny sliver of what they actually have,” Veeramachaneni says. “Our question was, ‘How do we take this approach to actual businesses?’”

Backed by the Defense Advanced Research Projects Agency’s Data-Driven Discovery of Models program, Kanter and his team at Feature Labs — a spinout commercializing the technology — developed an open-source library for automated feature extraction, called Featuretools, which was used in this research.

The researchers obtained a three-year dataset provided by an international bank, which included granular information about transaction amount, times, locations, vendor types, and terminals used. It contained about 900 million transactions from around 7 million individual cards. Of those transactions, around 122,000 were confirmed as fraud. The researchers trained and tested their model on subsets of that data.

In training, the model looks for patterns of transactions and among cards that match cases of fraud. It then automatically combines all the different variables it finds into “deep” features that provide a highly detailed look at each transaction. From the dataset, the DFS model extracted 237 features for each transaction. Those represent highly customized variables for card holders, Veeramachaneni says. “Say, on Friday, it’s usual for a customer to spend $5 or $15 dollars at Starbucks,” he says. “That variable will look like, ‘How much money was spent in a coffee shop on a Friday morning?’”

It then creates an if/then decision tree for that account of features that do and don’t point to fraud. When a new transaction is run through the decision tree, the model decides in real time whether or not the transaction is fraudulent.
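As a rough stand-in for that per-account decision tree, the scikit-learn sketch below trains a small tree on two synthetic engineered features and scores a new transaction; the data and the fraud rule are made up for illustration.

```python
# Generic stand-in for the if/then tree: train on engineered features with
# known fraud labels, then score a new transaction. Data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Two engineered features per past transaction: deviation from the card's
# usual amount, and distance (km) from its usual location.
X = rng.normal(size=(1000, 2)) * [20, 50]
# Crude synthetic rule: flag transactions far outside normal spend or location.
y = ((np.abs(X[:, 0]) > 25) | (np.abs(X[:, 1]) > 60)).astype(int)

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)

new_transaction = [[55.0, 180.0]]  # unusually large and unusually far away
print("fraud probability:", tree.predict_proba(new_transaction)[0][1])
```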

Pitted against a traditional model used by a bank, the DFS model generated around 133,000 false positives versus 289,000 false positives, about 54 percent fewer incidents. That, along with a smaller number of false negatives detected — actual fraud that wasn’t detected — could save the bank an estimated 190,000 euros, the researchers estimate.

Iglesias notes he and his colleagues at BBVA have consistently been able to reproduce the MIT team’s results using the DFS model with additional card and business data, with a minimum increase in computational cost.

Stacking primitives

The backbone of the model consists of creatively stacked “primitives,” simple functions that take two inputs and give an output. For example, calculating an average of two numbers is one primitive. That can be combined with a primitive that looks at the time stamp of two transactions to get an average time between transactions. Stacking another primitive that calculates the distance between two addresses from those transactions gives an average time between two purchases at two specific locations. Another primitive could determine if the purchase was made on a weekday or weekend, and so on.
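A toy Python sketch of the stacking idea: each primitive does one small thing, and composing them yields derived features (an average gap between purchases, an implied travel speed) that no single primitive expresses on its own. The transactions and coordinates are invented.

```python
# Toy illustration of "stacking primitives" on invented transaction records.
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def mean(a, b):                      # primitive: average of two numbers
    return (a + b) / 2

def hours_between(t1, t2):           # primitive: time gap between two timestamps
    return abs((t2 - t1).total_seconds()) / 3600

def km_between(p1, p2):              # primitive: haversine distance between two lat/lon points
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

a = {"time": datetime(2018, 9, 7, 9, 0),  "loc": (42.3601, -71.0942)}
b = {"time": datetime(2018, 9, 7, 9, 40), "loc": (42.3736, -71.1097)}
c = {"time": datetime(2018, 9, 7, 11, 0), "loc": (41.8240, -71.4128)}

# Stacked feature 1: average time between consecutive purchases.
avg_gap = mean(hours_between(a["time"], b["time"]), hours_between(b["time"], c["time"]))

# Stacked feature 2: implied travel speed between the first and last purchase,
# a variable none of the primitives expresses on its own.
speed_kmh = km_between(a["loc"], c["loc"]) / hours_between(a["time"], c["time"])

print(round(avg_gap, 2), "h average gap;", round(speed_kmh, 1), "km/h implied speed")
```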

“Once we have those primitives, there is no stopping us for stacking them … and you start to see these interesting variables you didn’t think of before. If you dig deep into the algorithm, primitives are the secret sauce,” Veeramachaneni says.

One important feature that the model generates, Veeramachaneni notes, is the distance between the locations of two consecutive purchases and whether they happened in person or remotely. If someone buys something in person at, say, the Stata Center and, a half hour later, buys something in person 200 miles away, there's a high probability of fraud. But if one purchase occurred through a mobile phone, the fraud probability drops.

“There are so many features you can extract that characterize behaviors you see in past data that relate to fraud or nonfraud use cases,” Veeramachaneni says.

"In fact, this automated feature synthesis technique, and the overall knowledge provided by MIT in this project, has shown us a new way of refocusing research in other challenges in which we initially have a reduced set of features. For example, we are obtaining equally promising results in the detection of anomalous behavior in internal network traffic or in market operations, just to mention two [examples],” Iglesias adds.


5
Data Mining and Big Data / Why random forests outperform decision trees
« on: October 28, 2018, 07:41:22 PM »
Random forests consist of multiple single trees, each based on a random sample of the training data. They are typically more accurate than single decision trees. The following figure shows how the decision boundary becomes more accurate as more trees are added.

Here we’ll provide two intuitive reasons why random forests outperform single decision trees.

Higher resolution in the feature space

Trees are unpruned. While a single decision tree like CART is often pruned, a random forest tree is fully grown and unpruned, and so, naturally, the feature space is split into more and smaller regions.

Trees are diverse. Each random forest tree is learned on a random sample, and at each node, a random set of features are considered for splitting. Both mechanisms create diversity among the trees.

Two random trees each with one split are illustrated below. For each tree, two regions can be assigned with different labels. By combining the two trees, there are four regions that can be labeled differently.

Unpruned and diverse trees lead to a high resolution in the feature space. For continuous features, it means a smoother decision boundary, as shown in the following.

Handling Overfitting

A single decision tree needs pruning to avoid overfitting. The following shows the decision boundary from an unpruned single tree: the boundary traces the training data closely and makes obvious mistakes (overfitting).

So how can random forests build unpruned trees without overfitting? Let’s provide an explanation below.

For the two-class (blue and red) problem below, both splits x1=3 and x2=3 can fully separate the two classes.

The two splits, however, result in very different decision boundaries. In other words, these boundaries conflict with each other in some regions, and may not be reliable.

Now consider random forests. For each random bootstrap sample of n points used to train a tree, the probability that the red point is missing from the sample is (1 − 1/n)^n ≈ 1/e ≈ 0.37.

So roughly 1 out of 3 trees is built with all-blue data and always predicts class blue. The other 2/3 of the trees have the red point in their training data. Since a random subset of features is considered at each node, we expect roughly 1/3 of all trees to split on x1 and another 1/3 to split on x2. The splits from the two types of trees are illustrated below.
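A quick empirical check of the claim, using scikit-learn on a synthetic two-class dataset (not the data from the figures in this post): a fully grown single tree overfits, while a forest of equally unpruned trees recovers most of the lost accuracy.

```python
# Unpruned single tree versus a random forest of unpruned trees,
# compared on held-out accuracy over synthetic data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=2000, noise=0.35, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)          # fully grown, unpruned
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("single unpruned tree:", round(single.score(X_te, y_te), 3))
print("random forest       :", round(forest.score(X_te, y_te), 3))
# Averaging many diverse, unpruned trees typically smooths the decision
# boundary and recovers the accuracy a single unpruned tree loses to overfitting.
```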

6
Uber has been one of the most active companies trying to accelerate the implementation of real-world machine learning solutions. Just this year, Uber has introduced technologies like Michelangelo, Pyro.ai and Horovod that focus on key building blocks of machine learning solutions in the real world. This week, Uber introduced another piece of its machine learning stack, this time aiming to shorten the cycle from experimentation to production: PyML, a library that enables the rapid development of Python applications in a way that is compatible with their production runtime.

The problem PyML attempts to address is one of those omnipresent challenges in large-scale machine learning applications. Typically, there is a tangible mismatch between the tools and frameworks used by data scientists to prototype models and the corresponding production runtimes. For instance, it's very common for data scientists to use Python-based frameworks such as PyTorch or Keras to produce experimental models that then need to be adapted to a runtime such as Apache Spark ML Pipelines, which brings very specific constraints. Machine learning technologists refer to this issue as a tradeoff between flexibility and resource-efficiency. In the case of Uber, data scientists were building models in Python machine learning frameworks which then needed to be refactored by the Michelangelo team to match the constraints of Apache Spark pipelines.

Overcoming this limitation meant extending the capabilities of Michelangelo to support models authored in mainstream machine learning frameworks while keeping a consistent model for training and optimization.

Enter PyML
The goal of Uber’s PyML is to streamline the development of machine learning applications and bridge the gap between experimentation and production runtimes. To accomplish that, PyML focuses on three main aspects:

1) Provide a standard contract for machine learning prediction models.

2) Enable a consistent model for packaging and deploying machine learning models using Docker containers.

3) Enable Michelangelo-integrated runtimes for online and offline prediction models.

The following figure illustrates the basic architecture principles of PyML.


A Standard Machine Learning Contract
PyML models can be authored in different machine learning frameworks such as TensorFlow, PyTorch or Scikit-Learn. The models can use two main types of datasets: DataFrames, which store tabular structured data, and Tensors, which store named multidimensional arrays. After the models are created, they are adapted to a standard PyML contract definition, which is essentially a class that inherits from the DataFrameModel or TensorModel abstract classes, respectively. In both cases, users only need to implement two methods: a constructor to load their model parameters and a predict() method that accepts and returns either DataFrames or Tensors.
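As a rough illustration of that contract, the sketch below defines a DataFrame-based model with just a constructor and a predict() method. The base class, file path and model type are stand-ins; this is not Uber's actual PyML API.

```python
# Illustrative sketch of the contract described above: a constructor that
# loads parameters plus a predict() over DataFrames.
import pandas as pd
import joblib


class DataFrameModel:                  # stand-in for PyML's abstract base class
    def predict(self, df: pd.DataFrame) -> pd.DataFrame:
        raise NotImplementedError


class ChurnModel(DataFrameModel):
    def __init__(self, model_path: str = "model.joblib"):  # hypothetical path
        # Constructor loads the trained model parameters from disk.
        self.model = joblib.load(model_path)

    def predict(self, df: pd.DataFrame) -> pd.DataFrame:
        # Accepts a DataFrame of features, returns a DataFrame of scores.
        return pd.DataFrame({"score": self.model.predict_proba(df)[:, 1]})
```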


Packaging and Deployment
After the PyML models are created, they can be packaged into Docker containers using a consistent structure. PyML introduces a standard deployment format based on four fundamental artifacts:


Using that structure, a developer can package and deploy a PyML model using the following code. The PyML Docker image will contain the model and all the corresponding dependencies. The models will be immediately available for execution in the Michelangelo console.

Offline and Online Predictions
PyML supports both batch (offline) and online execution models for predictions. Offline predictions are modeled as an abstraction over PySpark. In that context, PyML users simply provide a SQL query with column names and types matching the inputs expected by their model, and the name of a destination Hive table in which to store output predictions. Behind the scenes, PyML starts a containerized PySpark job using the same image and Python environment as for serving the model online, ensuring that there are no differences between offline and online predictions. Executing an offline prediction is relatively straightforward.
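PyML hides the mechanics behind its own API, which is not reproduced here; the generic PySpark pattern underneath (a SQL query in, a Hive table of predictions out) looks roughly like the sketch below, with invented table, column and model names.

```python
# Generic PySpark sketch of the offline pattern described above: run a SQL
# query for the inputs, score them, and write predictions to a Hive table.
# This is NOT PyML's API; only the underlying pattern, with invented names.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, when

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Inputs come from a SQL query whose columns match what the model expects.
inputs = spark.sql("SELECT card_id, amount, merchant FROM tx_features")  # hypothetical table

# Stand-in "model": flag large amounts. A real job would apply the packaged
# model here instead of this toy rule.
scored = inputs.withColumn("score", when(col("amount") > 100, 1.0).otherwise(0.0))

# Results land in a Hive table named by the caller.
scored.write.mode("overwrite").saveAsTable("fraud_predictions")
```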


The standard two-operation (init, predict) contract of PyML models simplifies the implementation of online predictions. PyML supports online predictions by exposing lightweight gRPC interfaces on the Docker containers, which are used by a common online prediction service as shown in the following figure. Upon request, the online prediction service launches the corresponding PyML model-specific Docker image as a nested Docker container via Mesos' API. When the container is launched, it starts the PyML RPC server and begins listening for prediction requests on a Unix domain socket from the online prediction service.


PyML addresses one of the most important challenges in large-scale machine learning applications by bridging the gap between experimentation and runtime environments. Beyond its specific technological contributions, the architecture of PyML can be adapted to different technology stacks and should serve as an important reference for organizations starting their machine learning journey.

7
Software Engineering / Report: Can Kotlin compete with Java?
« on: July 03, 2018, 12:33:50 PM »
Java continues to dominate the programming language space for developers, but a new report reveals that Kotlin may soon knock it out of the top spot for mobile development. Packt released the results of its 2018 Skills Up report designed to look at the trends and tools software developers are using today.

The 2018 Skills Up report surveyed more than 8,000 developers and technology experts in four broad categories: app development, web development, security and systems admin, and data.

Kotlin is a statically typed programming language developed by JetBrains and supported by Google’s Android operating system. While Kotlin didn’t make it onto the list of top programming languages app developers are currently using overall, 71 percent of respondents stated that Kotlin is a serious contender for Java.

“Java beware: respondents say that Kotlin might just topple you from your throne. With adoption by Google for Android development, is this the beginning of the end of Java for mobile?” the report stated. “Kotlin has been around since 2011, but only recently has it started to really capture the imagination of engineers. Google has done a lot to reinforce its reputation — the fact that it was fully supported in Android Studio 3.0 in 2017 has ensured it is now one of the most popular Android development languages. We expect to see it competing closely with Java by the end of the year.”

Rounding out the application developer top five are JavaScript, Python, C# and SQL. Java is more popular when developing for mobile while Python was more favored by higher-earning app developers, and C# was found to be more popular among developers building enterprise and desktop applications.

“In 2018, we’ve seen C-based languages heavily lose out in favor of languages that can write more easily for the web. Only among desktop developers and game scripting does C# still hold the top spot: every other developer is looking to have the capacity to build for the browser, or for mobile.”

The top tools for mobile development, according to the report, included Android Studio, Xcode, macOS, Xamarin and iOS SDK. Android Studio has the most developers using it, with 39 percent of respondents, while only 17 percent reported using Xcode. However, 50 percent of developers who make $70,000 or more cited using Xcode, iOS SDK and/or macOS.

Additionally, the report found the top tools for enterprise and desktop included .NET, Visual Studio and Java EE while MySQL, SQL Server and SQLite came out on top for the most commonly used databases.

App developers also found there is potential for using Swift outside of mobile development.
For web development, the report found the top languages included JavaScript, HTML/CSS, PHP, Python and Java. However, the report noted that app development and web development are no longer considered two separate entities, with web and app developers sharing a majority of the same toolchains.

“In 2018, working in tech almost always means working with the web. As more and more applications migrate to the browser and the cloud and as sites become ever more sophisticated, web development knowledge becomes a greater and greater priority,” the report stated.

The top front-end tools and frameworks for web development included JQuery, Bootstrap, npm, Angular and Webpack while the top back-end tools included Node.js, ASP.NET Core, Express.js and Laravel.

Sixty-five percent of web developers also said that conversational UI and chatbots have a strong future in the web UI space.

When looking at security and systems administration, the report found Python and Bash as the top used scripting languages followed by Shell, PowerShell and JavaScript. The top security tools include Wireshark, Nmap, Kali Linux and Metasploit. For system admin and virtualization tools, developers are using Linux, Windows OS, Docker, Ubuntu Server and Windows Server.

Other security and systems admin findings included that IoT is being held back by security issues, and a majority of organizations don’t treat cybersecurity with enough seriousness.

Python continued to top the list of languages when looking at data, followed by SQL, R and JavaScript. The top data libraries, tools and frameworks included Excel, NumPy, Anaconda and Pandas. According to the respondents, the next big areas for data include TensorFlow, deep learning, and machine learning.

Among data developers, 83 percent are excited about the potential of quantum computing, and more than half find AWS is the top cloud provider for Big Data.

Other findings of the report included:

Seventy-two percent of respondents feel like they are a part of a community with other developers
Sixty percent are satisfied with their jobs
Six percent are extremely dissatisfied
The top technical barrier across all industries is dealing with technical debt and legacy problems
Eighty-six percent of respondents agree it is important to develop soft skills such as communication and teamwork.
“Only one thing is certain in the world of tech: change. Working in development is about navigating a constantly evolving industry, keeping up to date with the skills you need to succeed,” the report stated.

8
Researchers have developed a new system designed to tackle complex objects and workflows on Big Data platforms. Computer science researchers from Rice University's DARPA-funded Pliny project have announced PlinyCompute.

The project is funded through DARPA’s Mining and Understanding Software Enclaves (MUSE) initiative. The Pliny project aims to create systems that automatically detect and fix errors in programs. PlinyCompute is “a system purely for developing high-performance, Big Data codes.”

“With machine learning, and especially deep learning, people have seen what complex analytics algorithms can do when they’re applied to Big Data,” Chris Jermaine, a Rice computer science professor who is leading the platform’s development, said in the announcement. “Everyone, from Fortune 500 executives to neuroscience researchers are clamoring for more and more complex algorithms, but systems programmers have mostly bad options for providing that today. HPC can provide the performance, but it takes years to learn to write code for HPC, and perhaps worse, a tool or library that might take days to create with Spark can take months to program on HPC.”

According to Jermaine, while Spark was developed for Big Data and supports things such as load balancing, fault tolerance and resource allocation, it wasn’t designed for complex computation. “Spark is built on top of the Java Virtual Machine, or JVM, which manages runtimes and abstracts away most of the details regarding memory management,” said Jia Zou, a research scientist at Rice. “Spark’s performance suffers from its reliance on the JVM, especially as computational demands increase for tasks like training deep neural networks for deep learning.”

Zou continued that PlinyCompute was designed for high performance and has been found to be at least twice as fast as Spark, and as much as 50 times faster for complex computations. However, PlinyCompute requires programmers to write libraries and models in C++, while Spark requires Java-based coding. Because of this, Jermaine says some programmers might find it difficult to write code for PlinyCompute.

“There’s more flexibility with PlinyCompute,” Jermaine said. “That can be a challenge for people who are less experienced and knowledgeable about C++, but we also ran a side-by-side comparison of the number of lines of code that were needed to complete various implementations, and for the most part there was no significant difference between PlinyCompute and Spark.”

9
Computer scientists have a history of borrowing ideas from nature, such as evolution. When it comes to optimising computer programs, a very interesting evolutionary-based approach has emerged over the past five or six years that could bring incalculable benefits to industry and eventually consumers. We call it genetic improvement.

Genetic improvement involves writing an automated “programmer” who manipulates the source code of a piece of software through trial and error with a view to making it work more efficiently. This might include swapping lines of code around, deleting lines and inserting new ones – very much like a human programmer. Each manipulation is then tested against some quality measure to determine if the new version of the code is an improvement over the old version. It is about taking large software systems and altering them slightly to achieve better results.
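A toy Python sketch of that loop, shrunk to a "program" represented as a list of statements: mutate, run the tests, keep the variant only if it is still correct and faster. It is purely illustrative of the idea, not any of the systems mentioned in this article.

```python
# Toy genetic-improvement loop: mutate a list of statements, keep a variant
# only if it still passes the tests and scores better on the quality measure.
import random
import time

def run(program, x):
    env = {"x": x}
    for line in program:
        exec(line, env)
    return env["result"]

def fitness(program):
    # Quality measure: correctness on a small test suite first, then speed.
    tests = [(i, i * i) for i in range(10)]
    try:
        if any(run(program, x) != expected for x, expected in tests):
            return float("inf")
    except Exception:
        return float("inf")
    start = time.perf_counter()
    for _ in range(1000):
        run(program, 7)
    return time.perf_counter() - start

original = ["y = x * x", "y = y + 0", "result = y"]   # contains a useless statement
best, best_score = original, fitness(original)

random.seed(0)
for _ in range(50):
    variant = list(best)
    op = random.choice(["delete", "swap"])
    if op == "delete" and len(variant) > 1:
        variant.pop(random.randrange(len(variant)))
    elif op == "swap" and len(variant) > 1:
        i, j = random.sample(range(len(variant)), 2)
        variant[i], variant[j] = variant[j], variant[i]
    score = fitness(variant)
    if score < best_score:                             # keep only improvements
        best, best_score = variant, score

print(best)  # the redundant "y = y + 0" line is typically deleted
```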

These interventions can bring a variety of benefits in the realm of what programmers describe as the functional properties of a piece of software. They might improve how fast a program runs, for instance, or remove bugs. They can also be used to help transplant old software to new hardware. 

The potential doesn’t stop there. Because genetic improvement operates on source code, it can also improve the so-called non-functional properties. These include all the features that are not concerned purely with just the input-output behaviour of programs, such as the amount of bandwidth or energy that the software consumes. These are often particularly tricky for a human programmer to deal with, given the already challenging problem of building correctly functioning software in the first place.

We have seen a few examples of genetic improvement beginning to be recognised in recent years – albeit still within universities for the moment. A good early one dates from 2009, where such an automated “programmer” built by the University of New Mexico and University of Virginia fixed 55 out of 105 bugs in various different kinds of software, ranging from a media player to a Tetris game. For this it won $5,000 (£3,173) and a Gold Humie Award, which is awarded for achievements produced by genetic and evolutionary computation.

In the past year, UCL in London has overseen two research projects that have demonstrated the field’s potential (full disclosure: both have involved co-author William Langdon). The first involved a genetic-improvement program that could take a large complex piece of software with more than 50,000 lines of code and speed up its functionality by 70 times.

The second carried out the first automated wholesale transplant of one piece of software into a larger one by taking a linguistic translator called Babel and inserting it into an instant-messaging system called Pidgin.

Nature and computers
To understand the scale of the opportunity, you have to appreciate that software is a unique engineering material. In other areas of engineering, such as electrical and mechanical engineering, you might build a computational model before you build the final product, since it allows you to push your understanding and test a particular design. On the other hand, software is its own model. A computational model of software is still a computer program. It is a true representation of the final product, which maximises your ability to optimise it with an automated programmer.

As we mentioned at the beginning, there is a rich tradition of computer scientists borrowing ideas from nature. Nature inspired genetic algorithms, for example, which crunch through the millions of possible answers to a real-life problem with many variables to come up with the best one. Examples include anything from devising a wholesale road distribution network to fine-tuning the design of an engine.

Though the evolution metaphor has become something of a millstone in this context, genetic algorithms have had a number of successes, producing results that are either comparable with human-written programs or even better.

Evolution also inspired genetic programming, which attempts to build programs from scratch using small sets of instructions. It is limited, however. One of its many criticisms is that it cannot even evolve the sort of program that would typically be expected of a first-year undergraduate, and will not therefore scale up to the huge software systems that are the backbone of large multinationals.

This makes genetic improvement a particularly interesting deviation from this discipline. Instead of trying to rewrite the whole program from scratch, it succeeds by making small numbers of tiny changes. It doesn’t even have to confine itself to genetic improvement as such. The Babel/Pidgin example showed that it can extend to transplanting a piece of software into a program in a similar way to how surgeons transplant body organs from donors to recipients. This is a reminder that the overall goal is automated software engineering. Whatever nature can teach us when it comes to developing this fascinating new field, we should grab it with both hands.

10
By Chris Baraniuk

Candidates hoping to land their dream job are increasingly being asked to play video games, with companies like Siemens, E.ON and Walmart filtering out hundreds of applicants before the interview stage based partly on how they perform. Played on either smartphones or computers, the games’ designers say they can help improve workplace diversity, but there are questions over how informative the results really are.

To the casual observer, many of the games might seem almost nonsensical. One series of tests by UK-based software house Arctic Shores includes a trial where the player must tap a button frantically to inflate balloons for a party without bursting them. In another, the candidate taps a logo matching the one displayed on screen, at an ever more blistering pace.

Afterwards, a personality profile is built using data on how someone performed, says Robert Newry at Arctic Shores. They claim the traits that can be measured include a person’s willingness to deliberate, seek novel approaches to tasks and even their tendency for social dominance. “What we are measuring is not your reaction skills, your dexterity,” says Newry. “It’s the way you go about approaching and solving the challenge that is put in front of you.”

Using the games, Siemens UK doubled the proportion of female candidates that made it past the initial stages of graduate recruitment compared with the previous year, according to data recently released by Arctic Shores. Another company that makes such tests, Pymetrics, says its assessments have boosted recruitment of under-represented groups, with one financial services firm increasing the number of minority candidates offered technical roles by 20 per cent. However, it's not clear if the boost could simply be down to an increased focus on or awareness of diversity in the workplace.

The games are meant to offer a form of psychometric testing and are based on techniques developed for measuring personality traits. But whereas in academic research the tests are generally calibrated the same way for all participants, when used for recruitment they are often tweaked depending on how existing employees at a company play them.

“We go into these companies and say, ‘Your individuals may be different. Let’s use your high performers to put together a data set,’” says Frida Polli, co-founder of Pymetrics. In other words, if your gameplay matches that of someone already at the firm, you’re more likely to advance to the next stage of recruitment.

“You can develop a game-based assessment as rigorous as any traditional psychometric assessment,” says Richard Landers at Old Dominion University in Virginia. “But I don’t know how many companies actually succeed at that.” This is because it takes time and money to show that any assessment’s measurement of a given trait is statistically reliable.

Landers performed an independent review of game-like intelligence tests by Australia-based firm Revelian and says the results were reliable. Arctic Shores has also run a study with around 300 participants to validate its games.

Caryn Lerman at the University of Pennsylvania has studied brain-training apps and says that although people’s improved performance at these can be tracked over time, they generally have no observable impact on cognitive ability in the real world. She is sceptical that playing the games well corresponds to ability to do a good day’s work in the office.

Although the game-based tests are mandatory, a company’s decision to interview someone may be based on other factors as well, such as their academic record. But in trying to find new ways of shortlisting the best of the bunch, companies risk alienating unsuccessful candidates, says Margaret Beier at Rice University in Texas. They might even expose themselves to lawsuits. “If I apply for a job, play games that seem totally unrelated to it and then don’t get that job, I might have a lot of questions about that assessment,” she says.

11
Java Forum / Java will no longer have ‘major’ releases
« on: May 13, 2018, 06:39:28 PM »
Remember when a new number meant a software release was a significant, or major, one? For Java, that pattern is over. Java 9 was the last “major” release, Oracle says.

All versions after that—including the recently released Java 10 and the forthcoming Java 11—are what the industry typically calls “point releases,” because they were usually numbered x.1, x.2, and so on to indicate an intermediate, more “minor” release. (Oracle has called those point releases “feature releases.”)

As of Java 10, Oracle has put Java on a twice-annual release schedule, and although those releases get whole numbers in their versions, they’re more akin to point releases. Oracle recently declared, “There are no ‘major releases’ per se any more; that is now a legacy term. Instead, there is a steady stream of ‘feature releases.’”

Under the plan, moving from Java 9 to versions 10 and 11 is similar to the previous moves from Java 8 to updates 8u20 and 8u40.

Previously, it took about three years to move from Java 7 to 8 and then 9.

Oracle says that Java’s twice-yearly release schedule makes it easier for tools vendors to keep up with changes, because they will work with a stream of smaller updates. Upgrading tools from Java 9 to 10 happened “almost overnight,” Oracle says, compared to the difficulties it said some tools vendors had making the move from Java 8 to 9.

For users who just want stability and can pass on new features of each “feature release,” Oracle is still holding onto that old three-year schedule for its long-term support (LTS) releases. Java 11 will be an LTS release, with commercial support available from Oracle for at least eight additional years.

12
Driving down energy use and costs should be the number one priority for British companies, according to SSE Enterprise Energy Solutions – the UK’s leading provider of energy management services.

Information and Communications Technology (ICT) typically accounts for 12% of business energy use, yet ICT energy management is often overlooked as a way to realise cost savings.

SSE Enterprise Energy Solutions recently delivered a 9% cost saving for Glasgow City Council across its school estate. The pilot project ran across 29 high schools with a total of 9,000 devices and delivered savings of £4,500 per week. Its success has led to the council putting in place a schools ICT policy with energy efficiency at its core.

Kevin Greenhorn, Managing Director of SSE Enterprise Energy Solutions, said: “Technology can revolutionise how organisations manage their energy consumption and ICT is one of the first places to start. We can give visibility across all ICT assets and a central solution which reduces energy costs and carbon emissions – without the need to rely on people to change their behaviour.”

He added: “Not only does it reduce costs but it provides analytical data which informs future planning and sustainable procurement decisions. It’s the right solution for organisations looking to take steps to reduce energy consumption beyond traditional building systems, such as heating, ventilation, air conditioning and lighting.”

SSE Enterprise Energy Solutions offers an Energy ICT solution which allows asset identification and management of every single machine across multiple sites. Software is installed centrally on a standard server and the solution allows for remote control of desktop computers, monitors, laptops, routers, wireless access points, printers, copiers, telephones and other devices.

These devices can be powered down when not in use, for example in the evenings and at weekends, and their energy consumption monitored when they are in use. In Glasgow City Council’s case this has enabled the council to produce a weekly summary detailing the savings delivered in carbon, energy and pounds, which is then reported at executive level.
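As a rough, purely illustrative sketch of the kind of weekly summary described above; the device counts, power draws, tariff and carbon factor are hypothetical placeholders, not SSE's or Glasgow City Council's figures.

```python
# Illustrative only: estimate weekly energy, cost and carbon savings from
# powering down idle ICT devices. All numbers below are assumptions.
DEVICES = {
    # name: (count, watts drawn when left on but idle)
    "desktop PC":   (4000, 60),
    "monitor":      (4000, 25),
    "printer":      (500, 30),
    "access point": (500, 10),
}
POWERED_DOWN_HOURS_PER_WEEK = 100   # assumed evenings plus weekends
TARIFF_GBP_PER_KWH = 0.14           # assumed electricity price
KG_CO2_PER_KWH = 0.35               # assumed grid carbon intensity

def weekly_summary():
    kwh = sum(count * watts * POWERED_DOWN_HOURS_PER_WEEK / 1000.0
              for count, watts in DEVICES.values())
    return {
        "energy_saved_kwh": round(kwh),
        "cost_saved_gbp": round(kwh * TARIFF_GBP_PER_KWH),
        "co2_saved_kg": round(kwh * KG_CO2_PER_KWH),
    }

print(weekly_summary())
```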

Andrew McKenzie, Energy ICT Director for SSE Enterprise Energy Solutions, said: “This software is a genuinely exciting solution that allows organisations to access the last untapped area of energy efficiency, the ICT network.”

He added: “It ticks all the right boxes in terms of saving money, energy and carbon, and enables truly innovative and bespoke energy optimisation across ICT networks that align with building control system strategies. Organisations are therefore able to take significant steps towards total energy management.”

Andrew Mouat, Glasgow City Council’s Principal Officer for Carbon Management, said working with SSE Enterprise Energy Solutions has helped give them the ICT security and control they need, while also delivering efficiencies for the finance and facilities teams. He said: “This technology is delivering real value for Glasgow City Council and taxpayers. The cost savings and associated CO2 reduction speak for themselves. Prudent management of our infrastructure creates efficiencies but importantly also gives us the detailed analytics to inform our planning and be more focused in our ICT and building management. 

“It is contributing to Glasgow becoming a smart, sustainable Future City.”

13
Ten data centres in Bangladesh are set to receive international-standard certification, possibly within the next year. The announcement came from John Duffin, Managing Director for South Asia at the international organisation Uptime Institute, on the closing day of a two-day data centre technology conference held at the Bangabandhu International Conference Centre in Dhaka.

He said that, alongside the national data centre, several private-sector organisations are going through the global certification process.

He added that two organisations will complete the process this year, and that the hope is to award certification to ten organisations by next year.

Jointly organised for the second time by Bangladeshi technology firm DC Icon and the Data Center Professional Society of Bangladesh, the conference brought together information-management experts and technologists from nine countries.

Friday was the final day of the conference. Alongside an exhibition of innovative technologies, 50 seminars were held on related topics.

14
IT Forum / Best Linux server distro of 2018
« on: April 26, 2018, 11:46:04 AM »
1. Debian
Debian is over 20 years old and in part owes that longevity to the emphasis placed on producing a stable operating system. This is crucial if you want to set up a server, as updates can sometimes clash badly with existing software.

There are three branches of Debian, named 'Unstable', 'Testing' and 'Stable'. To become part of the current Stable release, packages must have been reviewed for several months as part of the Testing release. This results in a much more reliable system – but don't expect Debian to incorporate much 'bleeding edge' software as a result.


You can get started with Debian using a minimal Network Boot image which is less than 30MB in size. For a faster setup, download the larger network installer, which at just under 300MB contains more packages.


2. Ubuntu Server
While Ubuntu is best known for bringing desktop Linux to the masses, its Server variant is also extremely competitive. Canonical, the company behind Ubuntu, has developed LTS (Long Term Support) versions of Ubuntu Server, which, like the desktop flavour, can be updated for up to five years after the date of release, saving you the trouble of upgrading your server repeatedly. Canonical also periodically releases versions of Ubuntu Server at the same time as the latest desktop distro (e.g. 18.04).

If you're intent on building your own cloud platform, you can also download Ubuntu Cloud Server. Canonical claims that over 55% of OpenStack clouds already run on Ubuntu. For a fee, Canonical will even set up a managed cloud for you using BootStack.


3. OpenSUSE
OpenSUSE (formerly SUSE Linux) is a Linux distro specifically designed for developers and system admins wishing to run their own server. The easy-to-use installer can be configured to use 'Text Mode' rather than install a desktop environment to get your server up and running.

OpenSUSE will automatically download the minimum required packages for you, meaning only essential software is installed. The YaST Control Center allows you to configure network settings, such as setting up a static IP for your server. You can also use the built-in Zypper package manager to download and install essential server software such as postfix.


4. Fedora Server
Fedora is a community-developed operating system backed by Red Hat and used as the upstream source for its commercial distro, Red Hat Enterprise Linux. Fedora Server is a special implementation of the OS, allowing you to deploy and manage your server using the Rolekit tool. The operating system also includes a powerful PostgreSQL Database Server.

Fedora Server also includes FreeIPA, enabling you to manage authentication credentials and access control information, and to perform auditing, from one central location.

You can download the full 2.3GB ISO image of Fedora Server using the link below. The same page contains a link to a minimal 511MB NetInstall Image from Fedora's Other Downloads section for a faster barebones setup.


5. CentOS
Like Fedora, CentOS is a community-developed distribution of Linux, originally based on the commercial OS Red Hat Enterprise Linux. In light of this, the developers behind CentOS 7 have promised to provide full updates for the OS until the end of 2020, with maintenance updates until the end of June 2024 – which should save you the trouble of performing a full upgrade on your server in the near future.

You can avoid unnecessary packages by installing the 'minimal' ISO from the CentOS website, which at 792MB can fit onto a 90-minute CD-R. If you're eager to get started, the site also offers preconfigured AWS instances and Docker images.

15
It might look like witchcraft, but researchers at Nvidia have developed an advanced deep learning image-retouching tool that can intelligently reconstruct incomplete photos.

While removing unwanted artefacts in image editing is nothing new – Adobe Photoshop's Content-Aware tools are pretty much the industry standard – the prototype tool that Nvidia is showcasing looks incredibly impressive.

Don't take our word for it – check out the two-minute video below to get a taste of what this new technology is capable of.


What differentiates Nvidia's new tool from something like Content-Aware Fill in Photoshop is that it analyzes the image and understands what the subject should actually look like; Content-Aware Fill relies on surrounding parts of the image to fill in what it thinks should be there.

Nvidia's tool is a much more sophisticated solution. For instance, when trying to fill a hole where an eye would be in a portrait, as well as using information from the surrounding area, Nvidia's deep learning tool knows an eye should be there and can fill the hole with a realistic computer-generated alternative.

"Our model can robustly handle holes of any shape, size location, or distance from the image borders," the researchers write. "Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing. Further, our model gracefully handles holes of increasing size."

For the moment at least, there's no word on when we're likely to see this tool become more widely available. For now, though, it gives us a glimpse into the near future of image editing.

Pages: [1] 2 3 ... 8