Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - farzanaSadia

Pages: [1] 2 3 ... 9
1
Informative post. It would be great if you could provide some examples.

2
Can you please give more information about Project Jupyter?

3
Machine Learning/ Deep Learning / Re: How do exactly machines learn?
« on: October 28, 2018, 07:49:36 PM »
What kind of learning does Facebook use to recognize images?

4
Machine Learning/ Deep Learning / Re: Evolution of machine learning
« on: October 28, 2018, 07:48:24 PM »
Informative post.

5
Have you ever used your credit card at a new store or location only to have it declined? Has a sale ever been blocked because you charged a higher amount than usual?

Consumers’ credit cards are declined surprisingly often in legitimate transactions. One cause is that fraud-detecting technologies used by a consumer’s bank have incorrectly flagged the sale as suspicious. Now MIT researchers have employed a new machine-learning technique to drastically reduce these false positives, saving banks money and easing customer frustration.

Using machine learning to detect financial fraud dates back to the early 1990s and has advanced over the years. Researchers train models to extract behavioral patterns from past transactions, called “features,” that signal fraud. When you swipe your card, the card pings the model and, if the features match fraud behavior, the sale gets blocked.

Behind the scenes, however, data scientists must dream up those features, which mostly center on blanket rules for amount and location. If any given customer spends more than, say, $2,000 on one purchase, or makes numerous purchases in the same day, they may be flagged. But because consumer spending habits vary, even in individual accounts, these models are sometimes inaccurate: A 2015 report from Javelin Strategy and Research estimates that only one in five fraud predictions is correct and that the errors can cost a bank $118 billion in lost revenue, as declined customers then refrain from using that credit card.

The MIT researchers have developed an “automated feature engineering” approach that extracts more than 200 detailed features for each individual transaction — say, if a user was present during purchases, and the average amount spent on certain days at certain vendors. By doing so, it can better pinpoint when a specific card holder’s spending habits deviate from the norm.

Tested on a dataset of 1.8 million transactions from a large bank, the model reduced false positive predictions by 54 percent over traditional models, which the researchers estimate could have saved the bank 190,000 euros (around $220,000) in lost revenue.

“The big challenge in this industry is false positives,” says Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems (LIDS) and co-author of a paper describing the model, which was presented at the recent European Conference for Machine Learning. “We can say there’s a direct connection between feature engineering and [reducing] false positives. … That’s the most impactful thing to improve accuracy of these machine-learning models.”

Paper co-authors include: lead author Roy Wedge '15, a former researcher in the Data to AI Lab at LIDS; James Max Kanter ’15, SM ’15; and Sergio Iglesias Perez of Banco Bilbao Vizcaya Argentaria.

Extracting “deep” features

Three years ago, Veeramachaneni and Kanter developed Deep Feature Synthesis (DFS), an automated approach that extracts highly detailed features from any data, and decided to apply it to financial transactions.

Enterprises will sometimes host competitions where they provide a limited dataset along with a prediction problem such as fraud. Data scientists develop prediction models, and a cash prize goes to the most accurate model. The researchers entered one such competition and achieved top scores with DFS.

However, they realized the approach could reach its full potential if trained on several sources of raw data. “If you look at what data companies release, it’s a tiny sliver of what they actually have,” Veeramachaneni says. “Our question was, ‘How do we take this approach to actual businesses?’”

Backed by the Defense Advanced Research Projects Agency’s Data-Driven Discovery of Models program, Kanter and his team at Feature Labs — a spinout commercializing the technology — developed an open-source library for automated feature extraction, called Featuretools, which was used in this research.

The researchers obtained a three-year dataset provided by an international bank, which included granular information about transaction amount, times, locations, vendor types, and terminals used. It contained about 900 million transactions from around 7 million individual cards. Of those transactions, around 122,000 were confirmed as fraud. The researchers trained and tested their model on subsets of that data.

In training, the model looks for patterns of transactions and among cards that match cases of fraud. It then automatically combines all the different variables it finds into “deep” features that provide a highly detailed look at each transaction. From the dataset, the DFS model extracted 237 features for each transaction. Those represent highly customized variables for card holders, Veeramachaneni says. “Say, on Friday, it’s usual for a customer to spend $5 or $15 at Starbucks,” he says. “That variable will look like, ‘How much money was spent in a coffee shop on a Friday morning?’”
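The open-source Featuretools library mentioned above automates exactly this kind of feature extraction. As a rough illustration only (not the researchers' code: the toy table, column names and primitive choices are assumptions, and the calls follow the 2018-era Featuretools API, which may differ in newer versions), a deep feature synthesis run over card transactions might look like this:

import pandas as pd
import featuretools as ft

# Toy transactions table (columns are hypothetical, for illustration only)
transactions = pd.DataFrame({
    "transaction_id": [1, 2, 3, 4],
    "card_id": ["A", "A", "B", "B"],
    "amount": [5.50, 14.20, 2000.00, 35.75],
    "transaction_time": pd.to_datetime([
        "2018-10-05 08:15", "2018-10-12 08:40",
        "2018-10-05 19:05", "2018-10-06 11:30",
    ]),
})

# Build an entity set: individual transactions grouped under cards
es = ft.EntitySet(id="card_transactions")
es = es.entity_from_dataframe(entity_id="transactions",
                              dataframe=transactions,
                              index="transaction_id",
                              time_index="transaction_time")
es = es.normalize_entity(base_entity_id="transactions",
                         new_entity_id="cards",
                         index="card_id")

# Deep Feature Synthesis: stack aggregation and transform primitives to
# generate per-card behavioral features automatically
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_entity="cards",
                                      agg_primitives=["mean", "sum", "count"],
                                      trans_primitives=["weekday", "hour"])
print(feature_defs)

With richer raw data and more primitives, this same mechanism is what produces the hundreds of per-card-holder features described above.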

The model then creates an if/then decision tree over those features for that account, separating behavior that does and doesn’t point to fraud. When a new transaction is run through the decision tree, the model decides in real time whether or not the transaction is fraudulent.

Pitted against a traditional model used by a bank, the DFS model generated around 133,000 false positives versus 289,000 false positives, about 54 percent fewer incidents. That, along with a smaller number of false negatives — actual fraud that wasn’t detected — could save the bank an estimated 190,000 euros, the researchers say.

Iglesias notes he and his colleagues at BBVA have consistently been able to reproduce the MIT team’s results using the DFS model with additional card and business data, with a minimal increase in computational cost.

Stacking primitives

The backbone of the model consists of creatively stacked “primitives,” simple functions that take two inputs and give an output. For example, calculating an average of two numbers is one primitive. That can be combined with a primitive that looks at the time stamp of two transactions to get an average time between transactions. Stacking another primitive that calculates the distance between two addresses from those transactions gives an average time between two purchases at two specific locations. Another primitive could determine if the purchase was made on a weekday or weekend, and so on.
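As a minimal sketch of that stacking idea (illustrative only, not code from the paper; the primitive implementations and sample values are assumptions):

from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Primitive: average of two numbers
def average(a, b):
    return (a + b) / 2.0

# Primitive: time between two transactions, in hours
def hours_between(t1, t2):
    return abs((t2 - t1).total_seconds()) / 3600.0

# Primitive: great-circle distance between two (lat, lon) points, in km
def distance_km(p1, p2):
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

# Sample transactions: timestamps of purchases at two nearby stores
t = [datetime(2018, 10, 5, 8, 15), datetime(2018, 10, 5, 9, 0),
     datetime(2018, 10, 6, 8, 30), datetime(2018, 10, 6, 9, 10)]
store_a, store_b = (42.3601, -71.0942), (42.3736, -71.1190)

# Stacked feature: average time between purchases at two specific locations
avg_gap_hours = average(hours_between(t[0], t[1]), hours_between(t[2], t[3]))
stores_km_apart = distance_km(store_a, store_b)

# Another primitive: was the purchase made on a weekend?
is_weekend = t[0].weekday() >= 5

Each stacked combination becomes a candidate feature that can be computed for every card holder.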

“Once we have those primitives, there is no stopping us for stacking them … and you start to see these interesting variables you didn’t think of before. If you dig deep into the algorithm, primitives are the secret sauce,” Veeramachaneni says.

One important feature that the model generates, Veeramachaneni notes, combines the distance between two purchase locations with whether each purchase happened in person or remotely. If someone buys something in person at, say, the Stata Center and, a half hour later, buys something else in person 200 miles away, there’s a high probability of fraud. But if one purchase occurred through a mobile phone, the fraud probability drops.

“There are so many features you can extract that characterize behaviors you see in past data that relate to fraud or nonfraud use cases,” Veeramachaneni says.

"In fact, this automated feature synthesis technique, and the overall knowledge provided by MIT in this project, has shown us a new way of refocusing research in other challenges in which we initially have a reduced set of features. For example, we are obtaining equally promising results in the detection of anomalous behavior in internal network traffic or in market operations, just to mention two [examples],” Iglesias adds.


6
Data Mining and Big Data / Why random forests outperform decision trees
« on: October 28, 2018, 07:41:22 PM »
Random forests consist of multiple single trees, each based on a random sample of the training data. They are typically more accurate than single decision trees. The following figure shows how the decision boundary becomes more accurate as more trees are added.

Here we’ll provide two intuitive reasons why random forests outperform single decision trees.

Higher resolution in the feature space

Trees are unpruned. While a single decision tree like CART is often pruned, a random forest tree is fully grown and unpruned, and so, naturally, the feature space is split into more and smaller regions.

Trees are diverse. Each random forest tree is learned on a random sample, and at each node, a random set of features are considered for splitting. Both mechanisms create diversity among the trees.

Two random trees each with one split are illustrated below. For each tree, two regions can be assigned with different labels. By combining the two trees, there are four regions that can be labeled differently.

Unpruned and diverse trees lead to a high resolution in the feature space. For continuous features, it means a smoother decision boundary, as shown in the following.

Handling Overfitting

A single decision tree needs pruning to avoid overfitting. The following shows the decision boundary from an unpruned tree. The boundary is finer but makes obvious mistakes (overfitting).

So how can random forests build unpruned trees without overfitting? Let’s provide an explanation below.

For the two-class (blue and red) problem below, both splits x1=3 and x2=3 can fully separate the two classes.

The two splits, however, result in very different decision boundaries. In other words, these boundaries conflict with each other in some regions, and may not be reliable.

Now consider random forests. Each tree is trained on a bootstrap sample of the n training points, so the probability that the red point is missing from a given sample is (1 - 1/n)^n ≈ 1/e ≈ 0.37, or roughly 1/3.

So roughly 1 out of 3 trees is built with all blue data and always predicts class blue. The other 2/3 of the trees have the red point in their training data. Since a random subset of features is considered at each node, we expect roughly 1/3 of the trees to split on x1 and the remaining 1/3 to split on x2. The splits from the two types of trees are illustrated below.
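As a quick, hands-on check of the overall claim (a sketch using scikit-learn on synthetic data; the dataset and parameters are arbitrary choices, not from the original post):

from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy two-class data
X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Single unpruned decision tree
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Random forest: many unpruned trees, each trained on a bootstrap sample,
# with a random subset of features considered at each split
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))

On held-out data the forest typically scores noticeably higher than the single tree, in line with the smoother, less overfit boundary described above.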

7
Uber has been one of the most active companies trying to accelerate the implementation of real-world machine learning solutions. Just this year, Uber has introduced technologies like Michelangelo, Pyro.ai and Horovod that focus on key building blocks of machine learning solutions in the real world. This week, Uber introduced another piece of its machine learning stack, this time aiming to shorten the cycle from experimentation to production. PyML is a library that enables the rapid development of Python applications in a way that is compatible with their production runtime.

The problem PyML attempts to address is one of those omnipresent challenges in large-scale machine learning applications. Typically, there is a tangible mismatch between the tools and frameworks used by data scientists to prototype models and the corresponding production runtimes. For instance, it’s very common for data scientists to use Python-based frameworks such as PyTorch or Keras for producing experimental models that then need to be adapted to a runtime such as Apache Spark ML Pipelines, which brings very specific constraints. Machine learning technologists refer to this issue as a tradeoff between flexibility and resource-efficiency. In the case of Uber, data scientists were building models in Python machine learning frameworks which needed to be refactored by the Michelangelo team to match the constraints of Apache Spark pipelines.

Overcoming this limitation meant extending the capabilities of Michelangelo to support models authored in mainstream machine learning frameworks while keeping a consistent model for training and optimization.

Enter PyML
The goal of Uber’s PyML is to streamline the development of machine learning applications and bridge the gap between experimentation and production runtimes. To accomplish that, PyML focuses on three main aspects:

1) Provide a standard contract for machine learning prediction models.

2) Enable a consistent model for packaging and deploying machine learning models using Docker containers.

3) Enable Michelangelo-integrated runtimes for online and offline prediction models.

The following figure illustrates the basic architecture principles of PyML.


A Standard Machine Learning Contract
PyML models can be authored in different machine learning frameworks such as TensorFlow, PyTorch or Scikit-Learn. The models can use two main types of datasets: DataFrames, which store tabular structured data, and Tensors, which store named multidimensional arrays. After the models are created, they are adapted to a standard PyML contract definition which is essentially a class that inherits from DataFrameModel or TensorModel abstract classes, respectively. In both cases, users only need to implement two methods: a constructor to load their model parameters and a predict() method that accepts and returns either DataFrames or Tensors.
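As a rough sketch of what such a contract class might look like (illustrative only, not Uber’s actual PyML API: the base class is a stand-in named after the DataFrameModel abstraction described above, and the pickled model file is hypothetical):

import pickle
import pandas as pd

# Stand-in for the DataFrameModel abstract class described above
class DataFrameModel:
    def predict(self, df: pd.DataFrame) -> pd.DataFrame:
        raise NotImplementedError

class FraudScoreModel(DataFrameModel):
    def __init__(self, model_path: str = "model.pkl"):
        # Constructor loads the trained model parameters
        with open(model_path, "rb") as f:
            self._model = pickle.load(f)

    def predict(self, df: pd.DataFrame) -> pd.DataFrame:
        # predict() accepts a DataFrame and returns a DataFrame
        scores = self._model.predict_proba(df)[:, 1]
        return pd.DataFrame({"fraud_score": scores})

The value of the contract is that, once a model is wrapped this way, the same object can be packaged and served without framework-specific glue.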


Packaging and Deployment
After the PyML models are created, they can be packaged into Docker containers using a consistent structure. PyML introduces a standard deployment format based on four fundamental artifacts.


Using that structure, a developer can package and deploy a PyML model using the following code. The PyML Docker image will contain the model and all the corresponding dependencies. The models will be immediately available for execution in the Michelangelo console.

Offline and Online Predictions
PyML supports both batch (offline) and online execution models for predictions. Offline predictions are modeled as an abstraction over PySpark. In that context, PyML users simply provide a SQL query with column names and types matching the inputs expected by their model, and the name of a destination Hive table in which to store output predictions. Behind the scenes, PyML starts a containerized PySpark job using the same image and Python environment as for serving the model online, ensuring that there are no differences between the offline and online predictions. Executing offline predictions is relatively straightforward as illustrated in the following code:


The standard two-operation (init, predict) contract of PyML models simplifies the implementation of online predictions. PyML enables online predictions by enabling lightweight gRPC interfaces for the Docker containers which are used by a common online prediction Service as shown in the following figure. Upon request, the online prediction service will launch the corresponding PyML model-specific Docker image as a nested Docker container via Mesos’ API. When the container is launched, it starts the PyML RPC server and begins listening for prediction requests on a Unix domain socket from the online prediction service.


PyML addresses one of the most important challenges in large-scale machine learning applications by bridging the gap between experimentation and runtime environments. Beyond its specific technological contributions, the architecture of PyML can be adapted to different technology stacks and should serve as an important reference for organizations starting their machine learning journey.

8
Software Engineering / Report: Can Kotlin compete with Java?
« on: July 03, 2018, 12:33:50 PM »
Java continues to dominate the programming language space for developers, but a new report reveals that Kotlin may soon knock it out of the top spot for mobile development. Packt released the results of its 2018 Skills Up report designed to look at the trends and tools software developers are using today.

The 2018 Skills Up report surveyed more than 8,000 developers and technology experts in four broad categories: app development, web development, security and systems admin, and data.

Kotlin is a statically typed programming language developed by JetBrains and supported by Google’s Android operating system. While Kotlin didn’t make it onto the list of top programming languages app developers are currently using overall, 71 percent of respondents stated that Kotlin is a serious contender for Java.

“Java beware: respondents say that Kotlin might just topple you from your throne. With adoption by Google for Android development, is this the beginning of the end of Java for mobile?” the report stated. “Kotlin has been around since 2011, but only recently has it started to really capture the imagination of engineers. Google has done a lot to reinforce its reputation — the fact that it was fully supported in Android Studio 3.0 in 2017 has ensured it is now one of the most popular Android development languages. We expect to see it competing closely with Java by the end of the year.”

Rounding out the application developer top five are JavaScript, Python, C# and SQL. Java is more popular when developing for mobile while Python was more favored by higher-earning app developers, and C# was found to be more popular among developers building enterprise and desktop applications.

“In 2018, we’ve seen C-based languages heavily lose out in favor of languages that can write more easily for the web. Only among desktop developers and game scripting does C# still hold the top spot: every other developer is looking to have the capacity to build for the browser, or for mobile.”

The top tools for mobile development, according to the report, included Android Studio, Xcode, macOS, Xamarin and iOS SDK. Android Studio has the most developers using it with 39 percent of the respondents, while Xcode only saw 17 percent of respondents using it. However, 50 percent of developers who make $70,000 or more cited using Xcode, iOS SDK and/or macOS.

Additionally, the report found the top tools for enterprise and desktop included .NET, Visual Studio and Java EE while MySQL, SQL Server and SQLite came out on top for the most commonly used databases.

App developers also found there is potential for using Swift outside of mobile development.

For web development, the report found the top languages included JavaScript, HTML/CSS, PHP, Python and Java. However, the report noted that app development and web development are beginning to no longer be considered two separate entities, with web and app developers sharing a majority of the same toolchains.

“In 2018, working in tech almost always means working with the web. As more and more applications migrate to the browser and the cloud and as sites become ever more sophisticated, web development knowledge becomes a greater and greater priority,” the report stated.

The top front-end tools and frameworks for web development included jQuery, Bootstrap, npm, Angular and Webpack, while the top back-end tools included Node.js, ASP.NET Core, Express.js and Laravel.

Sixty-five percent of web developers also found that conversational UI and chatbots have a strong future in the webUI space.

When looking at security and systems administration, the report found Python and Bash as the top used scripting languages followed by Shell, PowerShell and JavaScript. The top security tools include Wireshark, Nmap, Kali Linux and Metasploit. For system admin and virtualization tools, developers are using Linux, Windows OS, Docker, Ubuntu Server and Windows Server.

Other security and systems admin findings included that IoT is being held back by security issues, and a majority of organizations don’t treat cybersecurity with enough seriousness.

Python continued to top the list of languages when looking at data, followed by SQL, R and JavaScript. The top data libraries, tools and frameworks included Excel, NumPy, Anaconda and Pandas. According to the respondents, the next big areas for data include TensorFlow, deep learning, and machine learning.

Among data developers, 83 percent are excited about the potential of quantum computing, and more than half find AWS is the top cloud provider for Big Data.

Other findings of the report included:

Seventy-two percent of respondents feel like they are a part of a community with other developers
Sixty percent are satisfied with their jobs
Six percent are extremely dissatisfied
The top technical barrier across all industries is dealing with technical debt and legacy problems
Eighty-six percent of respondents agree it is important to develop soft skills such as communication and teamwork.

“Only one thing is certain in the world of tech: change. Working in development is about navigating a constantly evolving industry, keeping up to date with the skills you need to succeed,” the report stated.

9
Researchers have developed a new system designed to tackle complex objects and workflows on Big Data platforms. Computer science researchers from Rice University’s DARPA-funded Pliny project have announced PlinyCompute.

The project is funded through DARPA’s Mining and Understanding Software Enclaves (MUSE) initiative. The Pliny project aims to create systems that automatically detect and fix errors in programs. PlinyCompute is “a system purely for developing high-performance, Big Data codes.”

“With machine learning, and especially deep learning, people have seen what complex analytics algorithms can do when they’re applied to Big Data,” Chris Jermaine, a Rice computer science professor who is leading the platform’s development, said in the announcement. “Everyone, from Fortune 500 executives to neuroscience researchers are clamoring for more and more complex algorithms, but systems programmers have mostly bad options for providing that today. HPC can provide the performance, but it takes years to learn to write code for HPC, and perhaps worse, a tool or library that might take days to create with Spark can take months to program on HPC.”

According to Jermaine, while Spark was developed for Big Data and supports things such as load balancing, fault tolerance and resource allocation, it wasn’t designed for complex computation. “Spark is built on top of the Java Virtual Machine, or JVM, which manages runtimes and abstracts away most of the details regarding memory management,” said Jia Zou, a research scientist at Rice. “Spark’s performance suffers from its reliance on the JVM, especially as computational demands increase for tasks like training deep neural networks for deep learning.”

Zou continued that PlinyCompute was designed for high performance, and has been found to be at least twice as fast as Spark at complex computation, and in some cases 50 times faster. However, PlinyCompute requires programmers to write libraries and models in C++, while Spark requires Java-based coding. Because of this, Jermaine says programmers might find it difficult to write code for PlinyCompute.

“There’s more flexibility with PlinyCompute,” Jermaine said. “That can be a challenge for people who are less experienced and knowledgeable about C++, but we also ran a side-by-side comparison of the number of lines of code that were needed to complete various implementations, and for the most part there was no significant difference between PlinyCompute and Spark.”

10
Computer scientists have a history of borrowing ideas from nature, such as evolution. When it comes to optimising computer programs, a very interesting evolutionary-based approach has emerged over the past five or six years that could bring incalculable benefits to industry and eventually consumers. We call it genetic improvement.

Genetic improvement involves writing an automated “programmer” that manipulates the source code of a piece of software through trial and error with a view to making it work more efficiently. This might include swapping lines of code around, deleting lines and inserting new ones – very much like a human programmer. Each manipulation is then tested against some quality measure to determine if the new version of the code is an improvement over the old version. It is about taking large software systems and altering them slightly to achieve better results.
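A minimal sketch of that trial-and-error loop, written as a simple hill climb rather than a full evolutionary search (the edit operators and the idea of a test- or benchmark-based quality measure are assumptions for illustration):

import random

def mutate(lines):
    # Apply one random edit: swap two lines, delete a line, or copy-and-insert a line
    new = list(lines)
    op = random.choice(["swap", "delete", "insert"])
    i, j = random.randrange(len(new)), random.randrange(len(new))
    if op == "swap":
        new[i], new[j] = new[j], new[i]
    elif op == "delete" and len(new) > 1:
        del new[i]
    else:
        new.insert(i, new[j])
    return new

def improve(lines, quality, generations=1000):
    # quality(lines) returns a score, e.g. tests passed or speed on a benchmark
    best, best_score = lines, quality(lines)
    for _ in range(generations):
        candidate = mutate(best)
        score = quality(candidate)
        if score >= best_score:  # keep the edit only if it is no worse
            best, best_score = candidate, score
    return best

Real genetic improvement systems use richer edit operators, evolve a whole population of program variants, and measure quality with test suites and performance benchmarks, but the keep-what-scores-better loop is the core idea.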

These interventions can bring a variety of benefits in the realm of what programmers describe as the functional properties of a piece of software. They might improve how fast a program runs, for instance, or remove bugs. They can also be used to help transplant old software to new hardware. 

The potential doesn’t stop there. Because genetic improvement operates on source code, it can also improve the so-called non-functional properties. These include all the features that are not concerned purely with just the input-output behaviour of programs, such as the amount of bandwidth or energy that the software consumes. These are often particularly tricky for a human programmer to deal with, given the already challenging problem of building correctly functioning software in the first place.

We have seen a few examples of genetic improvement beginning to be recognised in recent years – albeit still within universities for the moment. A good early example dates from 2009, when an automated “programmer” built by the University of New Mexico and the University of Virginia fixed 55 out of 105 bugs in various different kinds of software, ranging from a media player to a Tetris game. For this it won $5,000 (£3,173) and a Gold Humie Award, which is awarded for achievements produced by genetic and evolutionary computation.

In the past year, UCL in London has overseen two research projects that have demonstrated the field’s potential (full disclosure: both have involved co-author William Langdon). The first involved a genetic-improvement program that could take a large complex piece of software with more than 50,000 lines of code and speed up its functionality by 70 times.

The second carried out the first automated wholesale transplant of one piece of software into a larger one by taking a linguistic translator called Babel and inserting it into an instant-messaging system called Pidgin.

Nature and computers
To understand the scale of the opportunity, you have to appreciate that software is a unique engineering material. In other areas of engineering, such as electrical and mechanical engineering, you might build a computational model before you build the final product, since it allows you to push your understanding and test a particular design. On the other hand, software is its own model. A computational model of software is still a computer program. It is a true representation of the final product, which maximises your ability to optimise it with an automated programmer.

As we mentioned at the beginning, there is a rich tradition of computer scientists borrowing ideas from nature. Nature inspired genetic algorithms, for example, which crunch through the millions of possible answers to a real-life problem with many variables to come up with the best one. Examples include anything from devising a wholesale road distribution network to fine-tuning the design of an engine.

Though the evolution metaphor has become something of a millstone in this context, as discussed here, genetic algorithms have had a number of successes producing results which are either comparable with human programs or even better.

Evolution also inspired genetic programming, which attempts to build programs from scratch using small sets of instructions. It is limited, however. One of its many criticisms is that it cannot even evolve the sort of program that would typically be expected of a first-year undergraduate, and will not therefore scale up to the huge software systems that are the backbone of large multinationals.

This makes genetic improvement a particularly interesting deviation from this discipline. Instead of trying to rewrite the whole program from scratch, it succeeds by making small numbers of tiny changes. It doesn’t even have to confine itself to genetic improvement as such. The Babel/Pidgin example showed that it can extend to transplanting a piece of software into a program in a similar way to how surgeons transplant body organs from donors to recipients. This is a reminder that the overall goal is automated software engineering. Whatever nature can teach us when it comes to developing this fascinating new field, we should grab it with both hands.

11
By Chris Baraniuk

Candidates hoping to land their dream job are increasingly being asked to play video games, with companies like Siemens, E.ON and Walmart filtering out hundreds of applicants before the interview stage based partly on how they perform. Played on either smartphones or computers, the games’ designers say they can help improve workplace diversity, but there are questions over how informative the results really are.

To the casual observer, many of the games might seem almost nonsensical. One series of tests by UK-based software house Arctic Shores includes a trial where the player must tap a button frantically to inflate balloons for a party without bursting them. In another, the candidate taps a logo matching the one displayed on screen, at an ever more blistering pace.

Afterwards, a personality profile is built using data on how someone performed, says Robert Newry at Arctic Shores. They claim the traits that can be measured include a person’s willingness to deliberate, seek novel approaches to tasks and even their tendency for social dominance. “What we are measuring is not your reaction skills, your dexterity,” says Newry. “It’s the way you go about approaching and solving the challenge that is put in front of you.”

Using the games, Siemens UK doubled the proportion of female candidates that made it past the initial stages of graduate recruitment than in the previous year, according to data recently released by Arctic Shores. Another company that makes such tests, Pymetrics, says its assessments have boosted recruitment of under-represented groups, with one financial services firm increasing the number of minority candidates offered technical roles by 20 per cent. However, it’s not clear if the boost could simply be down to an increased focus on or awareness of diversity in the workplace.

The games are meant to offer a form of psychometric testing and are based on techniques developed for measuring personality traits. But whereas in academic research the tests are generally calibrated the same way for all participants, when used for recruitment they are often tweaked depending on how existing employees at a company play them.

“We go into these companies and say, ‘Your individuals may be different. Let’s use your high performers to put together a data set,’” says Frida Polli, co-founder of Pymetrics. In other words, if your gameplay matches that of someone already at the firm, you’re more likely to advance to the next stage of recruitment.

“You can develop a game-based assessment as rigorous as any traditional psychometric assessment,” says Richard Landers at Old Dominion University in Virginia. “But I don’t know how many companies actually succeed at that.” This is because it takes time and money to show that any assessment’s measurement of a given trait is statistically reliable.

Landers performed an independent review of game-like intelligence tests by Australia-based firm Revelian and says the results were reliable. Arctic Shores have also run a study with around 300 participants to validate their games.

Caryn Lerman at the University of Pennsylvania has studied brain-training apps and says that although people’s improved performance at these can be tracked over time, they generally have no observable impact on cognitive ability in the real world. She is sceptical that playing the games well corresponds to ability to do a good day’s work in the office.

Although the game-based tests are mandatory, a company’s decision to interview someone may be based on other factors as well, such as their academic record. But in trying to find new ways of shortlisting the best of the bunch, companies risk alienating unsuccessful candidates, says Margaret Beier at Rice University in Texas. They might even expose themselves to lawsuits. “If I apply for a job, play games that seem totally unrelated to it and then don’t get that job, I might have a lot of questions about that assessment,” she says.

12
Java Forum / Java will no longer have ‘major’ releases
« on: May 13, 2018, 06:39:28 PM »
Remember when a new number meant a software release was a significant, or major, one? For Java, that pattern is over. Java 9 was the last “major” release, Oracle says.

All versions after that—including the recently released Java 10 and the forthcoming Java 11—are what the industry typically calls “point releases,” because they were usually numbered x.1, x.2, and so on to indicate an intermediate, more “minor” release. (Oracle has called those point releases “feature releases.”)

As of Java 10, Oracle has put Java on a twice-annual release schedule, and although those releases get whole numbers in their versions, they’re more akin to point releases. Oracle recently declared, “There are no ‘major releases’ per se any more; that is now a legacy term. Instead, there is a steady stream of ‘feature releases.’”

Under the plan, moving from Java 9 to Versions 10 and 11 is similar to previous moves from Java 8 to Java version 8u20 and 8u40.

Previously, it took about three years to move from Java 7 to 8 and then 9.

Oracle says that Java’s twice-yearly release schedule makes it easier for tools vendors to keep up with changes, because they will work with a stream of smaller updates. Upgrading tools from Java 9 to 10 happened “almost overnight,” Oracle says, compared to the difficulties it said some tools vendors had making the move from Java 8 to 9.

For users who just want stability and can pass on new features of each “feature release,” Oracle is still holding onto that old three-year schedule for its long-term support (LTS) releases. Java 11 will be an LTS release, with commercial support available from Oracle for at least eight additional years.

13
Driving down energy use and costs should be a number one priority for British companies, according to SSE Enterprise Energy Solutions – the UK’s leading provider of energy management services.

Information and Communications Technology (ICT) typically accounts for 12% of business energy use yet ICT energy management is often overlooked as a way to realise cost savings.

SSE Enterprise Energy Solutions recently delivered a 9% cost saving for Glasgow City Council across its school estate. The pilot project ran across 29 high schools with a total of 9,000 devices and delivered savings of £4,500 per week. Its success has led to the council putting in place a schools ICT policy with energy efficiency at its core.

Kevin Greenhorn, Managing Director of SSE Enterprise Energy Solutions, said: “Technology can revolutionise how organisations manage their energy consumption and ICT is one of the first places to start. We can give visibility across all ICT assets and a central solution which reduces energy costs and carbon emissions – without the need to rely on people to change their behaviour.”

He added: “Not only does it reduce costs but it provides analytical data which informs future planning and sustainable procurement decisions. It’s the right solution for organisations looking to take steps to reduce energy consumption beyond traditional building systems, such as heating, ventilation, air conditioning and lighting.”

SSE Enterprise Energy Solutions offers an Energy ICT solution which allows asset identification and management of every single machine across multiple sites. Software is installed centrally on a standard server and the solution allows for remote control of desktop computers, monitors, laptops, routers, wireless access points, printers, copiers, telephones and other devices.

These devices can be powered down when not in use, for example during evenings and weekends, and their energy consumption monitored when in use. In Glasgow City Council’s case this has enabled it to produce a weekly summary detailing the savings delivered in carbon, energy and pounds, which is then reported at executive level.

Andrew McKenzie, Energy ICT Director for SSE Enterprise Energy Solutions, said: “This software is a genuinely exciting solution that allows organisations to access the last untapped area of energy efficiency, the ICT network.”

He added: “It ticks all the right boxes in terms of saving money, energy and carbon, and enables truly innovative and bespoke energy optimisation across ICT networks that align with building control system strategies. Organisations are therefore able to take significant steps towards total energy management.”

Andrew Mouat, Glasgow City Council’s Principal Officer for Carbon Management, said working with SSE Enterprise Energy Solutions has helped give them the ICT security and control they need, while also delivering efficiencies for the finance and facilities teams. He said: “This technology is delivering real value for Glasgow City Council and taxpayers. The cost savings and associated CO2 reduction speak for themselves. Prudent management of our infrastructure creates efficiencies but importantly also gives us the detailed analytics to inform our planning and be more focused in our ICT and building management. 

“It is contributing to Glasgow becoming a smart, sustainable Future City.”

14
Ten data centers in Bangladesh are set to receive international-standard recognition, possibly within the next year. The announcement was made by John Duffin, Managing Director for South Asia at the international organization Uptime Institute, on the closing day of a two-day Data Center Technology Summit held at the Bangabandhu International Conference Centre in the capital.

He said that, alongside the national data center, several private-sector organizations are going through the global certification process.

He added that the process will be completed for two organizations within this year, and that he hopes ten organizations will receive the quality recognition by next year.

Organized for the second time, jointly by Bangladeshi technology firm DC Icon and the Data Center Professional Society of Bangladesh, the summit brought together information-management experts and technologists from nine countries.

Friday was the final day of the summit, which featured exhibitions of innovative technology along with 50 seminars on related topics.

15
IT Forum / Best Linux server distro of 2018
« on: April 26, 2018, 11:46:04 AM »
1. Debian
Debian is over 20 years old and in part owes that longevity to the emphasis placed on producing a stable operating system. This is crucial if you want to set up a server, as updates can sometimes clash badly with existing software.

There are three branches of Debian, named 'Unstable', 'Testing' and 'Stable'. To become part of the current Stable release, packages must have been reviewed for several months as part of the Testing release. This results in a much more reliable system – but don't expect Debian to incorporate much 'bleeding edge' software as a result.


You can get started with Debian using a minimal Network Boot image which is less than 30MB in size. For a faster setup, download the larger network installer which at just under 300MB contains more packages.


2. Ubuntu Server
While Ubuntu is best known for bringing desktop Linux to the masses, its Server variant is also extremely competitive. Canonical, the company behind Ubuntu, has developed LTS (Long Term Support) versions of Ubuntu Server, which, like the desktop flavour, receive updates for up to five years after the date of release, saving you the trouble of upgrading your server repeatedly. Canonical also periodically releases versions of Ubuntu Server at the same time as the latest desktop distro (i.e. 18.04).

If you're intent on building your own cloud platform, you can also download Ubuntu Cloud Server. Canonical claims that over 55% of OpenStack clouds already run on Ubuntu. For a fee, Canonical will even set up a managed cloud for you using BootStack.


3. OpenSUSE
OpenSUSE (formerly SUSE Linux) is a Linux distro specifically designed for developers and system admins wishing to run their own server. The easy-to-use installer can be configured to use 'Text Mode' rather than install a desktop environment to get your server up and running.

OpenSUSE will automatically download the minimum required packages for you, meaning only essential software is installed. The YaST Control Center allows you to configure network settings, such as setting up a static IP for your server. You can also use the built-in Zypper package manager to download and install essential server software such as postfix.


4. Fedora Server
Fedora is a community-developed operating system based on the commercial Linux distro Red Hat. Fedora Server is a special implementation of the OS, allowing you to deploy and manage your server using the Rolekit tool. The operating system also includes a powerful PostgreSQL Database Server.

Fedora Server also includes FreeIPA, enabling you to manage authentication credentials, access control information and perform auditing from one central location.

You can download the full 2.3GB ISO image of Fedora Server using the link below. The same page contains a link to a minimal 511MB NetInstall Image from Fedora's Other Downloads section for a faster barebones setup.


5. CentOS
Like Fedora, CentOS is a community-developed distribution of Linux, originally based on the commercial OS Red Hat Enterprise Linux. In light of this, the developers behind CentOS 7 have promised to provide full updates for the OS until the end of 2020, with maintenance updates until the end of June 2024 – which should save you the trouble of performing a full upgrade on your server in the near future.

You can avoid unnecessary packages by installing the 'minimal' ISO from the CentOS website, which at 792MB can fit onto a 90 minute CD-R. If you're eager to get started, the site also offers preconfigured AWS instances and Docker images.
