Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - farzanaSadia

Have you ever used your credit card at a new store or location only to have it declined? Has a sale ever been blocked because you charged a higher amount than usual?

Consumers’ credit cards are declined surprisingly often in legitimate transactions. One cause is that fraud-detecting technologies used by a consumer’s bank have incorrectly flagged the sale as suspicious. Now MIT researchers have employed a new machine-learning technique to drastically reduce these false positives, saving banks money and easing customer frustration.

Using machine learning to detect financial fraud dates back to the early 1990s and has advanced over the years. Researchers train models to extract behavioral patterns from past transactions, called “features,” that signal fraud. When you swipe your card, the card pings the model and, if the features match fraud behavior, the sale gets blocked.

Behind the scenes, however, data scientists must dream up those features, which mostly center on blanket rules for amount and location. If any given customer spends more than, say, $2,000 on one purchase, or makes numerous purchases in the same day, they may be flagged. But because consumer spending habits vary, even in individual accounts, these models are sometimes inaccurate: A 2015 report from Javelin Strategy and Research estimates that only one in five fraud predictions is correct and that the errors can cost a bank $118 billion in lost revenue, as declined customers then refrain from using that credit card.
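Blanket rules of this kind are easy to sketch. The thresholds and record layout below are invented for illustration; they are not from the Javelin report or any bank's actual system:

```python
# Illustrative only: hypothetical blanket rules of the kind described above,
# applied uniformly to every card holder regardless of individual habits.
from collections import defaultdict

AMOUNT_LIMIT = 2000.0      # flag any single purchase above this amount
DAILY_COUNT_LIMIT = 5      # flag more than this many purchases in one day

def flag_suspicious(transactions):
    """transactions: list of dicts with 'card', 'date', and 'amount' keys.
    Returns the subset of transactions flagged by the blanket rules."""
    per_card_day = defaultdict(int)
    flagged = []
    for t in transactions:
        per_card_day[(t["card"], t["date"])] += 1
        if (t["amount"] > AMOUNT_LIMIT
                or per_card_day[(t["card"], t["date"])] > DAILY_COUNT_LIMIT):
            flagged.append(t)
    return flagged
```

Because the same thresholds apply to every account, a customer whose normal spending happens to sit near a limit will be flagged repeatedly, which is exactly the false-positive problem the article describes.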

The MIT researchers have developed an “automated feature engineering” approach that extracts more than 200 detailed features for each individual transaction — say, whether a user was present during a purchase, or the average amount spent on certain days at certain vendors. By doing so, it can better pinpoint when a specific card holder’s spending habits deviate from the norm.

Tested on a dataset of 1.8 million transactions from a large bank, the model reduced false positive predictions by 54 percent over traditional models, which the researchers estimate could have saved the bank 190,000 euros (around $220,000) in lost revenue.

“The big challenge in this industry is false positives,” says Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems (LIDS) and co-author of a paper describing the model, which was presented at the recent European Conference for Machine Learning. “We can say there’s a direct connection between feature engineering and [reducing] false positives. … That’s the most impactful thing to improve accuracy of these machine-learning models.”

Paper co-authors include: lead author Roy Wedge '15, a former researcher in the Data to AI Lab at LIDS; James Max Kanter ’15, SM ’15; and Sergio Iglesias Perez of Banco Bilbao Vizcaya Argentaria.

Extracting “deep” features

Three years ago, Veeramachaneni and Kanter developed Deep Feature Synthesis (DFS), an automated approach that extracts highly detailed features from any data, and decided to apply it to financial transactions.

Enterprises will sometimes host competitions where they provide a limited dataset along with a prediction problem such as fraud. Data scientists develop prediction models, and a cash prize goes to the most accurate model. The researchers entered one such competition and achieved top scores with DFS.

However, they realized the approach could reach its full potential if trained on several sources of raw data. “If you look at what data companies release, it’s a tiny sliver of what they actually have,” Veeramachaneni says. “Our question was, ‘How do we take this approach to actual businesses?’”

Backed by the Defense Advanced Research Projects Agency’s Data-Driven Discovery of Models program, Kanter and his team at Feature Labs — a spinout commercializing the technology — developed an open-source library for automated feature extraction, called Featuretools, which was used in this research.

The researchers obtained a three-year dataset provided by an international bank, which included granular information about transaction amount, times, locations, vendor types, and terminals used. It contained about 900 million transactions from around 7 million individual cards. Of those transactions, around 122,000 were confirmed as fraud. The researchers trained and tested their model on subsets of that data.

In training, the model looks for patterns across transactions and cards that match cases of fraud. It then automatically combines all the different variables it finds into “deep” features that provide a highly detailed look at each transaction. From the dataset, the DFS model extracted 237 features for each transaction. Those represent highly customized variables for card holders, Veeramachaneni says. “Say, on Friday, it’s usual for a customer to spend $5 or $15 at Starbucks,” he says. “That variable will look like, ‘How much money was spent in a coffee shop on a Friday morning?’”
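A feature like the Friday-morning coffee example can be sketched as an ordinary pandas aggregation. This illustrates the idea only — it is not the MIT team's DFS code, and the toy data and column names are made up:

```python
# Sketch of one "deep" feature: the average amount each card spends at
# coffee shops on Friday mornings. Toy data, invented column names.
import pandas as pd

tx = pd.DataFrame({
    "card": ["A", "A", "A", "B"],
    "vendor_type": ["coffee", "coffee", "grocery", "coffee"],
    "timestamp": pd.to_datetime([
        "2018-09-07 08:30",   # Friday morning, coffee  -> counts
        "2018-09-14 09:10",   # Friday morning, coffee  -> counts
        "2018-09-14 18:00",   # Friday evening, grocery -> excluded
        "2018-09-08 08:00",   # Saturday morning        -> excluded
    ]),
    "amount": [5.0, 15.0, 80.0, 4.0],
})

mask = (
    (tx["vendor_type"] == "coffee")
    & (tx["timestamp"].dt.dayofweek == 4)   # 4 = Friday
    & (tx["timestamp"].dt.hour < 12)        # morning
)
friday_coffee_avg = tx[mask].groupby("card")["amount"].mean()
print(friday_coffee_avg)   # card A averages $10 on Friday-morning coffee
```

DFS automates the generation of hundreds of such per-card aggregations instead of having a data scientist write each one by hand.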

It then creates an if/then decision tree for that account, built from features that do and don’t point to fraud. When a new transaction is run through the decision tree, the model decides in real time whether or not the transaction is fraudulent.

Pitted against a traditional model used by a bank, the DFS model generated around 133,000 false positives versus 289,000 false positives, about 54 percent fewer incidents. That, along with fewer false negatives — actual fraud that went undetected — could save the bank an estimated 190,000 euros, the researchers say.

Iglesias notes that he and his colleagues at BBVA have consistently been able to reproduce the MIT team’s results using the DFS model with additional card and business data, with a minimal increase in computational cost.

Stacking primitives

The backbone of the model consists of creatively stacked “primitives,” simple functions that take two inputs and give an output. For example, calculating an average of two numbers is one primitive. That can be combined with a primitive that looks at the time stamp of two transactions to get an average time between transactions. Stacking another primitive that calculates the distance between two addresses from those transactions gives an average time between two purchases at two specific locations. Another primitive could determine if the purchase was made on a weekday or weekend, and so on.
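The stacking idea can be sketched as ordinary function composition. All names and units below are invented for illustration; they are not the paper's actual primitives:

```python
# Illustrative "primitives": simple two-input functions that can be stacked
# into richer features, as described above.
import math

def average(a, b):
    """Primitive: average of two numbers."""
    return (a + b) / 2.0

def hours_between(t1, t2):
    """Primitive: time between two transaction timestamps, in hours."""
    return abs(t2 - t1)

def km_between(loc1, loc2):
    """Primitive: straight-line distance between two (x, y) locations in km."""
    return math.hypot(loc1[0] - loc2[0], loc1[1] - loc2[1])

def is_weekend(day_of_week):
    """Primitive: was the purchase on a weekend? (0 = Monday ... 6 = Sunday)"""
    return day_of_week >= 5

# Stacked feature: the average time between two round trips made between the
# same pair of shops -- built entirely from the primitives above.
def avg_gap_between_locations(visit_pair_1, visit_pair_2):
    gap1 = hours_between(*visit_pair_1)   # first trip between the two shops
    gap2 = hours_between(*visit_pair_2)   # second trip between the same shops
    return average(gap1, gap2)
```

Because each primitive has a fixed input/output shape, an automated system can enumerate valid stackings mechanically, producing variables a human might not think to write.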

“Once we have those primitives, there is no stopping us from stacking them … and you start to see these interesting variables you didn’t think of before. If you dig deep into the algorithm, primitives are the secret sauce,” Veeramachaneni says.

One important feature the model generates, Veeramachaneni notes, calculates the distance between two purchase locations and whether they happened in person or remotely. If someone buys something in person at, say, the Stata Center and, a half hour later, buys something in person 200 miles away, there’s a high probability of fraud. But if one purchase occurred via mobile phone, the fraud probability drops.

“There are so many features you can extract that characterize behaviors you see in past data that relate to fraud or nonfraud use cases,” Veeramachaneni says.

"In fact, this automated feature synthesis technique, and the overall knowledge provided by MIT in this project, has shown us a new way of refocusing research in other challenges in which we initially have a reduced set of features. For example, we are obtaining equally promising results in the detection of anomalous behavior in internal network traffic or in market operations, just to mention two [examples],” Iglesias adds.

Data Mining and Big Data / Why random forests outperform decision trees
« on: October 28, 2018, 07:41:22 PM »
Random forests consist of multiple decision trees, each built on a random sample of the training data. They are typically more accurate than single decision trees. The following figure shows that the decision boundary becomes more accurate as more trees are added.

Here we’ll provide two intuitive reasons why random forests outperform single decision trees.

Higher resolution in the feature space

Trees are unpruned. While a single decision tree like CART is often pruned, a random forest tree is fully grown and unpruned, and so, naturally, the feature space is split into more and smaller regions.

Trees are diverse. Each random forest tree is learned on a random sample, and at each node, a random set of features are considered for splitting. Both mechanisms create diversity among the trees.
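Assuming scikit-learn is available, the accuracy claim is easy to check on a synthetic two-class problem (the dataset and settings here are chosen purely for illustration):

```python
# Compare an unpruned single tree with a random forest on the same noisy
# two-class problem.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Default sklearn trees are grown fully (unpruned), as in a random forest.
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("single tree :", tree.score(X_te, y_te))
print("forest      :", forest.score(X_te, y_te))
# The forest typically scores several points higher on the held-out set.
```

The single unpruned tree memorizes the noisy training data, while averaging many diverse trees smooths out those memorized mistakes.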

Two random trees each with one split are illustrated below. For each tree, two regions can be assigned with different labels. By combining the two trees, there are four regions that can be labeled differently.

Unpruned and diverse trees lead to a high resolution in the feature space. For continuous features, it means a smoother decision boundary, as shown in the following.

Handling Overfitting

A single decision tree needs pruning to avoid overfitting. The following shows the decision boundary from an unpruned tree. The boundary closely traces the training data but makes obvious mistakes (overfitting).

So how can random forests build unpruned trees without overfitting? Let’s provide an explanation below.

For the two-class (blue and red) problem below, both splits x1=3 and x2=3 can fully separate the two classes.

The two splits, however, result in very different decision boundaries. In other words, these boundaries conflict with each other in some regions, and may not be reliable.

Now consider random forests. For each random sample of n points used for training a tree, the probability that the red point is missing from the sample is (1 − 1/n)^n ≈ 1/e ≈ 0.37, or roughly 1/3.

So roughly 1 out of 3 trees is built with all-blue data and always predicts class blue. The other 2/3 of the trees have the red point in their training data. Since a random subset of features is considered at each node, we expect roughly 1/3 of the trees to use x1 and the remaining 1/3 to use x2. The splits from the two types of trees are illustrated below.
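The "roughly 1 in 3" figure comes from bootstrap sampling: drawing n points with replacement misses any fixed point with probability (1 − 1/n)^n, which approaches 1/e as n grows. A quick simulation confirms it:

```python
# The chance that a given point is absent from a bootstrap sample of n points
# is (1 - 1/n)^n, which tends to 1/e ~ 0.37 for large n.
import random

random.seed(0)
n = 100
trials = 10_000

# Count bootstrap samples (n draws with replacement) that never draw point 0.
missing = sum(
    all(random.randrange(n) != 0 for _ in range(n))
    for _ in range(trials)
)

print("simulated :", missing / trials)     # close to 0.37
print("exact     :", (1 - 1 / n) ** n)     # 0.3660...
```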

Uber has been one of the most active companies trying to accelerate the implementation of real-world machine learning solutions. Just this year, Uber has introduced technologies like Michelangelo and Horovod that focus on key building blocks of machine learning solutions in the real world. This week, Uber introduced another piece of its machine learning stack, this time aiming to shorten the cycle from experimentation to production. PyML is a library that enables the rapid development of Python applications in a way that is compatible with their production runtime.

The problem PyML attempts to address is one of those omnipresent challenges in large scale machine learning applications. Typically, there is a tangible mismatch between the tools and frameworks used by data scientists to prototype models and the corresponding production runtimes. For instance, it’s very common for data scientists to use Python-based frameworks such as PyTorch or Keras for producing experimental models that then need to be adapted to a runtime such as Apache Spark ML Pipelines that brings very specific constraints. Machine learning technologists refer to this issue as a tradeoff between flexibility and resource-efficiency. In the case of Uber, data scientists were building models in Python machine learning frameworks which needed to be refactored by the Michelangelo team to match the constraints of Apache Spark pipelines.

Overcoming this limitation meant extending the capabilities of Michelangelo to support models authored in mainstream machine learning frameworks while keeping a consistent model for training and optimization.

Enter PyML
The goal of Uber’s PyML is to streamline the development of machine learning applications and bridge the gap between experimentation and production runtimes. To accomplish that, PyML focuses on three main aspects:

1) Provide a standard contract for machine learning prediction models.

2) Enable a consistent model for packaging and deploying machine learning models using Docker containers.

3) Enable Michelangelo-integrated runtimes for online and offline prediction models.

The following figure illustrates the basic architecture principles of PyML.

A Standard Machine Learning Contract
PyML models can be authored in different machine learning frameworks such as TensorFlow, PyTorch or Scikit-Learn. The models can use two main types of datasets: DataFrames, which store tabular structured data, and Tensors, which store named multidimensional arrays. After the models are created, they are adapted to a standard PyML contract definition which is essentially a class that inherits from the DataFrameModel or TensorModel abstract class, respectively. In both cases, users only need to implement two methods: a constructor to load their model parameters and a predict() method that accepts and returns either DataFrames or Tensors.
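Based on that description, the contract might look roughly like the following sketch. Everything here beyond the DataFrameModel name and the constructor/predict() pairing is an assumption for illustration, not Uber's actual API:

```python
# Hypothetical sketch of a DataFrame-based PyML model contract:
# a constructor that loads parameters and a predict() that maps
# DataFrame -> DataFrame. Class internals are invented.
from abc import ABC, abstractmethod
import pandas as pd

class DataFrameModel(ABC):
    """Standard contract for tabular models (sketch)."""
    @abstractmethod
    def predict(self, df: pd.DataFrame) -> pd.DataFrame:
        ...

class MyFraudModel(DataFrameModel):
    def __init__(self, threshold: float = 0.5):
        # A real model would load trained parameters from disk here.
        self.threshold = threshold

    def predict(self, df: pd.DataFrame) -> pd.DataFrame:
        # Toy scoring rule standing in for a trained model.
        score = (df["amount"] / df["amount"].max()).clip(0, 1)
        return pd.DataFrame({"fraud": score > self.threshold})
```

Keeping the contract this narrow is what lets the same class be packaged unchanged for both offline (Spark) and online (RPC) serving.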

Packaging and Deployment
After the PyML models are created, they can be packaged into Docker containers using a consistent structure. PyML introduces a standard deployment format based on four fundamental artifacts:

Using that structure, a developer can package and deploy a PyML model using the following code. The PyML Docker image will contain the model and all the corresponding dependencies. The models will be immediately available for execution in the Michelangelo console.

Offline and Online Predictions
PyML supports both batch(offline) and online execution models for predictions. Offline predictions are modeled as an abstraction over PySpark. In that context, PyML users simply provide a SQL query with column names and types matching the inputs expected by their model, and the name of a destination Hive table in which to store output predictions. Behind the scenes, PyML starts a containerized PySpark job using the same image and Python environment as for serving the model online, ensuring that there are no differences between the offline and online predictions. Executing offline predictions is relatively straightforward as illustrated in the following code:

The standard two-operation (init, predict) contract of PyML models simplifies the implementation of online predictions. PyML enables online predictions by enabling lightweight gRPC interfaces for the Docker containers which are used by a common online prediction Service as shown in the following figure. Upon request, the online prediction service will launch the corresponding PyML model-specific Docker image as a nested Docker container via Mesos’ API. When the container is launched, it starts the PyML RPC server and begins listening for prediction requests on a Unix domain socket from the online prediction service.

PyML addresses one of the most important challenges in large scale machine learning applications by bridging the gap between experimentation and runtime environments. Beyond its specific technological contributions, the architecture of PyML can be adapted to different technology stacks and should serve as an important reference for organizations starting their machine learning journey.

Software Engineering / Report: Can Kotlin compete with Java?
« on: July 03, 2018, 12:33:50 PM »
Java continues to dominate the programming language space for developers, but a new report reveals that Kotlin may soon knock it out of the top spot for mobile development. Packt released the results of its 2018 Skills Up report designed to look at the trends and tools software developers are using today.

The 2018 Skills Up report surveyed more than 8,000 developers and technology experts in four broad categories: app development, web development, security and systems admin, and data.

Kotlin is a statically typed programming language developed by JetBrains and supported by Google’s Android operating system. While Kotlin didn’t make it onto the list of top programming languages app developers are currently using overall, 71 percent of respondents stated that Kotlin is a serious contender for Java.

“Java beware: respondents say that Kotlin might just topple you from your throne. With adoption by Google for Android development, is this the beginning of the end of Java for mobile?” the report stated. “Kotlin has been around since 2011, but only recently has it started to really capture the imagination of engineers. Google has done a lot to reinforce its reputation — the fact that it was fully supported in Android Studio 3.0 in 2017 has ensured it is now one of the most popular Android development languages. We expect to see it competing closely with Java by the end of the year.”

Rounding out the application developer top five are JavaScript, Python, C# and SQL. Java is more popular when developing for mobile while Python was more favored by higher-earning app developers, and C# was found to be more popular among developers building enterprise and desktop applications.

“In 2018, we’ve seen C-based languages heavily lose out in favor of languages that can write more easily for the web. Only among desktop developers and game scripting does C# still hold the top spot: every other developer is looking to have the capacity to build for the browser, or for mobile.”

The top tools for mobile development, according to the report, included Android Studio, Xcode, macOs, Xamarin and iOS SDK. Android Studio has the most developers using it with 39 percent of the respondents, while Xcode only saw 17 percent of respondents using it. However, 50 percent of developers who make $70,000 or more cited using Xcode, iOS SDK and/or macOS.

Additionally, the report found the top tools for enterprise and desktop included .NET, Visual Studio and Java EE while MySQL, SQL Server and SQLite came out on top for the most commonly used databases.

App developers also found there is potential for using Swift outside of mobile development.
For web development, the report found the top languages included JavaScript, HTML/CSS, PHP, Python and Java. However, the report noted that app development and web development are beginning to no longer be considered as two separate entities, with web and app developers sharing a majority of the same toolchains.

“In 2018, working in tech almost always means working with the web. As more and more applications migrate to the browser and the cloud and as sites become ever more sophisticated, web development knowledge becomes a greater and greater priority,” the report stated.

The top front-end tools and frameworks for web development included JQuery, Bootstrap, npm, Angular and Webpack while the top back-end tools included Node.js, ASP.NET Core, Express.js and Laravel.

Sixty-five percent of web developers also found that conversational UI and chatbots have a strong future in the web UI space.

When looking at security and systems administration, the report found Python and Bash as the top used scripting languages followed by Shell, PowerShell and JavaScript. The top security tools include Wireshark, Nmap, Kali Linux and Metasploit. For system admin and virtualization tools, developers are using Linux, Windows OS, Docker, Ubuntu Server and Windows Server.

Other security and systems admin findings included that IoT is being held back by security issues, and a majority of organizations don’t treat cybersecurity with enough seriousness.

Python continued to top the list of languages when looking at data, followed by SQL, R and JavaScript. The top data libraries, tools and frameworks included Excel, NumPy, Anaconda and Pandas. According to the respondents, the next big areas for data include TensorFlow, deep learning, and machine learning.

Among data developers, 83 percent are excited about the potential of quantum computing, and more than half find AWS is the top cloud provider for Big Data.

Other findings of the report included:

Seventy-two percent of respondents feel like they are a part of a community with other developers
Sixty percent are satisfied with their jobs
Six percent are extremely dissatisfied
The top technical barrier across all industries is dealing with technical debt and legacy problems
Eighty-six percent of respondents agree it is important to develop soft skills such as communication and teamwork.
“Only one thing is certain in the world of tech: change. Working in development is about navigating a constantly evolving industry, keeping up to date with the skills you need to succeed,” the report stated.

Researchers have developed a new system designed to tackle complex objects and workflows on Big Data platforms. Computer science researchers from Rice University’s DARPA-funded Pliny project have announced PlinyCompute.

The project is funded through DARPA’s Mining and Understanding Software Enclaves (MUSE) initiative. The Pliny project aims to create systems that automatically detect and fix errors in programs. PlinyCompute is “a system purely for developing high-performance, Big Data codes.”

“With machine learning, and especially deep learning, people have seen what complex analytics algorithms can do when they’re applied to Big Data,” Chris Jermaine, a Rice computer science professor who is leading the platform’s development, said in the announcement. “Everyone, from Fortune 500 executives to neuroscience researchers are clamoring for more and more complex algorithms, but systems programmers have mostly bad options for providing that today. HPC can provide the performance, but it takes years to learn to write code for HPC, and perhaps worse, a tool or library that might take days to create with Spark can take months to program on HPC.”

According to Jermaine, while Spark was developed for Big Data and supports things such as load balancing, fault tolerance and resource allocation, it wasn’t designed for complex computation. “Spark is built on top of the Java Virtual Machine, or JVM, which manages runtimes and abstracts away most of the details regarding memory management,” said Jia Zou, a research scientist at Rice. “Spark’s performance suffers from its reliance on the JVM, especially as computational demands increase for tasks like training deep neural networks for deep learning.”

Zou continued that PlinyCompute was designed for high performance and has been found to be at least twice as fast as Spark, and 50 times faster at complex computation. However, PlinyCompute requires programmers to write libraries and models in C++, while Spark requires Java-based coding. Because of this, Jermaine says programmers might find it difficult to write code for PlinyCompute.

“There’s more flexibility with PlinyCompute,” Jermaine said. “That can be a challenge for people who are less experienced and knowledgeable about C++, but we also ran a side-by-side comparison of the number of lines of code that were needed to complete various implementations, and for the most part there was no significant difference between PlinyCompute and Spark.”

Computer scientists have a history of borrowing ideas from nature, such as evolution. When it comes to optimising computer programs, a very interesting evolutionary-based approach has emerged over the past five or six years that could bring incalculable benefits to industry and eventually consumers. We call it genetic improvement.

Genetic improvement involves writing an automated “programmer” who manipulates the source code of a piece of software through trial and error with a view to making it work more efficiently. This might include swapping lines of code around, deleting lines and inserting new ones – very much like a human programmer. Each manipulation is then tested against some quality measure to determine if the new version of the code is an improvement over the old version. It is about taking large software systems and altering them slightly to achieve better results.
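The trial-and-error loop can be sketched in miniature. Here the "program" is a toy list of Python statements and the quality measure is a single test; real genetic-improvement systems operate on full source code and test suites:

```python
# Toy genetic-improvement loop: randomly mutate a "program" (a list of
# statements), and keep a mutation only if the program still passes its
# quality measure and is no worse (shorter is better).
import random

random.seed(1)

# Toy program: computes x*4, with a redundant line left in.
program = [
    "y = x + x",
    "z = 0",          # dead code -- a candidate for deletion
    "y = y + y",
]

def fitness(prog):
    """Quality measure: must still compute x*4; shorter programs score higher.
    Returns None if the program is broken."""
    env = {"x": 7}
    try:
        exec("\n".join(prog), env)
    except Exception:
        return None
    return -len(prog) if env.get("y") == 28 else None

best = program
for _ in range(50):                       # trial-and-error mutations
    candidate = list(best)
    op = random.choice(["delete", "swap"])
    if op == "delete" and len(candidate) > 1:
        del candidate[random.randrange(len(candidate))]
    elif len(candidate) > 1:
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]
    if fitness(candidate) is not None and fitness(candidate) >= fitness(best):
        best = candidate

print(best)   # the dead "z = 0" line is typically evolved away
```

Mutations that break the program (say, deleting `y = x + x`) fail the quality measure and are discarded, which is how the search stays confined to working variants.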

These interventions can bring a variety of benefits in the realm of what programmers describe as the functional properties of a piece of software. They might improve how fast a program runs, for instance, or remove bugs. They can also be used to help transplant old software to new hardware. 

The potential doesn’t stop there. Because genetic improvement operates on source code, it can also improve the so-called non-functional properties. These include all the features that are not concerned purely with just the input-output behaviour of programs, such as the amount of bandwidth or energy that the software consumes. These are often particularly tricky for a human programmer to deal with, given the already challenging problem of building correctly functioning software in the first place.

We have seen a few examples of genetic improvement beginning to be recognised in recent years – albeit still within universities for the moment. A good early one dates from 2009, where such an automated “programmer” built by the University of New Mexico and University of Virginia fixed 55 out of 105 bugs in various different kinds of software, ranging from a media player to a Tetris game. For this it won $5,000 (£3,173) and a Gold Humie Award, which is awarded for achievements produced by genetic and evolutionary computation.

In the past year, UCL in London has overseen two research projects that have demonstrated the field’s potential (full disclosure: both have involved co-author William Langdon). The first involved a genetic-improvement program that could take a large complex piece of software with more than 50,000 lines of code and speed up its functionality by 70 times.

The second carried out the first automated wholesale transplant of one piece of software into a larger one by taking a linguistic translator called Babel and inserting it into an instant-messaging system called Pidgin.

Nature and computers
To understand the scale of the opportunity, you have to appreciate that software is a unique engineering material. In other areas of engineering, such as electrical and mechanical engineering, you might build a computational model before you build the final product, since it allows you to push your understanding and test a particular design. On the other hand, software is its own model. A computational model of software is still a computer program. It is a true representation of the final product, which maximises your ability to optimise it with an automated programmer.

As we mentioned at the beginning, there is a rich tradition of computer scientists borrowing ideas from nature. Nature inspired genetic algorithms, for example, which crunch through the millions of possible answers to a real-life problem with many variables to come up with the best one. Examples include anything from devising a wholesale road distribution network to fine-tuning the design of an engine.

Though the evolution metaphor has become something of a millstone in this context, genetic algorithms have had a number of successes, producing results that are either comparable with human-written programs or even better.

Evolution also inspired genetic programming, which attempts to build programs from scratch using small sets of instructions. It is limited, however. One of its many criticisms is that it cannot even evolve the sort of program that would typically be expected of a first-year undergraduate, and will not therefore scale up to the huge software systems that are the backbone of large multinationals.

This makes genetic improvement a particularly interesting deviation from this discipline. Instead of trying to rewrite the whole program from scratch, it succeeds by making small numbers of tiny changes. It doesn’t even have to confine itself to genetic improvement as such. The Babel/Pidgin example showed that it can extend to transplanting a piece of software into a program in a similar way to how surgeons transplant body organs from donors to recipients. This is a reminder that the overall goal is automated software engineering. Whatever nature can teach us when it comes to developing this fascinating new field, we should grab it with both hands.

By Chris Baraniuk

Candidates hoping to land their dream job are increasingly being asked to play video games, with companies like Siemens, E.ON and Walmart filtering out hundreds of applicants before the interview stage based partly on how they perform. Played on either smartphones or computers, the games’ designers say they can help improve workplace diversity, but there are questions over how informative the results really are.

To the casual observer, many of the games might seem almost nonsensical. One series of tests by UK-based software house Arctic Shores includes a trial where the player must tap a button frantically to inflate balloons for a party without bursting them. In another, the candidate taps a logo matching the one displayed on screen, at an ever more blistering pace.

Afterwards, a personality profile is built using data on how someone performed, says Robert Newry at Arctic Shores. They claim the traits that can be measured include a person’s willingness to deliberate, seek novel approaches to tasks and even their tendency for social dominance. “What we are measuring is not your reaction skills, your dexterity,” says Newry. “It’s the way you go about approaching and solving the challenge that is put in front of you.”

Using the games, Siemens UK doubled the proportion of female candidates that made it past the initial stages of graduate recruitment compared with the previous year, according to data recently released by Arctic Shores. Another company that makes such tests, Pymetrics, says its assessments have boosted recruitment of under-represented groups, with one financial services firm increasing the number of minority candidates offered technical roles by 20 per cent. However, it’s not clear if the boost could simply be down to an increased focus on or awareness of diversity in the workplace.

The games are meant to offer a form of psychometric testing and are based on techniques developed for measuring personality traits. But whereas in academic research the tests are generally calibrated the same way for all participants, when used for recruitment they are often tweaked depending on how existing employees at a company play them.

“We go into these companies and say, ‘Your individuals may be different. Let’s use your high performers to put together a data set,’” says Frida Polli, co-founder of Pymetrics. In other words, if your gameplay matches that of someone already at the firm, you’re more likely to advance to the next stage of recruitment.

“You can develop a game-based assessment as rigorous as any traditional psychometric assessment,” says Richard Landers at Old Dominion University in Virginia. “But I don’t know how many companies actually succeed at that.” This is because it takes time and money to show that any assessment’s measurement of a given trait is statistically reliable.

Landers performed an independent review of game-like intelligence tests by Australia-based firm Revelian and says the results were reliable. Arctic Shores has also run a study with around 300 participants to validate its games.

Caryn Lerman at the University of Pennsylvania has studied brain-training apps and says that although people’s improved performance at these can be tracked over time, they generally have no observable impact on cognitive ability in the real world. She is sceptical that playing the games well corresponds to ability to do a good day’s work in the office.

Although the game-based tests are mandatory, a company’s decision to interview someone may be based on other factors as well, such as their academic record. But in trying to find new ways of shortlisting the best of the bunch, companies risk alienating unsuccessful candidates, says Margaret Beier at Rice University in Texas. They might even expose themselves to lawsuits. “If I apply for a job, play games that seem totally unrelated to it and then don’t get that job, I might have a lot of questions about that assessment,” she says.

Java Forum / Java will no longer have ‘major’ releases
« on: May 13, 2018, 06:39:28 PM »
Remember when a new number meant a software release was a significant, or major, one? For Java, that pattern is over. Java 9 was the last “major” release, Oracle says.

All versions after that—including the recently released Java 10 and the forthcoming Java 11—are what the industry typically calls “point releases,” because they were usually numbered x.1, x.2, and so on to indicate an intermediate, more “minor” release. (Oracle has called those point releases “feature releases.”)

As of Java 10, Oracle has put Java on a twice-annual release schedule, and although those releases get whole numbers in their versions, they’re more akin to point releases. Oracle recently declared, “There are no ‘major releases’ per se any more; that is now a legacy term. Instead, there is a steady stream of ‘feature releases.’”

Under the plan, moving from Java 9 to versions 10 and 11 is similar to the previous moves from Java 8 to 8u20 and 8u40.

Previously, it took about three years to move from Java 7 to 8 and then 9.

Oracle says that Java’s twice-yearly release schedule makes it easier for tools vendors to keep up with changes, because they can work with a steady stream of smaller updates. Upgrading tools from Java 9 to 10 happened “almost overnight,” Oracle says, compared with the difficulties it said some tools vendors had making the move from Java 8 to 9.

For users who just want stability and can pass on new features of each “feature release,” Oracle is still holding onto that old three-year schedule for its long-term support (LTS) releases. Java 11 will be an LTS release, with commercial support available from Oracle for at least eight additional years.
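Under the new scheme, the feature-release counter is the leading element of the version number, and the JDK exposes it directly. A minimal sketch of reading it at runtime, assuming a Java 10 or later JDK (the `Runtime.Version.feature()` accessor was added in Java 10; `Runtime.version()` itself dates to Java 9):

```java
// Reports which feature release the current JVM belongs to.
// Requires Java 10+ for Version.feature().
public class FeatureRelease {
    public static void main(String[] args) {
        Runtime.Version v = Runtime.version();
        // feature() is the release counter Oracle now increments twice a year,
        // e.g. 10 for Java 10, 11 for the Java 11 LTS release.
        System.out.println("Feature release: " + v.feature());
        System.out.println("Full version:    " + v);
    }
}
```

Tools that need to branch on the release in use can compare `v.feature()` rather than parsing version strings by hand, which is one reason the vendor transition from 9 to 10 was comparatively painless.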

Driving down energy use and costs should be a number one priority for British companies, according to SSE Enterprise Energy Solutions, the UK’s leading provider of energy management services.

Information and Communications Technology (ICT) typically accounts for 12% of business energy use, yet ICT energy management is often overlooked as a way to realise cost savings.

SSE Enterprise Energy Solutions recently delivered a 9% cost saving for Glasgow City Council across its school estate. The pilot project ran across 29 high schools with a total of 9,000 devices and delivered savings of £4,500 per week. Its success has led to the council putting in place a schools ICT policy with energy efficiency at its core.

Kevin Greenhorn, Managing Director of SSE Enterprise Energy Solutions, said: “Technology can revolutionise how organisations manage their energy consumption and ICT is one of the first places to start. We can give visibility across all ICT assets and a central solution which reduces energy costs and carbon emissions – without the need to rely on people to change their behaviour.”

He added: “Not only does it reduce costs but it provides analytical data which informs future planning and sustainable procurement decisions. It’s the right solution for organisations looking to take steps to reduce energy consumption beyond traditional building systems, such as heating, ventilation, air conditioning and lighting.”

SSE Enterprise Energy Solutions offers an Energy ICT solution which allows asset identification and management of every single machine across multiple sites. Software is installed centrally on a standard server and the solution allows for remote control of desktop computers, monitors, laptops, routers, wireless access points, printers, copiers, telephones and other devices.

These devices can be powered down when not in use, for example evenings and weekends, and their energy consumption monitored when in use. In Glasgow City Council’s case this has enabled them to produce a weekly summary which details the savings delivered in carbon, energy and pounds which is then reported at executive level.

Andrew McKenzie, Energy ICT Director for SSE Enterprise Energy Solutions, said: “This software is a genuinely exciting solution that allows organisations to access the last untapped area of energy efficiency, the ICT network.”

He added: “It ticks all the right boxes in terms of saving money, energy and carbon, and enables truly innovative and bespoke energy optimisation across ICT networks that align with building control system strategies. Organisations are therefore able to take significant steps towards total energy management.”

Andrew Mouat, Glasgow City Council’s Principal Officer for Carbon Management, said working with SSE Enterprise Energy Solutions has helped give them the ICT security and control they need, while also delivering efficiencies for the finance and facilities teams. He said: “This technology is delivering real value for Glasgow City Council and taxpayers. The cost savings and associated CO2 reduction speak for themselves. Prudent management of our infrastructure creates efficiencies but importantly also gives us the detailed analytics to inform our planning and be more focused in our ICT and building management. 

“It is contributing to Glasgow becoming a smart, sustainable Future City.”

Ten data centres in Bangladesh are set to receive international-standard certification, possibly within the next year. John Duffin, Managing Director for South Asia at the international body Uptime Institute, announced this on the closing day of the two-day Data Centre Technology Conference, held at the Bangabandhu International Conference Centre in Dhaka.

He said that, alongside the national data centre, several private-sector organisations are also in the process of obtaining global certification.

He added that two organisations will complete the process this year: “We hope to award the certification to ten organisations by next year.”

Experts and technologists in information management from nine countries joined the conference, organised for the second time by the Bangladeshi technology firm DC Icon jointly with the Data Center Professionals Society of Bangladesh.

Friday was the final day of the conference. Alongside an exhibition of innovative technology, 50 seminars were held on related topics.

IT Forum / Best Linux server distro of 2018
« on: April 26, 2018, 11:46:04 AM »
1. Debian
Debian is over 20 years old and in part owes that longevity to the emphasis placed on producing a stable operating system. This is crucial if you want to set up a server, as updates can sometimes clash badly with existing software.

There are three branches of Debian, named 'Unstable', 'Testing' and 'Stable'. To become part of the Stable current release, packages must have been reviewed for several months as part of the Testing release. This results in a much more reliable system – but don't expect Debian to incorporate much 'bleeding edge' software as a result.

You can get started with Debian using a minimal Network Boot image which is less than 30MB in size. For a faster setup, download the larger network installer which at just under 300MB contains more packages.

2. Ubuntu Server
While Ubuntu is best known for bringing desktop Linux to the masses, its Server variant is also extremely competitive. Canonical, the company behind Ubuntu, has developed LTS (Long Term Support) versions of Ubuntu Server, which like the desktop flavour can be updated up to five years after the date of release, saving you the trouble of upgrading your server repeatedly. Canonical also periodically releases versions of Ubuntu Server at the same time as the latest desktop distro (e.g. 18.04).

If you're intent on building your own cloud platform, you can also download Ubuntu Cloud Server. Canonical claims that over 55% of OpenStack clouds already run on Ubuntu. For a fee, Canonical will even set up a managed cloud for you using BootStack.

3. OpenSUSE
OpenSUSE (formerly SUSE Linux) is a Linux distro specifically designed for developers and system admins wishing to run their own server. The easy-to-use installer can be configured to use 'Text Mode' rather than install a desktop environment to get your server up and running.

OpenSUSE will automatically download the minimum required packages for you, meaning only essential software is installed. The YaST Control Center allows you to configure network settings, such as setting up a static IP for your server. You can also use the built-in Zypper package manager to download and install essential server software such as Postfix.

4. Fedora Server

Fedora is a community-developed operating system sponsored by Red Hat, the company behind the commercial distro Red Hat Enterprise Linux. Fedora Server is a special implementation of the OS, allowing you to deploy and manage your server using the Rolekit tool. The operating system also includes a powerful PostgreSQL database server.

Fedora Server also includes FreeIPA, enabling you to manage authentication credentials, access control information and perform auditing from one central location.

You can download the full 2.3GB ISO image of Fedora Server using the link below. The same page contains a link to a minimal 511MB NetInstall Image from Fedora's Other Downloads section for a faster barebones setup.

5. CentOS
Like Fedora, CentOS is a community-developed distribution of Linux, originally based on the commercial OS Red Hat Enterprise Linux. The developers behind CentOS 7 have promised to provide full updates for the OS until the end of 2020, with maintenance updates until the end of June 2024 – which should save you the trouble of performing a full upgrade on your server in the near future.

You can avoid unnecessary packages by installing the 'minimal' ISO from the CentOS website, which at 792MB can fit onto a 90 minute CD-R. If you're eager to get started, the site also offers preconfigured AWS instances and Docker images.

It might look like witchcraft, but researchers at Nvidia have developed an advanced deep learning image-retouching tool that can intelligently reconstruct incomplete photos.

While removing unwanted artefacts in image editing is nothing new – Adobe Photoshop's Content-Aware tools are pretty much the industry standard – the prototype tool that Nvidia is showcasing looks incredibly impressive.

Don't take our word for it – check out the two-minute video below to get a taste of what this new technology is capable of.

What differentiates Nvidia's new tool from something like Content-Aware Fill in Photoshop is that it analyzes the image and understands what the subject should actually look like; Content-Aware Fill relies on surrounding parts of the image to fill in what it thinks should be there.

Nvidia's tool is a much more sophisticated solution. For instance, when trying to fill a hole where an eye would be in a portrait, as well as using information from the surrounding area Nvidia's deep learning tool knows an eye should be there, and can fill the hole with a realistic computer-generated alternative.   

"Our model can robustly handle holes of any shape, size, location, or distance from the image borders," the researchers write. "Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing. Further, our model gracefully handles holes of increasing size."

For the moment at least, there’s no word on when we’re likely to see this tool become more widely available. Even so, it gives us a glimpse into the near future of image editing.

For years, Swami Sivasubramanian’s wife has wanted to get a look at the bears that come out of the woods on summer nights to plunder the trash cans at their suburban Seattle home. So over the Christmas break, Sivasubramanian, the head of Amazon’s AI division, began rigging up a system to let her do just that.

So far he has designed a computer model that can train itself to identify bears—and ignore raccoons, dogs, and late-night joggers. He did it using an Amazon cloud service called SageMaker, a machine-learning product designed for app developers who know nothing about machine learning. Next, he’ll install Amazon’s new DeepLens wireless video camera on his garage. The $250 device, which will go on sale to the public in June, contains deep-learning software to put the model’s intelligence into action and send an alert to his wife’s cell phone whenever it thinks it sees an ursine visitor.

Sivasubramanian’s bear detector is not exactly a killer app for artificial intelligence, but its existence is a sign that the capabilities of machine learning are becoming far more accessible. For the past three years, Amazon, Google, and Microsoft have been folding features such as face recognition in online photos and language translation for speech into their respective cloud services—AWS, Google Cloud, and Azure. Now they are in a headlong rush to build on these basic capabilities to create AI-based platforms that can be used by almost any type of company, regardless of its size and technical sophistication.

“Machine learning is where the relational database was in the early 1990s: everyone knew it would be useful for essentially every company, but very few companies had the ability to take advantage of it,” says Sivasubramanian.

Amazon, Google, and Microsoft—and to a lesser extent companies like Apple, IBM, Oracle, Salesforce, and SAP—have the massive computing resources and armies of talent required to build this AI utility. And they also have the business imperative to get in on what may be the most lucrative technology mega-trend yet.

“Ultimately, the cloud is how most companies are going to make use of AI—and how technology suppliers are going to make money off of it,” says Nick McQuire, an analyst with CCS Insight.

Quantifying the potential financial rewards is difficult, but for the leading AI cloud providers they could be unprecedented. AI could double the size of the $260 billion cloud market in coming years, says Rajen Sheth, senior director of product management in Google’s Cloud AI unit. And because of the nature of machine learning—the more data the system gets, the better the decisions it will make—customers are more likely to get locked in to an initial vendor.

In other words, whoever gets out to the early lead will be very difficult to unseat. “The prize will be to become the operating system of the next era of tech,” says Arun Sundararajan, who studies how digital technologies affect the economy at NYU’s Stern School of Business. And Puneet Shivam, president of Avendus Capital US, an investment bank, says: “The leaders in the AI cloud will become the most powerful companies in history.”

It’s not just Amazon, Google, and Microsoft that are pursuing dominance. Chinese giants such as Alibaba and Baidu are becoming major forces, particularly in Asian markets. Leading enterprise software companies including Oracle, Salesforce, and SAP are embedding machine learning into their apps. And thousands of AI-related startups have ambitions to become tomorrow’s AI leaders.

Amazon, Google, and Microsoft all offer services for recognizing faces and other objects in photos and videos, for turning speech into text and vice versa, and for doing the natural-language processing that allows Alexa, Siri, and other digital assistants to understand your queries (or some of them, anyway).

So far, none of this activity has resulted in much in the way of revenue; none of AI’s biggest players bother to break out sales of their commercial AI services in their earnings calls. But that would quickly change for the company that creates the underlying technologies and developer tools to support the widespread commercialization of machine learning. That’s what Microsoft did for the PC, by creating a Windows platform that millions of developers used to build PC programs. Apple did the same with iOS, which spawned the mobile-app era.

Google jumped out to the early lead in 2015, wooing developers when it open-sourced TensorFlow, the software framework its own AI experts used to create machine-learning tools. But Amazon and Microsoft have created similar technologies since then; they even joined forces in 2017 to create Gluon, an open-source interface designed to make machine learning easier to use with or without TensorFlow.

All three continue to work on ways to make machine learning accessible even to total AI novices. That was the idea behind Amazon’s SageMaker, which is designed to make building machine-learning apps not much more complicated than creating a website. A few weeks after SageMaker was announced last November, Google introduced Cloud AutoML. A company can feed its own unique collection of data into this technology, and it will automatically generate a machine-learning model capable of improving the business. Google says that more than 13,000 companies have asked to try Cloud AutoML.

“There are 20 million organizations in the world that could benefit from machine learning, but they can’t hire people with the necessary background,” says Jeff Dean, head of Google Brain. “To get even 10 million of them using machine learning, we have to make this stuff much easier to use.”

So which of the Big Three is best positioned to win that all-important first-mover advantage? All have immense strengths and few obvious weaknesses.

Take Microsoft. It’s been doing breakthrough work on AI problems such as computer vision and natural-language processing for two decades. It’s got access to massive amounts of valuable data to inform its Azure cloud, including content from Bing, LinkedIn, Skype, and the more than a billion people who use Microsoft Office. Simply put, no other company knows more about what it takes to sell, or help other developers sell, software to businesses and other organizations.
Then there’s Amazon. With its Apple-esque secrecy, it was considered an also-ran in AI until about a year ago. But that secrecy appears to have masked sweeping corporate ambitions. For the past seven years, every business planning document at Amazon has had to include an explanation of how the unit would make use of machine learning, says Sivasubramanian. (This requirement appeared on the boilerplate forms managers used for such documents, including a parenthetical clause that read “None is not a good answer,” he says.)

While it still doesn’t publish many papers, Amazon has a 40 percent share of the cloud market and is moving ferociously to use that position to dominate the AI cloud as well. It’s introduced a slew of new services that were once used only internally. It’s been the most aggressive acquirer of AI startups, spending more than twice as much as Google and four times as much as Microsoft in the past few years, says Jon Nordmark, CEO of an AI services provider.

It’s well on its way to making Alexa dominate the next great consumer interface, voice. And while Google has made headlines using AI to defeat Go champions, Amazon is using its expertise in factory robotics and the logistics of delivering millions of packages a day, positioning itself for AI projects that meld digital information with data collected from real-world sensors. “Other companies publish more papers, but Amazon is the one putting boots on the ground and moving ahead,” says Nordmark.

Maybe so. But while Amazon was years ahead of the competition in creating AWS, this time nobody is sitting idly by. The prize is too big, and the opportunities for AI dominance too lucrative.

Travel / Visit / Tour / Budget trips: 20 of the cheapest places to travel
« on: November 16, 2017, 07:41:36 PM »

That ever-growing travel wish list might be putting some pressure on your pocket – but there are plenty of destinations where you’ll get more bang for your buck. From Greece to Guatemala, here are 20 places you can visit without breaking the bank.
1. Thailand

There’s a reason why Thailand remains so popular with backpackers – it’s got idyllic islands, a rich culture, beach-huts aplenty, tantalising cuisine and adventures galore, and all available at often staggeringly low prices. Despite the well-trodden routes through the country, it’s not hard to get away from the crowds – check out Nakhon Si Thammarat for some of the very best food the country has to offer, or hire a motorbike to make the 600km trip along the Mae Hong Son Loop through the forested northern mountains.

Read our tips for backpacking Thailand and travelling solo in Thailand before you go.
2. South Africa

One of the great things about travelling in South Africa is that it’s possible to have a safari experience here – complete with the Big Five – without encountering a budget-breaking bill. Head to Hluhluwe-Imfolozi to see white rhino and to avoid the crowds of Kruger, to the Drakensberg for superlative hiking, and don’t forget to factor in at least a few days in amazing Cape Town.

Start planning your trip with our list of the best road trips across the country.
3. Vietnam

Despite a remarkable rate of change over the decades since the end of the American War, Vietnam remains amazing value for Western visitors. The country’s greatest attraction is its sublime countryside, from the limestone karsts of the north to the waterways and paddy fields of the Mekong Delta, with blissful beaches and frenetic cities crammed in between.

Then there’s the cuisine – pull up a stool at a pho stall and for only a couple of dollars you’ll be eating some of the best food on offer, shoulder to shoulder with the locals.

Check out our 9 tips for backpacking Vietnam and discover how to get off the tourist trail before you go.
4. Uruguay

If you’ve already visited Brazil and Argentina, or are just looking for a better value destination, head instead to neighbouring Uruguay. You’ll be relieved to hear you can still find excellent steak here; plus, there are plenty of lovely beaches to choose from – head to Cabo Polonio for quieter sands and abundant wildlife – and the gorgeous old capital of Montevideo.

Want to learn more? You’ll find all the information you need to plan a budget trip in our Snapshot Guide to Uruguay.
5. Cuba

Since relations between Cuba and the US started rapidly warming up, there’s never been a better time to visit this Caribbean island. Go now before it changes beyond recognition – and before the prices start to go up and up even more. Hit the salsa clubs of Havana, get caught up in the heady July carnival of Santiago, or dip your toes in the warm Caribbean at Varadero Beach – whatever you do, you’ll find it hard not to leave utterly intoxicated.

Get started with these 12 tips for backpacking Cuba.
6. Prague, Czech Republic

Despite being firmly on the tourist – not to mention bachelor party – trail these days, Prague remains one of Europe’s cheapest capital cities to visit. For just a few Czech Crowns you can enjoy a hearty meal, washed down with decent local beer, of course. The city itself is a beauty, crammed full of history and perfect for leisurely explorations by foot.

Want to explore more of Europe on the cheap? Check out The Rough Guide to Europe on a Budget.
7. Greece

Don’t be put off Greece by the country’s ongoing economic crisis – if anything, the financial situation is all the more reason to travel here and to support the local people. The situation does mean that prices are still cheaper than they once were, and that means that you might be able to squeeze an extra island or two into your itinerary. Pay by credit card in advance, but take enough cash with you for your travels, and you’re pretty much guaranteed an amazing trip.

Read these 11 tips by Nick Edwards, co-author of The Rough Guide to Greece, before you go.
8. Guatemala

It’s hard not to fall under the spell of Guatemala and its compelling mix of natural beauty, Maya traditions and colonial legacies. Rock-bottom prices make this one of the best places to study Spanish; once your linguistic skills are up to scratch, jump onto one of the country’s famous camionetas or “chicken buses” to explore, soak up the sights of graceful Antigua, or be wowed by the monumental Maya temples of Tikal.

It’s easy to extend your trip to see more of Central America, too. Check out The Rough Guide to Central America on a Budget for advice, and also discover why you shouldn’t rush through Guatemala City.
9. Bulgaria

Often unfairly overlooked, Bulgaria has a lot to offer budget travellers – not least some of the most deserted beaches in Europe, at bargain prices. In addition to its appealing coastline, there’s also lots of lovely old towns, including Varna on the coast and ancient Plovdiv, and a number of dramatic mountain ranges that are perfect for exploration on foot or by bike.
10. India

India remains one of the ultimate destinations for budget travellers – there are few countries where you can still travel so extensively and eat so well for so little. If you’re after a beach break, eschew Goa for the gorgeous beaches of the temple town of Gokarna; for amazing food, it’s hard to beat the puris and kebabs of Mumbai’s street stalls; or head to the Golden City of Jaisalmer from where you can explore the seemingly endless sands of the Thar Desert.

Need more inspiration? Discover the most romantic places in India, check out our favourite places off the tourist trail and find out what it was like to write the first ever Rough Guide to the country.
11. Portugal

Portugal remains one of the best bargains in Western Europe, and is especially worth considering if you want to avoid the more crowded resorts and cities of Spain. Skip the Algarve for the ruggedly beautiful Alentejo (with its cheap, fresh seafood) and vibrant, uber-cool Lisbon; and don’t forget to put enough euros aside for a pastel de Belém (custard tart) or two.

If you’re not sure where to start, read our top tips for travelling in Portugal and discover the best of Lisbon’s food scene.
12. Bolivia

One of the cheapest countries in South America, Bolivia is also one of its most misunderstood. Travelling here may be a little uncomfortable at times, but it’s more than worth it for the wealth of amazing sights on offer. Top of the list is undoubtedly the astounding Salar de Uyuni salt flats, a two- or three-day tour of which will usually set you back less than £100/$150.

Get The Rough Guide to South America on a Budget to start planning your trip, and be sure to include at least one of these beautiful journeys across the country.
13. Mexico

Your budget will definitely stretch to tacos and tequila aplenty in Mexico – which is great news as there’s a lot of ground to cover in this vibrant country. Whether you want to string your hammock up along dazzling white sands, sample some of the country’s best street food in Oaxaca or cool off in a crystal-clear cenote (sinkhole), the country will leave you eager to come back for more.

To kick-start your wanderlust, these are 12 of our favourite places to visit – and here’s why Tijuana should be on your radar.
14. New Orleans, USA

You can’t escape from music in New Orleans – and with buskers on what often seems like every corner, and music in every courtyard and bar, it’s not hard to experience the city’s musical heritage without spending much more than the price of a beer. The city is best experienced slowly, and on foot, and it’s hard to beat people-watching over a cup of coffee and a plate of sugar-dusted beignets at the Café du Monde.

Find out where to sample the city’s best cocktails with our guide.
15. Laos

Even in a region of budget-friendly destinations, Laos stands out. It’s hard not to be captivated by the slow pace of the country; head just north of elegant Luang Prabang to riverside Nong Khiaw, where for small change you can bag a waterside bungalow and watch the boats travel up and down the karst-surrounded river over a cold bottle of Beer Lao.

Get the full lowdown on this enchanting and unspoiled corner of Southeast Asia with The Rough Guide to Laos.
16. The Gambia

Africa’s smallest country is already known for its beautiful beaches, but it’s well worth venturing beyond them to experience its other delights. Top of the list has to be the Chimp Rehabilitation Centre in the River Gambia National Park, where you can watch the primates in their natural habitat, while Baobolong Wetland Reserve is arguably the best spot for birdwatching on the continent and is at its most atmospheric at sunset.
17. Shanghai, China

The biggest appeal for budget – if not all travellers – to Shanghai is undoubtedly the abundance of amazing street food on offer, from xiao long bao soup dumplings to scallion pancakes and sticky rice parcels (zongzi). It’s still possible to find an accommodation bargain at the lower end of the scale, and much of the city’s appeal lies in exploring its busy streets on foot and experiencing for yourself the juxtaposition between old and new China.

You’ll find recommendations for where to find the city’s best street eats and budget sleeps in The Rough Guide to Shanghai.
18. Istanbul, Turkey

With one foot in Europe and the other in Asia, Istanbul is undeniably alluring. Though seeing all the major sights – the Aya Sofya, Blue Mosque and Topkapi Palace to name but a few – can quickly eat into your lira, the city can still be great for tighter budgets. Arguably the best ways to really soak up the city are from a Bosphorus ferry, wandering the streets of the Grand Bazaar, or on a streetside terrace with a freshly-cooked kebab.
19. London, England

First things first – London is not cheap. There’s no denying that even staying in hostels, using public transport and eating in cafés is going to massively eat into your budget. But – and it’s a big but – there are few places in the world that can rival the capital city for its plethora of free sights, where you can see the Rosetta Stone and the Lindow Man, works by Monet and Dalí, not to mention dinosaur and blue whale skeletons, for absolutely nothing.

Get off on the right foot by choosing a great area to stay and discover eight things you didn’t know you could do in the Big Smoke.
20. Egypt

Considering the abundance of mind-blowing ancient sights, you’d expect travel in Egypt to cost a lot more than it does. Sure, if you tick off all the major attractions – including the Pyramids, the Valley of the Kings and Abu Simbel – then costs are going to creep up, but tempered with cheap (and excellent) food and decent budget accommodation, it’s not hard to feel like you’re almost able to live like a Pharaoh.

Note that, due to safety concerns, some governments currently advise against travel to certain parts of the country; check the latest advice before you go.

A startup is, at heart, an entrepreneurial initiative that takes technology-centric ideas to market. These are ideas for products, whether goods or services, that aim to be better alternatives to existing ones – better substitutes that disrupt the incumbent industry. Despite the underlying strength of the ideas and the technology base, it’s a tough journey, and the reality is a high mortality rate across the startup landscape. As many as 90 percent of startups die within their first three years, or become “zombies” – staying afloat with seeming lifelessness. In India, 1,000 startups died in 2016 alone, half of which had been incubated during 2013 and 2014. Yet an adequate epitaph has yet to be written for failed startups, one that would help us understand the causes and find remedies. We arrange colorful events and promote competition among creative minds to undertake startup initiatives, but why don’t we also focus on doing a postmortem on the high mortality? Given how common failure is, it’s important to dissect the journeys of failed startups to detect and share patterns that could reduce the mortality rate.

After originating in the research-based university ecosystems of the USA, the startup craze has diffused even into developing countries like India and Bangladesh. From political leaders to academics, startups are being projected as the new vehicle of wealth creation through disruptive innovation. Globally, over US$125 billion of private equity was invested in the startup world in 2015. This figure does not take into account (i) all the money that employees have "invested" through salaries and wages that were never paid to them, (ii) the billions of dollars invested by founders, their friends and family, and their angel investors, and (iii) the billions of dollars "invested" by suppliers who never recovered their money from companies that went belly up. If statistics were available, the total would be very large indeed, much of it effectively wasted in pursuit of ideas. But the success of startups is the key to bringing better alternatives to existing products: higher-quality products at lower cost that serve our purposes better.

It is quite ironic that, despite such a high mortality rate and the loss of so much capital, people often prefer a quiet burial. The startup journey can be considered the process of succeeding at disruptive innovation: bringing substitute products built around a new technology core to disrupt an existing industry. For example, the idea could be a smartphone-based handheld ultrasound machine meant to disrupt its desktop counterparts. Irrespective of the strength of the idea and the underlying new technology, the initial product typically shows up as a primitive alternative to the targeted incumbent products, and such primitive products create very little willingness to pay. Suitable early customers should therefore be targeted for the primitive product, and additional ideas should be added to complement the first great idea, rapidly improving the quality and reducing the cost of the early offering. Such rapid progress is essential both to create a new market and to disrupt the incumbent product's industry. Both scale and scope advantages (preferably around software) and the benefit of network externalities (leveraging ubiquitous connectivity) should be exploited to help the great idea succeed. This disruptive-innovation journey is a long one; the initial great idea must be complemented by thousands of additional ideas along the way.
