Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - farzanaSadia

Pages: [1] 2 3 ... 8
1
Computer scientists have a history of borrowing ideas from nature, such as evolution. When it comes to optimising computer programs, a very interesting evolutionary-based approach has emerged over the past five or six years that could bring incalculable benefits to industry and eventually consumers. We call it genetic improvement.

Genetic improvement involves writing an automated “programmer” who manipulates the source code of a piece of software through trial and error with a view to making it work more efficiently. This might include swapping lines of code around, deleting lines and inserting new ones – very much like a human programmer. Each manipulation is then tested against some quality measure to determine if the new version of the code is an improvement over the old version. It is about taking large software systems and altering them slightly to achieve better results.
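The trial-and-error loop described above can be sketched in a few lines of Python. This is a toy illustration, not any published genetic-improvement tool: the "program" is a list of source lines, the mutations are random deletions, swaps and duplications, and the quality measure is a pair of unit tests plus program length.

```python
import random

# Toy genetic-improvement loop (illustrative only). A variant is kept
# only if it still passes the tests and is no longer than the incumbent.

SOURCE = [
    "def total(xs):",
    "    s = 0",
    "    s = s + 0",        # a redundant line the search can remove
    "    for x in xs:",
    "        s += x",
    "    return s",
]

def passes_tests(lines):
    """Quality gate: compile the variant and run its unit tests."""
    env = {}
    try:
        exec("\n".join(lines), env)
        f = env["total"]
        return f([1, 2, 3]) == 6 and f([]) == 0
    except Exception:
        return False

def mutate(lines):
    """Apply one random edit: delete, swap, or duplicate a body line."""
    new = list(lines)
    i = random.randrange(1, len(new))       # never touch the def line
    op = random.choice(["delete", "swap", "dup"])
    if op == "delete" and len(new) > 2:
        del new[i]
    elif op == "swap":
        j = random.randrange(1, len(new))
        new[i], new[j] = new[j], new[i]
    else:
        new.insert(i, new[i])
    return new

def improve(lines, steps=500):
    best = lines
    for _ in range(steps):
        cand = mutate(best)
        if passes_tests(cand) and len(cand) <= len(best):
            best = cand
    return best

random.seed(0)
improved = improve(SOURCE)
```

Real genetic-improvement systems work on programs of tens of thousands of lines and measure running time or energy rather than line count, but the accept-if-no-worse loop is the same shape.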

These interventions can bring a variety of benefits in the realm of what programmers describe as the functional properties of a piece of software. They might improve how fast a program runs, for instance, or remove bugs. They can also be used to help transplant old software to new hardware. 

The potential doesn’t stop there. Because genetic improvement operates on source code, it can also improve the so-called non-functional properties. These include all the features that are not concerned purely with just the input-output behaviour of programs, such as the amount of bandwidth or energy that the software consumes. These are often particularly tricky for a human programmer to deal with, given the already challenging problem of building correctly functioning software in the first place.

We have seen a few examples of genetic improvement beginning to be recognised in recent years – albeit still within universities for the moment. A good early example dates from 2009, when an automated “programmer” built by the University of New Mexico and the University of Virginia fixed 55 out of 105 bugs in various kinds of software, ranging from a media player to a Tetris game. For this it won $5,000 (£3,173) and a Gold Humie Award, given for achievements produced by genetic and evolutionary computation.

In the past year, UCL in London has overseen two research projects that have demonstrated the field’s potential (full disclosure: both have involved co-author William Langdon). The first involved a genetic-improvement program that could take a large, complex piece of software with more than 50,000 lines of code and make it run 70 times faster.

The second carried out the first automated wholesale transplant of one piece of software into a larger one by taking a linguistic translator called Babel and inserting it into an instant-messaging system called Pidgin.

Nature and computers
To understand the scale of the opportunity, you have to appreciate that software is a unique engineering material. In other areas of engineering, such as electrical and mechanical engineering, you might build a computational model before you build the final product, since it allows you to push your understanding and test a particular design. Software, by contrast, is its own model. A computational model of software is still a computer program. It is a true representation of the final product, which maximises your ability to optimise it with an automated programmer.

As we mentioned at the beginning, there is a rich tradition of computer scientists borrowing ideas from nature. Nature inspired genetic algorithms, for example, which crunch through the millions of possible answers to a real-life problem with many variables to come up with the best one. Examples include anything from devising a wholesale road distribution network to fine-tuning the design of an engine.
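As a deliberately tiny illustration of the idea, the sketch below evolves a 20-bit string toward all ones, standing in for "search millions of candidate answers for the best one". The bit-string encoding and the count-of-ones objective are illustrative assumptions; real applications plug in their own representation and scoring.

```python
import random

# Minimal genetic algorithm: keep the fittest candidates, recombine
# them, and mutate the offspring, generation after generation.

def fitness(bits):
    return sum(bits)  # count of ones; real problems supply their own score

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point recombination
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # selection: keep the fittest third
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = max(pop, key=fitness)
```

Because the fittest parents survive each generation unchanged, the best score never decreases; selection plus recombination does the rest.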

Though the evolution metaphor has become something of a millstone in this context, genetic algorithms have had a number of successes, producing results that are comparable with human-written programs or even better.

Evolution also inspired genetic programming, which attempts to build programs from scratch using small sets of instructions. It is limited, however. One of its many criticisms is that it cannot even evolve the sort of program that would typically be expected of a first-year undergraduate, and will not therefore scale up to the huge software systems that are the backbone of large multinationals.

This makes genetic improvement a particularly interesting deviation from this discipline. Instead of trying to rewrite the whole program from scratch, it succeeds by making small numbers of tiny changes. It doesn’t even have to confine itself to genetic improvement as such. The Babel/Pidgin example showed that it can extend to transplanting a piece of software into a program in a similar way to how surgeons transplant body organs from donors to recipients. This is a reminder that the overall goal is automated software engineering. Whatever nature can teach us when it comes to developing this fascinating new field, we should grab it with both hands.

2
By Chris Baraniuk

Candidates hoping to land their dream job are increasingly being asked to play video games, with companies like Siemens, E.ON and Walmart filtering out hundreds of applicants before the interview stage based partly on how they perform. Played on either smartphones or computers, the games’ designers say they can help improve workplace diversity, but there are questions over how informative the results really are.

To the casual observer, many of the games might seem almost nonsensical. One series of tests by UK-based software house Arctic Shores includes a trial where the player must tap a button frantically to inflate balloons for a party without bursting them. In another, the candidate taps a logo matching the one displayed on screen, at an ever more blistering pace.

Afterwards, a personality profile is built using data on how someone performed, says Robert Newry of Arctic Shores. The company claims the traits that can be measured include a person’s willingness to deliberate, seek novel approaches to tasks and even their tendency for social dominance. “What we are measuring is not your reaction skills, your dexterity,” says Newry. “It’s the way you go about approaching and solving the challenge that is put in front of you.”

Using the games, Siemens UK doubled the proportion of female candidates making it past the initial stages of graduate recruitment compared with the previous year, according to data recently released by Arctic Shores. Another company that makes such tests, Pymetrics, says its assessments have boosted recruitment of under-represented groups, with one financial services firm increasing the number of minority candidates offered technical roles by 20 per cent. However, it’s not clear whether the boost could simply be down to an increased focus on or awareness of diversity in the workplace.

The games are meant to offer a form of psychometric testing and are based on techniques developed for measuring personality traits. But whereas in academic research the tests are generally calibrated the same way for all participants, when used for recruitment they are often tweaked depending on how existing employees at a company play them.

“We go into these companies and say, ‘Your individuals may be different. Let’s use your high performers to put together a data set,’” says Frida Polli, co-founder of Pymetrics. In other words, if your gameplay matches that of someone already at the firm, you’re more likely to advance to the next stage of recruitment.

“You can develop a game-based assessment as rigorous as any traditional psychometric assessment,” says Richard Landers at Old Dominion University in Virginia. “But I don’t know how many companies actually succeed at that.” This is because it takes time and money to show that any assessment’s measurement of a given trait is statistically reliable.
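A hedged sketch of that calibration step: score a candidate's gameplay-derived trait vector against the centroid of the company's high performers. The trait names and the cosine-similarity scoring are illustrative assumptions, not how Pymetrics or Arctic Shores actually weight their data.

```python
import math

# Illustrative trait vectors extracted from gameplay (invented numbers).
high_performers = [
    {"deliberation": 0.8, "novelty": 0.6, "dominance": 0.4},
    {"deliberation": 0.7, "novelty": 0.7, "dominance": 0.5},
    {"deliberation": 0.9, "novelty": 0.5, "dominance": 0.3},
]

def centroid(profiles):
    """Average profile of the firm's high performers."""
    keys = profiles[0].keys()
    n = len(profiles)
    return {k: sum(p[k] for p in profiles) / n for k in keys}

def cosine(a, b):
    """Cosine similarity between two trait vectors (1.0 = identical direction)."""
    keys = a.keys()
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(a[k] ** 2 for k in keys))
    nb = math.sqrt(sum(b[k] ** 2 for k in keys))
    return dot / (na * nb)

target = centroid(high_performers)
candidate = {"deliberation": 0.75, "novelty": 0.65, "dominance": 0.45}
score = cosine(candidate, target)   # closer to 1.0 = more like high performers
```

The hazard the researchers point to is visible even in this sketch: whatever biases exist in the set of "high performers" are baked straight into the target the candidate is scored against.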

Landers performed an independent review of game-like intelligence tests by Australia-based firm Revelian and says the results were reliable. Arctic Shores has also run a study with around 300 participants to validate its games.

Caryn Lerman at the University of Pennsylvania has studied brain-training apps and says that although people’s improved performance at these can be tracked over time, they generally have no observable impact on cognitive ability in the real world. She is sceptical that playing the games well corresponds to ability to do a good day’s work in the office.

Although the game-based tests are mandatory, a company’s decision to interview someone may be based on other factors as well, such as their academic record. But in trying to find new ways of shortlisting the best of the bunch, companies risk alienating unsuccessful candidates, says Margaret Beier at Rice University in Texas. They might even expose themselves to lawsuits. “If I apply for a job, play games that seem totally unrelated to it and then don’t get that job, I might have a lot of questions about that assessment,” she says.

3
Java Forum / Java will no longer have ‘major’ releases
« on: May 13, 2018, 06:39:28 PM »
Remember when a new number meant a software release was a significant, or major, one? For Java, that pattern is over. Java 9 was the last “major” release, Oracle says.

All versions after that—including the recently released Java 10 and the forthcoming Java 11—are what the industry typically calls “point releases,” because they were usually numbered x.1, x.2, and so on to indicate an intermediate, more “minor” release. (Oracle has called those point releases “feature releases.”)

As of Java 10, Oracle has put Java on a twice-annual release schedule, and although those releases get whole numbers in their versions, they’re more akin to point releases. Oracle recently declared, “There are no ‘major releases’ per se any more; that is now a legacy term. Instead, there is a steady stream of ‘feature releases.’”

Under the plan, moving from Java 9 to versions 10 and 11 is similar to previous moves from Java 8 to its updates 8u20 and 8u40.

Previously, it took about three years to move from Java 7 to 8 and then 9.

Oracle says that Java’s twice-yearly release schedule makes it easier for tools vendors to keep up with changes, because they will work with a stream of smaller updates. Upgrading tools from Java 9 to 10 happened “almost overnight,” Oracle says, compared with the difficulties it said some tools vendors had moving from Java 8 to 9.

For users who just want stability and can pass on new features of each “feature release,” Oracle is still holding onto that old three-year schedule for its long-term support (LTS) releases. Java 11 will be an LTS release, with commercial support available from Oracle for at least eight additional years.

4
Driving down energy use and costs should be a top priority for British companies, according to SSE Enterprise Energy Solutions – the UK’s leading provider of energy management services.

Information and Communications Technology (ICT) typically accounts for 12% of business energy use yet ICT energy management is often overlooked as a way to realise cost savings.

SSE Enterprise Energy Solutions recently delivered a 9% cost saving for Glasgow City Council across its school estate. The pilot project ran across 29 high schools with a total of 9,000 devices and delivered savings of £4,500 per week. Its success has led to the council putting in place a schools ICT policy with energy efficiency at its core.

Kevin Greenhorn, Managing Director of SSE Enterprise Energy Solutions, said: “Technology can revolutionise how organisations manage their energy consumption and ICT is one of the first places to start. We can give visibility across all ICT assets and a central solution which reduces energy costs and carbon emissions – without the need to rely on people to change their behaviour.”

He added: “Not only does it reduce costs but it provides analytical data which informs future planning and sustainable procurement decisions. It’s the right solution for organisations looking to take steps to reduce energy consumption beyond traditional building systems, such as heating, ventilation, air conditioning and lighting.”

SSE Enterprise Energy Solutions offers an Energy ICT solution which allows asset identification and management of every single machine across multiple sites. Software is installed centrally on a standard server and the solution allows for remote control of desktop computers, monitors, laptops, routers, wireless access points, printers, copiers, telephones and other devices.

These devices can be powered down when not in use, for example evenings and weekends, and their energy consumption monitored when in use. In Glasgow City Council’s case this has enabled them to produce a weekly summary which details the savings delivered in carbon, energy and pounds which is then reported at executive level.
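As a back-of-envelope illustration of the kind of weekly reporting described above, the sketch below estimates savings from powering devices down outside working hours. The wattage, off-hours and tariff figures are assumptions for illustration, not SSE’s or the council’s actual numbers.

```python
# Estimated weekly saving from powering down idle ICT devices.
# All figures below other than the device count are assumptions.

DEVICES = 9000                 # device count from the pilot described above
IDLE_WATTS = 40                # assumed average draw of an idle device
OFF_HOURS_PER_WEEK = 128       # assumed: weeknights (13 h x 5) + weekend (63 h)
PRICE_PER_KWH = 0.12           # assumed tariff in GBP

kwh_saved = DEVICES * IDLE_WATTS * OFF_HOURS_PER_WEEK / 1000
weekly_saving = kwh_saved * PRICE_PER_KWH   # roughly GBP 5,500 per week
```

Even with these rough inputs, the estimate lands in the same ballpark as the £4,500 per week reported for the Glasgow pilot, which is what makes this kind of executive-level summary straightforward to produce from per-device monitoring data.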

Andrew McKenzie, Energy ICT Director for SSE Enterprise Energy Solutions, said: “This software is a genuinely exciting solution that allows organisations to access the last untapped area of energy efficiency, the ICT network.”

He added: “It ticks all the right boxes in terms of saving money, energy and carbon, and enables truly innovative and bespoke energy optimisation across ICT networks that align with building control system strategies. Organisations are therefore able to take significant steps towards total energy management.”

Andrew Mouat, Glasgow City Council’s Principal Officer for Carbon Management, said working with SSE Enterprise Energy Solutions has helped give them the ICT security and control they need, while also delivering efficiencies for the finance and facilities teams. He said: “This technology is delivering real value for Glasgow City Council and taxpayers. The cost savings and associated CO2 reduction speak for themselves. Prudent management of our infrastructure creates efficiencies but importantly also gives us the detailed analytics to inform our planning and be more focused in our ICT and building management. 

“It is contributing to Glasgow becoming a smart, sustainable Future City.”

5
Ten data centres in Bangladesh are set to receive international accreditation, possibly within the next year. The news came from John Duffin, Managing Director for South Asia at the international organisation Uptime Institute, on the closing day of a two-day data centre technology conference held at the Bangabandhu International Conference Centre in Dhaka.

He said that, alongside the national data centre, several private-sector organisations are going through the global certification process.

He added that two organisations will complete the process this year, and that the institute hopes to certify ten organisations by next year.

Jointly organised for the second time by the Bangladeshi technology firm DC Icon and the Data Center Professional Society of Bangladesh, the conference drew information-management experts and technologists from nine countries.

Friday was the conference’s final day. Alongside exhibitions of innovative technology, 50 seminars were held on related topics.

6
IT Forum / Best Linux server distro of 2018
« on: April 26, 2018, 11:46:04 AM »
1. Debian
Debian is over 20 years old, and in part owes that longevity to the emphasis placed on producing a stable operating system. This is crucial if you want to set up a server, as updates can sometimes clash badly with existing software.

There are three branches of Debian, named 'Unstable', 'Testing' and 'Stable'. To become part of the current Stable release, packages must have been reviewed for several months as part of the Testing release. This results in a much more reliable system – but don't expect Debian to incorporate much 'bleeding edge' software as a result.


You can get started with Debian using a minimal Network Boot image of less than 30MB. For a quicker installation, download the larger network installer, which at just under 300MB contains more packages.


2. Ubuntu Server
While Ubuntu is best known for bringing desktop Linux to the masses, its Server variant is also extremely competitive. Canonical, the company behind Ubuntu, has developed LTS (Long Term Support) versions of Ubuntu Server, which like the desktop flavour receive updates for up to five years after release, saving you the trouble of upgrading your server repeatedly. Canonical also periodically releases versions of Ubuntu Server at the same time as the latest desktop distro (such as 18.04).

If you're intent on building your own cloud platform, you can also download Ubuntu Cloud Server. Canonical claims that over 55% of OpenStack clouds already run on Ubuntu. For a fee, Canonical will even set up a managed cloud for you using BootStack.


3. OpenSUSE
OpenSUSE (formerly SUSE Linux) is a Linux distro specifically designed for developers and system admins wishing to run their own server. The easy-to-use installer can be configured to use 'Text Mode' rather than install a desktop environment to get your server up and running.

OpenSUSE will automatically download the minimum required packages for you, meaning only essential software is installed. The YaST Control Center allows you to configure network settings, such as setting up a static IP for your server. You can also use the built-in Zypper package manager to download and install essential server software such as postfix.


4. Fedora Server

Fedora is a community-developed operating system sponsored by Red Hat, the company behind the commercial distro Red Hat Enterprise Linux. Fedora Server is a special implementation of the OS, allowing you to deploy and manage your server using the Rolekit tool. The operating system also includes the powerful PostgreSQL database server.

Fedora Server also includes FreeIPA, enabling you to manage authentication credentials, access control information and perform auditing from one central location.

You can download the full 2.3GB ISO image of Fedora Server from the Fedora website. The same page links to a minimal 511MB NetInstall image, in the Other Downloads section, for a faster barebones setup.


5. CentOS
Like Fedora, CentOS is a community-developed distribution of Linux, in this case originally based on the commercial OS Red Hat Enterprise Linux. The developers behind CentOS 7 have promised to provide full updates for the OS until the end of 2020, with maintenance updates until the end of June 2024 – which should save you the trouble of performing a full upgrade on your server in the near future.

You can avoid unnecessary packages by installing the 'minimal' ISO from the CentOS website, which at 792MB can fit onto a 90-minute CD-R. If you're eager to get started, the site also offers preconfigured AWS instances and Docker images.

7
It might look like witchcraft, but researchers at Nvidia have developed an advanced deep learning image-retouching tool that can intelligently reconstruct incomplete photos.

While removing unwanted artefacts in image editing is nothing new – Adobe Photoshop's Content-Aware tools are pretty much the industry standard – the prototype tool that Nvidia is showcasing looks incredibly impressive.

Don't take our word for it – the researchers' two-minute demo video gives a taste of what this new technology is capable of.


What differentiates Nvidia's new tool from something like Content-Aware Fill in Photoshop is that it analyzes the image and understands what the subject should actually look like; Content-Aware Fill relies on surrounding parts of the image to fill in what it thinks should be there.

Nvidia's tool is a much more sophisticated solution. For instance, when trying to fill a hole where an eye would be in a portrait, as well as using information from the surrounding area Nvidia's deep learning tool knows an eye should be there, and can fill the hole with a realistic computer-generated alternative.   
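To make the contrast concrete, here is a minimal numpy sketch of the classical side of it: filling a hole purely from surrounding pixels, via iterative neighbour averaging on a toy single-channel image. It is illustrative only; neither Photoshop's Content-Aware Fill nor Nvidia's model works this simply, and the learned model's advantage is precisely that it can predict content the surroundings don't contain, like that missing eye.

```python
import numpy as np

# Toy 4x4 single-channel image; zeros mark the hole to be filled.
img = np.array([
    [10, 10, 10, 10],
    [10,  0,  0, 10],
    [10,  0,  0, 10],
    [10, 10, 10, 10],
], dtype=float)
mask = img == 0          # True where pixels are missing

def diffuse_fill(img, mask, iters=50):
    """Repeatedly replace missing pixels with the average of their
    four neighbours, so known values diffuse into the hole."""
    out = img.copy()
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")   # pad edges so every pixel has 4 neighbours
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4
        out[mask] = avg[mask]             # only missing pixels are updated
    return out

filled = diffuse_fill(img, mask)
```

A fill like this can only ever reproduce what the border already suggests, which is exactly the limitation the Nvidia researchers' learned model is designed to overcome.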

"Our model can robustly handle holes of any shape, size, location, or distance from the image borders," the researchers write. "Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing. Further, our model gracefully handles holes of increasing size."

For the moment at least, there's no word on when we're likely to see this tool become more widely available. For now, though, it gives us a glimpse into the near future of image editing.

8
For years, Swami Sivasubramanian’s wife has wanted to get a look at the bears that come out of the woods on summer nights to plunder the trash cans at their suburban Seattle home. So over the Christmas break, Sivasubramanian, the head of Amazon’s AI division, began rigging up a system to let her do just that.

So far he has designed a computer model that can train itself to identify bears—and ignore raccoons, dogs, and late-night joggers. He did it using an Amazon cloud service called SageMaker, a machine-learning product designed for app developers who know nothing about machine learning. Next, he’ll install Amazon’s new DeepLens wireless video camera on his garage. The $250 device, which will go on sale to the public in June, contains deep-learning software to put the model’s intelligence into action and send an alert to his wife’s cell phone whenever it thinks it sees an ursine visitor.

Sivasubramanian’s bear detector is not exactly a killer app for artificial intelligence, but its existence is a sign that the capabilities of machine learning are becoming far more accessible. For the past three years, Amazon, Google, and Microsoft have been folding features such as face recognition in online photos and language translation for speech into their respective cloud services—AWS, Google Cloud, and Azure. Now they are in a headlong rush to build on these basic capabilities to create AI-based platforms that can be used by almost any type of company, regardless of its size and technical sophistication.

“Machine learning is where the relational database was in the early 1990s: everyone knew it would be useful for essentially every company, but very few companies had the ability to take advantage of it,” says Sivasubramanian.

Amazon, Google, and Microsoft—and to a lesser extent companies like Apple, IBM, Oracle, Salesforce, and SAP—have the massive computing resources and armies of talent required to build this AI utility. And they also have the business imperative to get in on what may be the most lucrative technology mega-trend yet.

“Ultimately, the cloud is how most companies are going to make use of AI—and how technology suppliers are going to make money off of it,” says Nick McQuire, an analyst with CCS Insight.

Quantifying the potential financial rewards is difficult, but for the leading AI cloud providers they could be unprecedented. AI could double the size of the $260 billion cloud market in coming years, says Rajen Sheth, senior director of product management in Google’s Cloud AI unit. And because of the nature of machine learning—the more data the system gets, the better the decisions it will make—customers are more likely to get locked in to an initial vendor.

In other words, whoever gets out to the early lead will be very difficult to unseat. “The prize will be to become the operating system of the next era of tech,” says Arun Sundararajan, who studies how digital technologies affect the economy at NYU’s Stern School of Business. And Puneet Shivam, president of Avendus Capital US, an investment bank, says: “The leaders in the AI cloud will become the most powerful companies in history.”

It’s not just Amazon, Google, and Microsoft that are pursuing dominance. Chinese giants such as Alibaba and Baidu are becoming major forces, particularly in Asian markets. Leading enterprise software companies including Oracle, Salesforce, and SAP are embedding machine learning into their apps. And thousands of AI-related startups have ambitions to become tomorrow’s AI leaders.

Amazon, Google, and Microsoft all offer services for recognizing faces and other objects in photos and videos, for turning speech into text and vice versa, and for doing the natural-language processing that allows Alexa, Siri, and other digital assistants to understand your queries (or some of them, anyway).

So far, none of this activity has resulted in much in the way of revenue; none of AI’s biggest players bother to break out sales of their commercial AI services in their earnings calls. But that would quickly change for the company that creates the underlying technologies and developer tools to support the widespread commercialization of machine learning. That’s what Microsoft did for the PC, by creating a Windows platform that millions of developers used to build PC programs. Apple did the same with iOS, which spawned the mobile-app era.

Google jumped out to the early lead in 2015, wooing developers when it open-sourced TensorFlow, the software framework its own AI experts used to create machine-learning tools. But Amazon and Microsoft have created similar technologies since then; they even joined forces in 2017 to create Gluon, an open-source interface designed to make machine learning easier to use with or without TensorFlow.

All three continue to work on ways to make machine learning accessible even to total AI novices. That was the idea behind Amazon’s SageMaker, which is designed to make building machine-learning apps not much more complicated than creating a website. A few weeks after SageMaker was announced last November, Google introduced Cloud AutoML. A company can feed its own unique collection of data into this technology, and it will automatically generate a machine-learning model capable of improving the business. Google says that more than 13,000 companies have asked to try Cloud AutoML.

“There are 20 million organizations in the world that could benefit from machine learning, but they can’t hire people with the necessary background,” says Jeff Dean, head of Google Brain. “To get even 10 million of them using machine learning, we have to make this stuff much easier to use.”

So which of the Big Three is best positioned to win that all-important first-mover advantage? All have immense strengths and few obvious weaknesses.

Take Microsoft. It’s been doing breakthrough work on AI problems such as computer vision and natural-language processing for two decades. It’s got access to massive amounts of valuable data to inform its Azure cloud, including content from Bing, LinkedIn, Skype, and the more than a billion people who use Microsoft Office. Simply put, no other company knows more about what it takes to sell, or help other developers sell, software to businesses and other organizations.
Then there’s Amazon. With its Apple-esque secrecy, it was considered an also-ran in AI until about a year ago. But that secrecy appears to have masked sweeping corporate ambitions. For the past seven years, every business planning document at Amazon has had to include an explanation of how the unit would make use of machine learning, says Sivasubramanian. (This requirement appeared on the boilerplate forms managers used for such documents, including a parenthetical clause that read “None is not a good answer,” he says.)

While it still doesn’t publish many papers, Amazon has a 40 percent market share in the cloud market and is moving ferociously to use that position to dominate the AI cloud as well. It’s introduced a slew of new services that were once used only internally. It’s been the most aggressive acquirer of AI startups, spending more than twice as much as Google and four times as much as Microsoft in the past few years, says Jon Nordmark, CEO of Iterate.ai, a provider of AI services.

It’s well on its way to making Alexa dominate the next great consumer interface, voice. And while Google has made headlines using AI to defeat Go champions, Amazon is using its expertise in factory robotics and the logistics of delivering millions of packages a day, positioning itself for AI projects that meld digital information with data collected from real-world sensors. “Other companies publish more papers, but Amazon is the one putting boots on the ground and moving ahead,” says Nordmark.

Maybe so. But while Amazon was years ahead of the competition in creating AWS, this time nobody is sitting idly by. The prize is too big, and the opportunities for AI dominance too lucrative.

9
Travel / Visit / Tour / Budget trips: 20 of the cheapest places to travel
« on: November 16, 2017, 07:41:36 PM »


That ever-growing travel wish list might be putting some pressure on your pocket – but there are plenty of destinations where you’ll get more bang for your buck. From Greece to Guatemala, here are 20 places you can visit without breaking the bank.
1. Thailand

There’s a reason why Thailand remains so popular with backpackers – it’s got idyllic islands, a rich culture, beach-huts aplenty, tantalising cuisine and adventures galore, all available at often staggeringly low prices. Despite the well-trodden routes through the country, it’s not hard to get away from the crowds – check out Nakhon Si Thammarat for some of the very best food the country has to offer, or hire a motorbike to make the 600km trip along the Mae Hong Son Loop through the forested northern mountains.

Read our tips for backpacking Thailand and travelling solo in Thailand before you go.
2. South Africa

One of the great things about travelling in South Africa is that it’s possible to have a safari experience here – complete with the Big Five – without encountering a budget-breaking bill. Head to Hluhluwe-Imfolozi to see white rhino and to avoid the crowds of Kruger, to the Drakensberg for superlative hiking, and don’t forget to factor in at least a few days in amazing Cape Town.

Start planning your trip with our list of the best road trips across the country.
3. Vietnam

Despite a remarkable rate of change over the decades since the end of the American War, Vietnam remains amazing value for Western visitors. The country’s greatest attraction is its sublime countryside, from the limestone karsts of the north to the waterways and paddy fields of the Mekong Delta, with blissful beaches and frenetic cities crammed in between.

Then there’s the cuisine – pull up a stool at a pho stall and for only a couple of dollars you’ll be eating some of the best food on offer, shoulder to shoulder with the locals.

Check out our 9 tips for backpacking Vietnam and discover how to get off the tourist trail before you go.
4. Uruguay

If you’ve already visited Brazil and Argentina, or are just looking for a better value destination, head instead to neighbouring Uruguay. You’ll be relieved to hear you can still find excellent steak here; plus, there are plenty of lovely beaches to choose from – head to Cabo Polonio for quieter sands and abundant wildlife – and the gorgeous old capital of Montevideo.

Want to learn more? You’ll find all the information you need to plan a budget trip in our Snapshot Guide to Uruguay.
5. Cuba

Since relations between Cuba and the US started rapidly warming up, there’s never been a better time to visit this Caribbean island. Go now before it changes beyond recognition – and before the prices start to go up and up even more. Hit the salsa clubs of Havana, get caught up in the heady July carnival of Santiago, or dip your toes in the warm Caribbean at Varadero Beach – whatever you do, you’ll find it hard not to leave utterly intoxicated.

Get started with these 12 tips for backpacking Cuba.
6. Prague, Czech Republic

Despite being firmly on the tourist – not to mention bachelor party – trail these days, Prague remains one of Europe’s cheapest capital cities to visit. For just a few Czech Crowns you can enjoy a hearty meal, washed down with decent local beer, of course. The city itself is a beauty, crammed full of history and perfect for leisurely explorations by foot.

Want to explore more of Europe on the cheap? Check out The Rough Guide to Europe on a Budget.
7. Greece

Don’t be put off Greece by the country’s ongoing economic crisis – if anything, the financial situation is all the more reason to travel here and to support the local people. The situation does mean that prices are still cheaper than they once were, and that means that you might be able to squeeze an extra island or two into your itinerary. Pay by credit card in advance, but take enough cash with you for your travels, and you’re pretty much guaranteed an amazing trip.

Read these 11 tips by Nick Edwards, co-author of The Rough Guide to Greece, before you go.
8. Guatemala

It’s hard not to fall under the spell of Guatemala and its compelling mix of natural beauty, Maya traditions and colonial legacies. Rock-bottom prices make this one of the best places to study Spanish; once your linguistic skills are up to scratch, jump onto one of the country’s famous camionetas or “chicken buses” to explore, soak up the sights of graceful Antigua, or be wowed by the monumental Maya temples of Tikal.

It’s easy to extend your trip to see more of Central America, too. Check out The Rough Guide to Central America on a Budget for advice, and also discover why you shouldn’t rush through Guatemala City.
9. Bulgaria

Often unfairly overlooked, Bulgaria has a lot to offer budget travellers – not least some of the most deserted beaches in Europe, at bargain prices. In addition to its appealing coastline, there are also lots of lovely old towns, including Varna on the coast and ancient Plovdiv, and a number of dramatic mountain ranges that are perfect for exploration on foot or by bike.
10. India

India remains one of the ultimate destinations for budget travellers – there are few countries where you can still travel so extensively and eat so well for so little. If you’re after a beach break, eschew Goa for the gorgeous beaches of the temple town of Gokarna; for amazing food, it’s hard to beat the puris and kebabs of Mumbai’s street stalls; or head to the Golden City of Jaisalmer from where you can explore the seemingly endless sands of the Thar Desert.

Need more inspiration? Discover the most romantic places in India, check out our favourite places off the tourist trail and find out what it was like to write the first ever Rough Guide to the country.
11. Portugal

Portugal remains one of the best bargains in Western Europe, and is especially worth considering if you want to avoid the more crowded resorts and cities of Spain. Skip the Algarve for the ruggedly beautiful Alentejo (with its cheap, fresh seafood) and vibrant, uber-cool Lisbon; and don’t forget to put enough euros aside for a pastel de Belém (custard tart) or two.

If you’re not sure where to start, read our top tips for travelling in Portugal and discover the best of Lisbon’s food scene.
12. Bolivia

One of the cheapest countries in South America, Bolivia is also one of its most misunderstood. Travelling here may be a little uncomfortable at times, but it’s more than worth it for the wealth of amazing sights on offer. Top of the list is undoubtedly the astounding Salar de Uyuni salt flats, a two- or three-day tour of which will usually set you back less than £100/$150.

Get The Rough Guide to South America on a Budget to start planning your trip, and be sure to include at least one of these beautiful journeys across the country.
13. Mexico

Your budget will definitely stretch to tacos and tequila aplenty in Mexico – which is great news as there’s a lot of ground to cover in this vibrant country. Whether you want to string your hammock up along dazzling white sands, sample some of the country’s best street food in Oaxaca or cool off in a crystal-clear cenote (sinkhole), the country will leave you eager to come back for more.

To kick-start your wanderlust, these are 12 of our favourite places to visit – and here’s why Tijuana should be on your radar.
14. New Orleans, USA

You can’t escape from music in New Orleans – and with buskers on what often seems like every corner, and music in every courtyard and bar, it’s not hard to experience the city’s musical heritage without spending much more than the price of a beer. The city is best experienced slowly, and on foot, and it’s hard to beat people-watching over a cup of coffee and a plate of sugar-dusted beignets at the Café du Monde.

Find out where to sample the city’s best cocktails with our guide.
15. Laos

Even in a region of budget-friendly destinations, Laos stands out. It’s hard not to be captivated by the slow pace of the country; head just north of elegant Luang Prabang to riverside Nong Khiaw, where for small change you can bag a waterside bungalow and watch the boats travel up and down the karst-surrounded river over a cold bottle of Beer Lao.

Get the full lowdown on this enchanting and unspoiled corner of Southeast Asia with The Rough Guide to Laos.
16. The Gambia

Africa’s smallest country is already known for its beautiful beaches, but it’s well worth venturing beyond them to experience its other delights. Top of the list has to be the Chimp Rehabilitation Centre in the River Gambia National Park, where you can watch the primates in their natural habitat, while Baobolong Wetland Reserve, arguably the best spot for birdwatching on the continent, is at its most atmospheric at sunset.
17. Shanghai, China

The biggest appeal of Shanghai for budget travellers, if not all travellers, is undoubtedly the abundance of amazing street food on offer, from xiao long bao soup dumplings to scallion pancakes and sticky rice parcels (zongzi). It’s still possible to find an accommodation bargain at the lower end of the scale, and much of the city’s appeal lies in exploring its busy streets on foot and experiencing for yourself the juxtaposition between old and new China.

You’ll find recommendations for where to find the city’s best street eats and budget sleeps in The Rough Guide to Shanghai.
18. Istanbul, Turkey

With one foot in Europe and the other in Asia, Istanbul is undeniably alluring. Though seeing all the major sights – the Aya Sofya, Blue Mosque and Topkapi Palace to name but a few – can quickly eat into your lira, the city can still be great for tighter budgets. Arguably the best ways to really soak up the city are from a Bosphorus ferry, wandering the streets of the Grand Bazaar, or on a streetside terrace with a freshly-cooked kebab.
19. London, England

First things first – London is not cheap. There’s no denying that even staying in hostels, using public transport and eating in cafés is going to massively eat into your budget. But – and it’s a big but – there are few places in the world that can rival the capital city for its plethora of free sights, where you can see the Rosetta Stone and the Lindow Man, works by Monet and Dalí, not to mention dinosaur and blue whale skeletons, for absolutely nothing.

Get off on the right foot by choosing a great area to stay and discover eight things you didn’t know you could do in the Big Smoke.
20. Egypt

Considering the abundance of mind-blowing ancient sights, you’d expect travel in Egypt to cost a lot more than it does. Sure, if you tick off all the major attractions – including the Pyramids, the Valley of the Kings and Abu Simbel – then costs are going to creep up, but tempered with cheap (and excellent) food and decent budget accommodation, it’s not hard to feel like you’re almost able to live like a Pharaoh.

Note that, due to safety concerns, some governments currently advise against travel to certain parts of the country; check the latest advice before you go.

10
Startup essentially stands for the entrepreneurial initiative of taking technology-centric ideas to market. These are ideas for products, whether goods or services, intended to be better alternatives to existing offerings: bringing better substitutes to market to disrupt incumbent industries. Despite the underlying strength of the ideas and technology base, it is a tough journey, and the reality has been a high mortality rate across the startup landscape. As many as 90 percent of startups die within their first three years, or become “zombies”, remaining afloat in seeming lifelessness. In India, 1,000 startups died in 2016 alone, half of which were incubated during 2013 and 2014. Yet an adequate epitaph has still to be written on failed startups to understand the causes and find remedies. Although we arrange colourful events and promote competition among creative minds to undertake startup initiatives, why don’t we also focus on performing post-mortems on this high mortality? Given the death rate, it is important to dissect the journeys of failed startups to detect and share patterns that could reduce it.

After originating in the research-based university ecosystems of the USA, the startup craze has diffused even into developing countries like India and Bangladesh. From political leaders to academics, the startup is being projected as the new vehicle of wealth creation through disruptive innovation. Globally, over US$125 billion of private equity was invested in the startup world in 2015. This number does not take into account (i) the money that employees have “invested” through salaries and wages that were never paid to them, (ii) the billions of dollars invested by founders, their friends and family, and their angel investors, and (iii) the billions of dollars “invested” by suppliers who never recovered their money from companies that went belly up. If statistics were available, the total would be far larger: capital largely wasted in pursuing ideas. But the success of startups is the key to bringing better alternatives to existing products, offering better quality at lower cost to serve our purposes better.

It is quite ironic that despite such a high mortality rate and the loss of so much capital, people often prefer a quiet burial. The startup journey can be seen as the process of succeeding at disruptive innovation: bringing substitute products built around a new technology core to disrupt an existing industry. For example, the idea could be a smartphone-based handheld ultrasound machine intended to disrupt its desktop counterparts. Irrespective of the strength of the idea and the underlying new technology, the initial product usually shows up as a primitive alternative to the incumbent products it targets. Such primitive products create very little willingness to pay, so suitable early customers should be targeted for them. Additional ideas should then be added to complement the first great idea, rapidly improving the quality and reducing the cost of the early, primitive offering. Such rapid progress is essential both to create a new market and to disrupt the incumbent product’s industry. Scale and scope advantages (preferably around software), as well as the benefit of network externality (by leveraging ubiquitous connectivity), should be exploited to empower the great idea to succeed. The disruptive innovation journey is a long one, and the initial great idea needs to be complemented by thousands of additional ones.
http://techpolicyviews.blogspot.com/2017/10/focus-on-writing-epitaph-on-startup.html

11
By Rokon Zaman

There is a concept called creative destruction, also known as Schumpeter’s gale. Smart companies destroy their existing products to create space for more innovative ones. Does the same work in the job market? Whether we should protect jobs or kill them in response to robotics has become a burning question for many of us. If we kill jobs and cannot create new ones, that would be folly; but if we can create better jobs by killing existing ones, the answer may be different.

China is turning robotics and automation, broadly coined the Fourth Industrial Revolution, into a blessing. China is aggressively exploiting low-cost robots to eliminate 100 million manufacturing jobs as fast as it can, in order to slow the migration of factories to African and South Asian countries. This robotics strategy of killing manufacturing jobs to slow the out-migration of factories is fuelling the growth of domestic robotics R&D and production. Such capability is crucial for China to create new high-paying jobs innovating robots for elderly care, to handle the liability of the one-child policy.

In contrast to China’s aggressive move to kill manufacturing jobs with robots, India’s frugal innovation and conservative approach to robotics and automation, aimed at protecting jobs, raises a question: will it widen the competitiveness gap between India and China?
Introducing robots into manufacturing not only kills jobs but also lowers the growth rate of industrial wages. By focusing on labour-intensive jobs, are developing countries deliberately staying on the slow track and suffering income erosion?

To take advantage of low-cost sensors, actuators and software, should every country, irrespective of its development stage, focus on building domestic robotics R&D and production capability? Would such a strategy contribute to both job and income growth, even in the least developed countries? In the absence of such a strategy, will developing countries, even the suppliers of the cheapest labour, end up losing their existing manufacturing jobs and suffering premature deindustrialisation?

12
Software Engineering / Evolving Roles of Software Requirements Engineering
« on: November 14, 2017, 02:58:42 PM »
The role of software requirements engineering has been evolving through stages. At the beginning of the computer industry, in the 1950s, the challenge of selecting software requirements was to simplify computational tasks, making them executable by machines. At that time, software users mostly wrote programs for themselves. Later, the job of the in-house software developer was created. These in-house programmers worked in close association with major users to acquire, analyse and select requirements that were technologically feasible to translate into software applications. Technology knowledge and the ability to translate users’ requirements into software were the most pressing challenges; at this phase they lay primarily in assessing technological feasibility while capturing, analysing and selecting requirements.

With the growth of computational needs and the expansion of the computer user base, major software customers started contracting out the delivery of custom-made software. In such contractual engagements, the given financial and time budgets became highly relevant, in addition to technological feasibility, when deciding which requirements to translate into software applications. As software assets grew, compatibility with, and capitalisation of, already developed software assets also started to play an important role in capturing, analysing and screening software requirements.

The third phase of software development could be termed the market-driven innovation age. Instead of working as in-house employees or contractors for target customers, developers shifted their focus to applications aimed at many customers. Once an application is developed and launched, customers voluntarily decide whether to purchase it. Moreover, the price of such software is far lower than its development cost, because that cost is divided over many customers and the cost of replicating software is virtually zero. This model became attractive on both the supply and demand sides: customers get the software at a fraction of the original development cost, while suppliers gain the opportunity to make growing profits by offering the same application to a large number of customers. But this model of software development, which could be termed market-led innovation, created additional challenges for software requirements engineering.


To innovate software applications, the first question in capturing, screening and selecting requirements centres on the likely willingness to pay of target customer groups, and the number of customers likely to buy a feature at a given price. Requirements selection pays serious attention to increasing the number of likely customers, since total R&D cost is divided by the number of customers to determine the per-unit cost. The second challenge concerns the likely response of the competition once the product is released; in deciding about requirements, the forces of imitation, innovation and substitution must be taken into account. Third is the uncertainty of market response: instead of releasing a full-blown set of requirements around certain major features, a strategy of seeding, or selective release, is often used to test the market. Fourth, with the growth of Internet penetration, attention should be paid to requirements that have the potential to create a network externality effect, which is of growing importance to succeeding with software innovation, as the perceived value of the product keeps rising with the customer base. The fifth area of focus is the management of technology and innovation, and the dynamics of public policy. The sixth, and apparently the most challenging, covers risk-capital financing to support research and development, managing intellectual assets, and managing the development team over a long, uncertain period. Apart from the technical competence to translate requirements into software features, these factors, among others, should be taken into consideration in deciding on optimum software requirements, turning requirements capturing into engineering.
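The first of these challenges, weighing likely willingness to pay and customer count against development cost, can be sketched as a simple scoring exercise. The feature names, customer counts and prices below are invented purely for illustration:

```python
# Hypothetical sketch: scoring candidate requirements for a market-driven
# software product. All names and numbers are illustrative assumptions.
candidates = [
    # (feature, expected customers, willingness to pay per customer, dev cost)
    ("offline mode",    5000, 12.0, 40000),
    ("report export",   8000,  4.0, 15000),
    ("social sharing", 15000,  1.5, 10000),  # network-externality candidate
]

def score(customers, wtp, cost):
    """Expected revenue minus development cost; the R&D cost is effectively
    spread over the customer base, as in the market-driven model."""
    return customers * wtp - cost

ranked = sorted(candidates, key=lambda c: score(*c[1:]), reverse=True)
for feature, customers, wtp, cost in ranked:
    print(feature, score(customers, wtp, cost))
```

A real selection process would also weigh the other five factors (competitive response, market uncertainty, network effects, policy and financing), which resist this kind of simple arithmetic.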
Posted 1st March by Rokon Zaman

13
Microsoft has announced a new partnership with Amazon to create an open-source deep learning library called Gluon. The idea behind Gluon is to make artificial intelligence more accessible and valuable.

According to Microsoft, the library simplifies the process of making deep learning models and will enable developers to run multiple deep learning libraries. This announcement follows their introduction of the Open Neural Network Exchange (ONNX) format, another building block of an open AI ecosystem.

Gluon supports symbolic and imperative programming, which is something not supported by many other toolkits, Microsoft explained. It also will support hybridization of code, allowing compute graphs to be cached and reused in future iterations. It offers a layers library that reuses pre-built building blocks to define model architecture. Gluon natively supports loops and ragged tensors, allowing for high execution efficiency for RNN and LSTM models, as well as supporting sparse data and operations. It also provides the ability to do advanced scheduling on multiple GPUs.
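The layers-library idea, composing pre-built building blocks into a model architecture, can be illustrated with a toy sketch in plain Python. This is not the actual Gluon API; the class names and numbers are invented for the sake of the example:

```python
# Toy analogue of a layers library: small reusable blocks composed
# into a model. (Not the real Gluon API.)
class Dense:
    """A fixed linear layer: output_i = sum_j(weight[i][j] * x[j]) + bias[i]."""
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def __call__(self, x):
        return [sum(w * xi for w, xi in zip(row, x)) + b
                for row, b in zip(self.weight, self.bias)]

class Sequential:
    """Chains blocks together, feeding each block's output to the next."""
    def __init__(self, *blocks):
        self.blocks = blocks

    def __call__(self, x):
        for block in self.blocks:
            x = block(x)
        return x

net = Sequential(
    Dense([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),  # identity layer
    Dense([[2.0, 0.0], [0.0, 2.0]], [1.0, 1.0]),  # scale-and-shift layer
)
print(net([3.0, 4.0]))  # [7.0, 9.0]
```

In the real library the same composition style applies, with the added option of hybridizing a model so its compute graph is cached and reused on later calls.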

“This is another step in fostering an open AI ecosystem to accelerate innovation and democratization of AI, making it more accessible and valuable to all,” Microsoft wrote in a blog post. “With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with.”

The library will be available for Apache MXNet or Microsoft Cognitive Toolkit. It is already available on GitHub for Apache MXNet, with Microsoft Cognitive Toolkit support on the way.

14
Could the recent Equifax data breach have been prevented if the credit agency had had the right programming tools in place? That’s the question researchers from North Carolina State University set out to answer in their recent study: Can Automated Pull Requests Encourage Software Developers to Upgrade Out-of-Date Dependencies?

According to the researchers, a majority of software relies on external libraries to perform functions. Oftentimes, those libraries are modified to address flaws. To ensure the safety of their code, programmers have to constantly check the status of their software libraries and update their code to account for any changes.

“This is called ‘upgrading an out-of-date dependency.’ However, for various reasons, many programmers procrastinate, putting off the needed upgrades,” said Chris Parnin, an assistant professor of computer science at North Carolina State University.

Parnin explained that this type of procrastination is exactly what happened in the Equifax data breach. “An external library they relied on had made public that it contained a security flaw. And while the external library was patched, Equifax never got around to updating its internal code. So months after the problem was identified, Equifax was still vulnerable and got hacked.”
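The check itself, comparing installed versions of dependencies against the latest releases, can be sketched in a few lines. The package names and version numbers below are invented for illustration:

```python
def parse_version(v):
    # Naive semantic-version parser: "2.3.31" -> (2, 3, 31).
    # Real tools handle pre-release tags and other edge cases.
    return tuple(int(part) for part in v.split("."))

# Hypothetical installed versions vs. latest available releases.
installed = {"web-framework": "2.3.31", "crypto-lib": "1.0.2"}
latest    = {"web-framework": "2.3.32", "crypto-lib": "1.0.2"}

# Flag every dependency whose installed version lags the latest release.
outdated = {name: (current, latest[name])
            for name, current in installed.items()
            if parse_version(current) < parse_version(latest[name])}

print(outdated)  # {'web-framework': ('2.3.31', '2.3.32')}
```

Automated pull-request services essentially run this comparison continuously and open a pull request for each entry in the resulting list, so the upgrade becomes a one-click review rather than a chore the programmer must remember.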

In the study, the researchers looked at thousands of open source programs on GitHub to assess whether tools could get more programmers to update their out-of-date dependencies. In one group, the researchers looked at 2,578 projects that used automated pull requests to notify project owners about necessary upgrades. In another group, the researchers looked at 1,273 projects that did not take advantage of automated pull requests or other tools for out-of-date dependencies. The results showed that 60% of programmers with automated pull requests associated with their programs were more likely to upgrade their projects than those who didn’t use any incentives.

“We also found that the majority of automated pull request projects were using the most up-to-date versions of dependent software, whereas the unincentivized projects were all over the map,” Parnin said. “The take-home message here is that we have automated tools that can help programmers keep up with upgrades. These tools can’t replace good programmers, but they can make a significant difference. However, it’s still up to programmers to put these tools in place and make use of them.”

15
Ruby has had a reputation as a user-friendly language for building web applications. But its slippage in this month’s RedMonk Programming Language Rankings has raised questions about where exactly the language stands among developers these days.

The twice-yearly RedMonk index ranked Ruby at eighth, the lowest position ever for the language. “Swift and now Kotlin are the obvious choices for native mobile development. Go, Rust, and others are clearer modern choices for infrastructure,” said RedMonk analyst Stephen O’Grady. “The web, meanwhile, where Ruby really made its mark with Rails, is now an aggressively competitive and crowded field.”

Although O’Grady noted that Ruby remains “tremendously popular,” participants on sites such as Hacker News and Quora have increasingly questioned whether Ruby is dying. In the RedMonk rankings, Ruby peaked at fourth place in 2013, reinforcing the perception that it is in decline, if a slow one.

The rankings were:

    JavaScript
    Java
    Python
    PHP
    C#
    C++
    CSS
    Ruby
    C
    Objective-C

RedMonk’s rankings are based on a formula that examines pull requests on GitHub as well as language discussions on Stack Overflow. The RedMonk rankings’ methodology differs from those used in the monthly Tiobe and PyPL language popularity rankings, which use formulas based on internet searches.
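The general approach of combining two popularity signals into one ordering can be sketched as follows. This is not RedMonk's actual formula, and all the numbers are invented:

```python
# Illustrative sketch (not RedMonk's real methodology): average two
# popularity signals, GitHub activity and Stack Overflow discussion,
# then rank languages by the combined score. Numbers are made up.
github = {"JavaScript": 98, "Java": 91, "Python": 90, "Ruby": 72}
stackoverflow = {"JavaScript": 95, "Java": 93, "Python": 92, "Ruby": 70}

combined = {lang: (github[lang] + stackoverflow[lang]) / 2 for lang in github}
ranking = sorted(combined, key=combined.get, reverse=True)
print(ranking)  # ['JavaScript', 'Java', 'Python', 'Ruby']
```

Whatever the exact weighting, the point of using two independent signals is that a language must be both actively written (GitHub) and actively discussed (Stack Overflow) to rank highly.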

Pages: [1] 2 3 ... 8