Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - khalid

Many people don’t really know the difference between software architecture and software design. Even for developers, the line is often blurry, and they may mix up elements of software architecture patterns and design patterns. As a developer myself, I would like to simplify these concepts and explain the differences between software design and software architecture. I will also show why a developer should know a little about software architecture and a lot about software design. So, let’s start.

The Definition of Software Architecture
In simple words, software architecture is the process of converting software characteristics such as flexibility, scalability, feasibility, reusability, and security into a structured solution that meets both the technical and the business expectations. This definition leads us to ask which characteristics of a piece of software can affect its architecture. There is a long list of characteristics, which mainly represent the business or operational requirements in addition to the technical requirements.

The Characteristics of Software Architecture
As explained, software characteristics describe the requirements and expectations of a piece of software at the operational and technical levels. Thus, when a product owner says they are competing in a rapidly changing market and must adapt their business model quickly, the software should be “extendable, modular and maintainable.” If a business deals with urgent requests that must be completed successfully within a tight time frame, then as a software architect you should note that performance, fault tolerance, scalability and reliability are your key characteristics. Now, if after defining the previous characteristics the business owner tells you that they have a limited budget for the project, another characteristic comes up: “feasibility.”

You can find a full list of software characteristics, also known as “quality attributes,” online.

Software Architecture Patterns
Most people have probably heard the term “microservices” before. Microservices is one of many software architecture patterns, such as the Layered pattern, the Event-Driven pattern, the Serverless pattern and more; some of them are discussed later in this article. The microservices pattern gained its reputation after being adopted by Amazon and Netflix and showing its great impact. Now, let’s dig deeper into the architecture patterns.

A quick note: please don’t mix up design patterns, such as the Factory or Adapter patterns, with architecture patterns. I will discuss design patterns later.

Serverless Architecture
This term refers to application solutions that depend on third-party services to manage the complexity of servers and backend management. Serverless architecture is divided into two main categories: “Backend as a Service (BaaS)” and “Functions as a Service (FaaS).” A serverless architecture saves you a lot of time otherwise spent fixing deployment bugs and handling routine server tasks.
The most famous serverless provider is Amazon AWS Lambda.
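As a concrete FaaS example, a minimal AWS Lambda handler in Python might look like the following sketch. The payload shape (an API Gateway-style JSON body) and the greeting logic are illustrative assumptions, not a real deployment.

```python
# A minimal AWS Lambda handler (FaaS). The event payload shape and the
# greeting logic are illustrative assumptions for this sketch.
import json

def handler(event, context):
    # Lambda passes the trigger payload as `event`; for an API Gateway
    # trigger, the request body arrives as a JSON string.
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The provider runs this function on demand; you never provision or patch the server it executes on, which is the point of the serverless pattern.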


Event-Driven Architecture
This architecture depends on event producers and event consumers. The main idea is to decouple the parts of your system; each part is triggered when an interesting event from another part occurs. Does that sound complicated? Let’s simplify it. Assume you design an online store system with two parts: a purchase module and a vendor module. If a customer makes a purchase, the purchase module generates an “orderPending” event. Since the vendor module is interested in “orderPending” events, it listens for them. Once the vendor module receives this event, it executes some tasks, or perhaps fires another event to order more of the product from a certain vendor.

Just remember that the event producer does not know which event consumers are listening, and consumers do not know which other consumers listen to which events. The main idea is decoupling the parts of the system.
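A minimal sketch of this flow, assuming an in-process event bus (a real system would use a message broker); the module names and the “orderPending” payload follow the store example above.

```python
# Event-driven sketch: a purchase module publishes "orderPending" events,
# a vendor module consumes them. The event bus and payload fields are
# illustrative assumptions.
from collections import defaultdict

class EventBus:
    """Decouples producers from consumers: a producer only emits an event
    name; it never knows who (if anyone) is listening."""
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, event_name, callback):
        self._listeners[event_name].append(callback)

    def publish(self, event_name, payload):
        for callback in self._listeners[event_name]:
            callback(payload)

bus = EventBus()
restock_orders = []

# Vendor module: consumes "orderPending" events and reorders low stock.
def on_order_pending(order):
    if order["stock_left"] < 5:
        restock_orders.append(order["product"])

bus.subscribe("orderPending", on_order_pending)

# Purchase module: produces an "orderPending" event after a sale.
bus.publish("orderPending", {"product": "laptop", "stock_left": 2})
```

Note that the purchase module never imports or references the vendor module; swapping the vendor logic, or adding a second consumer, requires no change to the producer.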


Microservices Architecture
Microservices architecture has become the most popular architecture of the last few years. It depends on developing small, independent, modular services, where each service solves a specific problem or performs a unique task, and these modules communicate with each other through well-defined APIs to serve the business goal. Rather than explaining further, just look at the image below.

[Image from Weaveworks: microservices architecture diagram]
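In place of the image, the idea can also be sketched in code: two tiny “services,” each owning its own data and talking only through a well-defined HTTP API. This is a toy illustration using only the Python standard library; the service names and the /stock endpoint are invented for the example.

```python
# Toy microservices sketch: an "inventory service" exposes stock over
# HTTP, and an "order service" queries it through the API contract only.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Inventory service: owns the stock data, exposes it via a small API.
STOCK = {"laptop": 7}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        product = self.path.rstrip("/").split("/")[-1]
        body = json.dumps({"product": product,
                           "in_stock": STOCK.get(product, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Order "service": a separate module that only knows the API contract,
# not the inventory service's internals or data structures.
def check_stock(product):
    url = f"http://127.0.0.1:{server.server_port}/stock/{product}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

info = check_stock("laptop")
server.shutdown()
```

Because the order side depends only on the URL and JSON shape, the inventory service could be rewritten in another language or moved to another host without the caller changing.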
Software Design
While software architecture is responsible for the skeleton and the high-level infrastructure of a piece of software, software design is responsible for code-level design: what each module does, the scope of each class, the purpose of each function, and so on.

If you are a developer, it is important to know what the SOLID principles are and how a design pattern solves recurring problems.

SOLID refers to the Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation and Dependency Inversion principles.

Single Responsibility Principle: each class should have one single purpose, one responsibility and one reason to change.
Open/Closed Principle: a class should be open for extension but closed for modification. In simple words, you should be able to add functionality to a class without editing its current functions in a way that breaks existing code that uses it.
Liskov Substitution Principle: this principle guides the developer to use inheritance in a way that will not break the application logic. Thus, if a child class “XyClass” inherits from a parent class “AbClass”, the child class should not override the parent’s functionality in a way that changes the parent’s behavior; you should be able to use an XyClass object wherever an AbClass object is expected without breaking the application logic.
Interface Segregation Principle: since a class can implement multiple interfaces, structure your code so that a class is never forced to implement a function that is not important to its purpose. So, categorize your interfaces.
Dependency Inversion Principle: if you have ever followed TDD in your application development, you know how important decoupled code is for testability and modularity. In other words, if a class (e.g., “Purchase”) depends on a “Users” class, then the User object should be instantiated outside the “Purchase” class and passed in.
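As an illustration of that last principle, here is a minimal Python sketch of the Purchase/Users example; the class and method names are my own, not from any particular codebase.

```python
# Dependency Inversion sketch: Purchase receives its User from outside
# rather than constructing one itself. Names are illustrative.
class User:
    def __init__(self, name):
        self.name = name

class Purchase:
    # The User object is injected by the caller, so Purchase does not
    # know (or care) how users are constructed. In a test, a stand-in
    # object with a `name` attribute can be passed instead.
    def __init__(self, user, amount):
        self.user = user
        self.amount = amount

    def receipt(self):
        return f"{self.user.name} paid {self.amount}"

buyer = User("khalid")
order = Purchase(buyer, 100)   # the dependency is supplied from outside
```

Because the dependency comes in through the constructor, a unit test can exercise Purchase with a fake user and no real user-creation logic at all.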

For more:

Abstract. The aim of Search Based Software Engineering (SBSE) research is to move software engineering problems from human-based search to machine-based search, using a variety
of techniques from the metaheuristic search, operations research and evolutionary computation paradigms. The idea is to exploit humans’ creativity and machines’ tenacity and reliability, rather than requiring humans to perform the more tedious, error prone and thereby costly
aspects of the engineering process. SBSE can also provide insights and decision support. This
tutorial will present the reader with a step-by-step guide to the application of SBSE techniques to Software Engineering. It assumes neither previous knowledge nor experience with
Search Based Optimisation. The intention is that the tutorial will cover sufficient material to
allow the reader to become productive in successfully applying search based optimisation to
a chosen Software Engineering problem of interest.
1 Introduction
Search Based Software Engineering (SBSE) is the name given to a body of work in which Search
Based Optimisation is applied to Software Engineering. This approach to Software Engineering
has proved to be very successful and generic. It has been a subfield of software engineering for
ten years [45], the past five of which have been characterised by an explosion of interest and
activity [48]. New application areas within Software Engineering continue to emerge and a body of
empirical evidence has now accrued that demonstrates that the search based approach is definitely
here to stay.
SBSE seeks to reformulate Software Engineering problems as ‘search problems’ [45, 48]. This
is not to be confused with textual or hypertextual searching. Rather, for Search Based Software
Engineering, a search problem is one in which optimal or near optimal solutions are sought in a
search space of candidate solutions, guided by a fitness function that distinguishes between better
and worse solutions. The term SBSE was coined by Harman and Jones [45] in 2001, which was the
first paper to advocate Search Based Optimisation as a general approach to Software Engineering,
though there were other authors who had previously applied search based optimisation to aspects
of Software Engineering.
SBSE has been applied to many fields within the general area of Software Engineering, some of
which are already sufficiently mature to warrant their own surveys. For example, there are surveys
and overviews, covering SBSE for requirements [111], design [78] and testing [3, 4, 65], as well as
general surveys of the whole field of SBSE [21, 36, 48].
This paper does not seek to duplicate these surveys, though some material is repeated from
them (with permission), where it is relevant and appropriate. Rather, this paper aims to provide
those unfamiliar with SBSE with a tutorial and practical guide. The aim is that, having read this
paper, the reader will be able to begin to develop SBSE solutions to a chosen software engineering
problem and will be able to collect and analyse the results of the application of SBSE algorithms.
By the end of the paper, the reader (who is not assumed to have any prior knowledge of SBSE)
should be in a position to prepare their own paper on SBSE. The tutorial concludes with a simple
step-by-step guide to developing the necessary formulation, implementation, experimentation and
results required for the first SBSE paper. The paper is primarily aimed at those who have yet to
tackle this first step in publishing results on SBSE. For those who have already published on SBSE,
many sections can easily be skipped, though it is hoped that the sections on advanced topics, case
studies and the SBSE taxonomy (Sections 7, 8 and 9) will prove useful, even for seasoned Search
Based Software Engineers.
The paper contains extensive pointers to the literature and aims to be sufficiently comprehensive,
complete and self-contained that the reader should be able to move from a position of no prior
knowledge of SBSE to one in which he or she is able to start to get practical results with SBSE
and to consider preparing a paper for publication on these results.
The field of SBSE continues to grow rapidly. Many exciting new results and challenges regularly
appear. It is hoped that this tutorial will allow many more Software Engineering researchers to
explore and experiment with SBSE. We hope to see this work submitted to (and to appear in) the
growing number of conferences, workshops and special issues on SBSE as well as the general software
engineering literature.
The rest of the paper is organised as follows. Section 2 briefly motivates the paper by setting
out some of the characteristics of SBSE that have made it well-suited to a great many Software
Engineering problems, making it very widely studied. Sections 3 and 4 describe the most commonly
used algorithms in SBSE and the two key ingredients of representation and fitness function. Section 5
presents a simple worked example of the application of SBSE principles in Software Engineering,
using Regression Testing as an exemplar. Section 6 presents an overview of techniques commonly
used to understand, analyse and interpret results from SBSE. Section 7 describes some of the more
advanced techniques that can be used in SBSE to go beyond the simple world of single objectives
for which we seek only to find an optimal result. Section 8 presents four case studies of previous
work in SBSE, giving examples of the kinds of results obtained. These cover a variety of topics and
involve very different software engineering activities, illustrating how generic and widely applicable
SBSE is to a wide range of software engineering problem domains. Section 9 presents a taxonomy of
problems so far investigated in SBSE research, mapping these onto the optimisation problems that
have been formulated to address these problems. Section 10 describes the next steps a researcher
should consider in order to conduct (and submit for publication) their first work on SBSE. Finally,
Section 11 presents potential limitations of SBSE techniques and ways to overcome them.
2 Why SBSE?
As pointed out by Harman, Mansouri and Zhang [48] Software Engineering questions are often
phrased in a language that simply cries out for an optimisation-based solution. For example, a
Software Engineer may well find themselves asking questions like these [48]:
1. What is the smallest set of test cases that cover all branches in this program?
2. What is the best way to structure the architecture of this system?
3. What is the set of requirements that balances software development cost and customer satisfaction?
4. What is the best allocation of resources to this software development project?
5. What is the best sequence of refactoring steps to apply to this system?
All of these questions, and many more like them, can be (and have been) addressed by work on SBSE [48]. In this section we briefly review some of the motivations for SBSE, to give a feeling for why this approach to Software Engineering has generated so much interest and activity.
1. Generality
As the many SBSE surveys reveal, SBSE is very widely applicable. As explained in Section 3,
we can make progress with an instance of SBSE with only two definitions: a representation of
the problem and a fitness function that captures the objective or objectives to be optimised. Of
course, there are few Software Engineering problems for which there will be no representation,
and the readily available representations are often ready to use ‘out of the box’ for SBSE.
Think of a Software Engineering problem. If you have no way to represent it then you cannot
get started with any approach, so problem representation is a common starting point for any
solution approach, not merely for SBSE. It is also likely that there is a suitable fitness function
with which one could start experimentation since many software engineering metrics are readily
exploitable as fitness functions [42].
2. Robustness.
SBSE’s optimisation algorithms are robust. Often the solutions required need only to lie within
some specified tolerance. Those starting out with SBSE can easily become immersed in ‘parameter tuning’ to get the most performance from their SBSE approach. However, one observation
that almost all those who experiment will find, is that the results obtained are often robust
to the choice of these parameters. That is, while it is true that a great deal of progress and
improvement can be made through tuning, one may well find that all reasonable parameter
choices comfortably outperform a purely random search. Therefore, if one is the first to use
a search based approach, almost any reasonable (non extreme) choice of parameters may well
support progress from the current ‘state of the art’.
3. Scalability Through Parallelism.
Search based optimisation techniques are often referred to as being ‘embarrassingly parallel’
because of their potential for scalability through parallel execution of fitness computations.
Several SBSE authors have demonstrated that this parallelism can be exploited in SBSE work
to obtain scalability through distributed computation [12, 62, 69]. Recent work has also shown
how General Purpose Graphical Processing devices (GPGPUs) can be used to achieve scale up
factors of up to 20 compared to single CPU-based computation [110].
4. Re-unification.
SBSE can also create linkages and relationships between areas in Software Engineering that
would otherwise appear to be completely unrelated. For instance, the problems of Requirements
Engineering and Regression Testing would appear to be entirely unrelated topics; they have their
own conferences and journals and researchers in one field seldom exchange ideas with those from
the other.
However, using SBSE, a clear relationship can be seen between these two problem domains [48].
That is, as optimisation problems they are remarkably similar as Figure 1 illustrates: Both
involve selection and prioritisation problems that share a similar structure as search problems.
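To make the two ingredients concrete, here is a hedged Python sketch of question 1 above (selecting a small test suite that covers all branches): the representation is one bit per test case, the fitness rewards branch coverage with a small penalty on suite size, and a simple hill climb performs the search. The coverage data is an invented toy example, not from any real program.

```python
# SBSE in miniature: representation + fitness function + local search.
import random

# Representation: a bit per test case. Fitness input: which branches
# each test covers (invented toy data).
COVERAGE = {0: {"b1", "b2"}, 1: {"b2", "b3"}, 2: {"b4"}, 3: {"b1", "b3", "b4"}}
ALL_BRANCHES = {"b1", "b2", "b3", "b4"}

def fitness(bits):
    # Reward branch coverage; lightly penalise suite size.
    covered = set()
    for i, selected in enumerate(bits):
        if selected:
            covered |= COVERAGE[i]
    return len(covered) - 0.1 * sum(bits)

def hill_climb(n_tests, steps=200, seed=0):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_tests)]
    for _ in range(steps):
        neighbour = current[:]
        i = rng.randrange(n_tests)
        neighbour[i] = 1 - neighbour[i]          # flip one test in/out
        if fitness(neighbour) >= fitness(current):
            current = neighbour                  # keep improvements
    return current

best = hill_climb(len(COVERAGE))
```

Everything problem-specific lives in the representation and the fitness function; the search loop is generic, which is exactly why SBSE transfers so readily between software engineering problems.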

For more:

The definition of robotics may not be entirely clear even today, but robots are indeed our future. Not everyone is interested in the prospect of the radical change that robots are about to bring, but we are embracing the technology and working out how best to collaborate with it. Robots are not new; in fact, we have been seeing them since the ever-popular Terminator movies, as evil assassin robots. The Terminator franchise is a fitting place to begin this narrative, as it is one of the first movie series to feature super-advanced intelligent machines, albeit as killer robots. But there are other, friendlier kinds too, unlike the ones in the Terminator movies.

Artificial Intelligence (AI), on the other hand, is the next generation of robotics, involving intelligent machines that work and react like humans. It is often called machine intelligence and has been around for quite some time now. But what exactly is Artificial Intelligence? It is the field of computer science that studies the synthesis and analysis of computational agents that act intelligently. With AI, processes become more dynamic and adaptable. Technically speaking, AI is more like a computer program that thinks and reasons much like a human mind. One of the significant contributions to AI came from John McCarthy, who wrote one of the dominant AI programming languages, LISP. This begs the question: is robotics the same as Artificial Intelligence?


What are Robots?
Robots are programmable machines specifically built to carry out a complex series of tasks without human intervention. Most people see robots only in movies, so they have little sense of what real robots do or look like; everyone knows the Terminator, Star Wars, the Matrix, and so on. YouTube videos featuring various kinds of robots, such as a robotic humanoid or a robotic cheetah, present a different perspective on robotic science. In reality, robots are becoming more capable and more diverse than ever. They are often characterized by their ability to perform dull-to-dangerous tasks easily, without needing humans to perform them.


What is Artificial Intelligence?
The term “Artificial Intelligence” was first coined by John McCarthy, who also wrote an AI programming language called LISP, still one of the dominant high-level AI programming languages in widespread use today. He then developed a program called “Advice Taker,” designed to use knowledge to search for solutions to problems. Early AI programs embodied little or no domain knowledge, but over time AI became a potential game-changer across all domains. AI has frequently been misunderstood because humans could not get past the idea that machines would make humans obsolete. AI has now become part of our lives and is starting to impact everything: how we live, how we interact, how we do what we do. The evidence is everywhere.


Difference between Robots and Artificial Intelligence
– Most people think robots and artificial intelligence (AI) are one and the same, but they are very different terms associated with different fields. Roughly speaking, robots are hardware and AI is software. In technical terms, robots are machines designed to execute one or more simple-to-complex tasks automatically with great speed and precision, whereas AI is like a computer program that demonstrates some of the behaviors associated with human intelligence, such as learning, planning, reasoning, knowledge sharing and problem solving. AI is a field of computer science that studies intelligent machines that work and react like humans.

– AI is the next-generation robotics technology that enables people and machines to work collaboratively in novel ways. AI systems are designed to extend machine abilities in dramatically different ways and are showing up everywhere we look. In many ways, AI complements the human mind, enhancing its ability to perform tasks. Robots are autonomous or semi-autonomous machines that can use artificial intelligence to enhance their autonomous functions through self-learning. They use computer systems for control and information processing, replicating human actions without the need for human intervention.

– Robots are used in a wide range of domains, especially industrial applications and automotive manufacturing. New generations of robots are more efficient, with no custom software required. In addition, robots are widely used in assembly and packing, space and earth exploration, medical and surgical applications, laboratory research, weaponry, and so on. A basic application of AI is the popular Tic-Tac-Toe game. AI is also used in speech recognition and in robotics itself. There are other applications of AI in the consumer space, from Google’s DeepMind to Apple’s Siri and so on.



Summary of Robots Vs. Artificial Intelligence
Although the terms robot and artificial intelligence are often used interchangeably, they serve very different purposes. The vocabulary of robotics is deeply rooted in science fiction, both in literature and in movies. Artificial Intelligence is a much broader field that endured fierce criticism over the years but is now a potential game-changer, driving research at some of the most advanced corporations in the world. AI has made significant strides in the consumer space and in areas such as the medical industry, military technology, household appliances, automotive control, and so on.

Read more: Difference Between Robots and Artificial Intelligence | Difference Between

Deep learning (DL) has shown great potential to revolutionize communication systems. This article provides an overview of recent advances in DL-based physical layer communications. DL can improve the performance of each individual block in a communication system or optimize the whole transmitter/receiver. Therefore, we categorize the applications of DL in physical layer communications into systems with and without block structures. For DL-based communication systems with a block structure, we demonstrate the power of DL in signal compression and signal detection. We also discuss recent endeavors in developing DL-based end-to-end communication systems. Finally, potential research directions are identified to advance intelligent physical layer communications.
Index Terms
Deep learning, end-to-end communications, physical layer communications, signal processing.
The idea of using neural networks (NNs) to make machines intelligent can be traced to 1942, when a simple model was proposed to simulate the status of a single neuron. Deep learning (DL) adopts a deep neural network (DNN) to find a data representation at each layer, which can be built using different types of machine learning (ML) techniques, including supervised ML, unsupervised ML, and reinforcement learning. In recent years, DL has shown overwhelming advantages in many areas, such as computer vision, robotics, and natural language processing, due to its advanced algorithms and tools for learning complicated models.
Zhijin Qin is with Queen Mary University of London, London E1 4NS, U.K., (email:
Hao Ye, Geoffrey Ye Li, and Biing-Hwang Fred Juang are with Georgia Institute of Technology, Atlanta, GA 30332 USA,
arXiv:1807.11713v3 [cs.IT] 19 Feb 2019
Different from the aforementioned DL applications, where it is normally difficult to find a concrete
mathematical model for feature representation, various theories and models, from information theory
to channel modelling, have been well developed to describe communication systems [1]. However, the
gap between theory and practice motivates us to work on intelligent communications. Particularly, the
following challenges have been identified in the existing physical layer communications:
• Mathematical model versus practical imperfection: Conventional communication systems rely on mathematically expressed models for each block. In real-world applications, however, complex systems may contain unknown effects that are difficult to express analytically. For example, it is hard to model underwater acoustic channels or molecular communications. Therefore, a more adaptive framework is required to handle these challenges.
• Block structures versus global optimality: Traditional communication systems consist of several processing blocks, such as channel encoding, modulation, and signal detection, which are designed and optimized locally within each block. Thus, global optimality cannot be guaranteed. Moreover, the optimal communication system structure varies with the environment. As a result, communication systems that are optimal or robust across different scenarios are much desired.
DL could be a pure data-driven method, where the networks/systems are optimized over a large training
data set and a mathematically tractable model is unnecessary. Such a feature motivates us to exploit
DL in communication systems in order to address the aforementioned challenges. In this situation,
communication systems can be optimized for specific hardware configuration and channel to address
the imperfection issues. On the other hand, many models in physical layer communications have been
established by researchers and engineers during the past several decades. Those models can be combined
with DL to design model-driven DL-based communication systems, which can take advantage of both
model-based algorithms and DL [2].
There is evidence that the “learned” algorithms could be executed faster with lower power consumption
than the existing manually “programmed” counterparts as NNs can be highly parallelized on the concurrent
architectures and implemented with low-precision data types. Moreover, manufacturers’ enthusiasm for developing artificial-intelligence-powered devices, such as the Intel Movidius Neural Compute Stick, has also boosted the boom of DL-based wireless communications.
This article will identify the gains that DL can bring to wireless physical layer communications,
including the systems with the block structure and the end-to-end structure merging those blocks. The rest
of this article is organized as follows. Section II introduces the basics of DNNs and illustrates DL-based communication systems. Section III discusses how to apply DL to block-structured communication
systems. Section IV demonstrates DL-based end-to-end communication systems, where individual block
for a specific function, such as channel estimation or decoding, disappears. Section V concludes this
article with potential research directions in the area of DL-based physical layer communications.
In this section, we will first introduce the basis of DNN, generative adversarial network (GAN),
conditional GAN, and Bayesian optimal estimator, which are widely used in DL-based communication
systems. Then we will discuss the intelligent communication systems with DL.
A. Deep Neural Networks
1) Deep Neural Networks Basis: As aforementioned, research on NN started from the single neuron.
As shown in Fig. 1 (a), the inputs of the NN are {x1, x2, . . . , xn} with the corresponding weights,
{w1, w2, . . . , wn}. The neuron can be represented by a non-linear activation function, σ (•), that takes
the sum of the weighted inputs. The output of the neuron can be expressed as y = σ (
i=1 wixi + b),
where b is the shift of the neuron. An NN can be established by connecting multiple neuron elements
to generate multiple outputs to construct a layered architecture. In the training process, the labelled
data, i.e., a set of input and output vector pairs, is used to adjust the weight set, W, by minimizing a
loss function. In the NN with single neuron element, W = {b, w1, w2, . . . , wn}. The commonly-used
loss functions include mean-squared error (MSE) and categorical cross-entropy. To tailor the model to a
specific scenario, the loss function can be revised by introducing the l1- or l2-norm of W or of the
activations as a regularizer, which improves generalization capability. Stochastic gradient descent (SGD)
is one of the most popular algorithms for optimizing W.
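The single-neuron model and its SGD training can be sketched in a few lines of plain Python; the toy data (learning a logical OR) and the hyperparameters are illustrative assumptions, not from the article.

```python
# Single neuron: y = sigmoid(sum_i w_i * x_i + b), trained with SGD on
# an MSE loss. Toy task and hyperparameters are illustrative.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, b, x):
    # Weighted sum of inputs plus shift b, passed through the activation.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Labelled data (input/output pairs): a logical OR of two inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

rng = random.Random(0)
w = [rng.uniform(-1, 1) for _ in range(2)]   # the weight set W
b, lr = 0.0, 0.5

for _ in range(2000):                        # SGD passes over the data
    for x, target in data:
        y = forward(w, b, x)
        # Gradient of the MSE loss 0.5*(y - t)^2 through the sigmoid.
        delta = (y - target) * y * (1 - y)
        w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
        b -= lr * delta
```

After training, the adjusted weight set W = {b, w1, w2} separates the positive and negative examples, mirroring the loss-minimization described above.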
With the layered architecture, a DNN includes multiple fully connected hidden layers, each of which
represents a different feature of the input data. Fig. 1 (b) and (c) show two typical DNN
models: feedforward neural network (FNN) and recurrent neural network (RNN). In FNNs, each neuron
is connected to the adjacent layers while the neurons in the same layers are not connected to each other.
The deep convolutional network (DCN) is developed from the fully connected FNN by only keeping
some of the connections between neurons and their adjacent layers. As a result, DCN can significantly
reduce the number of parameters to be trained [3]. Recently, DL has boosted many applications due to the
powerful algorithms and tools. DCN has shown its great potential for signal compression and recovery
problems, which will be demonstrated in Section III-A.

For more:

Software Project Management / 10 TIPS FOR MANAGING TIME EFFECTIVELY
« on: June 27, 2019, 09:43:00 PM »
Time flies – always. That makes time a variable that can be hard to control and monitor. And once time has slipped away, you never get it back. For companies, time lost equals dollars lost.

In a business environment where employees and skilled professionals are hired based on their input and productivity over a set time, it pays to teach them time management skills. Why? They'll not only keep better track of time but also have the opportunity to track their progress on projects and contribute to the company's success.

Being conscious of time will result in self-improvement and goal achievement. That's true in both your work and personal life. What's the best way to manage time effectively? Applying these 10 tips is a good start.

1. Have a Time Check
Know exactly how you spend your time. In an office setting, you should know the tasks that are stealing your time. Then you can do something about it. For example, you may be spending an hour on email instead of completing important projects. Knowing exactly where your time is going can help you make decisions about delegating tasks or buying software to speed up some lower-level processes.

2. Set a Time Limit
Setting a time limit for a task can be fun. In fact, it can be like a game. Some companies actually divide employees into groups, and the group that finishes a project or task first gets a reward. You can apply this principle to any task. Set a definite time limit, such as an hour or two. Then try to finish the task within the allotted time, and feel the excitement as you do it.

3. Use Software Tools for Time Management
Technology offers increasingly sophisticated ways to manage time. Various apps even help track employees' time so that you can monitor their check-ins and check-outs. The internet offers a variety of apps and tools, and some are useful for business management, especially for monitoring and assessing daily processes. For many apps, the advanced functions of the paid versions also give you added control and a better user experience.

4. Have a To-Do List
Having a list is always a time saver. If you have a list, you'll never have to wonder what's on the daily agenda or what to do next. Indeed, a list keeps you motivated and focused on that sweet satisfaction every time you tick off a task. Lists also let you see – and monitor – your progress. Even if you're surrounded by distractions, your list will keep you on the right track.

5. Plan Ahead
Planning ahead is a critical part of time management. Ideally, you should plan ahead for the week or at least the day before. When you know exactly what needs to get done for the day or week, you'll stay organised and focused. You can break tasks across days to see, in advance, how much time is needed to complete a project. Even spending just a few minutes planning ahead can transform how you work.

6. Start with Your Most Important Tasks
Do your most important tasks in the morning. All those stressful tasks, the big bulk of your work, the hardest tasks – do them in the morning. The reason is simple. You have the most energy in the morning, so you will be able to tackle the tasks efficiently and competently. Plus, the feeling of accomplishment at getting the most important stuff done first will make the rest of the day that much better.

7. Delegate and Outsource
You can't do everything by yourself, so cut yourself some slack and delegate. Maybe it's time for you to train someone to do some simple processes in your work or office. That frees you up to focus on the bigger projects or the more complicated tasks. You can even outsource the work to an experienced freelancer and save money.

8. Focus on One Task at a Time
If you have chosen to do a task, see it through to the end – finish it. Avoid doing half work, which means abandoning your current task and doing something else entirely. One example of half-work is writing a report then suddenly checking your email for no reason and writing replies. That's not only bad time management but also bad for your concentration. You'll lose your momentum. Focus on the task at hand, and avoid these pitfalls.

9. Make Some Changes in Your Schedule
If you feel more energised at certain times of the day, change your schedule to embrace that. Make the most of your time. Some people are more energised in the morning, while some are night owls. When you choose the best time schedule for you, you'll enjoy the benefits of being able to do more.

10. Avoid Perfection
Don't let the perfect be the enemy of the good, as they say. Avoid overanalysing everything you do. That doesn't mean be careless, however. Do your best – always. But perfection can drag you down, so don't think about it. Once you've finished a task and given it your best, you have to move on.

Effective time management is ultimately a result of having the right attitude and commitment to your goals. Software tools can aid your time management efforts, and there are plenty of calendars and time-tracking devices available to help you manage time effectively.

Whatever tips or tools you use, use your time wisely, but also make time for rest and relaxation to keep you happy and motivated all throughout your life.

What tips and tricks do you use to better manage your time or your team? Tell us in the comments.

Meggie is a writer, social media and content marketing manager who works with AMGtime. She has been interested in marketing and management since she was young and wants to share her creative ideas, and unusual approach to marketing with others. Meggie is deeply convinced that marketing is everything and it's a crucial part of our life. She regularly delivers concepts on how to market products, services and events, how to manage time, employees and more.
source :


Do you know what separates humans from other living beings?

Curiosity. Humans are curious. We question a lot. We are the ones who challenge the status quo of existing rules and strive to build or produce something better. Such curiosity and effort have promised us a life where electronic devices and machines will probably become our best friends.

Yes, you read that correctly: the vision is to make machines smart enough to reduce human labour to almost nil. The idea is one of inter-connected devices that are smart enough to share information with us, with cloud-based applications and with each other (device to device).

Smart devices, or "connected devices" as they are commonly called, are designed to capture and utilize every bit of data which you share or use in everyday life. And these devices will use this data to interact with you on a daily basis and complete tasks.


How Big is IoT?
This new wave of connectivity is going beyond laptops and smartphones; it's going towards connected cars, smart homes, connected wearables, smart cities and connected healthcare. Basically, a connected life. According to a Gartner report, connected devices across all technologies will reach 20.6 billion by 2020. Woah! That's a huge number.


Source: HP

HP did a small survey in which they estimated the rise of connected devices over the years and the results are surprising. Are we moving towards a fully automated world?

These devices will bridge the gap between the physical and digital worlds to improve the quality and productivity of life, society and industries. With IoT catching up, smart homes are the most awaited feature, and brands are already getting into the competition with smart appliances. Wearables are the second-most trending feature on the internet. With the launch of the Apple Watch and more devices to follow, these connected devices are going to keep us hooked on the inter-connected world.

A survey conducted by KRC Research in the UK, US, Japan and Germany (the early adopters of IoT) has revealed which devices customers are most likely to use in the coming years. Smart appliances such as thermostats and smart refrigerators, to name a few, are the most liked by customers and seem set to change the way we operate.


Source: GSMA Report

If you are wondering what impact IoT will have on the economy, then for your information, as per a Cisco report, IoT will generate $14.4 trillion in value across all industries in the next decade. Yes, you are thinking correctly: IoT will bring a wave nobody can foresee.

Now, to give you a glimpse of how applications of IoT will transform our lives, I have listed down a few areas where IoT is much awaited and companies are preparing to surprise you with smart devices. For better understanding, I've added YouTube videos so you can see what our future holds.

Read on and tell us which smart devices you are eager to use.


10 Real World Applications of IoT
1. Smart Home
With IoT creating the buzz, ‘Smart Home’ is the most searched IoT associated feature on Google. But, what is a Smart Home?

Wouldn't you love it if you could switch on the air conditioning before reaching home, or switch off the lights even after you have left home? Or unlock the doors for friends, giving them temporary access even when you are not at home? Don't be surprised: with IoT taking shape, companies are building products to make your life simpler and more convenient.

The smart home has become the revolutionary ladder of success in residential spaces, and it is predicted that smart homes will become as common as smartphones.

The cost of owning a house is the biggest expense in a homeowner's life. Smart home products promise to save time, energy and money. Smart home companies like Nest, Ecobee, Ring and August, to name a few, will become household brands and are planning to deliver a never-seen-before experience.

Here’s a brief video which shows you a smart home from the future and how your life will be simplified.

Read more to find out the best smart devices.


2. Wearables
Wearables have experienced explosive demand in markets all over the world. Companies like Google and Samsung have invested heavily in building such devices. But how do they work?

Wearable devices are installed with sensors and software which collect data and information about the users. This data is later pre-processed to extract essential insights about the user.

These devices broadly cover fitness, health and entertainment requirements. The prerequisite of internet of things technology for wearable applications is that devices be highly energy efficient (ultra-low power) and small in size.

Here are some top examples of wearable IoT devices that fulfill these requirements.

Read more to find latest news making headlines on smart wearables.


3. Connected Cars
Automotive digital technology has focused on optimizing a vehicle's internal functions. But now this attention is turning towards enhancing the in-car experience.

A connected car is a vehicle which is able to optimize its own operation and maintenance, as well as the comfort of its passengers, using onboard sensors and internet connectivity.

Most large automakers, as well as some brave startups, are working on connected car solutions. Major brands like Tesla, BMW, Apple and Google are working on bringing the next revolution in automobiles.

Watch the video to experience the future of connected cars.

Read more to know about the updates on connected cars.

4. Industrial Internet
Industrial Internet is the new buzz in the industrial sector, also termed the Industrial Internet of Things (IIoT). It is empowering industrial engineering with sensors, software and big data analytics to create brilliant machines.

According to Jeff Immelt, CEO of GE, IIoT is a “beautiful, desirable and investable” asset. The driving philosophy behind IIoT is that smart machines are more accurate and consistent than humans in communicating through data, and this data can help companies pick up on inefficiencies and problems sooner.

IIoT holds great potential for quality control and sustainability. Applications for tracking goods, real-time information exchange about inventory among suppliers and retailers, and automated delivery will increase supply chain efficiency. According to GE, the improvement in industrial productivity will generate $10 trillion to $15 trillion in GDP worldwide over the next 15 years.

The video explains emergence of IIoT in industries very accurately.

Read more to know the latest on IIoT.


5. Smart Cities
The smart city is another powerful application of IoT generating curiosity among the world's population. Smart surveillance, automated transportation, smarter energy management systems, water distribution, urban security and environmental monitoring are all examples of internet of things applications for smart cities.

IoT will solve major problems faced by people living in cities, such as pollution, traffic congestion and shortages of energy supplies. Products like the cellular-communication-enabled Smart Belly trash bin will send alerts to municipal services when a bin needs to be emptied.

By installing sensors and using web applications, citizens can find free available parking slots across the city. The sensors can also detect meter tampering issues, general malfunctions and any installation issues in the electricity system.

To understand better the functioning of Smart Cities check out this video.

Read more to know more about Smart Cities.


6. IoT in agriculture
With the continuous increase in the world's population, the demand for food has risen sharply. Governments are helping farmers to use advanced techniques and research to increase food production. Smart farming is one of the fastest-growing fields in IoT.

Farmers are using meaningful insights from the data to yield a better return on investment. Sensing soil moisture and nutrients, controlling water usage for plant growth and determining custom fertilizer are some simple uses of IoT.

If you are curious, the video below explains this concept further.

Read more to know the latest about IoT in agriculture.


7. Smart Retail
The potential of IoT in the retail sector is enormous. IoT provides an opportunity to retailers to connect with the customers to enhance the in-store experience.

Smartphones will be the way for retailers to remain connected with their consumers even out of the store. Interacting through smartphones and using beacon technology can help retailers serve their consumers better. They can also track consumers' paths through a store, improve store layout and place premium products in high-traffic areas.

Watch this video to find out how connected retail will make your life easier.

Read more to know the latest technology changing the face of retail.


8. Energy Engagement
Power grids of the future will be not only smart but also highly reliable. The smart grid concept is becoming very popular all over the world.

The basic idea behind smart grids is to collect data in an automated fashion and analyze the behavior of electricity consumers and suppliers in order to improve the efficiency and economics of electricity use.

Smart grids will also be able to detect sources of power outages more quickly, down to the individual household level (such as a nearby solar panel), making a distributed energy system possible.
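To make the idea of automated outage detection concrete, here is a deliberately simplified sketch. The data layout, meter names and threshold are invented for illustration; a real smart grid would use far richer telemetry and models.

```python
# Toy sketch of how a smart grid might localise an outage from
# automated meter readings: meters reporting (near) zero while the
# neighbourhood average is normal are flagged. All names and the
# threshold are hypothetical.

readings = {            # meter_id -> kWh reported in the last interval
    "m1": 1.2, "m2": 0.9, "m3": 0.0, "m4": 1.1, "m5": 0.0,
}

def detect_outages(readings, threshold=0.05):
    avg = sum(readings.values()) / len(readings)
    # Only meaningful if the grid as a whole is live; if the average
    # itself is near zero, the whole area is down and we can't localise.
    if avg < threshold:
        return []
    return sorted(m for m, kwh in readings.items() if kwh < threshold)

print(detect_outages(readings))   # ['m3', 'm5']
```

The same per-meter data, aggregated over time, is what enables the consumer-behaviour analysis described above.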

Here’s a video to explain how smart grid operates.

Read more to know the power of IoT in energy saving.


9. IoT in Healthcare
Connected healthcare remains the sleeping giant of Internet of Things applications. The concept of a connected healthcare system and smart medical devices bears enormous potential, not just for companies but also for the well-being of people in general.

Research shows IoT in healthcare will be massive in the coming years. IoT in healthcare is aimed at empowering people to live healthier lives by wearing connected devices.

The collected data will help in personalized analysis of an individual’s health and provide tailor made strategies to combat illness. The video below explains how IoT can revolutionize treatment and medical help.

Read more to know latest news about IoT in Healthcare.


10. IoT in Poultry and Farming
Livestock monitoring is about animal husbandry and cost saving. Using IoT applications to gather data about the health and well-being of their cattle, ranchers can identify sick animals early and pull them out of the herd, helping to prevent illness from spreading to a large number of cattle.

With the help of the collected data, ranchers can also increase poultry production. Watch this interesting video.


End Notes
The future of IoT is even more fascinating than this: billions of things will be talking to each other, and human intervention will become minimal. IoT will bring a macro shift in the way we live and work.

I hope you had fun reading about all these powerful and promising applications of the Internet of Things. There are many more areas where IoT is making an impact. Networked toys are one application of IoT that will change your kids' playing experience. IoT can also be used in the detection of environmental issues.

Did you like reading this article? Now I am sure you will be able to tell which smart device you are eagerly waiting for. Tell us in the comments below.

And if you currently work in an IoT-related role, do share your experience and concerns in the comments section.


for more:


Internet of Things / Unleashing the power of IoT
« on: April 18, 2019, 07:13:28 PM »
The Internet of Things (IoT) is changing the objects surrounding us significantly to enrich our lives. From household appliances to industrial machines, the IoT is bringing more 'things' into the digital fold every day. Collectively, this is likely to make the IoT a multi-trillion dollar industry in the near future. According to a global survey conducted by PwC, business leaders worldwide are investing in building their IoT capabilities significantly.

The IoT represents the convergence of advances in miniaturisation, wireless connectivity, increased data storage capacity, increased battery life, powerful solar cells and sensors. Sensors detect and measure changes in position, temperature, light, etc., and they are necessary to turn billions of objects into data-generating 'things' that can report on their status, and, in some cases, interact with their environment.

In Bangladesh, there are plenty of opportunities for IoT to be a differentiator for businesses and human lives. Companies can create maximum value for their businesses and customers by taking advantage of the human-machine collaboration supported by this technology. Agriculture, retail, manufacturing, healthcare and other industries stand to benefit through a connected ecosystem that enables human experts to use data and take insightful decisions.

More details:

By: Arijit Chakraborti

Blockchain / Blockchain Usability Checklist
« on: April 16, 2019, 08:23:19 PM »
How can blockchains become more usable, starting from the protocol layer and the surrounding infrastructure? To help censorship-resistant, immutable and permissionless blockchains gain widespread market adoption, a lot of changes have to be made. Every individual layer of the ecosystem needs to become more usable, from the base protocol to the end-user interface. These layers must leverage the unique properties of blockchain for meaningful adoption to occur. Because what's the point of using a censorship-resistant blockchain if the interface is censoring?

This is the first piece in a series of posts detailing blockchain usability for the different blockchain layers (which we have made up completely, mildly modelled after the OSI model). Here we will focus on the base protocol, working our way up in subsequent posts.


by Leland Lee

Blockchain / Solving Blockchain’s Biggest Usability Issues
« on: April 16, 2019, 08:21:55 PM »
Blockchain technology has come a long way since the early days of Bitcoin dark web transactions. We now have specific blockchains for almost every single purpose and industry, and some of the world’s brightest innovators are entering the field. With all of this innovation, though, we’ve yet to see cryptocurrency hold a significant role in the day-to-day life of most people.

The reason? Poor usability. Blockchain technology brings an unmatched level of security and fund ownership. But because the space is still in its salad days, this usually comes at the expense of easy-to-use products. And, for most people, that trade-off isn’t worth it.

Let’s look at some of the biggest usability issues the industry is currently facing and how we may be able to solve them.
Keys, Seeds, and Even More Keys

There are a lot of complex strings you need to store in order to keep your cryptocurrency safe and sound. You’ve got:

    public keys to receive funds and look up balances,
    private keys to access funds and make transactions, and
    recovery seeds as a form of back-up.

Oftentimes, you need your private key or recovery seed just to access your wallet. If you’re used to the classic username and password format of traditional online banking, these new criteria can be intimidating.
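To see why these strings are so intimidating, here is a minimal, purely illustrative Python sketch of the three pieces a wallet user must manage. This is NOT a real wallet: production systems use elliptic-curve cryptography (e.g. secp256k1) and standards such as BIP-39; the short word list and the hash-based "public ID" below are invented assumptions for illustration only.

```python
import hashlib
import secrets

# Hypothetical 12-word list; real mnemonic standards use 2048 words.
WORDLIST = ["apple", "brave", "cloud", "delta", "ember", "frost",
            "grape", "honey", "ivory", "jolly", "karma", "lemon"]

def new_wallet():
    # Private key: 32 random bytes -- whoever holds this controls the funds.
    private_key = secrets.token_bytes(32)
    # Public identifier: derived one-way from the private key, safe to
    # share for receiving funds (here just a truncated hash, not real ECC).
    public_id = hashlib.sha256(private_key).hexdigest()[:40]
    # Recovery seed: human-readable words serving as a back-up.
    seed_words = [secrets.choice(WORDLIST) for _ in range(12)]
    return private_key, public_id, seed_words

priv, pub, seed = new_wallet()
print(len(priv), len(pub), len(seed))  # 32 40 12
```

Compare those opaque 32 random bytes with a memorable username and password, and the onboarding gap the article describes becomes obvious.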

More details:

by Steven Buchko

Blockchain / 5 Design Principles For Blockchain
« on: April 16, 2019, 08:20:20 PM »
Blockchain is often pitched as the next big thing. However, when it comes to design, it's a totally new realm of challenges. Blockchain acts as a thick layer of complexity on top of traditional products. If you're a designer, blockchain is a space that needs your help! Here are the basics to get you up to speed, and what you should be thinking about as a designer.
1. 🚫 No Jargon

Blockchain and cryptocurrency form a formidable space to get involved in. The result is a core group who are passionately involved. But to the average person or designer outside of the hype bubble, it's really hard to get excited. There are so many new and abstract concepts. There's no easy way to get involved. The industry has a bad reputation as a get-rich-quick scheme.

Looking from the outside, you’ll see terms like DLT, Dapp, and altcoins being used. They’re overcomplicated jargon!

As a designer, my mission is to make blockchain technology accessible in the mainstream. The first step in this is removing jargon. I encourage a no nonsense, no jargon, approach to everything. That means ruthlessly reviewing and simplifying copy (unexplained acronyms are enemy #1!).

    Nobody cares what software Netflix runs on. Users only care about what a product lets them do. Focus on value, not jargon.

We want to get more people involved, so we need to make products that are really simple to use and understand, in layman’s terms.
2. ✂️ ️️️Ruthlessly Break Down Barriers to Entry

When I tell my friends and family about cryptocurrency or blockchain, it’s often a blank look staring back. The market is full of people inside the bubble, people who understand. But to outsiders, it’s an unwelcoming, impenetrable bubble.

Unfortunately, if you want to get involved, you really need to be determined. You'll probably have to battle through terrible UX and a complete black hole of knowledge. There's nobody to easily explain core concepts or walk you through the daunting process. It's like the first generation of the internet: technically there, but not very usable.

Products like Coinbase are really focusing on great and simple user experiences.

    The next wave of blockchain will be to make it useable in the mainstream.

Be ruthless. Radically simplify at every point. Make it so that your parents can understand and use it.

More details:

Blockchain is gaining attention in mainstream media while amassing a cult following of early adopters. Sensational news demonstrates that for some companies, like Longfin, Kodak, or Long Blockchain (formerly Long Island Iced Tea), the acquisition of a blockchain company, announcements of token offerings, or company name changes signaling intent can send stock values soaring.

Diving deeper than hype and speculation, blockchain is still in its infancy. Significant barriers to the adoption of blockchain fall on accessibility and user experience. In this article, we present the hurdles industry leaders are tackling in developing user interfaces (UI) and user experiences (UX) that will improve blockchain usability and result in higher adoption.
What are dApps?

Decentralized applications, otherwise known as dApps, are open-source apps that incentivize users and block creation through token economies, and that adhere to protocols dictating the crypto-economic rules for maintaining the consensus nature of the blockchain (the p2p distribution network on which it runs).
To be considered a dApp, an application has to meet a set of criteria:

Open Source: freely available code source, often published to repositories like GitHub

Incentivized: those providing computational power can mine coins. Tokens generated have inherent value

Decentralized: built on a technology that utilizes cryptography and packs transactions into blocks for immutability via consensus

Algorithmic/Protocol: consensus and token generation

Once an application meets requirements to be considered a decentralized application, it is classified into one of three types.
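The "packs transactions into blocks for immutability" property in the criteria above can be shown in a few lines. A toy sketch, assuming nothing beyond hash linking: a real blockchain adds consensus, signatures and Merkle trees, none of which appear here.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    # Every block must reference the hash of the block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["alice->bob: 5"])
add_block(chain, ["bob->carol: 2"])
print(verify(chain))                              # True
chain[0]["transactions"][0] = "alice->bob: 500"   # tamper with history
print(verify(chain))                              # False
```

Changing any historical transaction changes that block's hash, which breaks every later `prev_hash` link: that is the immutability the dApp criteria rely on.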

More details:

  Usability: The ease of use and learnability of a human-made object such as a tool or device. In software engineering, usability is the degree to which a software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. Source: Wikipedia

Usability is blockchain's Achilles' heel. And maybe one of 2019's great unknowns.

Like any new technology or platform, blockchain-based decentralized applications (Dapps) require new users to learn and adopt new behavior patterns. But these new users face an extremely steep learning curve — preventing them from engaging with or adopting Dapps.

For the past year, our team here at Blocknative has been reading up and experimenting by building our own Dapp prototypes. Like many Dapp developers, the more we tested these Dapps, the more unsatisfied we became with how difficult they were to use. Fortunately, due to the efforts of key industry players jump-starting the larger conversation, the subject of blockchain usability came to the fore in 2018. Here are a few of their most notable contributions:

    The Ethereum Foundation made user experience central to the programming at Devcon 4 in Prague.
    Sarah Baker Mills led the launch of the Rimble Design System at ConsenSys Design.
    Beltran Berrocal published the Web3 UX Design Principles to help democratize Dapp design best practices.
    Austin Griffith introduced Ethereum Meta Transactions to onboard users to Web3 without having to first hold Ether.
    Mitch Kosowski launched ETHPrize as a community research project to identify and solve the most critical issues facing the Ethereum ecosystem — including usability.
    Alex Van de Sande pioneered Universal Logins to replace complicated login sequences with familiar patterns.
    Connie Yang at Coinbase shared her rapid user testing, prototyping, and tools for facilitating debate with the broader Web3 community.
    And much more we’re likely leaving out…¹

This work proved vital — and motivational — to development teams like ours as we struggled with the user-hostile Dapp experiments we’d built. So, out of pure frustration, we decided to tackle the problem head-on and began building a lightweight, automated system to assist in onboarding new users to Ethereum-based, Metamask-enabled Dapps. Simple enough, right?

Not so much. After a few false starts, we realized that moving the needle on Dapp usability requires a well-crafted, end-to-end approach. Our research revealed four core capability areas to be addressed:
by Matt Cutler

One problem facing designers of interactive systems is catering to the wide range of users who will use a particular application. Understanding the user is critical to designing a usable interface. There are a number of ways of addressing this problem, including improved design methodologies using "intuitive" interface styles, adaptive interfaces, and better training and user support materials. In this article, we argue that each of these solutions involves pattern recognition in one form or another and that machine learning can therefore aid designers of interactive systems in these areas. We report on experiments that demonstrate the potential of applying machine learning to user modeling, with application to two of these areas in particular: adaptive systems and design methodologies.

Data analytics is a hot topic, and there's nothing more popular about it than machine learning. But how can user experience researchers lean on machine learning to test hypotheses and assumptions and understand more about users? While there are thousands of articles about machine learning, most of them focus on how machine learning can automate work. This article answers a very specific question: which machine learning methods can be used to answer specific user research questions?

Among the dozens of common machine learning techniques, we've zeroed in on 6 key algorithms that UX researchers can apply to achieve significant results. These machine learning algorithms are:

    Decision Trees
    Association Rules
    Process Mining
    Dimensionality Reduction

These algorithms share 3 critical traits for deriving user research value:

    Successfully used to answer questions about users
    Produce human-understandable output
    Appropriate for large data sets
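Of the algorithms listed, association rules are perhaps the easiest to demonstrate. A minimal sketch in pure Python: the session data and feature names are invented for illustration, and real studies would use a library such as mlxtend over far larger logs. It answers a typical user research question, "which features tend to be used together?"

```python
# Toy association-rule mining over hypothetical user session logs.
sessions = [
    {"search", "filter", "export"},
    {"search", "filter"},
    {"search", "export"},
    {"filter", "export"},
    {"search", "filter", "export"},
]

def support(itemset):
    # Fraction of sessions that contain every item in the set.
    return sum(itemset <= s for s in sessions) / len(sessions)

def confidence(antecedent, consequent):
    # P(consequent | antecedent): of sessions using the antecedent,
    # how many also used the consequent?
    return support(antecedent | consequent) / support(antecedent)

print(support({"search", "filter"}))       # 0.6
print(confidence({"filter"}, {"export"}))  # 0.75
```

A rule like "75% of sessions that use filtering also export" is exactly the kind of human-understandable output the three traits above call for.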

For more details:

by Aaron Powers, Sr. Manager of Experience Measurement, & Jennifer Cardello, Executive Director Of DesignOps

Project Description
The term ‘Big Data’ is used to indicate massive amounts of complex data, be they structured or unstructured, real-time or historical. Over the last few years, several important characteristics of Big Data have been highlighted: volume, velocity, and variety have all received much attention.

However, an important aspect of Big Data is their usability; that is, are they good enough to be used for further analysis? The quality (or veracity) of Big Data may be degraded by the increase of dirty and imperfect data, which may originate from many sources: for instance, missing values, noise, systematic errors, and human errors.

Data pre-processing has been widely used in data mining tasks. In the context of Big Data this becomes extremely challenging. More sophisticated and effective approaches are urgently needed to deal with the quality issues surrounding Big Data and to improve their usability.

In this project we will investigate how machine learning (e.g. classification, deep learning, clustering) can be used to distinguish good data from bad ones and more importantly, to improve the usability of Big Data.

More specifically, we will investigate the following related topics:

(1) How can we extract salient features with respect to data quality so that these features can be used to train a machine learning model?
(2) How can we develop machine learning models specifically for evaluating data quality?
(3) How can we use machine learning to facilitate data imputation with an aim to improve the usability of Big Data?
(4) Can Bio-inspired computing and deep learning be used to develop such machine learning systems?
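As a concrete illustration of topic (3), here is a deliberately tiny sketch of learning-based imputation: a missing value is filled with the mean of its k nearest complete records. The column names and data are invented; real pipelines would use richer models and many more features.

```python
# Toy k-nearest-neighbour imputation of a missing sensor value.
rows = [
    {"temp": 20.0, "humidity": 30.0},
    {"temp": 21.0, "humidity": 32.0},
    {"temp": 35.0, "humidity": 80.0},
    {"temp": 20.5, "humidity": None},   # dirty record with a missing value
]

def knn_impute(rows, target="humidity", k=2):
    complete = [r for r in rows if r[target] is not None]
    for r in rows:
        if r[target] is None:
            # Rank complete rows by distance on the observed feature.
            nearest = sorted(complete,
                             key=lambda c: abs(c["temp"] - r["temp"]))[:k]
            r[target] = sum(c[target] for c in nearest) / k
    return rows

knn_impute(rows)
print(rows[-1]["humidity"])   # 31.0 (mean of the two nearest: 30.0 and 32.0)
```

Even this naive scheme shows the principle: a model trained on (or, here, queried against) clean records predicts plausible replacements for dirty ones, improving the data's usability for downstream analysis.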
