Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - s.arman

Pages: 1 2 [3] 4 5 ... 18
The words stakeholder and shareholder are often used loosely in business. The two words are commonly thought of as synonyms and are used interchangeably, but there are some key differences between them. These differences reveal how to appropriately manage stakeholders and shareholders in your organization.

For example, a shareholder is always a stakeholder in a corporation, but a stakeholder is not always a shareholder. The distinction lies in their relationship to the corporation and their priorities. Different priorities and levels of authority require different approaches in formality, communication and reporting.

It’s important that these terms are well-defined to avoid confusion. Even if you think you know what they mean, take a moment to refresh yourself.

What Is a Shareholder?
A shareholder is a person or an institution that owns shares or stock in a public or private company. Shareholders are often referred to as members of a corporation, and they have a financial interest in the profitability of the organization or project.

Depending on the applicable laws and rules of the corporation or shareholders’ agreement, shareholders have the right to do the following (and more):

Sell their shares
Vote on those nominated for the board
Nominate directors
Vote on mergers and changes to the corporate charter
Receive dividends
Gain information on publicly traded companies
Sue for a violation of fiduciary duty
Buy new shares
Shareholders have a vested interest in the company or project. That interest is reflected in their desire to see an increase in share price and dividends, if the company is public. If they’re shareholders in a project, then their interests are tied to the project’s success.

The money shareholders invest in a company can be withdrawn for a profit. It can even be invested in other organizations, some of which could be competitors of the first. Therefore, a shareholder is an owner of the company, but does not necessarily put the company's interests first.

What Is a Stakeholder?
We’ve written about what a stakeholder is before, and the definition still stands. A stakeholder can be either an individual, a group or an organization impacted by the outcome of a project. Therefore, they have an interest in the success of a project. They are either from the project group or an outside sponsor.

There are many people who can qualify as a stakeholder, such as:

Senior management
Project leaders
Team members on the project
Customers of the project
Resource managers
Line managers
User group for the project
Subcontractors on the project
Consultant for the project
Therefore, stakeholders can be internal, such as employees, shareholders and managers. But stakeholders can also be external: parties that have no direct relationship with the organization but are still affected by its actions, such as suppliers, vendors, creditors, the community and public groups. Basically, stakeholders are those who will be impacted by the project while it is in progress and those who will be impacted by it once it is completed.

Stakeholders tend to have a long-term relationship with the organization. It's not as easy for them to pull up stakes, so to speak, as it can be for shareholders. Their relationship to the organization is tied up in ways that make the two reliant on one another. The success of the organization or project is just as critical, if not more so, for the stakeholder as for the shareholder. Employees can lose their jobs, while suppliers could lose income.

To read more:

Quality assurance activities are the actions the quality team takes to review quality requirements, audit the results of quality control measurements and analyze quality performance, in order to ensure that appropriate quality standards and procedures are properly implemented within the project.

Quality Assurance Activities is an article in the Quality Management section of the Project Implementation Guide. It describes three kinds of activities that help the project manager and the quality team develop a quality assurance plan template, audit quality performance, and review project activities, procedures and processes.

There are three key activities of quality assurance: Develop a Quality Assurance Plan, Audit Project Quality and Analyze Project Quality. Let's view each of these activities.

Develop a Quality Assurance Plan.
The first of the quality assurance activities is about planning the overall process for assuring quality. Its purpose is to design a quality assurance plan template (an efficient tool for assuring quality in a project) and to monitor problems and drawbacks that may appear during project implementation. The quality team uses this plan to carry out the rest of the quality assurance activities, such as Audit and Analysis.

The basic steps in creating a quality assurance plan template are:

Set goals for quality assurance (why assure the project's quality?)
Assign responsibilities to members of the quality team and determine the hierarchy of management (who will carry out the quality assurance activities?)
Gather relevant information on the project standards and define compliance criteria (how will quality be assured?)
Identify a set of measurements and metrics to be used to determine quality levels and performance (is the project performing at appropriate quality levels?)
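As a rough illustration, the four steps above could be captured in a simple data structure. Every field name and sample value below is an illustrative assumption, not part of any standard template:

```python
from dataclasses import dataclass, field

# Minimal sketch of a quality assurance plan template based on the four
# planning steps above; field names and sample values are assumptions.
@dataclass
class QualityAssurancePlan:
    goals: list = field(default_factory=list)                # why assure quality?
    responsibilities: dict = field(default_factory=dict)     # who does what?
    compliance_criteria: list = field(default_factory=list)  # how to assure it?
    metrics: dict = field(default_factory=dict)              # target quality levels

plan = QualityAssurancePlan(
    goals=["Deliverables meet the agreed project standards"],
    responsibilities={"QA lead": "schedule and run quality audits"},
    compliance_criteria=["Every release passes the review checklist"],
    metrics={"defect_density_per_kloc": 0.5},
)
print(plan.metrics["defect_density_per_kloc"])  # 0.5
```

Filling in such a structure forces the team to answer each of the four planning questions before the Audit and Analysis activities begin.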

To read more:

Layered (n-tier) architecture
This approach is probably the most common because it is usually built around the database, and many applications in business naturally lend themselves to storing information in tables.

This is something of a self-fulfilling prophecy. Many of the biggest and best software frameworks—like Java EE, Drupal, and Express—were built with this structure in mind, so many of the applications built with them naturally come out in a layered architecture.

The code is arranged so the data enters the top layer and works its way down each layer until it reaches the bottom, which is usually a database. Along the way, each layer has a specific task, like checking the data for consistency or reformatting the values to keep them consistent. It’s common for different programmers to work independently on different layers.

The Model-View-Controller (MVC) structure, which is the standard software development approach offered by most of the popular web frameworks, is clearly a layered architecture. Just above the database is the model layer, which often contains business logic and information about the types of data in the database. At the top is the view layer, which is often CSS, JavaScript, and HTML with dynamic embedded code. In the middle, you have the controller, which has various rules and methods for transforming the data moving between the view and the model.
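As a rough sketch of that flow, here is a toy Python version of the three layers. The function names and the in-memory "database" are illustrative assumptions, not how any particular framework implements MVC:

```python
# Toy sketch of the three MVC layers described above; names and the
# in-memory "database" dict are illustrative assumptions.

DATABASE = {}  # bottom layer: stands in for a real database table


def model_save(key, value):
    """Model layer: business logic and knowledge of the data types."""
    if not isinstance(value, str):
        raise TypeError("this model only stores strings")
    DATABASE[key] = value


def controller_submit(key, raw_value):
    """Controller layer: transforms data moving between view and model."""
    cleaned = str(raw_value).strip().lower()  # reformat for consistency
    model_save(key, cleaned)
    return cleaned


def view_render(key):
    """View layer: presentation only; reads data prepared by the layers below."""
    return f"<p>{DATABASE.get(key, '')}</p>"


controller_submit("greeting", "  Hello  ")
print(view_render("greeting"))  # <p>hello</p>
```

Each function touches only its neighbors, which is the separation of concerns discussed next.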

The advantage of a layered architecture is the separation of concerns, which means that each layer can focus solely on its role. This makes it:

Easy to assign separate "roles"
Easy to update and enhance layers separately

Proper layered architectures have isolated layers that aren't affected by certain changes in other layers, allowing for easier refactoring. This architecture can also contain additional open layers, like a service layer, that provide shared services to the business layer but can also be bypassed for speed.

Slicing up the tasks and defining separate layers is the biggest challenge for the architect. When the requirements fit the pattern well, the layers will be easy to separate and assign to different programmers. However, the pattern also has some well-known drawbacks:

Source code can turn into a "big ball of mud" if it is unorganized and the modules don't have clear roles or relationships.

Code can end up slow thanks to what some developers call the “sinkhole anti-pattern.” Much of the code can be devoted to passing data through layers without using any logic.

Layer isolation, which is an important goal for the architecture, can also make it hard to understand the architecture without understanding every module.

Coders can skip past layers, creating tight coupling and a logical mess full of complex interdependencies.

Monolithic deployment is often unavoidable, which means small changes can require a complete redeployment of the application.

To read more:

Requirements Engineering / How to Write Software Requirements
« on: April 19, 2019, 12:23:04 PM »
Why Write Good Quality Software Requirements?
Writing software requirements helps capture even the smallest details of the customer's needs.
Capturing every detail of the requirements helps developers achieve great code coverage, which leads to fewer bugs.
It helps developers understand the business rules better.
Stakeholders can give early feedback on what they intend to see in the software.

Process of Writing Good Software Requirements
In Agile software models, customer requirements are more commonly referred to as User Stories. A good user story should contain the following information.

Who the requirement is for
What output will the user expect to see?
What actions will bring about the output?
User stories can also include 'conditions of satisfaction', which elaborate the user story with much more clarity and detail.

Mike Cohn, a co-founder of the Scrum Alliance, proposed a template for writing an effective software requirement using the keywords As, When and Then. The template looks like this:

As <user> when < this action happens> then <this will be the output>

For more details:

No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today's physics and astronomy experiments. Some of them record terabytes of data every day, and the torrent is only increasing. The Square Kilometre Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks — computer-simulated networks of neurons that mimic the function of brains — can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.

For more please visit:

Cloud computing is quickly becoming the standard way for technology companies to access IT infrastructure, software and hardware resources. The technology enables companies to be able to use applications and other resources managed by third party companies that are stored in high-end server computers and networks. Cloud computing systems are mainly set up for business or research purposes. In this article, we explore the different types of cloud computing solutions.

Cloud computing helps businesses be more efficient and save on the software and hardware that are important for different operations. The definition of cloud computing varies depending on your source, but what is generally agreed is that it involves accessing software or hardware in the "cloud", i.e. using software or hardware remotely. If your company uses specialized applications for which you did not have to set up a server or buy hardware or software, then you are probably using a cloud application.

Companies can use cloud computing to increase their IT functionality or capacity without having to add software or personnel, invest in additional training, or set up new infrastructure. Below are the major types of cloud computing:

1. Infrastructure as a Service (IaaS)
IaaS is the lowest level of cloud solution and refers to cloud-based computing infrastructure as a fully-outsourced service. An IaaS provider will deliver pre-installed and configured hardware or software through a virtualized interface. What the customers accessing the cloud services do with the service is up to them. Examples of IaaS offerings are managed hosting and development environments.
Your web hosting company is an IaaS provider. Some of the major players offering infrastructure as a service solution include Google, IBM, Rackspace Cloud Servers, Amazon EC2 and Verizon.

Benefits of IaaS Solutions
Reduces total cost of ownership and capital expenditures
Users pay for the service that they want, on the go
Access to enterprise-grade IT resources and infrastructure
Users can scale up and down based on their requirements at any time

To read more:

Management Information Systems – MIS vs. Information Technology – IT: An Overview
Management information system (MIS) refers to a large infrastructure used by a business or corporation, whereas information technology (IT) is one component of that infrastructure that is used for collecting and transmitting data.

A management information system helps a business make decisions and coordinate and analyze information. Information Technology supports and facilitates the employment of that system.

For example, IT could be a particular interface that helps users input data into a corporate MIS operation. However, that isn't to say that the scope of IT is narrow. In some ways, IT is a broader field than MIS. The goals of a particular IT application can fit neatly into a larger MIS framework; however, the reverse is not necessarily true.

Management Information System
In terms of business decision-making, an information system (IS) is a set of data, computing devices and management methods that support routine company operations. A management information system (MIS) is a specific subset of IS.

A management information system, as used by a company or institution, might be a computerized system consisting of hardware and software that serves as the backbone of information for the company. Specifically, the computerized database may house all the company's financial information and organize it in such a way that it can be accessed to generate reports on operations at different levels of the company.

For more:

Robotics and Embedded Systems / The purpose of embedded systems
« on: April 19, 2019, 12:42:48 AM »
An embedded system is "the use of a computer system built into a machine of some sort, usually to provide a means of control" (BCS Glossary of Computing and ICT). Embedded systems are everywhere in our lives, from the TV remote control to the microwave, from the central heating controller to the digital alarm clock next to our bed. They are in cars, washing machines, cameras, drones and toys.

An embedded system has a microprocessor in it, which is essentially a complete computer system with limited, specific functionality. A user can usually interact with it only through a limited interface, which typically allows them to input settings, make selections and receive output as text, video or audio signals, for example.

Characteristics of embedded systems
There are a number of common characteristics we can identify in embedded systems.

They are usually small, sometimes tiny, and very light so they can be fitted into many products.
The computer system in an embedded system is usually a single microprocessor.
The microprocessor has been designed to do a limited number of very specific tasks in a product very quickly and efficiently.
The microprocessor can be mass-produced very cheaply.
They require a very tiny amount of power compared to a traditional computer.
They are very reliable because there are no moving parts.
Because the computer system is usually printed onto one board, if it does break down you just swap the board, so it is very easy to maintain.
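To illustrate the "limited number of very specific tasks" point, here is a toy control loop of the kind an embedded thermostat might run. The dead-band threshold and the behavior are illustrative assumptions, sketched in Python rather than the C a real microcontroller would use:

```python
# One iteration of a fixed-function embedded control task (a thermostat);
# the 0.5-degree dead band is an illustrative assumption.
def thermostat_step(current_temp, target_temp, heater_on):
    if current_temp < target_temp - 0.5:
        return True    # too cold: switch the heater on
    if current_temp > target_temp + 0.5:
        return False   # too warm: switch the heater off
    return heater_on   # inside the dead band: keep the current state


print(thermostat_step(18.0, 21.0, False))  # True
```

A real embedded system would run such a step forever in a loop, reading the temperature from a sensor each time, which is exactly the narrow, repetitive job these microprocessors are optimized for.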

For details visit here:

Blockchain / How blockchains could change the world
« on: April 18, 2019, 11:59:31 PM »
What impact could the technology behind Bitcoin have? According to Tapscott Group CEO Don Tapscott, blockchains, the technology underpinning the cryptocurrency, could revolutionize the world economy. In this interview with McKinsey’s Rik Kirkland, Tapscott explains how blockchains—an open-source distributed database using state-of-the-art cryptography—may facilitate collaboration and tracking of all kinds of transactions and interactions. Tapscott, coauthor of the new book Blockchain Revolution: How the Technology Behind Bitcoin is Changing Money, Business, and the World, also believes the technology could offer genuine privacy protection and “a platform for truth and trust.” An edited and extended transcript of Tapscott’s comments follows.
In the early 1990s, we said the old media is centralized. It’s one way, it’s one to many; it’s controlled by powerful forces, and everyone is a passive recipient. The new web, the new media, we said, is one to one, it’s many to many; it’s highly distributed, and it’s not centralized. Everyone’s a participant, not an inert recipient. This has an awesome neutrality. It will be what we want it to be, and we can craft a much more egalitarian, prosperous society where everyone gets to share in the wealth that they create. Lots of great things have happened, but overall the benefits of the digital age have been asymmetrical. For example, we have this great asset of data that’s been created by us, and yet we don’t get to keep it. It’s owned by a tiny handful of powerful companies or governments. They monetize that data or, in the case of governments, use it to spy on us, and our privacy is undermined.

For details please visit:

Internet of Things / 4 Layers Of The Internet Of Things
« on: April 18, 2019, 11:43:23 PM »
In today’s age of fast track technology growth, it’s becoming very difficult to keep track of the rise of different technologies. However, there is a common theme underlying most of the modern technology trends. This constant theme is of ‘convergence of technologies’ and the internet of things is the perfect example of this phenomenon.

Its very nature lends itself to the notion of a convergence of different technologies working together in unison to solve a real business problem or enable new products and services. But the problem is that the various players in the IOT ecosystem each view the IOT technology stack from their own specific perspective, ending up confusing the audience.

So, “What is the link between IOT, cloud, Analytics, Data Science?” This is still a common question!

This article tries to allay this confusion by describing the 4 layers of an IOT technology stack.

The first layer of Internet of Things consists of Sensor-connected IOT devices:

The second layer consists of IOT gateway devices:

The Third layer of IOT is the Cloud:

And the Final layer is IOT Analytics:
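A toy end-to-end sketch of a sensor reading passing through these four layers might look like this. Every function body, name and value below is an illustrative assumption:

```python
# A reading flowing through the four IOT layers listed above;
# all names and values are illustrative assumptions.
def sensor_read():
    """Layer 1: a sensor-connected device produces a raw measurement."""
    return {"temp_c": 21.7}


def gateway_forward(reading):
    """Layer 2: the gateway tags device data before sending it onward."""
    return {"device_id": "sensor-01", **reading}


def cloud_store(message, store):
    """Layer 3: the cloud ingests and persists messages."""
    store.append(message)


def analytics_mean(store):
    """Layer 4: analytics runs over the data accumulated in the cloud."""
    return sum(m["temp_c"] for m in store) / len(store)


store = []
cloud_store(gateway_forward(sensor_read()), store)
print(analytics_mean(store))  # 21.7
```

The point of the layering is that each stage can be swapped out (a different gateway protocol, a different cloud provider, a different analytics tool) without changing the others.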

For details visit here:

Data Mining and Big Data / DATA MINING FOR BIG DATA
« on: April 18, 2019, 11:35:51 PM »
Data mining involves exploring and analyzing large amounts of data to find patterns in big data. The techniques came out of the fields of statistics and artificial intelligence (AI), with a bit of database management thrown into the mix.

Generally, the goal of data mining is either classification or prediction. In classification, the idea is to sort data into groups. For example, a marketer might be interested in the characteristics of those who responded to a promotion versus those who didn't; these are two classes. In prediction, the idea is to predict the value of a continuous variable. For example, a marketer might be interested in predicting who will respond to a promotion.

Typical algorithms used in data mining include the following:

Classification trees: A popular data-mining technique that is used to classify a dependent categorical variable based on measurements of one or more predictor variables. The result is a tree with nodes and links between the nodes that can be read to form if-then rules.

Logistic regression: A statistical technique that is a variant of standard regression but extends the concept to deal with classification. It produces a formula that predicts the probability of the occurrence as a function of the independent variables.

Neural networks: A software algorithm that is modeled after the parallel architecture of animal brains. The network consists of input nodes, hidden layers, and output nodes. Each unit is assigned a weight. Data is given to the input nodes, and by a system of trial and error, the algorithm adjusts the weights until it meets a certain stopping criterion. Some people have likened this to a black-box approach.

Clustering techniques like K-nearest neighbors: A technique that identifies groups of similar records. The K-nearest neighbor technique calculates the distances between the record and points in the historical (training) data. It then assigns this record to the class of its nearest neighbor in a data set.
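As a concrete illustration of the nearest-neighbor idea, here is a minimal 1-nearest-neighbor classifier in plain Python. The toy marketing data (responders vs. non-responders) is invented for the example:

```python
import math

# Minimal 1-nearest-neighbor classifier: assign a record the class of the
# closest point in the historical (training) data. Toy data is an assumption.
def nearest_neighbor(record, training):
    label, _ = min(
        ((cls, math.dist(record, point))  # Euclidean distance to each point
         for point, cls in training),
        key=lambda pair: pair[1],
    )
    return label


# Features: (age, past purchases); class: did the customer respond?
training = [((25, 1), "responded"), ((60, 9), "did not respond")]
print(nearest_neighbor((30, 2), training))  # responded
```

With K greater than 1, the technique would take a majority vote over the K closest training points instead of trusting a single neighbor.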

For more details:

A child who has never seen a pink elephant can still describe one — unlike a computer. “The computer learns from data,” says Jiajun Wu, a PhD student at MIT. “The ability to generalize and recognize something you’ve never seen before — a pink elephant — is very hard for machines.”

Deep learning systems interpret the world by picking out statistical patterns in data. This form of machine learning is now everywhere, automatically tagging friends on Facebook, narrating Alexa's latest weather forecast, and delivering fun facts via Google search. But statistical learning has its limits. It requires tons of data, has trouble explaining its decisions, and is terrible at applying past knowledge to new situations; it can't comprehend an elephant that's pink instead of gray.

To give computers the ability to reason more like us, artificial intelligence (AI) researchers are returning to abstract, or symbolic, programming. Popular in the 1950s and 1960s, symbolic AI wires in the rules and logic that allow machines to make comparisons and interpret how objects and entities relate. Symbolic AI uses less data, records the chain of steps it takes to reach a decision, and when combined with the brute processing power of statistical neural networks, it can even beat humans in a complicated image comprehension test.
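As a toy illustration of the symbolic idea, explicit rules can compose known concepts ("pink", "elephant") into a combination never seen before. The attribute tables below are invented for the example:

```python
# Symbolic composition: combine concepts by rule rather than by statistics
# over examples. The concept tables are illustrative assumptions.
KNOWN_COLORS = {"pink": (255, 105, 180), "gray": (128, 128, 128)}
KNOWN_THINGS = {"elephant": {"trunk": True, "legs": 4}}


def describe(color, thing):
    """Build a description of a never-seen combination from known symbols."""
    return {"color": KNOWN_COLORS[color], **KNOWN_THINGS[thing]}


print(describe("pink", "elephant")["legs"])  # 4
```

No training example of a pink elephant is needed: because "pink" and "elephant" are separate symbols, the rule can combine them freely, which is exactly the generalization step that purely statistical learners find hard.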

For more :

Useful and informative information.

Artificial Intelligence / Re: Ontology-based Text Document Clustering
« on: April 17, 2019, 03:12:33 PM »
Thanks for sharing
