Show Posts


Topics - iftekhar.swe

Change is a fact of life, and software systems are no exception. Software has become both omnipresent and vital in our information-based society, which is highly dependent on computers. Software needs to be updated regularly to preserve and maintain its value; software must therefore evolve. In this paper, the concept and importance of evolution are explained, with emphasis on Lehman’s laws and perspectives of software evolution. The relationships and differences between software maintenance and software evolution are also brought to the fore. The laws highlight that a software system must be modified frequently; otherwise it gradually becomes less adequate in use. It is pointed out that the software lifecycle passes through a number of distinct stages, and various software development models are reviewed. Despite the challenges facing software evolution, the emerging trends are open source software evolution and unanticipated software evolution.


Abstract: Most of the software in regular use in businesses and organisations all over the world cannot be completely specified. It cannot be implemented once and for all. Both the original implementation and the inevitable subsequent evolution (maintenance) are a continual learning experience driven, inter alia, by feedback from the results of the behaviour of the software under execution, as perceived by various stakeholders; by advances and growth in the user organisations; and by adaptation to changes in the external world, both independent of and as a result of installation and use of the software. Real-world software, termed type-E software, is essentially evolutionary in nature. The study of the processes of evolution of such software is of considerable interest, as is that of the domains that co-evolve with the software. After briefly discussing the meaning of the term evolution in the context of software, its technology, the software process and related domains, this paper describes some of the facets of the evolution phenomenon and their implications for the evolution process, as identified during many years of active interest in the topic.

Full Paper:

Abstract: Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the ‘design model’. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model.

Full Paper:

The growing prevalence of subscription business models and next-generation technologies is fueling large-scale digital transformations to make companies more productive, smarter, and faster. These trends portend a significant change in the way B2B software vendors support newly digital companies.

In the past, the professional-services arms of software companies focused on installing, customizing, and deploying applications for customers. Today, they must help customers to design, implement, and adopt new technologies (for example, machine-learning-based applications and blockchain) and to migrate workloads to the cloud. In short, software companies are now called on to be partners, not just vendors. And this means that the software industry is being challenged to reassess its entire approach to professional services.

We find that many software vendors encounter challenges navigating these shifts. Until now, their primary focus has been research and development, sales, and marketing. For some companies, the professional-services unit was viewed as a cost center or, at most, a low-margin revenue generator. Many professional-services businesses therefore haven’t invested in the new tools and capabilities they need to propel their operations. That’s a mistake. Software vendors must strengthen their professional-services offerings to meet their customers’ new demands and to maintain or increase their market share.

To transform the services business and position it for the future, software companies must act along five dimensions: defining the strategic vision for services, reimagining the services portfolio, investing in skills, adapting the services-sales model, and delivering services more efficiently.

Software Project Management / Comparison of IS and SP Development Processes
« on: September 05, 2018, 06:33:06 PM »
Project management deals with initiating, planning, monitoring, and controlling the activities required to fulfill the project commitments, and reporting their status to the project stakeholders. The software development process deals with the technical aspects required to complete a project or product. A sound development process needs to follow Software Engineering fundamentals and take into consideration requirements analysis, functional and technical specifications, data and object orientation models, documentation standards, software testing, software maintenance, software quality assurance, and configuration management.

For software systems or products to be developed successfully, the project management process and the software development process must be integrated. To manage a project, one must know some basic methodologies and frameworks, such as those of the Project Management Institute, the Microsoft Solutions Framework, the Software Engineering Institute Capability Maturity Model (CMM), IEEE standards, and the Rational Unified Process.

In this paper, Information Systems (IS) development is defined as software development done by an organization for a single customer/client. This is usually customized work undertaken by the organization at a customer's request. In contrast, Software Product (SP) development is software development done by an organization for multiple customers. It may encompass periodic new releases and is often shrink-wrapped. Exhibit 1 presents a process commonly used for IS development.

Traditional test methodology holds that testing is a separate process, out of step with development. Keeping developers out of quality assurance encourages a lack of customer empathy on the development team. Furthermore, the lack of developer involvement in quality allows issues to fester in the code base longer, making them more expensive to fix. This methodology is also expensive in staffing terms, as it encourages hiring a separate QA team to take responsibility for quality.

Continuous delivery promotes developer awareness of, and empathy with, the end-user experience. Developers are tasked with delivering test coverage for the features they produce and overseeing them from development through to production. This gives developers an opportunity to own and prove the quality of a feature.

Continuous delivery leverages all the aforementioned testing strategies to create a seamless pipeline that automatically delivers completed code tasks. An optimal setup allows a developer to push recently completed code into the continuous delivery pipeline for evaluation. The pipeline then runs the newly pushed code through the levels of testing. If the code passes the tests, it is automatically merged and deployed to production. If, however, the code fails the tests, it is rejected and the developer is automatically notified of the steps needed to correct it.
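The gating logic just described can be sketched in a few lines of Python. The stage names and checks here are invented for illustration and are not tied to any particular CI product:

```python
def run_pipeline(change, test_levels):
    """Run a change through successive levels of testing.

    `test_levels` is an ordered list of (level_name, checks) pairs, where
    each check is a (name, predicate) pair. The first level with a failing
    check rejects the change; passing every level means it can be merged.
    """
    for level_name, checks in test_levels:
        failures = [name for name, passes in checks if not passes(change)]
        if failures:
            # Reject and report which checks the developer must correct.
            return {"status": "rejected", "level": level_name, "failures": failures}
    return {"status": "merged"}
```

In a real pipeline the predicates would be the unit, integration, and acceptance suites executed by the CI system, and a "merged" result would trigger deployment to production.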

Popular established software language development ecosystems have their own subset testing ecosystems. There are many tools available which provide utilities to help instrument and develop testing suites. These tools are usually installed through a package manager specific to the programming language used on the project.

In addition to testing instrumentation, tools for test execution and development are also available. Various test runners can be installed to provide output data from a test suite. A common practice is to measure the “test coverage” throughout a project. A code coverage tool can be used to indicate how much of a code base is adequately covered.
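What a coverage tool measures can be illustrated with Python's tracing hook. This is a toy sketch only (real tools such as coverage.py are far more thorough); it reports the fraction of a function's lines that a given set of inputs actually executes:

```python
import dis
import sys

def line_coverage(func, inputs):
    """Fraction of `func`'s lines executed across the given argument tuples."""
    code = func.__code__
    # Every line of the function that has bytecode associated with it.
    all_lines = {ln for _, ln in dis.findlinestarts(code) if ln is not None}
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)  # record each line as it runs
        return tracer

    sys.settrace(tracer)
    try:
        for args in inputs:
            func(*args)
    finally:
        sys.settrace(None)
    return len(executed & all_lines) / len(all_lines)
```

Running it on a function with an untested branch reports partial coverage; adding an input that exercises the branch raises the score, which is exactly the signal a coverage report gives.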

Once a testing suite has been developed and is working correctly on a local project, it is generally straightforward to integrate into a CD pipeline. Most hosted CI/CD systems have guides on how to integrate a testing suite into the pipeline.

Software Quality Assurance and Testing / Finding the Specialist Tester
« on: September 05, 2018, 06:24:46 PM »
I’ve talked about interviewing testers before and I’ve talked specifically about hiring test specialists. Here I’m going to try to be a bit more concise, yet also a bit more expansive, about exactly what I think it means to look for specialist testers.

Everyone who is human can, and does, test. So I’m not looking for “testers.” I’m looking for people who have chosen to specialize in testing or have a strong desire to specialize in testing.

I could talk to you here about writing good tests, choosing good automation tools, writing excellent bug reports, and so on. These are all good tactical-level aspects to consider. But they are profoundly uninteresting in that all the good testers already know this stuff. This is the stuff everyone talks about. Yet none of it is the hallmark of the specialist in and of itself. When looking at specialist testers, you have to realize those tactics are epiphenomenal. They are ways of doing; but for specialists, you want to suss ways of thinking.

harness the intuition
Specialist testers — like any specialist — rely on intuition that is guided by experience. Determining the presence and extent of that intuition is a key skill for you as someone who is seeking a specialist. You have to be able to spot this intuition which, of course, means you must possess it yourself. I can’t stress this part enough: people hiring specialists have to understand what makes someone a specialist.

generalist with specialist tendencies
As a specialist tester, you need a broad set of general skills combined with a deep core of specialist abilities. Working as part of a team to hire specialists, this means you should be able to articulate what those general skills are and what the specialist abilities are.

Certainly, people on your team may attach differing levels of importance to certain skills or abilities, but there should be broad agreement on what those skills and abilities actually are.

I believe that semantics matter. I do realize not all semantics matter equally. But, still: semantics matter. It’s disappointing when otherwise intelligent people seem to dismiss something simply because they feel it’s just semantics. Let’s talk about this.

I recently had a conversation with some testers who pulled out the old “don’t want to spend this much time on semantics” card, even though we had spent, at that point, a grand total of about thirty seconds on the topic. It was extremely disappointing to me, particularly coming from testers, because, in my view, good testers should have a healthy respect and tolerance for discussing semantics.

We already know that one of the two hard things involves naming things and Michael Bolton has already done a good job dismissing the “it’s just semantics” response.

So instead of me rehashing all that, let’s take a little trip down the rabbit hole.

Title:  Propagation of Requirements Engineering Knowledge in Open Source Development: Causes and Effects – A Social Network Perspective

Abstract:  The popularity of open source software (OSS) development projects has sparked interest in the requirements engineering (RE) practices of such communities, which are starkly different from those of traditional software development projects. Past work has focused on characterizing this difference, while this work centers on the variations in the propagation of RE knowledge among different OSS project development endeavors. RE activity in OSS projects is conceptualized as a socio-technical distributed cognitive activity in which heterogeneous actors interact with one another and with structural artifacts to 'compute' requirements. These coordinated sequences of action are continuously interrupted and shaped by the demands of an ever-changing environment, resulting in various social networks visible in the communicative pathways deployed in the projects. We explore how the social network configurations in OSS projects manifesting the flow of RE knowledge respond to the attributes of the environment housing the projects, and their effects on the attributes of software requirements produced by such project development endeavors.


There still exists a common misconception that "architecture" and "agile" are competing forces, there being a conflict between them. This simply isn’t the case though. On the contrary, a good software architecture enables agility, helping you embrace and implement change; whether from changes in requirements, business processes, mergers, etc. What is considered a "good architecture" is still up for debate of course but, for me anyway, the core characteristics of a good architecture relate to good modularity reached through an appropriate decomposition strategy. If you've experienced the pain of making a major change to an existing big ball of mud, where seemingly unconnected parts of the codebase break, then you'll appreciate that having a well structured codebase (good modularity) is important.

A big problem I see with teams today is that they adopt, what George Fairbanks calls in his Just Enough Software Architecture book, "architecture indifferent design." In other words, they adopt an architectural style without necessarily considering the trade-offs. In today's world, this is commonly manifested in teams adopting a microservices architectural style simply as a reaction to their existing monolithic codebase being considered a mess. Jokes about these same teams subsequently creating a "distributed big ball of mud" to one side, it turns out that the process of software design and decomposition is hugely important, irrespective of whether you're building a monolithic or a microservices architecture. You don't get agility or a good architecture for free. Some conscious design effort is needed and trade-offs need to be considered. This, again, is why creating that starting point with some up front design is crucially important.

Software architecture has traditionally been associated with big design up front and waterfall-style delivery, where a team would ensure that every last element of the software design was considered before any code was written. In 2001, the "Manifesto for Agile Software Development" suggested that we should value "responding to change over following a plan," which when taken at face value has been misinterpreted to mean that we shouldn’t plan. The net result, and I’ve seen this first hand, is that some software development teams have flipped from doing big design up front to doing no design up front. Both extremes are foolish, and there’s a sweet spot somewhere that is relatively easy to discover if you’re willing to consider that up front design is not necessarily about creating a perfect end-state. Instead, think about up front design as being about creating a starting point and setting a direction for the team. This often missed step can add a tremendous amount of value to a team by encouraging them to understand what they are going to build and whether it is going to work.

In order to arrive at a software design, you need to make some design decisions. In discussing the difference between architecture and design, Grady Booch tells us that "architecture represents the significant decisions, where significance is measured by cost of change." In other words, which decisions are expensive to change at a later date? Following on from this, a good way to think about up front design is to ensure that you’ve made and understood the trade-offs associated with the "significant decisions." These significant decisions are typically related to technology choices and structure (i.e. decomposition strategies, modularity, functional boundaries, etc.) If you’re building a monolithic software system, the choice of programming language is likely to be significant for a number of reasons. Adopting a microservices architecture potentially reduces the significance of which programming language(s) you choose, but introduces other trade-offs that need thinking through. Similarly, adopting a hexagonal architecture allows you to decouple your business logic from your technology choices, but again there are trade-offs.
The up front design process should therefore be about understanding the significant decisions that influence the shape of a software system rather than, for example, understanding the length of every column in a database. In real terms, I’d like teams to really understand what they are going to build, how they are going to build it (at a high-level, anyway) and whether what they’ve designed will have a good chance of actually working. This can be achieved by identifying the highest priority risks and mitigating them as appropriate, writing code if necessary. In summary, up front design should be about stacking the odds of success in your favour.
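As a concrete illustration of the hexagonal-architecture trade-off mentioned above, the sketch below (all names are invented for the example) keeps the business logic behind a port so that the storage-technology decision, one of Booch's "significant decisions," stays cheap to change:

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """Port: the interface the business logic depends on."""
    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...
    @abstractmethod
    def load(self, order_id: str) -> float: ...

def apply_discount(repo: OrderRepository, order_id: str, pct: float) -> float:
    """Core business logic; knows nothing about the storage technology."""
    total = repo.load(order_id)
    discounted = total * (1 - pct)
    repo.save(order_id, discounted)
    return discounted

class InMemoryOrderRepository(OrderRepository):
    """Adapter: one concrete technology choice, swappable for a
    database-backed adapter later without touching the core logic."""
    def __init__(self):
        self._orders = {}
    def save(self, order_id, total):
        self._orders[order_id] = total
    def load(self, order_id):
        return self._orders[order_id]
```

Replacing the in-memory adapter with, say, a relational-database adapter changes no line of `apply_discount`, which is precisely the decoupling (and the extra indirection cost) the trade-off is about.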

Requirements Engineering / To Brainstorm or Not to Brainstorm
« on: September 05, 2018, 04:04:19 PM »
When developing new systems, we require creative ideas – not only to be one step ahead of our competitors. In consequence, requirements engineering needs tools for eliciting innovative requirements, the so-called delighters in the Kano model [Kano]. The IREB CPRE Foundation Level [IREB] suggests using creativity techniques to elicit such delighters...

More Reading:

Requirements Engineering / The context of software requirements engineering
« on: September 05, 2018, 04:01:03 PM »
There is no universally accepted model of the requirements engineering process that will suit all organisations or all projects. The process will vary for many reasons but among the factors that have the most influence are:

The nature of the project. A market-driven project that is developing a software product for general sale imposes different demands on the requirements engineering process than does a customer-driven project that is developing bespoke software for a single customer. For example, a strategy of incremental releases is often used in market-driven projects, while this is often unacceptable for bespoke software. Incremental release imposes a need for rigorous prioritisation and resource estimation for the requirements, in order to select the best subset of requirements for each release. Similarly, requirements elicitation is usually easier where there is a single customer than where there are either only potential customers (for a new product) or thousands of existing but heterogeneous customers (for a new release of an existing product).
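The rigorous prioritisation that an incremental-release strategy demands can be sketched with a simple value-per-effort heuristic. This is purely illustrative; real prioritisation weighs many more factors than a single ratio:

```python
def select_release(requirements, effort_budget):
    """Greedy sketch: pick requirements by value-per-effort until the
    release's effort budget is spent.

    `requirements` is a list of dicts with "id", "value" (estimated
    business value) and "effort" (estimated resource cost) keys.
    """
    ranked = sorted(requirements,
                    key=lambda r: r["value"] / r["effort"],
                    reverse=True)
    chosen, spent = [], 0
    for req in ranked:
        if spent + req["effort"] <= effort_budget:
            chosen.append(req["id"])
            spent += req["effort"]
    return chosen
```

The greedy ratio rule is only an approximation of the underlying knapsack problem, but it captures why each release needs both a value estimate and a resource estimate per requirement.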

The nature of the application. Software requirements engineering means different things to (for example) information systems and embedded software. In the former case, a software development organisation is usually the lead or the only contractor. The scale and complexity of information systems projects can cause enormous difficulties, but at least the software developer is involved from initial concept right through to hand-over. For embedded software the software developer may be a subcontractor responsible for a single subsystem and have no direct involvement with eliciting the user requirements, partitioning the requirements to subsystems or defining acceptance tests. The requirements will simply be allocated to the software subsystem by a process that is opaque to the software contractor. A complex customer / main contractor / subcontractor project hierarchy is just one of many factors that can greatly complicate resolution of any problems that emerge as the software contractor analyses the allocated requirements.

This latter point introduces one of the most controversial aspects of software requirements engineering: the question of ownership. Is requirements engineering a task of software engineering or of systems engineering? In the case of embedded software it is clear that it is a systems engineering task to elicit the user requirements, derive a system architecture composed of appropriate technologies, and allocate the requirements accordingly. In the case of information systems, it is less clear but broadly the same activities have to be performed despite the absence of non-IT technologies. These similarities are likely to become more apparent if, as seems likely, economic imperatives force increasing use of component architectures. Here, solutions are composed of commercial off-the-shelf (COTS) software components integrated with relatively small amounts of bespoke software. The requirements engineering problem then becomes one of finding a best match between the elicited user requirements and the advertised properties of the COTS components.
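That COTS matching problem can be sketched as a simple coverage score over advertised features. This is a toy model; real COTS selection also weighs cost, quality, licensing, and integration risk:

```python
def best_cots_match(required, components):
    """Pick the COTS component whose advertised features cover the most
    elicited requirements.

    `required` is a set of requirement labels; `components` maps each
    component name to its set of advertised features.
    """
    def coverage(item):
        name, features = item
        # Fraction of the elicited requirements this component covers.
        return len(required & features) / len(required)
    name, _ = max(components.items(), key=coverage)
    return name
```

The gap between `required` and the winner's feature set is then the "relatively small amount of bespoke software" the text mentions.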

Because of ambiguity about 'ownership' of the discipline and the artificiality in distinguishing between system and software requirements, we henceforth omit the word 'software' and talk about 'requirements engineering' except where it is helpful to make a distinction.

This is also reflected by the coverage of requirements engineering by standards. Requirements documentation is the only aspect of requirements engineering that is covered by dedicated standards. Wider requirements engineering issues tend to be covered only as activities of software engineering or systems engineering. Current process improvement and quality standards offer only limited coverage of requirements engineering issues despite its place at the root of software quality problems.

Abstract:  Software reuse plays an important role in the development of new software because of its potential benefits, which include increased product quality and decreased product cost and schedule. Although the software industry has developed tremendously in recent decades, component reuse still faces numerous challenges and lacks adoption by practitioners. One impediment to efficient and effective reuse is the difficulty of determining which artifacts are best suited to solving a particular problem in a given context, and how easy they will be to reuse. A good understanding of reusability, as well as adequate and easy-to-use metrics for quantifying reusability, is therefore necessary to simplify and accelerate the adoption of component reuse in software development.

