Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - hasanmahmud

Pages: 1 2 3 [4] 5 6
46
ICT / Complete Guide to Test your Mobile Apps
« on: March 29, 2014, 03:45:28 PM »
Functional testing:

Functional testing of a mobile application normally covers user interactions as well as transactions. The factors relevant to functional testing are:

    Type of application, based on its business functionality (banking, gaming, social or business)
    Target audience type (consumer, enterprise, education)
    Distribution channel used to spread the application (e.g. Apple App Store, Google Play, direct distribution)

The most fundamental test scenarios in functional testing are the following (a small automated example of one of these checks follows the list):

    To validate that all mandatory fields work as required.
    To validate that mandatory fields are displayed on the screen in a way that distinguishes them from non-mandatory fields.
    To validate that the application behaves as required whenever it starts or stops.
    To validate that the application is minimized whenever there is an incoming phone call. To verify this, a second phone is needed to call the device.
    To validate that the phone can store, process and receive SMS while the app is running. To verify this, a second phone is needed to send an SMS to the device on which the application under test is running.
    To validate that the device can perform the required multitasking whenever it is necessary to do so.
    To validate that the application provides the necessary social network options, such as sharing, posting and navigation.
    To validate that the application supports the payment gateway transactions it requires, such as Visa, Mastercard and PayPal.
    To validate that page scrolling is enabled in the application where necessary.
    To validate that navigation between relevant modules in the application works as per the requirements.
    To validate that truncation errors are kept within an acceptable limit.
    To validate that the user receives an appropriate error message, such as "Network error. Please try after some time", whenever a network error occurs.
    To validate that the installed application allows other applications to perform satisfactorily and does not eat into their memory.
    To validate that the application resumes at its last operation after a hard reboot or system crash.
    To validate that the application installs smoothly, provided the user has the necessary resources, without any significant errors.
    To validate that the application's auto-start facility works according to the requirements.
    To validate that the application performs according to the requirements on all network generations, i.e. 2G, 3G and 4G.
    To perform regression testing to uncover new bugs in existing areas of the system after changes have been made, and to rerun previously performed tests to confirm that program behavior has not changed.
    To validate that the application provides a user guide for those who are not familiar with the app.
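
As an illustration, here is a minimal automated sketch for the first scenario above, checking that mandatory fields are enforced. The endpoint URL, field names and expected status code are hypothetical placeholders rather than details of any real application, so treat it as a pattern, not a ready-made test.

import pytest
import requests

# Hypothetical endpoint and form data, used purely for illustration.
BASE_URL = "https://example.com/api/register"
VALID_FORM = {"name": "Test User", "email": "test@example.com", "password": "S3cret!"}
MANDATORY_FIELDS = ["name", "email", "password"]

@pytest.mark.parametrize("missing_field", MANDATORY_FIELDS)
def test_mandatory_field_is_required(missing_field):
    # Submit the form with one mandatory field removed each time.
    form = {k: v for k, v in VALID_FORM.items() if k != missing_field}
    response = requests.post(BASE_URL, json=form, timeout=10)
    # A submission missing a mandatory field should be rejected, not accepted.
    assert response.status_code == 400, f"Submitting without '{missing_field}' should be rejected"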

Performance testing:

The fundamental objective of this type of testing is to ensure that the application performs acceptably under given performance conditions, such as access by a huge number of users or the loss of a key infrastructure component like a database server.

The general test scenarios for performance testing in a mobile application are listed below (a small load-test sketch follows the list):

    To determine whether the application performs as per the requirement under different load conditions.
    To determine whether the current network coverage is able to support the application at peak, average and minimum user levels.
    To determine whether the existing client-server configuration setup provides the required optimum performance level.
    To identify the various application and infrastructure bottlenecks that prevent the application from performing at the required acceptability levels.
    To validate that the response time of the application meets the requirements.
    To evaluate the product and/or hardware to determine whether it can handle projected load volumes.
    To evaluate whether the battery life can support the application under projected load volumes.
    To validate application performance when the network changes from 2G/3G to Wi-Fi or vice versa.
    To validate that the required CPU cycles are optimized.
    To validate that battery consumption, memory leaks, and the use of resources such as GPS and camera are well within the required guidelines.
    To validate application longevity under sustained heavy user load.
    To validate network performance while moving around with the device.
    To validate application performance when connectivity is only intermittent.
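
As a rough illustration of the load scenarios above, the sketch below fires concurrent requests at an endpoint and reports response-time statistics. The URL, the 20 simulated users and the request counts are assumptions for illustration; a real project would normally use a dedicated tool such as JMeter for this.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/api/items"   # hypothetical endpoint
CONCURRENT_USERS = 20                   # assumed load level
REQUESTS_PER_USER = 10

def one_user(_):
    # Each simulated user issues a series of requests and records timings.
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(URL, timeout=30)
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(one_user, range(CONCURRENT_USERS))
    all_timings = [t for user_timings in results for t in user_timings]
    print(f"median response time: {statistics.median(all_timings):.3f}s")
    print(f"95th percentile:      {statistics.quantiles(all_timings, n=20)[18]:.3f}s")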

Security testing:

The fundamental objective of security testing is to ensure that the application's data and networking security requirements are met as per guidelines.

The following are the most crucial areas to check when testing the security of mobile applications (a small example of one such check follows the list).

    To validate that the application is able to withstand any brute-force attack, which is an automated process of trial and error used to guess a person's username, password or credit-card number.
    To validate that the application does not permit an attacker to access sensitive content or functionality without proper authentication.
    To validate that the application has a strong password protection system and does not permit an attacker to obtain, change or recover another user's password.
    To validate that the application does not suffer from insufficient session expiration.
    To identify the dynamic dependencies and take measures to prevent an attacker from exploiting these vulnerabilities.
    To protect against SQL injection attacks.
    To identify and recover from any unmanaged-code scenarios.
    To ensure that certificates are validated and to check whether the application implements certificate pinning.
    To protect the application and the network from denial-of-service attacks.
    To analyze the data storage and data validation requirements.
    To enable session management to prevent unauthorized users from accessing unsolicited information.
    To check whether any cryptography code is broken and ensure that it is repaired.
    To validate that the business logic implementation is secure and not vulnerable to any outside attack.
    To analyze file system interactions, determine any vulnerabilities and correct these problems.
    To validate the protocol handlers, for example by trying to reconfigure the default landing page for the application using a malicious iframe.
    To protect against malicious client-side injections.
    To protect against malicious runtime injections.
    To investigate file caching and prevent any malicious exploitation of it.
    To prevent insecure data storage in the application's keyboard cache.
    To investigate cookies and prevent any malicious use of them.
    To provide regular audits for data protection analysis.
    To investigate custom-created files and prevent any malicious use of them.
    To prevent buffer overflows and memory corruption.
    To analyze the different data streams and prevent any vulnerabilities in them.
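
As one concrete example, the sketch below checks the brute-force scenario: repeated failed logins should trigger a lockout or throttling response. The endpoint, payload and the threshold of five attempts are hypothetical assumptions used only to show the shape of such a test.

import requests

LOGIN_URL = "https://example.com/api/login"   # hypothetical endpoint
MAX_ATTEMPTS = 5                              # assumed lockout threshold

def test_repeated_failed_logins_are_throttled():
    response = None
    for attempt in range(MAX_ATTEMPTS + 1):
        # Deliberately wrong passwords to simulate an automated guessing attack.
        response = requests.post(
            LOGIN_URL,
            json={"username": "victim", "password": f"wrong-guess-{attempt}"},
            timeout=10,
        )
    # Once the threshold is exceeded, the server should answer with
    # 423 (locked) or 429 (too many requests) rather than a plain 401.
    assert response.status_code in (423, 429), "Repeated failed logins were not throttled or locked out"
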
Usability testing:

Usability testing of a mobile application is performed to ensure that the result is a quick and easy-to-use application; an app with fewer features that is fast and simple is preferable to a slow and difficult one with many features. The main objective is to end up with an interface that is easy to use, intuitive and similar to widely used, industry-accepted interfaces.

    To ensure that the buttons have the required size and are suitable for large fingers.
    To ensure that the buttons are placed in the same section of the screen to avoid confusing end users.
    To ensure that the icons are natural and consistent with the application.
    To ensure that buttons which have the same function also have the same color.
    To ensure that tap-based zoom-in and zoom-out facilities are enabled and validated.
    To ensure that keyboard input can be minimized in an appropriate manner.
    To ensure that the application provides a method for going back or undoing an action, on touching the wrong item, within an acceptable duration.
    To ensure that the contextual menus are not overloaded, because they have to be used quickly.
    To ensure that the text is kept simple and clear enough to be visible to users.
    To ensure that sentences and paragraphs are short and readable for end users.
    To ensure that the font size is big enough to be readable, and neither too big nor too small.
    To validate that the application prompts the user whenever the user starts downloading a large amount of data which may hurt application performance.
    To validate that closing the application works from different states and to verify whether it re-opens in the same state.
    To ensure that all strings are converted into the appropriate languages whenever a language translation facility is available.
    To ensure that the application items are always synchronized with the user's actions.
    To ensure that end users who are not familiar with the application are provided with a user manual that helps them understand and operate it.

Usability testing is normally performed manually, since only human beings can judge how sensible and comfortable an application feels to other users. A few of the mechanical rules above (such as minimum button size) can nevertheless be spot-checked automatically, as sketched below.
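
The sketch below flags buttons smaller than a 44-point tap target. The element dump and the 44-point minimum are assumptions chosen for illustration; a real check would pull the element tree from the UI automation layer the project actually uses.

MIN_TAP_SIZE = 44  # assumed minimum touch-target size, in points

def undersized_buttons(elements):
    """Return the buttons whose tap target is smaller than the minimum size."""
    return [
        e for e in elements
        if e["type"] == "button"
        and (e["width"] < MIN_TAP_SIZE or e["height"] < MIN_TAP_SIZE)
    ]

# Hypothetical dump of on-screen controls for one screen of the app.
screen = [
    {"type": "button", "id": "submit", "width": 88, "height": 48},
    {"type": "button", "id": "back",   "width": 30, "height": 30},
    {"type": "label",  "id": "title",  "width": 200, "height": 20},
]

assert [e["id"] for e in undersized_buttons(screen)] == ["back"]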

Compatibility testing:

Compatibility testing is performed because mobile devices differ in size, resolution, screen, OS version and hardware; the application should therefore be tested across all target devices to ensure that it works as desired.

The following are the most prominent areas for compatibility testing (a parametrized example follows the list).

    To validate that the user interface of the application matches the screen size of the device, with no text or control partially invisible or inaccessible.
    To ensure that the text is readable on all supported devices.
    To ensure that call/alarm functionality works while the application is running: the application is minimized or suspended in the event of a call and resumes when the call ends.
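
Below is a sketch of running the same smoke check across several device profiles. The profile values and the create_driver() factory are placeholders for whatever emulator farm or device cloud (for example an Appium-based setup) the project actually uses.

import pytest

# Hypothetical device profiles; real ones come from the project's device matrix.
DEVICE_PROFILES = [
    {"name": "small-phone", "os": "Android 5.1", "screen": (480, 800)},
    {"name": "large-phone", "os": "Android 9",   "screen": (1080, 2340)},
    {"name": "tablet",      "os": "Android 8.1", "screen": (1200, 1920)},
]

def create_driver(profile):
    """Hypothetical factory: return a driver bound to the given device profile."""
    raise NotImplementedError("wire this up to your emulator farm or device cloud")

@pytest.mark.parametrize("profile", DEVICE_PROFILES, ids=lambda p: p["name"])
def test_login_screen_fits_on_device(profile):
    driver = create_driver(profile)
    driver.open("login")
    # No control should be clipped or pushed off-screen on any supported device.
    assert driver.find("login_button").is_fully_visible()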

Recoverability testing:

    Crash recovery and transaction interruptions
    Validation of effective application recovery after unexpected interruptions or crashes (a small example of such a check follows this list)
    Verification of how the application handles a transaction during a power failure (e.g. the battery dies or the device is suddenly shut down manually)
    Validation of the process by which, when a connection is suspended, the system re-establishes it and recovers the data directly affected by the suspension
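
A sketch of the interruption check: a transfer interrupted midway must leave the data either fully applied or fully rolled back. The app fixture and its begin_transfer, kill_connection, restart and account_balance hooks are hypothetical stand-ins for whatever test harness the project provides.

def test_interrupted_transfer_is_atomic(app):
    # Record the balances before the transaction starts.
    before_a = app.account_balance("A")
    before_b = app.account_balance("B")

    app.begin_transfer(src="A", dst="B", amount=100)
    app.kill_connection()   # simulate a network drop or power loss mid-transaction
    app.restart()           # bring the application back up

    after_a = app.account_balance("A")
    after_b = app.account_balance("B")
    # Either both balances moved by 100, or neither did; never a half-written state.
    assert (after_a, after_b) in {
        (before_a, before_b),
        (before_a - 100, before_b + 100),
    }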

Other Important Checks:

    Installation testing (whether the application can be installed in a reasonable amount of time and meets the required criteria)
    Uninstallation testing (whether the application can be uninstalled in a reasonable amount of time and meets the required criteria)
    Network test cases (validation of whether the network performs under the required load and can support all the necessary applications during the testing procedures)
    Check unmapped keys
    Check the application splash screen
    Continued keypad entry during interruptions and at other times, such as network issues
    Methods for exiting the application
    Charger effect while the application is running in the background
    Low battery and high performance demand
    Removal of the battery while the application is running
    Consumption of battery by the application (a measurement sketch follows this list)
    Check application side effects
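
For the battery-consumption check, one low-tech approach on Android is to read the battery level through adb before and after exercising the app. The 5% budget and the exercise_app() step are assumptions for illustration; adb shell dumpsys battery is the standard command for reading battery state.

import re
import subprocess

def battery_level():
    # Read the current battery level (0-100) from the connected Android device.
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "battery"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(re.search(r"level:\s*(\d+)", out).group(1))

def exercise_app():
    """Hypothetical placeholder: drive the application under test for a while."""
    raise NotImplementedError

def test_battery_drain_within_budget():
    before = battery_level()
    exercise_app()
    after = battery_level()
    assert before - after <= 5, f"App drained {before - after}% of battery"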

Source : Internet

47
Science and Information / ISTQB practice question
« on: March 29, 2014, 03:40:10 PM »
1) We split testing into distinct stages primarily because:
Each test stage has a different purpose.
It is easier to manage testing in stages.
We can run different tests in different environments.
The more stages we have, the better the testing.
2) Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?
Regression testing
Integration testing
System testing
User acceptance testing
3) Which of the following statements is NOT correct?
A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.
A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.
A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.
A minimal test set that achieves 100% statement coverage will generally detect more faults than one that achieves 100% branch coverage.
4) Analyze the following highly simplified procedure:

Ask: "What type of ticket do you require, single or return?"
IF the customer wants 'return'
Ask: "What rate, Standard or Cheap-day?"
IF the customer replies 'Cheap-day'
Say: "That will be $11:20"
ELSE
Say: "That will be $19:50"
ENDIF
ELSE
Say: "That will be $9:75"
ENDIF

Now decide the minimum number of tests that are needed to ensure that all the questions have been asked, all combinations have occurred and all replies have been given.

3
4
5
6
5) Error guessing:
supplements formal test design techniques.
can only be used in component, integration and system testing.
is only performed in user acceptance testing
is not repeatable and should not be used.
6) A Test Plan Outline contains which of the following?
i. Test Items
ii. Test Scripts
iii. Test Deliverables
iv. Responsibilities
i, ii, iii are true and iv is false
i, iii, iv are true and ii is false
ii, iii are true and i and iv are false
i, ii are false and iii, iv are true
7) Which of the following is NOT true of test coverage criteria?
Test coverage criteria can be measured in terms of items exercised by a test suite.
A measure of test coverage criteria is the percentage of user requirements covered
A measure of test coverage criteria is the percentage of faults found
Test coverage criteria are often used when specifying test completion criteria
8) In prioritizing what to test the most important objective is to:
find as many faults as possible
test high risk areas.
obtain good test coverage.
test whatever is easiest to test.
9) Given the following sets of test management terms (v-z), and activity descriptions (1-5), which one of the following best pairs the two sets?

v - test control
w - test monitoring
x - test estimation
y - incident management
z - configuration control
1 - calculation of required test resources
2 - maintenance of record of test results
3 - re-allocation of resources when tests overrun
4 - report on deviation from test plan
5 - tracking of anomalous test results

v-3,w-2,x-1,y-5,z-4
v-2,w-5,x-1,y-4,z-3
v-3,w-4,x-1,y-5,z-2
v-2,w-1,x-4,y-3,z-5
10) Which one of the following statements about system testing is NOT true?
System tests are often performed by independent teams.
Functional testing is used more than structural testing.
Faults found during system tests can be very expensive to fix.
End-users should be involved in system tests.
11) Which of the following is false?
Incidents should always be fixed.
An incident occurs when expected and actual results differ.
Incidents can be analyzed to assist in test process improvement.
An incident can be raised against documentation.
12) Enough testing has been performed when:
time runs out.
the required level of confidence has been achieved.
no more faults are found.
the users won't find any serious faults.
13) Which of the following is NOT true of incidents?
Incident resolution is the responsibility of the author of the software under test.
Incidents may be raised against user requirements.
Incidents require investigation and/or correction.
Incidents are raised when expected and actual results differ.
14) Which of the following is not described in a unit test standard?
syntax testing
equivalence partitioning
stress testing
decision coverage
15) Which of the following is false?
In a system two different failures may have different severities.
A system is necessarily more reliable after debugging for the removal of a fault.
A fault need not affect the reliability of a system.
Undetected errors may lead to faults and eventually to incorrect behavior.
16) Which one of the following statements, about capture-replay tools, is NOT correct?
They are used to support multi-user testing
They are used to capture and animate user requirements.
They are the most frequently purchased types of CAST tool.
They capture aspects of user behavior.
17) How would you estimate the amount of re-testing likely to be required?
a) Metrics from previous similar projects
b) Discussions with the development team
c) Time allocated for regression testing
d) a & b
18) Which of the following is true of the V-model?
It states that modules are tested against user requirements.
It only models the testing phase.
It specifies the test techniques to be used.
It includes the verification of designs.
19) The oracle assumption:
is that there is some existing system against which test output may be checked.
is that the tester can routinely identify the correct outcome of a test.
is that the tester knows everything about the software under test.
is that the tests are reviewed by experienced testers.
20) Which of the following characterizes the cost of faults?
They are cheapest to find in the early development phases and the most expensive to fix in the latest test phases.
They are easiest to find during system testing but the most expensive to fix then.
Faults are cheapest to find in the early development phases but the most expensive to fix then.
Although faults are most expensive to find during early development phases, they are cheapest to fix then.
21) Which of the following should NOT normally be an objective for a test?
To find faults in the software.
To assess whether the software is ready for release.
To demonstrate that the software doesn't work
To prove that the software is correct.
22) Which of the following is a form of functional testing?
Boundary value analysis
Usability testing
Performance testing
Security testing
23) Which of the following would NOT normally form part of a test plan?
Features to be tested
Incident reports
Risks
Schedule
24) Which of these activities provides the biggest potential cost saving from the use of CAST?
Test management
Test design
Test execution
Test planning
25) Which of the following is NOT a white box technique?
Statement testing
Path testing
Data flow testing
State transition testing
26) Data flow analysis studies
possible communications bottlenecks in a program.
the rate of change of data values as a program executes.
the use of data on paths through the code.
the intrinsic complexity of the code.
27) In a system designed to work out the tax to be paid:

An employee has $4000 of salary tax free. The next $1500 is taxed at 10%
The next $28000 is taxed at 22%
Any further amount is taxed at 40%
To the nearest whole dollar, which of these is a valid Boundary Value Analysis test case?

$1500
$32001
$33501
$28000
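
For reference, the sketch below encodes the tax bands from the question and prints the tax at the salary boundaries that boundary value analysis would target. It is an illustrative aid only, not part of the original question.

def tax(salary):
    # Tax bands from the question: $4000 free, next $1500 at 10%,
    # next $28000 at 22%, anything further at 40%.
    bands = [(4000, 0.0), (1500, 0.10), (28000, 0.22), (float("inf"), 0.40)]
    owed, remaining = 0.0, salary
    for width, rate in bands:
        portion = min(remaining, width)
        owed += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return owed

# Band edges fall at 4000, 5500 (= 4000 + 1500) and 33500 (= 5500 + 28000),
# so boundary value analysis targets salaries at or just beside those edges.
for salary in (4000, 4001, 5500, 5501, 33500, 33501):
    print(salary, tax(salary))
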
28) An important benefit of code inspections is that they:
enable the code to be tested before the execution environment is ready.
can be performed by the person who wrote the code.
can be performed by inexperienced staff.
are cheap to perform.
29) Which of the following is the best source of Expected Outcomes for User Acceptance Test scripts?
Actual results
Program specification
User requirements
System specification
30) What is the main difference between a walkthrough and an inspection?
An inspection is led by the author, whilst a walkthrough is led by a trained moderator
An inspection has a trained leader, whilst a walkthrough has no leader.
Authors are not present during inspections, whilst they are during walkthroughs
A walkthrough is led by the author, whilst an inspection is led by a trained moderator
31) Which one of the following describes the major benefit of verification early in the life cycle?
It allows the identification of changes in user requirements.
It facilitates timely set up of the test environment.
It reduces defect multiplication.
It allows testers to become involved early in the project.
32) Integration testing :
tests the individual components that have been developed.
tests interactions between modules or subsystems.
only uses components that form part of the live system.
tests interfaces to other systems.
33) Static analysis is best described as:
the analysis of batch programs
the reviewing of test plans.
the analysis of program code.
the use of black box testing
34) Alpha testing is
post-release testing by end user representatives at the developer's site
the first testing that is performed.
pre-release testing by end user representatives at the developer's site.
pre-release testing by end user representatives at their sites
35) A failure is:
found in the software; the result of an error.
departure from specified behavior.
an incorrect step, process or data definition in a computer program
a human action that produces an incorrect result
36) In a system designed to work out the tax to be paid:

An employee has $4000 of salary tax free. The next $1500 is taxed at 10%
The next $28000 is taxed at 22%
Any further amount is taxed at 40%
Which of these groups of numbers would fall into the same equivalence class?

$4800; $14000; $28000
$5200; $5500; $28000
$28001; $32000; $35000
$5800; $28000; $32000
37) The most important thing about early test design is that it:
makes test preparation easier.
means inspections are not required.
can prevent fault multiplication
will find all faults.
38) Which of the following statements about reviews is false?
Reviews cannot be performed on user requirements specifications
Reviews are the least effective way of testing code
Reviews are unlikely to find faults in test plans.
Reviews should be performed on specifications, code, and test plans
39) Test Implementation and execution has which of the following major tasks?

i. Developing and prioritizing test cases, creating test data, writing test procedures and optionally preparing the test harnesses and writing automated test scripts.
ii. Creating the test suite from the test cases for efficient test execution.
iii. Verifying that the test environment has been set up correctly.
iv. Determining the exit criteria.

i,ii,iii are true and iv is false
i,iii,iv are true and ii is false
i,ii are true and iii,iv are false
ii,iii,iv are true and i is false
40) A configuration management system would NOT normally provide:
linkage of customer requirements to version numbers
facility to compare test results with expected results
the precise differences in versions of software component source code
restricted access to the source code library

52
1. KAZ Software Limited
2. orbitax
3. CIMSOLUTIONS
4. enosis
5. codemate
6. Software People
7. genweb2
8. MEtatube
9. subrasystems
10. rightbrainsolution
11. samsung





56
Science and Information / STLC
« on: March 27, 2014, 07:51:56 PM »
What is STLC (Software Testing Life Cycle)?

The process of testing software in a well-planned and systematic way is known as the software testing life cycle (STLC).

Different organizations have different phases in their STLC; however, a generic software testing life cycle (STLC) for the waterfall development model consists of the following phases.

1. Requirements Analysis
2. Test Planning
3. Test Analysis
4. Test Design
5. Test Construction and Verification
6. Test Execution and Bug Reporting
7. Final Testing and Implementation
8. Post Implementation


1. Requirements Analysis

In this phase testers analyze the customer requirements and work with developers during the design phase to see which requirements are testable and how they are going to test those requirements.

It is very important to start testing activities in the requirements phase itself, because the cost of fixing a defect is much lower when it is found in the requirements phase than in later phases.

2. Test Planning

In this phase all the planning for testing is done: what needs to be tested, how the testing will be done, the test strategy to be followed, the test environment, the test methodologies to be used, hardware and software availability, resources, risks, etc. A high-level test plan document that includes all of these planning inputs is created and circulated to the stakeholders.

Usually the IEEE 829 test plan template is used for test planning.

3. Test Analysis

After the test planning phase is over, the test analysis phase starts. In this phase we dig deeper into the project and figure out what testing needs to be carried out in each SDLC phase.

Automation activities are also decided in this phase: whether automation needs to be done for the software product, how the automation will be done, how much time it will take, and which features need to be automated.

Non-functional testing areas (stress and performance testing) are also analyzed and defined in this phase.

4. Test Design

In this phase various black-box and white-box test design techniques are used to design the test cases. Testers start writing test cases by following those design techniques; if automation testing is planned, automation scripts also need to be written in this phase.

5. Test Construction and Verification

In this phase testers prepare more test cases, keeping in mind positive and negative scenarios, end-user scenarios, etc. All test cases and automation scripts need to be completed in this phase and reviewed by the stakeholders. The test plan document should also be finalized and verified by reviewers.

6. Test Execution and Bug Reporting

Once unit testing is done by the developers and the test team gets the test build, the test cases are executed and defects are reported in a bug-tracking tool. After test execution is complete and all defects have been reported, test execution reports are created and circulated to the project stakeholders.

After developers fix the bugs raised by testers, they give another build with the fixes to the testers. Testers then perform re-testing and regression testing to ensure that the defects have been fixed and have not affected any other areas of the software.

Testing is an iterative process, i.e. testing needs to be repeated after every defect fix.

Once the testers confirm that the defects have been fixed and no more critical defects remain in the software, the build is given for final testing.

7. Final Testing and Implementation

In this phase the final testing of the software is done; non-functional testing such as stress, load and performance testing is performed. The software is also verified in a production-like environment. Final test execution reports and documents are prepared in this phase.

8. Post Implementation

In this phase the test environment is cleaned up and restored to its default state, process review meetings are held, and lessons learned are documented. A document is prepared to help cope with similar problems in future releases.


Source : Internet

57
Science and Information / Software Performance Testing With Jmeter
« on: March 27, 2014, 07:47:36 PM »
The Apache JMeter™ desktop application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.
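
As a hedged example of wiring JMeter into an automated pipeline, the snippet below launches a test plan in non-GUI mode from Python. The plan and result file names are placeholders; the -n, -t and -l options are JMeter's standard flags for non-GUI mode, the test plan and the results log.

import subprocess

def run_jmeter_plan(plan="loadtest.jmx", results="results.jtl"):
    # Run JMeter headlessly: -n = non-GUI, -t = test plan, -l = results log.
    subprocess.run(["jmeter", "-n", "-t", plan, "-l", results], check=True)

if __name__ == "__main__":
    run_jmeter_plan()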

Practice links:
https://jmeter.apache.org/
http://www.tutorialspoint.com/jmeter/
http://blazemeter.com/blog/jmeter-tutorial-video-series


So let's start...

58
Topics:
1.   GUI Testing:
It is argued by many practitioners and researchers that the proliferation of GUIs poses new challenges for the software testing community. What, if any, are these challenges? See GUI Testing: Pitfalls and Process by Atif Memon, IEEE Computer, vol 35, 8.

2.   Usability Testing:
Usability refers to the extent to which any software product supports its users in carrying out their tasks efficiently and effectively. It is therefore an important element in determining the quality of the software product. Several questions therefore arise. Following are a few that you may want to investigate:
Note: See Jakob Nielsen's useit.com site for additional details on usability.
o   Developers should take account of usability early in the software development life cycle. How effectively can usability be addressed at the requirements elicitation phase?
o   The phrase "Fitness for Use" is often used to describe Software Quality. How does this notion relate to Usability Testing and how can it be used to justify the importance of addressing Usability throughout the software development lifecycle?
o   Acceptance Testing is the final phase of software development. Beta Testing is one aspect of Acceptance Testing. How can findings from Beta testing be used to encourage developers to take more account of usability early in the software development lifecycle?
o   The New Usability: It is argued by many practitioners and academics that the nature of emerging products and systems mean that traditional approaches to usability engineering and evaluation are likely to prove inappropriate to the needs of digital consumers. How so? See vol 9,2 (June 2002) of the ACM journal mentioned below for several related articles on this issue. Also, you may be interested in vol 6,4 (i.e. special issue on safety-critical interactive systems) and vol 7,3 (i.e. special issue on mobile systems).
The ACM Transactions on Computer-Human Interaction (TOCHI), is an excellent source for papers on Usability. It is available through DePaul electronic journal library.
Note: Use the journal name as the search string.

3.   Cleanroom Software Development:
A principal objective of the Cleanroom process is the development of software that exhibits zero failures in use. What is the Cleanroom process? See this technical report at the Carnegie Mellon site for details. Also, see IBM Systems Journal, vol 33,1 mentioned below for another Cleanroom paper (also authored by Richard Linger).

4.   Capability Maturity Model:
What is the Capability Maturity Model (CMM)? What does the CMM have to do with Software Quality Assurance and Software Testing? See the Carnegie Mellon SEI-CMM site for details.

5.   Software Quality Engineering:
What is Software Quality Engineering? How is it different from Software Engineering? See the American Society for Quality - Software Division site. Also see the CSQE - Body of Knowledge page.
You may also want to look into Software Quality from the perspective of Total Quality Management. The paper Software Quality by Vic Basili, IBM Systems Journal, vol 33,1 (1994) is an excellent source. It is available through DePaul electronic journal library.
Note: Use the journal name as the search string.

6.   Object Orientation & Testing:
It is argued by some practitioners and researchers that the Object Oriented development paradigm poses new challenges for the software testing community. What, if any, are these challenges? See the Object-Oriented Testing: Myth and Reality article by Robert Binder that first appeared in Object magazine in 1995 but has been revised as recently as 2001. Robert Binder is the author of Testing Object-Oriented Systems, Addison-Wesley (1999).

7.   Software Tool Implementation:
Consider the Basis Path Testing technique. Since this technique is based on graph theory, and given the ready availability of a number of graph algorithms, one could implement a variety of tools that may be useful to a tester. For example, one could easily implement a function that receives an adjacency matrix as an input data structure and determines the cyclomatic number of the graph. A slightly more involved implementation would involve using the depth-first search algorithm to determine the connected components of the graph. The number of connected components is required for the following formulation of the cyclomatic number expression:
V(G) = E - N + 2p

where p is the number of connected components. Note that the expression for cyclomatic number discussed in class (see week #4 lecture notes) is a special case of this expression. That is, where p=1. Of course, many other implementations could be attempted.
Note: See Chapter 7 of Richard Johnsonbaugh's Discrete Math text.
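
A small sketch of such a tool, assuming the graph is supplied as an adjacency matrix: it counts edges and nodes, finds the number of connected components p with a depth-first search (treating edges as undirected for that purpose), and returns V(G) = E - N + 2p.

def cyclomatic_number(adj):
    n = len(adj)                                                 # N: number of nodes
    edges = sum(adj[i][j] for i in range(n) for j in range(n))   # E: number of edges

    # Depth-first search over the undirected view of the graph to count components.
    seen = set()
    def dfs(start):
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(w for w in range(n) if adj[v][w] or adj[w][v])

    components = 0
    for v in range(n):
        if v not in seen:
            dfs(v)
            components += 1

    return edges - n + 2 * components

# Example: a simple if-then-else flow graph has 4 nodes, 4 edges and one
# component, so its cyclomatic number is 4 - 4 + 2 = 2.
if_then_else = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
assert cyclomatic_number(if_then_else) == 2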

8.   Testing Oracles:
An oracle is any program, process, or body of data that specifies the expected outcome of a set of test cases as applied to a tested software product. See Jim Bieman's paper Designing for Software Testability using automated Oracles that appeared in the Proc. International Test Conf., Sept. 1992.

9.   BeBugging/Mutation Testing:
BeBugging/Mutation Testing is a way of determining the effectiveness of testing. That is, it is a technique that may be used to determine the number of remaining bugs in a software artifact after testing/review. See the week #6 lecture notes for an overview. Also, see the following papers on this and related topics:
o   Interface Mutation: An Approach for Integration Testing by Marcio Delamaro et al., IEEE Transactions on Software Engineering, vol 27,3.
o   A Tutorial on Software Fault Injection by Jeffery Voas, IEEE Spectrum (2000)
o   Predicting Fault Detection Effectiveness by Joseph Morgan et al., Proceedings of the Fourth International Software Metrics Symposium (1997).
o   Residual Fault Density Prediction using Regression Methods by Joseph Morgan et al., Proceedings of the Seventh International Symposium on Software Reliability Engineering (1996).

10.   Concurrent Systems:
Testing concurrent systems poses challenges not faced by testers of sequential systems. Increasing interest in concurrent systems development is due in part to Java's built-in support for concurrent programming. Java allows multiple concurrent threads to run within a single Java program. This enables developers to design applications that are more responsive to user demands, faster, and more easily controlled. See the following papers:
o   Systematically deriving Partial Oracles for Testing Concurrent Programs by Chris Hunter et al., Proceedings of the 24th Australasian Computer Science Conference (2001)
o   Testing Concurrent Programs: A formal Evaluation of Coverage Criteria by Michael Factor et al., Proceedings of the Seventh Israeli Conference on Computer-Based Systems and Software Engineering (1996)

11.   Real-Time and Embedded Systems Testing:
Manufacturers have for several years incorporated embedded computers in so-called smart products such as DVD players, televisions, printers, scanners, and cellular phones. Using embedded computers in devices that previously relied on analog circuitry such as digital cameras, digital camcorders, digital personal recorders, Internet radios, and Internet telephones provides revolutionary performance and functionality that merely improving analog designs could not achieve. However, the challenges faced by software developers have increased dramatically as these devices proliferate. This is especially so for the software tester. See the article What is Embedded Computing by Wayne Wolf, IEEE Computer, Jan, 2002, to get a better idea of the field. Also, see the following papers to get an idea of testing issues:
o   Module Testing Embedded Software by Jason McDonald et al., Proceedings of the Seventh International Conference on Engineering of Complex Computer Systems (2001)
o   System for Automated Validation of Embedded Software by Sridevi Lingamarla et al., Proceedings of the 14th International Conference on Automated Software Engineering (1999)
Note: You may also want to see the ACM journal special issue on safety-critical interactive systems mentioned above.

12.   Wireless and Mobile Systems Testing:
Mobile and Wireless devices such as personal digital assistant (PDA) devices and mobile phones have become commonplace. The developer is faced with several issues. These include user interface design for small screens, memory management for low-memory devices, efficient programming techniques for limited processors, data synchronization for mobile databases, wireless programming and network programming. Many people argue that in addition to these challenges, consumers expect these devices to exhibit high levels of reliability and availability, and so this poses additional challenges for software testers. To learn more about mobile computing see the article Mobile Processors Begin To Grow Up by David Clark, IEEE Computer, March 2002. See the Embedded Systems references for papers. The testing issues are similar. Also, see the ACM journal mentioned above (particularly the new usability issue, vol 9,2 and the special issue on mobile systems, vol 7,3) for related papers.

13.   Agile Methods & Testing:
Agile methods, like extreme programming, seek to increase a software organization's responsiveness while decreasing development overhead. They focus on delivering executable code quickly and view people as the strongest ingredient of software development. What challenges does this approach to software development pose to Software Validation and Verification? See the article Extreme Programming: Rapid Development for Web-Based Applications by Frank Maurer et al., IEEE Internet Computing vol 6,1 for an overview of the topic.

59
IT Forum / Load Testing (Software Testing)
« on: March 26, 2014, 07:31:10 PM »

60
QTP Tutorial

HP QuickTest Professional (QTP) is an automated functional testing tool that helps testers perform automated regression testing in order to identify any gaps, errors or defects, i.e. results contrary to the desired results of the application under test.

This tutorial will give you an in-depth understanding of HP QuickTest Professional: how it is used, recording and playing back tests, the object repository, actions, checkpoints, sync points, debugging, test results, and other related terminology.

Audience
This tutorial is designed for software testing professionals who need to understand QTP in enough detail, with a simple overview and practical examples. It will give you enough ingredients to start with QTP, from where you can take yourself to a higher level of expertise.


Prerequisites
Before proceeding with this tutorial you should have a basic understanding of the software development life cycle (SDLC). A basic understanding of VBScript is also required; you can go through the basics of VBScript first.


USEFUL LINKS FOR QTP USERS:

http://www.tutorialspoint.com/qtp/qtp_quick_guide.htm

http://www.tutorialspoint.com/qtp/qtp_useful_resources.htm

http://www.tutorialspoint.com/qtp/qtp_pdf_version.htm


So let's start...
