In the software testing industry, there are 7 principles of testing. It is very important to learn these principles because they are the pillars of your testing efforts.

Principle 1: Testing shows presence of defects: Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness. In other words, one can never assume that the application is 100 percent bug-free, even after thorough testing.

As per this principle, testing is a process that shows defects are present in software. Defects are identified using different test execution techniques. At the same time, finding defects does not prove that no other defects remain in the system; in many cases, defects are found in software even after it has undergone rigorous testing. This principle is about reducing the number of defects: there is always a chance that the software has undiscovered defects, so testing should not be considered proof of defect-free software.

Principle 2: Exhaustive testing is impossible: Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts. For example, if we are testing a text box that accepts numbers between 0 and 100, we would test the boundary values, one less than the lower boundary, one more than the upper boundary, a few random numbers and a middle number, and assume that if it works for these values it will work for the rest. We do not test every number from 0 to 100.

If we talk about this principle, it says it is not possible to test software completely: testing all combinations of inputs and outputs, and all possible scenarios, is simply not feasible. Then you must be thinking, how will we test the complete software? Instead of performing complete or exhaustive testing, we go for risk-based testing: identifying the impact helps us find the modules that are at high risk.

Don’t think too much about it; at this moment it is enough for you to know that exhaustive testing is not possible. Later on you will come to know why.
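The boundary-value idea from the 0–100 text box example can be sketched as a small test. The `isValid` method below is a hypothetical stand-in for the text box's validation logic, just to show which inputs are worth checking:

```java
import java.util.Arrays;
import java.util.List;

public class BoundaryValueDemo {
    // Hypothetical validator under test: accepts integers from 0 to 100 inclusive
    static boolean isValid(int n) {
        return n >= 0 && n <= 100;
    }

    public static void main(String[] args) {
        // Instead of all 101 values, test the boundaries, their neighbours, and a middle value
        List<Integer> shouldPass = Arrays.asList(0, 1, 50, 99, 100);
        List<Integer> shouldFail = Arrays.asList(-1, 101);

        for (int n : shouldPass) {
            if (!isValid(n)) throw new AssertionError("expected valid: " + n);
        }
        for (int n : shouldFail) {
            if (isValid(n)) throw new AssertionError("expected invalid: " + n);
        }
        System.out.println("boundary tests passed");
    }
}
```

Seven checks instead of more than a hundred, on the assumption that values between the boundaries behave the same way.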

Principle 3: Early testing: To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives. If the testing team is involved right from the beginning of the requirement gathering and analysis phase, it has a better understanding of and insight into the product; moreover, the cost of quality will be much less if defects are found as early as possible rather than later in the development life cycle.

This principle asks us to start testing in the early stages of the software development life cycle. Starting the testing activity early helps us identify defects and fix them early, at low cost and within the assigned time period. It allows us to hand over the ordered software on time with the expected quality.

Principle 4: Defect clustering: Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. The Pareto principle of 80:20 works here: 80 percent of defects are due to 20 percent of the code! This information can be very helpful while testing: if we find one defect in a particular module or area, there is a pretty high chance of finding many more in the same place.

Usually, most defects in software lie within a limited set of areas. If you successfully identify these areas, it becomes quite a simple task to bring those sensitive areas under the focus of testing. This is considered one of the most efficient ways to direct testing effort.

Principle 5: Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer be able to find any new bugs. To overcome this “Pesticide Paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

If you use the same set of test cases repeatedly, after some time those test cases stop finding new defects. The effectiveness of test cases starts degrading after a few rounds of execution, so it is always recommended to review and revise the test cases at regular intervals in order to find new defects. New scenarios and test cases can be added even after a particular test set has been executed.

Principle 6: Testing is context dependent: Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site. Very true: testing effort should be based on what is to be tested, and the testing focus will depend on what is most important for that type of application.

According to this principle, if you are testing a web application and a mobile application using the same testing strategy, then that is wrong. The testing approach should differ depending on the application: the strategy for testing a web application will be different from that for an Android mobile app.

Principle 7: Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user's needs and expectations, then finding and fixing defects does not help. As said, if the product does not meet the user's requirements, both explicitly stated and implicitly implied, that is, if it is not fit for use, there is no point in testing it, finding defects and fixing them.

This principle points towards the usefulness of the system. In other words, finding and fixing defects will not help the user unless the software is developed according to the requirements.




Install JDK on OL

    • Download JDK

curl -v -j -k -L -H "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.rpm > jdk-8u141-linux-x64.rpm

    • Installation

su - root
chmod +x jdk-8u141-linux-x64.rpm
rpm -ivh jdk-8u141-linux-x64.rpm

logback in java

These are notes about logback in Java:

  1. Download logback : http://logback.qos.ch/download.html
  2. Download Slf4j : http://www.slf4j.org/download.html
  3. Add into our project library.
  4. Create a logback.xml file for configuration (http://logback.qos.ch/manual/configuration.html); as per the documentation:


Logback can be configured either programmatically or with a configuration script expressed in XML or Groovy format. By the way, existing log4j users can convert their log4j.properties files to logback.xml using our PropertiesTranslator web-application.

Let us begin by discussing the initialization steps that logback follows to try to configure itself:

  1. Logback tries to find a file called logback.groovy in the classpath.
  2. If no such file is found, logback tries to find a file called logback-test.xml in the classpath.
  3. If no such file is found, it checks for the file logback.xml in the classpath.
  4. If no such file is found, the service-provider loading facility (introduced in JDK 1.6) is used to resolve the implementation of the ch.qos.logback.classic.spi.Configurator interface by looking up the file META-INF\services\ch.qos.logback.classic.spi.Configurator in the class path. Its contents should specify the fully qualified class name of the desired Configurator implementation.
  5. If none of the above succeeds, logback configures itself automatically using the BasicConfigurator which will cause logging output to be directed to the console.

Sample logback.xml file

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <logger name="com.test" level="ERROR"/>

  <root level="info">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>


5. Sample usage in the code

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackDemo {
    private static final Logger logger = LoggerFactory.getLogger("com.test");

    public static void main(String[] args) {
        logger.info("Logged as Info");
    }
}

Decompile APK

Sorry macha 🙂, I have no option, I have to break your code.

  1. Rename the APK file to a ZIP file, extract it, and get classes.dex.
  2. Download dex2jar : https://github.com/pxb1988/dex2jar
  3. Convert the dex file to a jar by running the command: d2j-dex2jar.bat classes.dex (if you get an error like UnsupportedClassVersionError, check your JDK version; in my case it had to be JDK 1.7)
  4. Download jd : http://jd.benow.ca/
  5. Open the jar file using jd.
  6. Done

This time, the APK was obfuscated, hence more effort is needed to read the source code. But at least it is human-readable now :).

Thanks to the dex2jar and jd developers, you have made my week.

Generate gmail SSL certificate and add into java keystore

First, download the OpenSSL application, then execute the following commands:

openssl s_client -connect smtp.gmail.com:465;
openssl s_client -connect imap.gmail.com:993;

Each of the commands will return a certificate like the one below:


Copy it and save it with a file name like smtp_gmail.cert for the SMTP certificate; do the same for IMAP.

Then execute the command below (do the same for IMAP):

keytool -import -alias smtp.gmail.com -keystore "C:\Users\OSB\Documents\gmail-keystroke.jks" -file C:\Users\OSB\Documents\smtp_gmail.cert
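After the import, the keystore can be sanity-checked from Java by loading it and listing its aliases. A minimal sketch, assuming the keystore file sits in the working directory and was created with the placeholder password "changeit" (use whatever password you actually gave keytool):

```java
import java.io.File;
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class KeystoreCheck {
    public static void main(String[] args) throws Exception {
        // Relative path for illustration; the keytool command above used
        // C:\Users\OSB\Documents\gmail-keystroke.jks
        String path = "gmail-keystroke.jks";
        char[] password = "changeit".toCharArray(); // placeholder: use your keytool password

        File f = new File(path);
        if (!f.exists()) {
            System.out.println("keystore not found: " + path);
            return;
        }

        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(f)) {
            ks.load(in, password);
        }
        // Expect aliases like "smtp.gmail.com" and "imap.gmail.com"
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + " -> " + ks.getCertificate(alias).getType());
        }
    }
}
```

To make the JVM actually use this keystore as its trust store (for example when sending mail over SSL), pass -Djavax.net.ssl.trustStore=gmail-keystroke.jks and -Djavax.net.ssl.trustStorePassword=... on the command line.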