developerWorks : Linux

Using Bash shell scripts for function testing
Save time and effort while getting your app ship-shape

Angel Rivera (rivera@us.ibm.com)
Software Engineer, VisualAge TeamConnection, IBM
March 2001

Function testing is a critical part of software development -- and Bash, which is already loaded in Linux and ready to go, can help you do it quickly and easily. In this article, Angel Rivera explains how to use Bash shell scripts to perform function testing of Linux applications that use line commands. The scripts rely on the return code of the line commands, so you will not be able to use this approach for GUI applications.

Function testing is the phase during a development cycle in which the software application is tested to ensure that the functionality is working as desired and that any errors in the code are properly handled. It is usually done after the unit testing of individual modules, and before a more thorough system test of the entire product under load/stress conditions.

There are many testing tools in the marketplace that offer a lot of functionality to help with the testing efforts. However, they need to be obtained, installed, and configured, which could take up valuable time and effort. Bash can help to speed things along.

The advantages of using Bash shell scripts for function testing are:

  • The Bash shell is already installed and configured in your Linux system. You do not have to spend time in getting it ready.
  • You can create and modify the Bash shell scripts using text editors already provided by Linux, such as vi. You do not need to acquire specialized tools to create the test cases.
  • If you already know how to develop Bourne or Korn shell scripts, then you already know enough to start working with Bash shell scripts. Your learning curve is greatly diminished.
  • The Bash shell provides plenty of programming constructs to develop scripts that have a range from very simple to medium complexity.

Recommendations when porting scripts from Korn to Bash
If you have existing Korn shell scripts that you want to port to Bash, you need to take into account the following:

  • The Korn "print" command is not available in Bash; use the "echo" command instead.
  • You will need to change the first line of the script from:
    #!/usr/bin/ksh
    to:
    #!/bin/bash
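
The shebang change, at least, can be scripted. Below is a minimal sketch (the ksh2bash helper name and the /tmp paths are illustrative, not from the article); note that a real port of "print" usage still needs manual review, because Korn's print takes flags such as -n and -r that plain echo does not handle the same way:

```shell
#!/bin/bash
# Illustrative helper: rewrite the shebang line of a Korn shell
# script and convert simple "print" statements to "echo".
# Flagged forms such as "print -n" still need manual attention.
ksh2bash() {
  sed -e '1s|^#!/usr/bin/ksh|#!/bin/bash|' \
      -e 's/^[[:space:]]*print /echo /' "$1"
}

# Example run against a tiny throwaway ksh script:
printf '#!/usr/bin/ksh\nprint "hello"\n' > /tmp/sample.ksh
ksh2bash /tmp/sample.ksh > /tmp/sample.bash
head -1 /tmp/sample.bash
```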

Creating Bash shell scripts for function testing
These basic steps and recommendations can be applied to many client/server applications that run in Linux.

  1. Document the prerequisites and main sequence for running scripts
  2. Divide actions into logical groups
  3. Develop an execution sequence based on a common usage scenario
  4. Provide comments and instructions in each shell script
  5. Make an initial backup to create a baseline
  6. Check for input parameters and environment variables
  7. Try to provide "usage" feedback
  8. Try to provide a "silent" running mode
  9. Provide one function to terminate the script when there are errors
  10. When possible, provide functions that do a single task well
  11. Capture the output of each script, while watching the output being produced
  12. Inside each script, capture the return code of each line command
  13. Keep a count of the failed transactions
  14. Highlight the error messages for easy identification in the output file
  15. When possible, generate files "on the fly"
  16. Provide feedback on the progress of the execution of the script
  17. Provide a summary of the execution of the script
  18. Try to provide an output file that is easy to interpret
  19. When possible, provide cleanup scripts and a way to return to the baseline

Each recommendation is detailed below along with a Bash shell script for illustration. To download this script, see the Resources section later in this article.

1. Document the prerequisites and main sequence for running scripts
It is important to document, preferably in a single file with a self-describing title (such as "README-testing.txt"), the main ideas behind the function testing, including the prerequisites, the setup for the server and the client, the overall (or detailed) sequence of the scripts to follow, how to check for the success or failure of the scripts, how to perform the cleanup, and how to restart the testing.

2. Divide the actions into logical groups
If you have a very small list of actions to be performed, then you could put them all in a single shell script.

However, if you have a large list of actions, it is good to group them into logical sets, such as the server actions in one file and the client actions in another. This way, you will have finer granularity to perform the testing and to maintain the test cases.

3. Develop an execution sequence based on a common usage scenario
Once you have decided on the grouping of the actions, you need to think of performing the actions in a sequence that follows a common usage scenario. The idea is to simulate a real-life end-user situation. As a general rule, try to focus on the 20% of usage cases that test about 80% of the most commonly invoked functions.

For example, let’s assume that the application requires three test groups in a specific sequence. Each test group could be in a file, with a self-describing filename (where possible), and a number that helps to indicate the order of each file in the sequence, such as:


1. fvt-setup-1:  To perform initial setup.
2. fvt-server-2: To perform server commands.
3. fvt-client-3: To perform client commands.
4. fvt-cleanup:  To clean up the temporary files, in order to
                 prepare for the repetition of the above test cases.

4. Provide comments and instructions in each shell script
It is good coding practice to provide pertinent comments and instructions in the header of each shell script. That way, when another tester is assigned to run the scripts, the tester will get a good idea of the scope of the testing done in each script, as well as any prerequisites and warnings.

An example is shown below, from the sample Bash script "test-bucket-1".

#!/bin/bash
#
# Name: test-bucket-1
#
# Purpose:
#    Performs the test-bucket number 1 for Product X.
#    (Actually, this is a sample shell script, 
#     which invokes some system commands 
#     to illustrate how to construct a Bash script) 
#
# Notes:
# 1) The environment variable TEST_VAR must be set 
#    (as an example).
# 2) To invoke this shell script and redirect standard 
#    output and standard error to a file (such as 
#    test-bucket-1.out) do the following (the -s flag 
#    is "silent mode" to avoid prompts to the user):
#
#    ./test-bucket-1  -s  2>&1  | tee test-bucket-1.out
#
# Return codes:
#  0 = All commands were successful
#  1 = At least one command failed, see the output file 
#      and search for the keyword "ERROR".
#
########################################################

5. Make an initial backup to create a baseline
You may need to perform the function testing several times. The first time you run it, you will likely find some errors in your scripts or in the procedures. Therefore, to avoid wasting too much time in recreating the server environment from scratch -- especially if a database is involved -- you may want to make a backup just before starting with the testing.

After you run the function test cases, then you could restore the server from the backup, and you would be ready for the next round of testing.

6. Check for input parameters and environment variables
It is a good idea to validate the input parameters and to check if the necessary environment variables are properly set. If there are problems, display the reason for the problem and how to fix it, and terminate the script.

The tester who runs the script will generally appreciate it if the script terminates shortly after being invoked when a variable is not correct. No one likes to wait through a long script execution only to find out that a variable was not properly set.


# --------------------------------------------
# Main routine for performing the test bucket
# --------------------------------------------

CALLER=`basename $0`         # The Caller name
SILENT="no"                  # User wants prompts
let "errorCounter = 0"

# ----------------------------------
# Handle keyword parameters (flags).
# ----------------------------------

# For more sophisticated usage of getopt in Linux, 
# see the samples file: /usr/lib/getopt/parse.bash

TEMP=`getopt hs $*`
if [ $? != 0 ]
then
 echo "$CALLER: Unknown flag(s)"
 usage
fi

# Note quotes around `$TEMP': they are essential! 
eval set -- "$TEMP"

while true                   
 do
  case "$1" in
   -h) usage "HELP";    shift;; # Help requested
   -s) SILENT="yes";    shift;; # Prompt not needed
   --) shift ; break ;; 
   *) echo "Internal error!" ; exit 1 ;;
  esac
 done

# ------------------------------------------------
# The following environment variables must be set
# ------------------------------------------------

if [ -z "$TEST_VAR" ]
then
  echo "Environment variable TEST_VAR is not set."
  usage
fi

Note the following about this script:

  • The statement CALLER=`basename $0` is used to get the name of the script being run. In that way, you do not need to hard-code the script name in the script. Thus, when you make a copy of the script, it will take less work to adapt the newly derived script.
  • The statement TEMP=`getopt hs $*` is used to get the input arguments when the script is invoked (such as the -h for help and -s for silent mode).
  • The test [ -z "$TEST_VAR" ] checks whether the string is null (-z); if it is, the script echoes a message saying the variable is not set and invokes the "usage" function discussed below.
  • If your script does not use flags, then you can use the variable "$#", which returns the number of arguments that are being passed to the script.
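
For a script that takes positional arguments rather than flags, "$#" can gate execution up front. A minimal sketch (the check_args name and the expected count of 2 are illustrative):

```shell
#!/bin/bash
# Illustrative check: insist on exactly 2 positional arguments
# before doing any real work.
check_args() {
  if [ $# -ne 2 ]
  then
    echo "USAGE: expected 2 arguments, got $#"
    return 1
  fi
}

check_args serverA clientB && echo "arguments OK"
check_args serverA         || echo "rejected: wrong argument count"
```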

7. Try to provide "usage" feedback
It is a good idea to provide a "usage" statement that explains how to use the script:


# ----------------------------
# Subroutine to echo the usage
# ----------------------------

usage()
{
 echo "USAGE: $CALLER [-h] [-s]"
 echo "WHERE: -h = help       "
 echo "       -s = silent (no prompts)"
 echo "PREREQUISITES:"
 echo "* The environment variable TEST_VAR must be set,"
 echo "* such as: "
 echo "   export TEST_VAR=1"
 echo "$CALLER: exiting now with rc=1."
 exit 1
}

This "usage" statement can be called when the script is invoked with the "-h" flag, such as:
./test-bucket-1 -h

8. Try to provide a "silent" running mode
You may want a script to have two running modes:

  • A "verbose" mode (you might want this as the default) in which the user is prompted to enter a value or to simply press Enter to continue.
  • A "silent" mode, in which the user is not prompted for data.

The following excerpt illustrates the handling of the invocation flag "-s" to run the script in silent mode:


# -------------------------------------------------
# Everything seems OK, prompt for confirmation
# -------------------------------------------------

if [ "$SILENT" = "yes" ]
then
 RESPONSE="y"
else
 echo "The $CALLER will be performed."
 echo "Do you wish to proceed [y or n]? "
 read RESPONSE                  # Wait for response
 [ -z "$RESPONSE" ] && RESPONSE="n"
fi

case "$RESPONSE" in
 [yY]|[yY][eE]|[yY][eE][sS])
 ;;
 *)
  echo "$CALLER terminated with rc=1."
  exit 1
 ;;
esac

9. Provide one function to terminate the script when there are errors
It is a good idea to provide a central function to terminate the execution of the script when critical errors are encountered. This function could provide additional instructions on what to do in such situations:


# ----------------------------------
# Subroutine to terminate abnormally
# ----------------------------------

terminate()
{
 echo "The execution of $CALLER was not successful."
 echo "$CALLER terminated, exiting now with rc=1."
 dateTest=`date`
 echo "End of testing at: $dateTest"
 echo ""
 exit 1
}

10. When possible, provide functions that perform a simple task well
For example, instead of issuing a big list of long line commands, such as:


# --------------------------------------------------
echo ""
echo "Creating Access lists..."
# --------------------------------------------------

 Access -create -component Development -login ted -authority plead -verbose
 if [ $? -ne 0 ]
 then
  echo "ERROR found in Access -create -component Development -login ted 
    -authority plead"
  let "errorCounter = errorCounter + 1"
 fi

 Access -create -component Development -login pat -authority general -verbose
 if [ $? -ne 0 ]
 then
  echo "ERROR found in Access -create -component Development -login pat 
    -authority general"
  let "errorCounter = errorCounter + 1"
 fi

 Access -create -component Development -login jim -authority general -verbose
 if [ $? -ne 0 ]
 then
  echo "ERROR found in Access -create -component Development -login jim 
    -authority general"
  let "errorCounter = errorCounter + 1"
 fi

... you could create a function such as the following, which also handles the return code and, if needed, increases the error counter:


CreateAccess()
{
 Access -create -component $1 -login $2 -authority $3 -verbose
 if [ $? -ne 0 ]
 then
  echo "ERROR found in Access -create -component $1 -login $2 -authority $3"
  let "errorCounter = errorCounter + 1"
 fi
}

... and then invoke this function in a manner that is easy to read and to expand:


# ------------------------------------------- 
echo ""
echo "Creating Access lists..."
# ------------------------------------------- 

CreateAccess Development ted    projectlead
CreateAccess Development pat    general
CreateAccess Development jim    general

11. Capture the output of each script, while displaying the output being produced
If the script does not automatically send the output to a file, you can exploit some features of the Bash shell to capture the output of the execution of the script, such as:


./test-bucket-1  -s  2>&1  | tee test-bucket-1.out

Let’s analyze the above command:

  • The "2>&1" redirection:

    Using "2>&1", we redirect the standard error to standard output. The numbers are UNIX/Linux file descriptors: 2 for standard error and 1 for standard output. If you do not use this redirection, then you will capture only the good messages; the error messages will be lost.

  • The pipe "|" and the "tee" command:

    There is a good analogy between the UNIX/Linux processes and simple plumbing concepts. In this case, we want to make a pipeline in which the input to the pipeline is the output of the desired script. The next thing to decide is what to do with the output of the pipeline. In this case, we want to capture it in an output file, named "test-bucket-1.out" in our example.

    However, besides capturing the output, we also want to watch it as it is being produced while the script is running. To this end, we attach a "tee" (a T-shaped pipe fitting) that does two things at the same time: it places the output into a file AND displays the output on the screen. The plumbing analogy would be:

    
     process --> T ---> output file
                 |
                 V
               screen
    

    If you only want to capture the output and you do not want to see it displayed on the screen, then you can omit the extra plumbing: ./test-bucket-1 -s > test-bucket-1.out 2>&1 (note that the file redirection must come before "2>&1"; written the other way around, the error messages would still go to the screen instead of the file).

    The plumbing analogy in this case would be:

    process --> output file
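
A quick way to see that "2>&1" really carries the error messages into the pipe (the /tmp file name and the nonexistent path are illustrative):

```shell
#!/bin/bash
# "ls" of a path that does not exist writes its complaint to standard
# error. With "2>&1" that complaint travels down the pipe into tee;
# without it, the output file would miss the error line entirely.
ls /nonexistent-dir-for-demo 2>&1 | tee /tmp/demo.out >/dev/null

grep -c "nonexistent-dir-for-demo" /tmp/demo.out
```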

12. Inside each script, capture the return code of each line command
One way to determine the success or failure of the function testing is by counting the line commands that have failed, that is, that have a return code different than 0. The variable "$?" provides the return code of the command recently invoked; in the example below, it provides the return code of the execution of the "ls" command.


# -------------------------------------------
# The commands are called in a subroutine 
# so that return code can be
# checked for possible errors.
# -------------------------------------------
ListFile() 
{ 
  echo "ls -al $1" 
  ls -al $1 
  if [ $? -ne 0 ] 
  then 
     echo "ERROR found in: ls -al $1" 
     let "errorCounter = errorCounter + 1" 
  fi 
} 

13. Keep track of the number of failed transactions
One way to determine success or failure in function testing is to count the line commands that return a value other than 0. However, in my own scripts I am accustomed to handling only strings, not integers, and the manuals I consulted were not too clear on integer handling. So I want to expand a little here on how to use integers and addition to count the number of errors (failures of line commands).

First, you need to initialize the counter variable as follows:



let "errorCounter = 0"

Then, issue the line command and capture the return code using the $? variable. If the return code is different than 0, increment the counter by one (see the "let" statement inside the "if" block below):


ListFile()
{
 echo "ls -al $1"
 ls -al $1
 if [ $? -ne 0 ]
 then
  echo "ERROR found in: ls -al $1"
  let "errorCounter = errorCounter + 1"
 fi
}

By the way, integer variables can be displayed with "echo" just like string variables.
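
The counter arithmetic in isolation, as a minimal sketch:

```shell
#!/bin/bash
# Initialize, increment, and display an integer counter with "let".
let "errorCounter = 0"
let "errorCounter = errorCounter + 1"   # one failed command
let "errorCounter = errorCounter + 2"   # two more failures
echo "errorCounter is now: $errorCounter"
# The POSIX form errorCounter=$((errorCounter + 1)) is equivalent.
```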

14. Highlight the error messages for easy identification in the output file
When an error (or failed transaction) is encountered, besides increasing the error counter, it is a good idea to print an indication that there was an error. Ideally, the string should contain a substring such as ERROR or something similar (see the echo statement in the function below), which allows the tester to quickly find the error in the output file. This output file could be large, and it is important to be able to locate errors quickly.


ListFile()
{
 echo "ls -al $1"
 ls -al $1
 if [ $? -ne 0 ]
 then
  echo "ERROR found in: ls -al $1"
  let "errorCounter = errorCounter + 1"
 fi
}

15. When possible, generate files "on the fly"
In some cases it is necessary to handle files that will be used by the application. You could use existing files or you could add statements in the script to create them. If the files to be used are long, then it is better to have them as separate entities. If the files are small and the contents simple or not relevant (the important point is to have a text file, regardless of its contents), then you could decide to create these temporary files "on the fly".

The following lines of code show an example of how a temporary file is created "on the fly":


cd $HOME/fvt

echo "Creating file softtar.c"

echo "Subject: This is softtar.c" >  softtar.c
echo "This is line 2 of the file" >> softtar.c

The first echo statement uses the single > to force the creation of a new file (or to overwrite an existing one). The second echo statement uses the double >> to append data to the bottom of an existing file; if the file does not exist, >> creates it.
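
An alternative worth knowing: a here-document writes the whole file in one statement instead of a chain of echo redirections (a minimal sketch; the /tmp path is illustrative):

```shell
#!/bin/bash
# A here-document creates the multi-line file in one statement.
# The quoted 'EOF' delimiter prevents variable expansion in the body.
cat > /tmp/softtar.c << 'EOF'
Subject: This is softtar.c
This is line 2 of the file
EOF

wc -l < /tmp/softtar.c
```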

16. Provide feedback on the progress of the execution of the script
It is a good idea to include echo statements in the script to indicate the logical progress of its execution. You can add something that will quickly identify the purpose of the output.

If the script is going to take more than a few seconds to execute, you may want to print the date at the beginning and at the end of the execution of the script. This will allow you to compute the elapsed time.

In the sample script, some echo statements that provide the indication of the progress are shown:



# --------------------------------------------
echo "Subject: Product X, FVT testing"
dateTest=`date`
echo "Begin testing at: $dateTest"
echo ""
echo "Testcase: $CALLER"
echo ""
# --------------------------------------------

# --------------------------------------------
echo ""
echo "Listing files..."
# --------------------------------------------

# The following file should be listed:
ListFile   $HOME/.profile

...

# --------------------------------------------
echo ""
echo "Creating file 1"
# --------------------------------------------
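
The elapsed time can also be computed by the script itself instead of leaving the subtraction of the two date strings to the reader; a minimal sketch using epoch seconds:

```shell
#!/bin/bash
# Record epoch seconds at start and end, then report the difference.
startSeconds=`date +%s`
sleep 1                        # stand-in for the real test commands
endSeconds=`date +%s`
echo "Elapsed time: $((endSeconds - startSeconds)) seconds"
```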

17. Provide a summary of the execution of the script
If you are counting the errors or failed transactions, it is good to indicate whether there were errors. The idea is that the tester could see the bottom of the output file and quickly tell if there were errors or not.

In the following sample script, the code statements provide such a summary of the execution:


# --------------
# Exit
# --------------
if [ $errorCounter -ne 0 ]
then
 echo ""
 echo "*** $errorCounter ERRORS found during ***"
 echo "*** the execution of this test case.  ***"
 terminate
else
 echo ""
 echo "*** Yeah! No errors were found during ***"
 echo "*** the execution of this test case. Yeah! ***"
fi

echo ""
echo "$CALLER complete."
echo ""
dateTest=`date`
echo "End of testing at: $dateTest"
echo ""

exit 0

# end of file

18. Try to provide an output file that is easy to interpret
It is very helpful to provide some key information in the actual output that is generated by the script. In that way, the tester could easily determine if the file that is being viewed is relevant and current. The addition of the date-time stamp is important to give a sense of currency. Also, the summary report helps to determine whether there were errors; if there were errors, then the tester will have to search for the specified keyword, such as ERROR, and identify the individual transactions that failed.

A truncated sample output file is shown below:


Subject: CMVC 2.3.1, FVT testing, Common, Part 1 
Begin testing at: Tue Apr 18 12:50:55 EDT 2000   
                                                 
Database: DB2                                    
Family:   cmpc3db2                               
Testcase: fvt-common-1                           
                                                 
                                                 
Creating Users...                                
User pat was created successfully.               
...

Well done! No errors were found during the 
execution of this test case :)
                                                                           
fvt-common-1 complete.                                                      
                                                                            
End of testing at: Tue Apr 18 12:56:33 EDT 2000

An example of the bottom of the output file when errors are encountered is shown below:


ERROR found in Report -view DefectView

*** 1 ERRORS found during the execution of this test case. ***           
The populate action for the CMVC family was not successful.               
Recreating the family may be necessary before 
running fvt-client-3 again, that is, you must use 'rmdb', 
'rmfamily', 'mkfamily' and 'mkdb -d',       
then issue: fvt-common-1 and optionally, fvt-server-2.                    
fvt-client-3 terminated, exiting now with rc=1.                           
End of testing at: Wed Jan 24 17:06:06 EST 2001

19. When possible, provide cleanup scripts and a way to return to the baseline
The test scripts may generate temporary files; in that case, it is a good practice to have a script that will delete those temporary files. This will avoid mistakes in which the tester may not delete all the temporary files, or worse, delete some needed files that were not temporary.
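
A minimal cleanup sketch in that spirit (fvt-cleanup in the naming scheme above; the cleanup_fvt name, directory, and file list are illustrative), removing only the files the test scripts are known to create:

```shell
#!/bin/bash
# Remove only the temporary files that the test scripts are known
# to create, leaving everything else in the directory untouched.
cleanup_fvt() {
  for f in softtar.c test-bucket-1.out
  do
    if [ -f "$1/$f" ]
    then
      echo "Removing $1/$f"
      rm "$1/$f"
    fi
  done
}

# Example run against a throwaway directory:
mkdir -p /tmp/fvt-demo
touch /tmp/fvt-demo/softtar.c /tmp/fvt-demo/keep-me.txt
cleanup_fvt /tmp/fvt-demo
ls /tmp/fvt-demo
```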

Running the function-testing Bash shell script
This section describes how to run the Bash shell scripts for function testing. I’m assuming that you’ve executed all the steps in the previous sections.

Setting up required environment variables
Specify the following environment variable, either in your .profile or manually before running the scripts. It is used to illustrate how a script can verify that required environment variables are defined before it proceeds.


   export TEST_VAR=1

Copying the Bash shell scripts into the proper directory
The Bash shell scripts and the associated files need to be copied into the directory structure of the user id who is going to conduct the function testing.

  1. Log into the account. You should be in the home directory. Let's assume that it is /home/tester.
  2. Create a directory for the test cases: mkdir fvt
  3. Copy the Bash shell scripts and the associated files. Obtain the zip file (see Resources) and place it under $HOME. Then unzip it as follows: unzip trfvtbash.zip
  4. Set the file permissions so that the scripts are executable: chmod u+x *
  5. Change the name to remove the file suffix: mv test-bucket-1.bash test-bucket-1

Running the script
To run the script, perform the following:

  1. Log into the tester user id.
  2. Change to the directory where the scripts were copied: cd $HOME/fvt
  3. From $HOME/fvt run the script: ./test-bucket-1 -s 2>&1 | tee test-bucket-1.out
  4. Look at the bottom of the output file "test-bucket-1.out" and see the conclusion of the summary report.

Resources

  • Download trfvtbash.zip, which contains the sample code and tools referenced in this article. The tools might be updated in the future.
  • To unzip the files, try the Info-ZIP software. Because the tools are generally useful, it is recommended that you place the unzip and zip tools in a directory in the PATH that is accessible to all users of the machine.

    How to unzip the files:
    To view the contents of the zip file (without actually unpackaging and uncompressing the files), do:
    unzip -l trfvtbash.zip
    To unpackage and uncompress the zip file, do:
    unzip trfvtbash.zip

  • Read Daniel Robbins' three-part series on bash programming on developerWorks: Part 1, Part 2, and Part 3.
  • Visit GNU's bash home page.
  • Check out the Bash Reference Manual.

About the author
Angel Rivera is an advisory software engineer with the VisualAge TeamConnection technical support team, where he is currently the team lead. He has an M.S. in Electrical Engineering from The University of Texas at Austin, and a B.S. in Electronic Systems Engineering from the Instituto Tecnologico y de Estudios Superiores de Monterrey, Mexico. He joined IBM in 1989. He can be reached at
rivera@us.ibm.com.

In developing this article, Angel would like to acknowledge the contribution of Lee Perlov in WebSphere technical support.
