medver-pytest Tutorial

Overview

PyPI module N/A
git repository https://bitbucket.org/arrizza-public/medver-pytest-tutorial
git command git clone git@bitbucket.org:arrizza-public/medver-pytest-tutorial.git
Verification Report https://arrizza.com/web-ver/medver-pytest-tutorial-report.html
Version Info
  • macOS 14.5, Python 3.10
  • Ubuntu 20.04 focal, Python 3.10
  • Ubuntu 22.04 jammy, Python 3.10

Summary

Note: requires medver-pytest v1.0.0 or later

This repo shows how to use the medver-pytest module.

medver-pytest is used to write verification test cases within a Python and pytest environment. FDA-compliant reports can then be generated.

A User Guide is available at doc/medver-pytest User Guide.docx.

The verification test cases are written in Python and use the pytest module to invoke them. During execution, various data is captured. That data is then used by the report facility to generate Test Protocol, Test Report and Trace Matrix documents in docx, pdf and text formats. A summary is also generated indicating the percentage of requirements that are complete, passing/failing, etc.
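
A minimal sketch of what a test case looks like is shown below. It is an illustration only: the import, the pth.proto.*/pth.ver.* names and the requirement id "SRS-001" are assumptions, not the confirmed API; see the User Guide and the ver/test_tp*.py scripts in this repo for the real calls.

    # sketch of a verification test case (names are assumptions, see the User Guide)
    from medver_pytest import pth

    class TestSample:
        def setup_method(self):
            pth.init()    # start capturing protocol/step data

        def teardown_method(self):
            pth.term()    # stop capturing and write out the captured data

        def test_flow_rate(self):
            # declare the protocol this test case implements
            pth.proto.protocol('tp-001', 'verify pump flow rate')
            pth.proto.step('set flow rate to 10 ml/hr')
            actual = 10   # in a real test, read this from the simulated IV pump
            # record the verification result against a requirement id
            pth.ver.verify_equal(10, actual, reqids='SRS-001')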

See doc/how_to.md for more information about the verification test cases.

This repo uses a simulated IV pump application, together with a set of requirements and test scripts for it, to show how a typical project would be created. A sample set of reports can be generated and reviewed.

To install

./do_install

Set up the Python environment

# Note: the do_ver script automatically activates the python environment.
source tools/set_env.sh
source $pybin/activate

Note: see doc/test_process.md for a list of OS versions it has been tested on

The remaining do_xx scripts are mostly for development purposes but can also be useful:

  • do_clean - cleans up all temporary directories
  • do_doc - uses doxygen to generate documentation; see out/doc/html/index.html
  • do_lint - uses pylint to ensure high-quality scripts
  • do_update - updates to the latest medver-pytest

How to run sample test scripts

There are 3 different samples showing how to run medver-pytest:

  1. Fully automated - see section To run all tests - automated

    • the scripts do not require any interaction from the user/tester
    • they run all the tests in the order specified in the do_ver script
  2. Fully automated, but run in parallel - see section To run all tests - cicd

  3. Manual testing - see section To run manual tests

To run all tests - automated

Sets up the environment and runs all automated test scripts; see the ver/ directory.

./do_ver
./do_ver -k gui       # run only tkinter GUI test cases
./do_ver -k headless  # run only headless test cases
./do_ver -k tp011     # run a specific test case
  • Takes just over 2 minutes to run

    • half of the tests use a tkinter GUI. Do not touch the GUI while the tests are running; tkinter will report unexpected interactions if the GUI is moved, iconified or interacted with in any other way.
    • the remainder are headless (i.e. no GUI)
  • generates 4 reports:

    • reports can be pdf, docx, or txt files in any combination
    • out/ver/test_protocol.pdf - all test protocols, without results
    • out/ver/test_report.pdf - all test protocols including results
    • out/ver/summary.pdf - percentage of test protocols and requirements that passed/failed
    • out/ver/trace.pdf - all requirements, which protocols test them
    • see Generated reports for full description
  • see srs.json for sample requirements (an illustrative sketch follows this list)

  • see cfg.json for configuration information
  • see ver/test_tp*.py for sample test scripts
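
The exact schema of srs.json is not reproduced here; the sketch below is only an assumed shape (requirement id mapped to a description) with made-up IV pump requirements. Check the repo's srs.json for the real layout.

    {
      "SRS-001": {
        "desc": "The pump shall deliver fluid at the configured flow rate"
      },
      "SRS-002": {
        "desc": "The pump shall raise an alarm when an occlusion is detected"
      }
    }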

To run all tests - cicd

Shows an example of how to run all tests in parallel. This can be used as a basis for setting up a set of CI/CD jobs (e.g. in Jenkins).

./do_cicd_all
  • it uses do_cicd_simulator.py to kick off multiple jobs in parallel processes (a rough sketch of this pattern follows this list)
  • the job do_cicd_job_init prepares the storage area
  • each job is kicked off in parallel using do_cicd_job_ver to run it
  • the job do_cicd_job_final generates the verification report. The job is run only when all other jobs are complete.
  • the jobs are numbered 1-5 and the data for each is stored in a corresponding out/jobN directory
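
In this repo the parallel kick-off is handled by do_cicd_simulator.py; the sketch below only illustrates the pattern (init job, ver jobs in parallel, final report job), and the way the job number is passed to do_cicd_job_ver is an assumption.

    # rough sketch of the cicd flow; do_cicd_simulator.py is the real implementation
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    subprocess.run(['./do_cicd_job_init'], check=True)    # prepare the storage area

    def run_job(n):
        # each job stores its data in a corresponding out/jobN directory
        subprocess.run(['./do_cicd_job_ver', str(n)], check=True)  # argument form is an assumption

    with ThreadPoolExecutor(max_workers=5) as pool:
        list(pool.map(run_job, range(1, 6)))              # jobs 1-5 run in parallel

    subprocess.run(['./do_cicd_job_final'], check=True)   # runs last: generates the verification report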

To run manual tests

Shows an example of how to manually run tests.

./do_ver_manual

Note:

  • follow the instructions on the dialog boxes precisely
  • each manual test is initiated from a script
  • the actual value gathered from the tester can be used within the script for additional processing (see the sketch after this list)
  • this ability can be used to:
    • double-check the tester's Pass/Fail declarations
    • perform "semi-automated" runs, i.e. some tests are automated and some are manual
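
The repo's manual tests drive their own dialog boxes via do_ver_manual; in the sketch below a plain tkinter prompt stands in for that dialog purely to illustrate the idea, and the pth.* names are the same assumptions as in the earlier sketch.

    # sketch: use the value typed in by the tester for additional checking in the script
    from tkinter import Tk, simpledialog
    from medver_pytest import pth   # assumed import, as above

    def test_manual_flow_rate():
        pth.proto.protocol('tp-020', 'manual check of displayed flow rate')
        pth.proto.step('read the flow rate shown on the pump display')
        root = Tk()
        root.withdraw()   # hide the empty root window; only the prompt is shown
        actual = simpledialog.askfloat('Manual step', 'Enter the displayed flow rate (ml/hr):')
        root.destroy()
        # double-check the tester's entry programmatically instead of relying
        # only on a Pass/Fail declaration
        pth.ver.verify_equal(10.0, actual, reqids='SRS-003')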

Generated reports

There are 5 types of reports and data:

  • protocol - holds protocol and steps, but not results
  • report - holds protocol, steps and all actual values and results
  • trace - trace matrix mapping each requirement to the one or more protocols that test it
  • summary - overall summary data
  • other files - logging files

Protocol

  • *_protocol.json - a set of raw data files that holds the protocol and step data generated when the tests were run
  • test_protocol.pdf - contains the protocol report (without results) in PDF format
  • test_results.txt - holds the same information in text format

Protocol Report (with results)

  • *_protocol.json - same as for Protocol
  • test_report.pdf - contains the protocol report with results in PDF format
  • test_report also has .docx and .txt versions

Trace

  • *_trace.json - a set of raw data files that holds the trace information
  • trace.pdf - contains the trace matrix report in PDF format
  • trace also has .docx and .txt versions

Summary

  • *_summary.json - a set of raw data files that holds the summary data of the requirement ids and protocols that they were tested in
  • summary.pdf - contains the summary report in PDF format
  • summary also has .docx and .txt versions

Other

  • medver_pytest_tutorial.txt - a copy of the stdout generated by the do_ver script
  • medver_pytest.log - generated logging file during report generation

To change configuration

  • Note: you can add your own cfg entries as needed; see "ptv_ip_addr" and "ptv_ip_port" as examples. A hedged cfg.json sketch appears after this list.

  • outdir - location of output and report files

  • statusdir - location of status.json used by manual tests
  • storage_type

    • local - all test json files are stored in a local directory
    • shared - all test json files are created locally and then moved to a shared directory (see cicd_*.json for example)
  • test_run_type (one of: formal, dryrun, dev)

    • formal - this is a formal run, ready for the DHF (Design History File)
    • dryrun - a dry run prior to a formal run
    • dev - for development purposes only

  • test_run_id - an id that identifies this test run

  • dts_format - format of the date time stamp in the reports, eg: "%Y-%m-%d %H:%M:%S %Z" uses local timezone

  • use_utc - use UTC dates

  • reqmt_json_path - path to json file containing requirement ids and descriptions

  • testers - dict of tester initials and their full names used for manual testing

  • report_types - types of report files to generate: txt, pdf or docx

  • tp_protocol - config related to the test_protocol.* files

    • orientation - landscape or portrait (default)
    • header - config related to the page header
      • left - left aligned header string
      • middle - center aligned header string
      • right - right aligned header string
    • footer - config related to the page footer
      • left - left aligned footer string
      • middle - center aligned footer string
      • right - right aligned footer string
    • Note: left/middle/right can indicate a page number using: ""
  • tp_report - config related to the test_report.* files

    • the sub-fields are the same as for tp_protocol
  • trace - config related to the trace.* files
    • the sub-fields are the same as for tp_protocol
  • summary - config related to the summary.* files
    • the sub-fields are the same as for tp_protocol
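
The field names in the sketch below come from the list above; the values and nesting are illustrative only. Check the repo's cfg.json (and the cicd_*.json files) for the real settings.

    {
      "outdir": "out/ver",
      "statusdir": "out/status",
      "storage_type": "local",
      "test_run_type": "dev",
      "test_run_id": "tutorial-001",
      "dts_format": "%Y-%m-%d %H:%M:%S %Z",
      "use_utc": false,
      "reqmt_json_path": "srs.json",
      "testers": {"ja": "John Arrizza"},
      "report_types": ["pdf", "docx", "txt"],
      "tp_protocol": {
        "orientation": "portrait",
        "header": {"left": "Test Protocol", "middle": "", "right": ""},
        "footer": {"left": "", "middle": "", "right": ""}
      }
    }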

- John Arrizza