Implement pytest #39

Closed
campb303 opened this issue Aug 31, 2020 · 16 comments
Assignees: campb303
Labels: testing, tooling

Comments

@campb303
Collaborator

Currently there is no infrastructure for testing new changes or catching regressions. pytest is a popular testing framework for Python and should be investigated for usability. A proof-of-concept could be to write tests for the Queue class in the ECNQueue module.

@campb303 campb303 added the tooling and testing labels Aug 31, 2020
@benne238
Collaborator

benne238 commented Sep 1, 2020

To demonstrate the functionality of pytest, a standalone demonstration script will be created, with no connection to the current webqueue2 scripts, that tests basic data types including numbers, strings, lists, and dictionaries.

@benne238
Collaborator

benne238 commented Sep 1, 2020

https://docs.pytest.org/en/stable/getting-started.html#group-multiple-tests-in-a-class
Under "Group Multiple Tests in a Class", this is sort of what we are looking for where we set a bunch of criteria for classes, but in the example its in the script that's running.

@benne238
Collaborator

benne238 commented Sep 2, 2020

To run pytest, first ensure pytest is installed by running pip install pytest from the terminal.

Then cd into the directory of the Python script you would like to test (use pytester.py in webqueue2/api/pytest for example output).

Run the command pytest [filename] -v from the terminal. The -v (verbose) flag prints the status of each test, which is useful when testing a large number of functions.

pytest will run each test function in the pytester.py script and check each assertion to determine whether it is true. pytester.py doesn't do anything complex; it only checks simple assertions such as assert 3 == 3 or "string" == "string", but the same concept should apply to testing functions and classes that take any input.
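
The exact contents of pytester.py aren't shown here, but based on the description above (and the test names that appear in the failure report in a later comment), a minimal version might look like this:

# pytester.py -- assumed reconstruction of the basic demo tests
def test_num():
    assert 3 == 3

def test_string():
    assert "string" == "string"

def test_array():
    assert [1, 2, 3] == [1, 2, 3]

def test_dictionary():
    assert {"key": "value"} == {"key": "value"}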

@benne238
Collaborator

benne238 commented Sep 7, 2020

Pytest: Post Test Reports

The example code in this guide demonstrates how to output information to a text file every time pytest runs. In this particular example, the name of any function that fails is written to a "failures" file in the same directory. Example output for failed functions:

test_example_pytester.py::test_num
test_example_pytester.py::test_string
test_example_pytester.py::test_array
test_example_pytester.py::test_dictionary

This guide discusses using conftest.py and basic custom hooks and plugins.

@benne238
Collaborator

benne238 commented Sep 7, 2020

Pytest Post Test Reports: Current Implementation

As currently configured, pytest outputs the name of any function that failed during its execution. To determine whether the most recent pytest run failed without looking at the terminal output, simply check for the existence of the pytest_failures file within the pytest directory. If pytest_failures exists, at least one function failed during the run, and the file lists all failed functions. If pytest_failures doesn't exist, then no functions failed the last time pytest was executed.
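
A conftest.py along the lines of the report-post-processing example in the pytest docs could produce this behavior. The following is only a sketch, not the project's actual conftest.py; in particular, the cleanup at session start is an assumption made to match the behavior described above (the file disappearing when a run has no failures):

# conftest.py -- sketch of recording failed test names in pytest_failures
import os

import pytest

FAILURE_FILE = "pytest_failures"

def pytest_sessionstart(session):
    # Remove any stale report so the file only ever reflects the latest run.
    if os.path.exists(FAILURE_FILE):
        os.remove(FAILURE_FILE)

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Let pytest build the report object, then inspect it.
    outcome = yield
    report = outcome.get_result()

    # Only record failing test calls, not setup/teardown errors.
    if report.when == "call" and report.failed:
        with open(FAILURE_FILE, "a") as failures:
            failures.write(report.nodeid + "\n")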

@campb303 campb303 added this to the v1 milestone Sep 14, 2020
@campb303 campb303 self-assigned this Sep 17, 2020
@campb303
Collaborator Author

Tests for Monday's demo (for the Queue class):
After initialization, Queue has attributes of:

  • self.name (string)
  • self.items (list[Item])
  • self.jsonData (dict) w/ keys of:
    • "name" (str) = self.name
    • "length": (int) = self.items.length

@benne238
Collaborator

Pytest with Queue Class

There is now a basic pytest script within the api folder that demonstrates a simple method of testing the Queue class with basic assertions.

import pytest
from ECNQueue import Queue, Item

# Parse the "ce" queue once and share it across the tests below.
testQueue = Queue("ce")

@pytest.mark.name
def test_Name():
    # The queue's name should be the string it was constructed with.
    assert type(testQueue.name) == str
    assert testQueue.name == "ce"

@pytest.mark.Items
def test_Items():
    # The queue's items should be a list; 48 matches the current contents of the ce queue.
    assert type(testQueue.items) == list
    assert len(testQueue.items) == 48

def test_Json():
    # The JSON output should mirror the queue's name and length attributes.
    assert type(testQueue.jsonData["name"]) == str
    assert testQueue.name == testQueue.jsonData["name"]
    assert type(len(testQueue.items)) == int
    assert len(testQueue.items) == testQueue.jsonData["length"]
    assert testQueue.items[0].jsonData["number"] == "17"

This script imports Queue and Item from the ECNQueue.py script and parses the ce queue. It then tests the queue's JSON output, its name, and its items by making basic assertions.

@campb303 campb303 added the high-priority label Nov 25, 2020
@campb303 campb303 removed the high-priority label Dec 7, 2020
@benne238
Collaborator

benne238 commented Jan 6, 2021

Updates to Backend

pytest is explicit: a specific assertion must be written for each piece of code being tested. All updates to the backend must therefore also be reflected in any pytest scripts, where applicable. For example, the current implementation of the getQueueCounts() function within the ECNQueue.py script returns a list of dictionaries:

[
    {
        'name': 'bidc',
        'number_of_items': 5
    },
    {
        'name': 'epics',
        'number_of_items': 6
    }
]

Pytest can be as granular or as broad as needed: it could simply check that a list of dictionaries is returned when getQueueCounts() is called, or it could be used very specifically to ensure that each dictionary contains certain keys with specific value types. In this case, pytest could test that the name key holds string values and the number_of_items key holds integer values in each dictionary of the returned list.
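
A granular check along those lines might look like this (a sketch; it assumes getQueueCounts() can be imported directly from ECNQueue and returns the structure shown above):

from ECNQueue import getQueueCounts

def test_getQueueCounts_structure():
    counts = getQueueCounts()
    # Broad check: the function returns a list of dictionaries.
    assert type(counts) == list
    # Granular checks: every entry has the expected keys and value types.
    for queue in counts:
        assert type(queue) == dict
        assert type(queue["name"]) == str
        assert type(queue["number_of_items"]) == int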

A potential problem is the upkeep of the pytest scripts: depending on their extent and specificity, any backend change must also be accommodated in pytest. If the name of a key needs to change in the backend, pytest needs to be updated to reflect the new name.

@benne238
Collaborator

benne238 commented Jan 7, 2021

Testing Function Exceptions

It is possible to test whether, and which, exceptions are thrown by a function using the unittest package in Python, adding a level of specificity to any testing of the backend modules.
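
pytest also has a built-in idiom for this, pytest.raises, which keeps exception checks in the same style as the other tests. A minimal sketch (the function under test here is hypothetical, not from the backend):

import pytest

def divide(a, b):
    return a / b

def test_divide_by_zero_raises():
    # The test passes only if the expected exception type is raised.
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)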

@benne238
Collaborator

benne238 commented Jan 7, 2021

Exception Handling

This should probably be a separate issue; however, there are some functions in the backend that need a level of exception handling that is not currently implemented. Specifically, isValidItemName() within the ECNQueue script will break if a non-string argument is passed to it, defeating the purpose of using this function to evaluate whether an item name is valid.
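
One possible fix (a sketch only; the real signature and validation rules of isValidItemName() are not shown in this thread) is to treat non-string input as invalid rather than letting the function raise:

def isValidItemName(name) -> bool:
    # Guard against non-string input instead of raising on it.
    if not isinstance(name, str):
        return False
    # ... existing validation logic for string item names goes here ...
    return True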

@benne238
Collaborator

benne238 commented Jan 8, 2021

Pytest and Backend package requirements

It might be beneficial if pytest and the backend API were included in requirements.txt, allowing the ECNQueue.py and api.py scripts to be importable by other Python scripts.
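
Adding pytest itself would be a one-line change to requirements.txt (a sketch, mirroring the file's existing layout shown in a later comment):

# General Utilities
gunicorn
pipdeptree
pylint
pytest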

@campb303
Collaborator Author

@benne238

  • Where you see issues like an edge case not being handled in isValidItemName(), please feel free to fix it.
  • Adding the ECNQueue and API modules to site-packages to make them importable is possible with packaging; see the sketch after this list. We'll need to look into continuous integration techniques to reduce the work required to do this: we don't want to save, reinstall, and restart our tools for every single change when testing.
  • We're looking at test driven development (TDD), so the tedium of maintaining tests is mostly the cost of that insurance. That being said, TDD is commonplace and there are likely methods to ease the burden.
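
A minimal packaging sketch (the distribution name and module layout below are assumptions, not the project's actual structure):

# setup.py -- hypothetical minimal packaging so ECNQueue and api are importable
from setuptools import setup

setup(
    name="webqueue2-api",
    version="0.1.0",
    py_modules=["ECNQueue", "api"],
)

Installed in editable mode with pip install -e ., source changes are picked up immediately without reinstalling, which addresses the save/reinstall/restart concern.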

@campb303
Collaborator Author

From this comment:

  • Was pytest added to the project requirements?
  • This comment suggests that it was not.

From this comment:

  • The docs linked to give an example of writing the names of failed tests to a new file if that file doesn't exist, and otherwise appending to it. With this configuration, the failures file persists between test runs once it has been created. The comment suggests that the file is instead removed before subsequent runs. Is this true?
  • The second link to pytest docs is broken.

From this comment:

  • Though we've adopted a no-throw policy unless needed, the ability to react differently to particular errors may come in handy.

From this comment:

  • Has the related issue been created? If so, what issue is it?

@benne238
Collaborator

Reply to previous comment:

  • pytest does not seem to have been added to the project requirements; the content of requirements.txt is:
# See: https://pip.pypa.io/en/stable/reference/pip_install/#example-requirements-file

# General Utilities
gunicorn
pipdeptree
pylint

# API 
python-dotenv
Flask-RESTful
python-dateutil
Flask-JWT-Extended
# Prevent upgrade to 2.x until Flask-JWT-Extended is updated to support it
PyJWT == 1.*
  • A file called pytest_failures is created when pytest encounters an error in any of the test functions. The file exists and is updated based on which functions fail, but it is removed if/when pytest doesn't encounter a failure in any of the test functions.

  • A new issue was not created; however, this commit does fix the issue.

@campb303
Collaborator Author

campb303 commented Feb 5, 2021

After #8, #76, #159 are completed, pytest should be integrated before proper parsing grammar is implemented.

@campb303 campb303 added the api label Feb 5, 2021
@campb303 campb303 modified the milestones: v1-proof-of-concept, v2-production-ready-read-only Feb 5, 2021
@campb303
Collaborator Author

Closing this issue here and reopening on the API GitHub.
