This tutorial will make web UI testing easy. We will build a simple yet robust web UI test solution using Python, pytest, and Selenium WebDriver. We will learn strategies for good test design as well as patterns for good automation code. By the end of the tutorial, you’ll be a web test automation champ! Your Python test project can be the foundation for your own test cases, too.
📍 If you are looking for a single Python package for Android, iOS, and web testing, there is also an easy open source solution provided by TestProject. With a single executable, zero configuration, and familiar Selenium APIs, you can develop and execute robust Python tests and get automatic HTML test reports as a bonus! All you need is `pip install testproject-python-sdk`. Simply follow this GitHub link to learn more about it, or read through this great tutorial to get started.
Tutorial Chapters
- Web UI Testing Made Easy with Python, Pytest and Selenium WebDriver (Overview)
- Set Your Test Automation Goals (Chapter 1)
- Create A Python Test Automation Project Using Pytest (Chapter 2)
- Installing Selenium WebDriver Using Python and Chrome (Chapter 3)
- Write Your First Web Test Using Selenium WebDriver, Python and Chrome (Chapter 4)
- Develop Page Object Selenium Tests Using Python (Chapter 5)
- You’re here → How to Read Config Files in Python Selenium Tests (Chapter 6)
- Take Your Python Test Automation To The Next Level (Chapter 7)
- Create Pytest HTML Test Reports (Chapter 7.1)
- Parallel Test Execution with Pytest (Chapter 7.2)
- Scale Your Test Automation using Selenium Grid and Remote WebDrivers (Chapter 7.3)
- Test Automation for Mobile Apps using Appium and Python (Chapter 7.4)
- Create Behavior-Driven Python Tests using Pytest-BDD (Chapter 7.5)
Which Browser?
Our DuckDuckGo search test from the previous chapters works very well… on Chrome. Take another look at the `browser` fixture:

```python
@pytest.fixture
def browser():
    driver = Chrome()
    driver.implicitly_wait(10)
    yield driver
    driver.quit()
```
Both the driver type and the wait time are hard-coded. That’s fine for a proof of concept, but production-ready tests should be configurable at runtime. Web UI tests should be able to run on any browser. Default timeout values should be adjustable in case some environments are slower than others. Other sensitive data like usernames and passwords should also never appear in source code. How can we handle test data like this?
All of these values are configuration data for the test automation system. They are discrete values that systemically affect how the automation runs. Config data should be provided as inputs whenever tests are launched. Anything related to test configuration or environment should be treated as config data so that automation code can be reusable.
Input Sources
There are a few ways to read inputs into a test automation system:
- Command line arguments
- Environment variables
- System properties
- Config files
- Service API calls
Unfortunately, most core test frameworks don’t support custom command line arguments. Environment variables and system properties can be difficult to manage and potentially dangerous to handle. Service APIs are a great way to consume inputs, especially for getting secrets (like passwords) from a key management service like AWS KMS or Azure Key Vault. However, paying for such a service may not be permissible, and writing your own may not be sensible. For lean cases, config files may be the best option.
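For comparison, environment variables can be read with nothing more than Python’s standard library. Below is a minimal sketch of that approach; the `BROWSER` and `WAIT_TIME` variable names are made up for illustration and are not part of this tutorial’s project:

```python
import os

def config_from_env():
    # Fall back to defaults when the variables are not set.
    # Note the manual type conversion: environment variables are always strings,
    # which is one reason they can be fiddly to manage.
    return {
        'browser': os.environ.get('BROWSER', 'chrome'),
        'wait_time': int(os.environ.get('WAIT_TIME', '10')),
    }
```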
A config file is simply a file that holds config data. Test automation can read it in when tests are launched and use the input values to control the tests. For example, a config file could specify the browser type to be used by the `browser` fixture from our example project. As a best practice, config files should be standard formats like JSON, YAML, or INI. They should also be flat files so that they can be diff‘ed.
Our Config File
Let’s write a config file for our test project. We will use JSON because it is simple, popular, and hierarchical. Plus, the `json` module is part of Python’s standard library and can easily convert JSON files into dictionaries. Create a new file named `tests/config.json` and add the following code:

```json
{
  "browser": "chrome",
  "wait_time": 10
}
```
JSON uses key-value pairs. As stated earlier, our test project has two config values: the browser choice and the wait time. The browser choice is a string, and the wait time is an integer.
Reading the Config File with Pytest
Fixtures are the best way to read config files with pytest. They can read config files before tests start and then inject values into tests or even other fixtures. Add the following fixture to `tests/test_web.py`:

```python
import json

@pytest.fixture(scope='session')
def config():
    with open('tests/config.json') as config_file:
        data = json.load(config_file)
    return data
```
The `config` fixture reads and parses the `tests/config.json` file into the `data` dictionary using the `json` module. Hard-coded file paths are a fairly common practice. In fact, many tools and automation systems will check for files in multiple locations or with naming patterns. The fixture’s scope is set to “session” so that this fixture will run only once for the entire testing session. There is no need to re-read the same config file for every single test – that’s inefficient!
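As noted above, some tools check multiple locations for a config file. Here is a minimal sketch of that idea, assuming hypothetical candidate paths (our example project keeps the single hard-coded path):

```python
import json
import os

# Hypothetical search order; adjust the paths for your own project layout
CONFIG_LOCATIONS = ['config.json', 'tests/config.json']

def load_config(locations=CONFIG_LOCATIONS):
    # Return the contents of the first config file found,
    # or an empty dict as a fallback
    for path in locations:
        if os.path.exists(path):
            with open(path) as config_file:
                return json.load(config_file)
    return {}
```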
Both config data inputs are needed when initializing the WebDriver. Update the `browser` fixture like this:

```python
@pytest.fixture
def browser(config):
    if config['browser'] == 'chrome':
        driver = Chrome()
    else:
        raise Exception(f'"{config["browser"]}" is not a supported browser')
    driver.implicitly_wait(config['wait_time'])
    yield driver
    driver.quit()
```
The `browser` fixture now has a dependency upon the `config` fixture. Even though `config` will be run one time for the testing session, `browser` will still be called before each test. `browser` now has an if-else chain for determining which WebDriver type to use. For now, only Chrome is supported, but we will add more types soon. An exception will be raised if the browser choice is unrecognized. The implicit wait time now uses the config data value, too.
Since `browser` still returns a WebDriver instance, tests that use it do not need to be refactored! Let’s run the web tests to make sure the config file works:

```bash
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item

tests/test_web.py .                                                      [100%]

=========================== 1 passed in 5.00 seconds ===========================
```
Adding New Browsers
Now that our project has a config file, we can use it to change the browser. Let’s run the test using Mozilla Firefox instead of Google Chrome. Download and install the latest version of Firefox, and then download the latest version of geckodriver (the driver for Firefox). Make sure geckodriver is on the system path, too.
Update the `browser` fixture code to handle Firefox:

```python
from selenium.webdriver import Chrome, Firefox

@pytest.fixture
def browser(config):
    if config['browser'] == 'chrome':
        driver = Chrome()
    elif config['browser'] == 'firefox':
        driver = Firefox()
    else:
        raise Exception(f'"{config["browser"]}" is not a supported browser')
    driver.implicitly_wait(config['wait_time'])
    yield driver
    driver.quit()
```
Then, update the config file with the “firefox” option:

```json
{
  "browser": "firefox",
  "wait_time": 10
}
```
Now, rerun the test. You should see Firefox pop up instead of Chrome!
Validation
Although the config file works, the logic for handling it has a critical weakness: the data is not validated before tests run. The `browser` fixture will raise an exception when an unsupported browser choice is given, but this would happen for every single test. Raising an exception once for the whole testing session would be much more efficient. Furthermore, the automation will crash if the config file is missing either the “browser” or “wait_time” keys. Let’s fix these problems.
Add a new fixture for validating the browser choice:
```python
@pytest.fixture(scope='session')
def config_browser(config):
    if 'browser' not in config:
        raise Exception('The config file does not contain "browser"')
    elif config['browser'] not in ['chrome', 'firefox']:
        raise Exception(f'"{config["browser"]}" is not a supported browser')
    return config['browser']
```
The `config_browser` fixture depends upon the `config` fixture. Like `config`, it has session scope. It raises an exception if the config file is missing the “browser” key or if the browser choice is unsupported. Finally, it returns the browser choice so that tests and other fixtures can conveniently access the value.
Next, add another fixture for validating the wait time:
```python
@pytest.fixture(scope='session')
def config_wait_time(config):
    return config['wait_time'] if 'wait_time' in config else 10
```
If the config file specifies a wait time, then the `config_wait_time` fixture will return it. Otherwise, it will return a default value of 10 seconds.
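Incidentally, the conditional expression in this fixture behaves the same as Python’s built-in `dict.get`, which some readers may find cleaner. The `wait_time_from` helper name below is made up for illustration:

```python
DEFAULT_WAIT_TIME = 10

def wait_time_from(config):
    # dict.get returns the given default when the key is missing
    return config.get('wait_time', DEFAULT_WAIT_TIME)
```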
Update the `browser` fixture one more time to use these new validation fixtures:

```python
@pytest.fixture
def browser(config_browser, config_wait_time):
    if config_browser == 'chrome':
        driver = Chrome()
    elif config_browser == 'firefox':
        driver = Firefox()
    else:
        raise Exception(f'"{config_browser}" is not a supported browser')
    driver.implicitly_wait(config_wait_time)
    yield driver
    driver.quit()
```
Writing separate fixture functions for each config data value makes them simple, concise, and focused. They also let callers declare only the values that they need.
Run the test to make sure everything works on the “happy” path:
```bash
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item

tests/test_web.py .                                                      [100%]

=========================== 1 passed in 4.58 seconds ===========================
```
That’s great! To truly test the validation, though, we must be devious. 😆 Let’s change the “browser” value in `tests/config.json` to be “safari” – an unsupported browser. When we rerun the test, we should see a helpful error message:
```bash
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item

tests/test_web.py E                                                      [100%]

==================================== ERRORS ====================================
________________ ERROR at setup of test_basic_duckduckgo_search ________________

config = {'browser': 'safari', 'wait_time': 10}

    @pytest.fixture(scope='session')
    def config_browser(config):
        # Validate and return the browser choice from the config data
        if 'browser' not in config:
            raise Exception('The config file does not contain "browser"')
        elif config['browser'] not in SUPPORTED_BROWSERS:
>           raise Exception(f'"{config["browser"]}" is not a supported browser')
E           Exception: "safari" is not a supported browser

tests/conftest.py:30: Exception
=========================== 1 error in 0.09 seconds ============================
```
Awesome! The failure clearly reported the problem. Now, what happens if we remove the browser choice from the config file?
```bash
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item

tests/test_web.py E                                                      [100%]

==================================== ERRORS ====================================
________________ ERROR at setup of test_basic_duckduckgo_search ________________

config = {'wait_time': 10}

    @pytest.fixture(scope='session')
    def config_browser(config):
        # Validate and return the browser choice from the config data
        if 'browser' not in config:
>           raise Exception('The config file does not contain "browser"')
E           Exception: The config file does not contain "browser"

tests/conftest.py:28: Exception
=========================== 1 error in 0.10 seconds ============================
```
Great! Another helpful failure message. For the final test, add a valid browser choice back, but remove the wait time:
```bash
$ pipenv run python -m pytest tests/test_web.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.12.0
rootdir: /Users/andylpk247/Programming/automation-panda/python-webui-testing
collected 1 item

tests/test_web.py .                                                      [100%]

=========================== 1 passed in 4.64 seconds ===========================
```
The test should pass because the wait time is optional. Our changes were good! Remember, sometimes you will need to test your tests.
Final Code
There are two more small things we can do to clean up our test code. First, let’s move our web fixtures to a `conftest.py` file so they can be used by all tests, not just tests in `tests/test_web.py`. Second, let’s pull out some of those literal values as module variables.

Create a new file named `tests/conftest.py` with the following code:
```python
import json
import pytest

from selenium.webdriver import Chrome, Firefox

CONFIG_PATH = 'tests/config.json'
DEFAULT_WAIT_TIME = 10
SUPPORTED_BROWSERS = ['chrome', 'firefox']


@pytest.fixture(scope='session')
def config():
    # Read the JSON config file and return it as a parsed dict
    with open(CONFIG_PATH) as config_file:
        data = json.load(config_file)
    return data


@pytest.fixture(scope='session')
def config_browser(config):
    # Validate and return the browser choice from the config data
    if 'browser' not in config:
        raise Exception('The config file does not contain "browser"')
    elif config['browser'] not in SUPPORTED_BROWSERS:
        raise Exception(f'"{config["browser"]}" is not a supported browser')
    return config['browser']


@pytest.fixture(scope='session')
def config_wait_time(config):
    # Validate and return the wait time from the config data
    return config['wait_time'] if 'wait_time' in config else DEFAULT_WAIT_TIME


@pytest.fixture
def browser(config_browser, config_wait_time):
    # Initialize the WebDriver instance
    if config_browser == 'chrome':
        driver = Chrome()
    elif config_browser == 'firefox':
        driver = Firefox()
    else:
        raise Exception(f'"{config_browser}" is not a supported browser')

    # Wait implicitly for elements to be ready before attempting interactions
    driver.implicitly_wait(config_wait_time)

    # Return the driver object at the end of setup
    yield driver

    # For cleanup, quit the driver
    driver.quit()
```
The full contents of `tests/test_web.py` should now be much simpler and cleaner:

```python
import pytest

from pages.result import DuckDuckGoResultPage
from pages.search import DuckDuckGoSearchPage


def test_basic_duckduckgo_search(browser):
    # Set up test case data
    PHRASE = 'panda'

    # Search for the phrase
    search_page = DuckDuckGoSearchPage(browser)
    search_page.load()
    search_page.search(PHRASE)

    # Verify that results appear
    result_page = DuckDuckGoResultPage(browser)
    assert result_page.link_div_count() > 0
    assert result_page.phrase_result_count(PHRASE) > 0
    assert result_page.search_input_value() == PHRASE
```
Now, that’s Pythonic!
What’s Next?
The code for our example test project is now complete. You can use it as the foundation for new tests. The completed example project is also hosted on GitHub. But just because we are finished coding does not mean that we are done learning. The final chapters will show you ways to take your Python-based web UI test automation to the next level!