Best Python code snippet using pytest
scheduler.py
Source: scheduler.py
...
level. For instance, all performance tests will run sequentially
(i.e. concurrency=1), since they rely on the availability of the full host
resources, in order to make accurate measurements. Additionally, other tests
may be restricted to running sequentially, if they are per se
concurrency-unsafe. See `PytestScheduler.pytest_runtestloop()`.

Scheduling is achieved by overriding the pytest run loop (i.e.
`pytest_runtestloop()`), and splitting the test session item list across
multiple `fork()`ed worker processes. Since no user code is run before
`pytest_runtestloop()`, each worker becomes a pytest session itself.
Reporting is disabled for worker processes, each worker sending its results
back to the main / server process, via an IPC pipe, for aggregation.
"""
import multiprocessing as mp
import os
import re
import sys
from random import random
from select import select
from time import sleep

import pytest
from _pytest.main import ExitCode

from . import mpsing  # pylint: disable=relative-beyond-top-level


class PytestScheduler(mpsing.MultiprocessSingleton):
    """A pretty custom test execution scheduler."""

    def __init__(self):
        """Initialize the scheduler.

        Not to be called directly, since this is a singleton. Use
        `PytestScheduler.instance()` to get the scheduler object.
        """
        super().__init__()
        self._mp_singletons = [self]
        self.session = None

    def register_mp_singleton(self, mp_singleton):
        """Register a multi-process singleton object.

        Since the scheduler will be handling the main testing loop, it needs
        to be aware of any multi-process singletons that must be serviced
        during the test run (i.e. polled and allowed to handle method
        execution in the server context).
        """
        self._mp_singletons.append(mp_singleton)

    @staticmethod
    def do_pytest_addoption(parser):
        """Pytest hook. Add the concurrency command line option."""
        avail_cpus = len(os.sched_getaffinity(0))
        # Defaulting to a third of the available (logical) CPUs sounds like a
        # good enough plan.
        default = max(1, int(avail_cpus / 3))
        parser.addoption(
            "--concurrency",
            dest="concurrency",
            action="store",
            type=int,
            default=default,
            help="Concurrency level (max number of worker processes to spawn)."
        )

    def pytest_sessionstart(self, session):
        """Pytest hook. Called at pytest session start.

        This will execute in the server context (before the tests are
        executed).
        """
        self.session = session

    def pytest_runtest_logreport(self, report):
        """Pytest hook. Called whenever a new test report is ready.

        This will execute in the worker / child context.
        """
        self._add_report(report)

    def pytest_runtestloop(self, session):
        """Pytest hook. The main test scheduling and running loop.

        Called in the server process context.
        """
        # Don't run tests on test discovery.
        if session.config.option.collectonly:
            return True
        # max_concurrency = self.session.config.option.concurrency
        schedule = [
            {
                # Performance batch: tests that measure performance, and need
                # to be run in a non-concurrent environment.
                'name': 'performance',
                'concurrency': 1,
                'patterns': [...
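For reference, here is a minimal, self-contained sketch of the pattern the docstring above describes: override pytest_runtestloop() and split the collected items across fork()ed workers. This is an illustration, not the scheduler.py implementation; the fixed concurrency value stands in for the --concurrency option, and report aggregation over the IPC pipe is omitted.

# conftest.py (sketch, POSIX-only: relies on os.fork)
import os


def pytest_runtestloop(session):
    if session.config.option.collectonly:
        return True  # nothing to schedule on a collection-only run

    concurrency = 2  # stand-in for the --concurrency option
    pids = []
    for worker in range(concurrency):
        pid = os.fork()
        if pid == 0:
            # Worker: run every `concurrency`-th item, then exit hard so the
            # child never falls through into the parent's session teardown.
            items = session.items[worker::concurrency]
            for i, item in enumerate(items):
                nextitem = items[i + 1] if i + 1 < len(items) else None
                item.config.hook.pytest_runtest_protocol(item=item,
                                                         nextitem=nextitem)
            os._exit(0)
        pids.append(pid)

    for pid in pids:
        os.waitpid(pid, 0)
    return True  # tell pytest the run loop has been handled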
spydist.py
Source: spydist.py
...
        self.thread = threading.Thread(target=self.server.start)
        self.thread.start()

    def pytest_sessionfinish(self, session):
        debug("master: pytest_sessionfinish", session)

    def pytest_runtestloop(self):
        if wa.start_slaves_from_master:
            slaves_init(self.logs_path)
        try:
            conn = rpyc.connect("127.0.0.1", self.port)
            while 1:
                if not getattr(conn.root, "has_pending")():
                    break
                debug("master: pytest_runtestloop")
                time.sleep(5)
        except KeyboardInterrupt:
            trace("master: interrupted")
        getattr(conn.root, "shutdown")()
        time.sleep(5)
        os._exit(0)

    def pytest_terminal_summary(self, terminalreporter):
        debug("master: pytest_terminal_summary", terminalreporter)


class BatchSlave(object):
    def __init__(self, config, logs_path):
        self.config = config
        self.items = []
        self.logs_path = logs_path

    @pytest.mark.trylast
    def pytest_sessionstart(self, session):
        debug("slave: pytest_sessionstart", session)

    def pytest_sessionfinish(self, session):
        debug("slave: pytest_sessionfinish", session)

    @pytest.hookimpl(trylast=True)
    def pytest_collection_modifyitems(self, session, config, items):
        debug("slave: pytest_collection_modifyitems", session, config, items)
        self.items = items

    def pytest_runtestloop(self):
        def search_nodeid(entries, nodeid):
            for ent in entries:
                if nodeid == ent.nodeid:
                    return ent
            return None

        def finish_test(item):
            getattr(conn.root, "finish_test")(item.nodeid)

        def get_test(entries):
            while 1:
                nodeid = getattr(conn.root, "get_test")()
                if not nodeid:
                    break
                item = search_nodeid(entries, nodeid)
                if item:...
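Both the master and the slave above poll a coordinating rpyc service through getattr(conn.root, ...). Below is a minimal sketch of what that service side could look like; the exposed method names mirror the calls in the snippet (get_test, finish_test, has_pending, shutdown), but the implementation is an assumption, not spydist.py's actual server.

# scheduler_service.py (sketch; assumes the rpyc package is installed)
import queue

import rpyc
from rpyc.utils.server import ThreadedServer


class SchedulerService(rpyc.Service):
    # Class-level state so that every connection shares one work queue.
    pending = queue.Queue()   # nodeids waiting to be handed to a slave
    in_flight = set()         # nodeids handed out but not finished yet
    shutting_down = False

    def exposed_get_test(self):
        try:
            nodeid = self.pending.get_nowait()
        except queue.Empty:
            return None
        self.in_flight.add(nodeid)
        return nodeid

    def exposed_finish_test(self, nodeid):
        self.in_flight.discard(nodeid)

    def exposed_has_pending(self):
        return not self.pending.empty() or bool(self.in_flight)

    def exposed_shutdown(self):
        # Cooperative flag, checked by whatever loop serves the tests.
        SchedulerService.shutting_down = True


if __name__ == "__main__":
    ThreadedServer(SchedulerService, port=18861).start()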
pydev_runfiles_pytest.py
Source: pydev_runfiles_pytest.py
...
        if hasattr(config.option, 'numprocesses'):
            if config.option.numprocesses:
                self._using_xdist = True
                pydev_runfiles_xml_rpc.notifyTestRunFinished(
                    'Unable to show results (py.test xdist plugin not compatible with PyUnit view)')

    def pytest_runtestloop(self, session):
        if self._using_xdist:
            # Yes, we don't have the hooks we'd need to show the results in the pyunit view...
            # Maybe the plugin maintainer may be able to provide these additional hooks?
            return None

        # This mock will make all file representations to be printed as Pydev expects,
        # so that hyperlinks are properly created in errors. Note that we don't unmock it!
        self._MockFileRepresentation()

        # Based on the default run test loop: _pytest.session.pytest_runtestloop,
        # but getting the times we need, reporting the number of tests found and
        # notifying as each test is run.

        start_total = time.time()...
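The comment block above describes re-implementing the default run loop with timing and per-test notification. A minimal sketch of that idea, using the standard pytest_runtest_protocol hook (the Pydev XML-RPC notification calls are left out):

# sketch: a timed variant of the default run test loop
import time


def pytest_runtestloop(session):
    print('%d tests found' % len(session.items))
    start_total = time.time()
    for i, item in enumerate(session.items):
        start = time.time()
        nextitem = session.items[i + 1] if i + 1 < len(session.items) else None
        item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
        print('%s took %.2fs' % (item.nodeid, time.time() - start))
    print('total time: %.2fs' % (time.time() - start_total))
    return True  # the default loop must not run again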
plugin.py
Source: plugin.py
...
        action="store_true",
        dest="find_dependencies_internal",
        help="""For internal use only""",
    )


def pytest_runtestloop(session):
    if session.config.getoption("find_dependencies_internal"):
        return run_tests(session)
    if not session.config.getoption("find_dependencies"):
        return pytest_main.pytest_runtestloop(session)
    if len(session.items) == 1:
        print("Only one test collected: ignoring option --find-dependencies")
        restore_verbosity(session.config)
        return pytest_main.pytest_runtestloop(session)
    if (session.testsfailed and
            not session.config.option.continue_on_collection_errors):
        restore_verbosity(session.config)
        raise session.Interrupted(
            "%d errors during collection" % session.testsfailed)
    DependencyFinder(session).find_dependencies()
    return True


def restore_verbosity(config):
    verbosity = 0
    if hasattr(config, "initial_args"):
        for arg in config.initial_args:
            if arg.startswith("-v"):
                verbosity = len(arg) - 1
                break...
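For completeness, the public option that getoption("find_dependencies") reads would be registered with a pytest_addoption hook along the same lines as the internal flag at the top of the snippet. A sketch (the help text is an assumption, not the plugin's actual wording):

def pytest_addoption(parser):
    parser.addoption(
        "--find-dependencies",
        action="store_true",
        dest="find_dependencies",
        help="Search for hidden dependencies between the collected tests.",
    )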
How to identify the rerun mode using pytest-rerunfailures?
Yes, there is a way.
The config object has all the options associated with a particular test run. Use config.option.lf, which returns True or False depending on whether the session is a rerun of the last failed tests or a normal run.
I hope you are already familiar with pytest hook functions. For the example below, I used the pytest_configure hook function, but you can use any hook that gives access to the config object.
Example (conftest.py):
def pytest_configure(config):
    print(config.option)
    print(config.option.lf)
For the above example, the result will look like this:
Namespace(keyword='', markexpr='', maxfail=0, continue_on_collection_errors=False, confcutdir=None, noconftest=False, keepduplicates=False, collect_in_virtualenv=False, importmode='prepend', basetemp=None, durations=None, durations_min=0.005, version=0, plugins=['no:warning'], traceconfig=False, debug=False, showfixtures=False, show_fixtures_per_test=False, verbose=0, no_header=False, no_summary=False, reportchars='fE', disable_warnings=False, showlocals=False, tbstyle='auto', showcapture='all', fulltrace=False, color='auto', code_highlight='yes', capture='fd', runxfail=False, pastebin=None, assertmode='rewrite', xmlpath=None, junitprefix=None, doctestmodules=False, doctestreport='udiff', doctestglob=[], doctest_ignore_import_errors=False, doctest_continue_on_failure=False, last_failed_no_failures='all', stepwise=False, stepwise_skip=False, teamcity=0, no_teamcity=0, markers=False, usepdb=False, usepdb_cls=None, trace=False, lf=False, failedfirst=False, newfirst=False, cacheshow=None, cacheclear=False, pythonwarnings=None, strict_config=False, strict_markers=False, strict=False, inifilename=None, rootdir=None, collectonly=False, pyargs=False, ignore=None, ignore_glob=None, deselect=None, help=False, override_ini=None, setuponly=False, setupshow=False, setupplan=False, log_level=None, log_format=None, log_date_format=None, log_cli_level=None, log_cli_format=None, log_cli_date_format=None, log_file=None, log_file_level=None, log_file_format=None, log_file_date_format=None, log_auto_indent=None, file_or_dir=['test_pytest.py::test_group_a_uppercase'])
False
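Building on that, here is a small sketch of how a test could branch on rerun mode. The is_rerun fixture name is just an example, not part of pytest or pytest-rerunfailures:

# conftest.py
import pytest


@pytest.fixture
def is_rerun(request):
    # True when pytest was started with --lf / --last-failed
    return bool(request.config.option.lf)


# test_example.py
def test_something(is_rerun):
    if is_rerun:
        print("this is a rerun of the last failed tests")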
Check out the latest blogs from LambdaTest on this topic:
Are you comfortable pushing a buggy release to a staging environment?
Were you able to work on your resolutions for 2019? I may sound comical here, but as a web developer, my 2019 resolution was to take a leap into web testing in my free time. Why? So I could understand release cycles from a tester's perspective. I wanted to wear their shoes and see the SDLC through their eyes. I also thought it would help me groom myself into a better all-round IT professional.
I still remember the day our delivery manager announced that, from the next phase, the project was going Agile. After attending some training and doing some online research, I realized that, as a traditional tester, moving from a Waterfall to an Agile testing team was one of the best learning experiences to boost my career. Testing in Agile came with certain challenges: my roles and responsibilities grew considerably, and the workplace demanded a pace never seen before. Apart from helping me learn automation tools and improve my domain and business knowledge, it brought me closer to the team and let me participate actively in product creation. Here I will share everything I learned as a traditional tester moving from Waterfall to Agile.
If you use Selenium WebDriver, you probably know that there are many methods to perform specific actions or interact with elements on a web page. The Selenium Python module gives you the methods you need to be able to automate many tasks when working with a web browser online.
Every software project involves some kind of 'processes' and 'practices' for the successful execution and deployment of the project. As the size and scale of the project grow, the degree of complication also increases exponentially. The leadership team should make every possible effort to develop, test, and release the software incrementally, so that each release has minimal (or no) impact on the software already in the customer's hands.
Looking for an in-depth tutorial around pytest? LambdaTest's detailed pytest tutorial covers everything related to pytest, from setting up the framework to automation testing. Delve deeper into pytest by exploring advanced use cases like parallel testing, pytest fixtures, parameterization, executing multiple test cases from a single file, and more.
Skim through the pytest tutorial playlist below to get started with automation testing using the pytest framework.
https://www.youtube.com/playlist?list=PLZMWkkQEwOPlcGgDmHl8KkXKeLF83XlrP