Best Python code snippet using pytest
Source: test_mark.py
...
# reprec_keywords = testdir.inline_run("-k", "FOO")
# passed_k, skipped_k, failed_k = reprec_keywords.countoutcomes()
# assert passed_k == 2
# assert skipped_k == failed_k == 0

def test_addmarker_order():
    node = Node("Test", config=mock.Mock(), session=mock.Mock(), nodeid="Test")
    node.add_marker("foo")
    node.add_marker("bar")
    node.add_marker("baz", append=False)
    extracted = [x.name for x in node.iter_markers()]
    assert extracted == ["baz", "foo", "bar"]

@pytest.mark.filterwarnings("ignore")
def test_markers_from_parametrize(testdir):
    """#3605"""
    testdir.makepyfile(
        """
        from __future__ import print_function
        import pytest
        first_custom_mark = pytest.mark.custom_marker
        ...
How do I configure PyCharm to run py.test tests?
How do I convert this bash loop to python?
What are metaclasses in Python?
How to get the total number of tests passed, failed and skipped from pytest
How to uninstall jupyter
Processing pytest test results at runtime
Abstract method inheritance in Python
What does if __name__ == "__main__": do?
What does the "yield" keyword do?
AttributeError: 'module' object has no attribute 'ensuretemp'
Please go to File | Settings | Tools | Python Integrated Tools and change the default test runner to py.test. Then you'll get the py.test option to create tests instead of the unittest one.
The fully Pythonic way to read a file is the following:
with open(...) as f:
for line in f:
<do something with line>
The with statement handles opening and closing the file, including if an exception is raised in the inner block. The for line in f treats the file object f as an iterable, which automatically uses buffered IO and memory management so you don't have to worry about large files.
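For instance, a minimal runnable sketch (the file name input.txt is only an illustration):
with open('input.txt') as f:
    for line in f:
        print(line.rstrip('\n'))  # each iteration reads one line, newline included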
There should be one -- and preferably only one -- obvious way to do it.
Before understanding metaclasses, you need to master classes in Python. And Python has a very peculiar idea of what classes are, borrowed from the Smalltalk language.
In most languages, classes are just pieces of code that describe how to produce an object. That's kinda true in Python too:
>>> class ObjectCreator(object):
... pass
...
>>> my_object = ObjectCreator()
>>> print(my_object)
<__main__.ObjectCreator object at 0x8974f2c>
But classes are more than that in Python. Classes are objects too.
Yes, objects.
As soon as you use the keyword class, Python executes it and creates
an object. The instruction
>>> class ObjectCreator(object):
... pass
...
creates in memory an object with the name ObjectCreator.
This object (the class) is itself capable of creating objects (the instances), and this is why it's a class.
But still, it's an object, and therefore:
you can assign it to a variable
you can add attributes to it
you can pass it as a function parameter
e.g.:
>>> print(ObjectCreator) # you can print a class because it's an object
<class '__main__.ObjectCreator'>
>>> def echo(o):
... print(o)
...
>>> echo(ObjectCreator) # you can pass a class as a parameter
<class '__main__.ObjectCreator'>
>>> print(hasattr(ObjectCreator, 'new_attribute'))
False
>>> ObjectCreator.new_attribute = 'foo' # you can add attributes to a class
>>> print(hasattr(ObjectCreator, 'new_attribute'))
True
>>> print(ObjectCreator.new_attribute)
foo
>>> ObjectCreatorMirror = ObjectCreator # you can assign a class to a variable
>>> print(ObjectCreatorMirror.new_attribute)
foo
>>> print(ObjectCreatorMirror())
<__main__.ObjectCreator object at 0x8997b4c>
Since classes are objects, you can create them on the fly, like any object.
First, you can create a class in a function using class:
>>> def choose_class(name):
... if name == 'foo':
... class Foo(object):
... pass
... return Foo # return the class, not an instance
... else:
... class Bar(object):
... pass
... return Bar
...
>>> MyClass = choose_class('foo')
>>> print(MyClass) # the function returns a class, not an instance
<class '__main__.Foo'>
>>> print(MyClass()) # you can create an object from this class
<__main__.Foo object at 0x89c6d4c>
But it's not so dynamic, since you still have to write the whole class yourself.
Since classes are objects, they must be generated by something.
When you use the class keyword, Python creates this object automatically. But as
with most things in Python, it gives you a way to do it manually.
Remember the function type? The good old function that lets you know what
type an object is:
>>> print(type(1))
<type 'int'>
>>> print(type("1"))
<type 'str'>
>>> print(type(ObjectCreator))
<type 'type'>
>>> print(type(ObjectCreator()))
<class '__main__.ObjectCreator'>
Well, type also has a completely different ability: it can create classes on the fly. type can take the description of a class as parameters and return a class.
(I know, it's silly that the same function can have two completely different uses according to the parameters you pass to it. It's an issue due to backward compatibility in Python)
type works this way:
type(name, bases, attrs)
Where:
name: name of the class
bases: tuple of the parent classes (for inheritance; can be empty)
attrs: dictionary containing attribute names and values
e.g.:
>>> class MyShinyClass(object):
... pass
can be created manually this way:
>>> MyShinyClass = type('MyShinyClass', (), {}) # returns a class object
>>> print(MyShinyClass)
<class '__main__.MyShinyClass'>
>>> print(MyShinyClass()) # create an instance with the class
<__main__.MyShinyClass object at 0x8997cec>
You'll notice that we use MyShinyClass as the name of the class
and as the variable to hold the class reference. They can be different,
but there is no reason to complicate things.
type accepts a dictionary to define the attributes of the class. So:
>>> class Foo(object):
... bar = True
Can be translated to:
>>> Foo = type('Foo', (), {'bar':True})
And used as a normal class:
>>> print(Foo)
<class '__main__.Foo'>
>>> print(Foo.bar)
True
>>> f = Foo()
>>> print(f)
<__main__.Foo object at 0x8a9b84c>
>>> print(f.bar)
True
And of course, you can inherit from it, so:
>>> class FooChild(Foo):
... pass
would be:
>>> FooChild = type('FooChild', (Foo,), {})
>>> print(FooChild)
<class '__main__.FooChild'>
>>> print(FooChild.bar) # bar is inherited from Foo
True
Eventually, you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute.
>>> def echo_bar(self):
... print(self.bar)
...
>>> FooChild = type('FooChild', (Foo,), {'echo_bar': echo_bar})
>>> hasattr(Foo, 'echo_bar')
False
>>> hasattr(FooChild, 'echo_bar')
True
>>> my_foo = FooChild()
>>> my_foo.echo_bar()
True
And you can add even more methods after you dynamically create the class, just like adding methods to a normally created class object.
>>> def echo_bar_more(self):
... print('yet another method')
...
>>> FooChild.echo_bar_more = echo_bar_more
>>> hasattr(FooChild, 'echo_bar_more')
True
You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.
This is what Python does when you use the keyword class, and it does so by using a metaclass.
Metaclasses are the 'stuff' that creates classes.
You define classes in order to create objects, right?
But we learned that Python classes are objects.
Well, metaclasses are what create these objects. They are the classes' classes, you can picture them this way:
MyClass = MetaClass()
my_object = MyClass()
You've seen that type lets you do something like this:
MyClass = type('MyClass', (), {})
It's because the function type is in fact a metaclass. type is the
metaclass Python uses to create all classes behind the scenes.
Now you wonder "why the heck is it written in lowercase, and not Type?"
Well, I guess it's a matter of consistency with str, the class that creates string objects, and int, the class that creates integer objects. type is just the class that creates class objects.
You see that by checking the __class__ attribute.
Everything, and I mean everything, is an object in Python. That includes integers, strings, functions and classes. All of them are objects. And all of them have been created from a class:
>>> age = 35
>>> age.__class__
<type 'int'>
>>> name = 'bob'
>>> name.__class__
<type 'str'>
>>> def foo(): pass
>>> foo.__class__
<type 'function'>
>>> class Bar(object): pass
>>> b = Bar()
>>> b.__class__
<class '__main__.Bar'>
Now, what is the __class__ of any __class__ ?
>>> age.__class__.__class__
<type 'type'>
>>> name.__class__.__class__
<type 'type'>
>>> foo.__class__.__class__
<type 'type'>
>>> b.__class__.__class__
<type 'type'>
So, a metaclass is just the stuff that creates class objects.
You can call it a 'class factory' if you wish.
type is the built-in metaclass Python uses, but of course, you can create your
own metaclass.
The __metaclass__ attribute
In Python 2, you can add a __metaclass__ attribute when you write a class (see the next section for the Python 3 syntax):
class Foo(object):
__metaclass__ = something...
[...]
If you do so, Python will use the metaclass to create the class Foo.
Careful, it's tricky.
You write class Foo(object) first, but the class object Foo is not created
in memory yet.
Python will look for __metaclass__ in the class definition. If it finds it,
it will use it to create the object class Foo. If it doesn't, it will use
type to create the class.
Read that several times.
When you do:
class Foo(Bar):
pass
Python does the following:
Is there a __metaclass__ attribute in Foo?
If yes, create in-memory a class object (I said a class object, stay with me here), with the name Foo by using what is in __metaclass__.
If Python can't find __metaclass__, it will look for a __metaclass__ at the MODULE level, and try to do the same (but only for classes that don't inherit anything, basically old-style classes).
Then if it can't find any __metaclass__ at all, it will use the Bar's (the first parent) own metaclass (which might be the default type) to create the class object.
Be careful here: the __metaclass__ attribute will not be inherited, but the metaclass of the parent (Bar.__class__) will be. If Bar used a __metaclass__ attribute that created Bar with type() (and not type.__new__()), the subclasses will not inherit that behavior.
Now the big question is, what can you put in __metaclass__?
The answer is something that can create a class.
And what can create a class? type, or anything that subclasses or uses it.
The syntax to set the metaclass has been changed in Python 3:
class Foo(object, metaclass=something):
...
i.e. the __metaclass__ attribute is no longer used, in favor of a keyword argument in the list of base classes.
The behavior of metaclasses however stays largely the same.
One thing added to metaclasses in Python 3 is that you can also pass attributes as keyword-arguments into a metaclass, like so:
class Foo(object, metaclass=something, kwarg1=value1, kwarg2=value2):
...
Read the section below for how Python handles this.
The main purpose of a metaclass is to change the class automatically, when it's created.
You usually do this for APIs, where you want to create classes matching the current context.
Imagine a stupid example, where you decide that all classes in your module
should have their attributes written in uppercase. There are several ways to
do this, but one way is to set __metaclass__ at the module level.
This way, all classes of this module will be created using this metaclass, and we just have to tell the metaclass to turn all attributes to uppercase.
Luckily, __metaclass__ can actually be any callable, it doesn't need to be a
formal class (I know, something with 'class' in its name doesn't need to be
a class, go figure... but it's helpful).
So we will start with a simple example, by using a function.
# the metaclass will automatically get passed the same arguments
# that you usually pass to `type`
def upper_attr(future_class_name, future_class_parents, future_class_attrs):
    """
    Return a class object, with its attribute names turned
    into uppercase.
    """
# pick up any attribute that doesn't start with '__' and uppercase it
uppercase_attrs = {
attr if attr.startswith("__") else attr.upper(): v
for attr, v in future_class_attrs.items()
}
# let `type` do the class creation
return type(future_class_name, future_class_parents, uppercase_attrs)
__metaclass__ = upper_attr # this will affect all classes in the module
class Foo(): # global __metaclass__ won't work with "object" though
# but we can define __metaclass__ here instead to affect only this class
# and this will work with "object" children
bar = 'bip'
Let's check:
>>> hasattr(Foo, 'bar')
False
>>> hasattr(Foo, 'BAR')
True
>>> Foo.BAR
'bip'
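Note that the module-level __metaclass__ trick is Python 2 only. As a hedged sketch of the Python 3 equivalent, you can pass the same upper_attr function from above explicitly, per class, with the metaclass keyword:
class Foo(metaclass=upper_attr):
    bar = 'bip'

print(hasattr(Foo, 'bar'))  # False
print(Foo.BAR)              # 'bip'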
Now, let's do exactly the same, but using a real class for a metaclass:
# remember that `type` is actually a class like `str` and `int`
# so you can inherit from it
class UpperAttrMetaclass(type):
# __new__ is the method called before __init__
# it's the method that creates the object and returns it
# while __init__ just initializes the object passed as parameter
# you rarely use __new__, except when you want to control how the object
# is created.
# here the created object is the class, and we want to customize it
# so we override __new__
# you can do some stuff in __init__ too if you wish
# some advanced use involves overriding __call__ as well, but we won't
# see this
def __new__(upperattr_metaclass, future_class_name,
future_class_parents, future_class_attrs):
uppercase_attrs = {
attr if attr.startswith("__") else attr.upper(): v
for attr, v in future_class_attrs.items()
}
return type(future_class_name, future_class_parents, uppercase_attrs)
Let's rewrite the above, but with shorter and more realistic variable names now that we know what they mean:
class UpperAttrMetaclass(type):
def __new__(cls, clsname, bases, attrs):
uppercase_attrs = {
attr if attr.startswith("__") else attr.upper(): v
for attr, v in attrs.items()
}
return type(clsname, bases, uppercase_attrs)
You may have noticed the extra argument cls. There is nothing special about it: __new__ always receives the class it's defined in as the first parameter, just like you have self for ordinary methods (which receive the instance as the first parameter) or the defining class for class methods.
But this is not proper OOP. We are calling type directly and we aren't overriding or calling the parent's __new__. Let's do that instead:
class UpperAttrMetaclass(type):
def __new__(cls, clsname, bases, attrs):
uppercase_attrs = {
attr if attr.startswith("__") else attr.upper(): v
for attr, v in attrs.items()
}
return type.__new__(cls, clsname, bases, uppercase_attrs)
We can make it even cleaner by using super, which will ease inheritance (because yes, you can have metaclasses, inheriting from metaclasses, inheriting from type):
class UpperAttrMetaclass(type):
def __new__(cls, clsname, bases, attrs):
uppercase_attrs = {
attr if attr.startswith("__") else attr.upper(): v
for attr, v in attrs.items()
}
        # Python 2 requires passing the arguments to super explicitly:
        #   return super(UpperAttrMetaclass, cls).__new__(
        #       cls, clsname, bases, uppercase_attrs)
        # Python 3 can use the no-arg super(), which infers them:
        return super().__new__(cls, clsname, bases, uppercase_attrs)
Oh, and in Python 3, if you define the class with keyword arguments, like this:
class Foo(object, metaclass=MyMetaclass, kwarg1=value1):
...
it translates to this kind of signature in the metaclass that uses it:
class MyMetaclass(type):
    def __new__(cls, clsname, bases, dct, kwarg1=default):
        ...
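Here is a minimal runnable sketch of that mechanism (the metaclass name GreetingMeta and the greeting argument are made up for illustration):
class GreetingMeta(type):
    def __new__(cls, clsname, bases, dct, greeting="hello"):
        klass = super().__new__(cls, clsname, bases, dct)
        klass.greeting = greeting  # store the class keyword argument on the class
        return klass

    def __init__(cls, clsname, bases, dct, greeting="hello"):
        # swallow the extra keyword so type.__init__ doesn't receive it
        super().__init__(clsname, bases, dct)

class Foo(metaclass=GreetingMeta, greeting="hi"):
    pass

print(Foo.greeting)  # hi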
That's it. There is really nothing more about metaclasses.
The reason behind the complexity of the code using metaclasses is not because
of metaclasses, it's because you usually use metaclasses to do twisted stuff
relying on introspection, manipulating inheritance, vars such as __dict__, etc.
Indeed, metaclasses are especially useful to do black magic, and therefore complicated stuff. But by themselves, they are simple:
intercept a class creation
modify the class
return the modified class
Since __metaclass__ can accept any callable, why would you use a class, since it's obviously more complicated?
There are several reasons to do so:
The intention is clear: when you read UpperAttrMetaclass(type), you know what's going to follow.
You can hook on __new__, __init__ and __call__, which allows you to do different things; even if usually you can do it all in __new__, some people are just more comfortable using __init__.
Now the big question: why would you use some obscure, error-prone feature?
Well, usually you don't:
Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why).
Python Guru Tim Peters
The main use case for a metaclass is creating an API. A typical example of this is the Django ORM. It allows you to define something like this:
class Person(models.Model):
name = models.CharField(max_length=30)
age = models.IntegerField()
But if you do this:
person = Person(name='bob', age='35')
print(person.age)
It won't return an IntegerField object. It will return an int, and can even take it directly from the database.
This is possible because models.Model defines __metaclass__ and
it uses some magic that will turn the Person you just defined with simple statements
into a complex hook to a database field.
Django makes something complex look simple by exposing a simple API and using metaclasses, recreating code from this API to do the real job behind the scenes.
First, you know that classes are objects that can create instances.
Well, in fact, classes are themselves instances. Of metaclasses.
>>> class Foo(object): pass
>>> id(Foo)
142630324
Everything is an object in Python, and they are all either instances of classes or instances of metaclasses.
Except for type.
type is actually its own metaclass. This is not something you could
reproduce in pure Python, and is done by cheating a little bit at the implementation
level.
Secondly, metaclasses are complicated. You may not want to use them for very simple class alterations. You can change classes by using two different techniques (a sketch of the second follows below):
monkey patching
class decorators
99% of the time you need class alteration, you are better off using these.
But 98% of the time, you don't need class alteration at all.
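If you do need an alteration, here is a hedged sketch of the same uppercase trick written as a class decorator instead of a metaclass (uppercase_class is a made-up name):
def uppercase_class(cls):
    # rebuild the class with its non-dunder attribute names uppercased
    uppercase_attrs = {
        attr if attr.startswith("__") else attr.upper(): v
        for attr, v in vars(cls).items()
        if attr not in ("__dict__", "__weakref__")  # type() recreates these
    }
    return type(cls.__name__, cls.__bases__, uppercase_attrs)

@uppercase_class
class Foo(object):
    bar = 'bip'

print(hasattr(Foo, 'bar'))  # False
print(Foo.BAR)              # 'bip'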
Use the pytest_terminal_summary hook. The stats are provided by the terminalreporter object. Example:
# conftest.py
import time

def pytest_terminal_summary(terminalreporter, exitstatus, config):
print('passed amount:', len(terminalreporter.stats['passed']))
print('failed amount:', len(terminalreporter.stats['failed']))
print('xfailed amount:', len(terminalreporter.stats['xfailed']))
print('skipped amount:', len(terminalreporter.stats['skipped']))
duration = time.time() - terminalreporter._sessionstarttime
print('duration:', duration, 'seconds')
Unfortunately, _pytest.terminal.TerminalReporter isn't part of the public API yet, so it's best to inspect its code directly.
If you need to access the stats in another hook like pytest_sessionfinish, use the plugin manager, e.g.:
def pytest_sessionfinish(session, exitstatus):
reporter = session.config.pluginmanager.get_plugin('terminalreporter')
print('passed amount:', len(reporter.stats['passed']))
...
However, depending on which hook you are in, you may not get the complete stats or the correct duration, so caution is advised.
If you don't want to use pip-autoremove (since it removes dependencies shared among other packages) and pip3 uninstall jupyter just removed some packages, then do the following:
sudo may be needed, depending on your setup.
python3 -m pip uninstall -y jupyter jupyter_core jupyter-client jupyter-console jupyterlab_pygments notebook qtconsole nbconvert nbformat jupyterlab-widgets nbclient
The above command will only uninstall jupyter-specific packages. I have not added other packages to uninstall, since they might be shared among other packages (e.g., Jinja2 is used by Flask, ipython is a separate set of packages in itself, tornado might be used by others).
In any case, all the dependencies are listed below (as of 21 Nov 2020, jupyter==4.4.0).
If you are sure you want to remove all the dependencies, then you can use Stan_MD's answer.
argon2-cffi
argon2-cffi-bindings
async-generator
attrs
backcall
bleach
cffi
dataclasses
decorator
defusedxml
entrypoints
importlib-metadata
ipykernel
ipython
ipython-genutils
ipywidgets
jedi
Jinja2
jsonschema
jupyter
jupyter-client
jupyter-console
jupyter-core
jupyterlab-pygments
jupyterlab-widgets
MarkupSafe
mistune
nbclient
nbconvert
nbformat
nest-asyncio
notebook
packaging
pandocfilters
parso
pexpect
pickleshare
prometheus-client
prompt-toolkit
ptyprocess
pycparser
Pygments
pyparsing
pyrsistent
python-dateutil
pyzmq
qtconsole
QtPy
Send2Trash
six
terminado
testpath
tornado
traitlets
typing-extensions
wcwidth
webencodings
widgetsnbextension
zipp
Uninstall jupyter dist-packages:
pip3 uninstall jupyter
Uninstall jupyter_core dist-packages (it also uninstalls the following binaries: jupyter, jupyter-migrate, jupyter-troubleshoot):
pip3 uninstall jupyter_core
Uninstall jupyter-client:
pip3 uninstall jupyter-client
Uninstall jupyter-console:
pip3 uninstall jupyter-console
Uninstall jupyter-notebook (it also uninstalls the following binaries: jupyter-bundlerextension, jupyter-nbextension, jupyter-notebook, jupyter-serverextension):
pip3 uninstall notebook
Uninstall jupyter-qtconsole:
pip3 uninstall qtconsole
Uninstall jupyter-nbconvert:
pip3 uninstall nbconvert
Uninstall jupyter-trust:
pip3 uninstall nbformat
py.test result files are not really meant to be human readable; they are meant to be read by a continuous integration server like Hudson or Travis CI (I'm not sure if there is a third-party parser for them or not). As @limelights said, you can get the XML files for these services with the --junitxml=path flag, or use the --resultlog=path flag.
You've pretty much got all the code there; you can always test it and see if it works... but as a spoiler, your design is fine as long as C.test_attribute gets decorated with property.
If you try to make an instance of B, you'll have problems, since the whole abstract interface hasn't been implemented, but it is fine to use B as a base class for C (and presumably other classes later...)
e.g.:
import abc
class A(object):
__metaclass__ = abc.ABCMeta
@abc.abstractproperty
def foo(self):
pass
class B(A):
def bar(self):
return "bar"
class C(B):
@property
def foo(self):
return "foo"
print C().foo # foo
print C().bar() # bar
print B().foo # TypeError
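For reference, a hedged Python 3 sketch of the same hierarchy (abc.abstractproperty is deprecated there in favor of stacking property on abc.abstractmethod):
import abc

class A(abc.ABC):  # Python 3 replaces the __metaclass__ attribute with abc.ABC
    @property
    @abc.abstractmethod
    def foo(self):
        pass

class B(A):
    def bar(self):
        return "bar"

class C(B):
    @property
    def foo(self):
        return "foo"

print(C().foo)    # foo
print(C().bar())  # bar
print(B().foo)    # TypeError: can't instantiate abstract class B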
It's boilerplate code that protects users from accidentally invoking the script when they didn't intend to. Here are some common problems when the guard is omitted from a script:
If you import the guardless script in another script (e.g. import my_script_without_a_name_eq_main_guard), then the second script will trigger the first to run at import time, using the second script's command line arguments. This is almost always a mistake.
If you have a custom class in the guardless script and save it to a pickle file, then unpickling it in another script will trigger an import of the guardless script, with the same problems outlined in the previous bullet.
To better understand why and how this matters, we need to take a step back to understand how Python initializes scripts and how this interacts with its module import mechanism.
Whenever the Python interpreter reads a source file, it does two things:
it sets a few special variables like __name__, and then
it executes all of the code found in the file.
Let's see how this works and how it relates to your question about the __name__ checks we always see in Python scripts.
Let's use a slightly different code sample to explore how imports and scripts work. Suppose the following is in a file called foo.py.
# Suppose this is foo.py.
print("before import")
import math
print("before function_a")
def function_a():
print("Function A")
print("before function_b")
def function_b():
print("Function B {}".format(math.sqrt(100)))
print("before __name__ guard")
if __name__ == '__main__':
function_a()
function_b()
print("after __name__ guard")
When the Python interpreter reads a source file, it first defines a few special variables. In this case, we care about the __name__ variable.
When Your Module Is the Main Program
If you are running your module (the source file) as the main program, e.g.
python foo.py
the interpreter will assign the hard-coded string "__main__" to the __name__ variable, i.e.
# It's as if the interpreter inserts this at the top
# of your module when run as the main program.
__name__ = "__main__"
When Your Module Is Imported By Another
On the other hand, suppose some other module is the main program and it imports your module. This means there's a statement like this in the main program, or in some other module the main program imports:
# Suppose this is in some other main program.
import foo
The interpreter will search for your foo.py file (along with searching for a few other variants), and prior to executing that module, it will assign the name "foo" from the import statement to the __name__ variable, i.e.
# It's as if the interpreter inserts this at the top
# of your module when it's imported from another module.
__name__ = "foo"
After the special variables are set up, the interpreter executes all the code in the module, one statement at a time. You may want to open another window on the side with the code sample so you can follow along with this explanation.
Always
It prints the string "before import" (without quotes).
It loads the math module and assigns it to a variable called math. This is equivalent to replacing import math with the following (note that __import__ is a low-level function in Python that takes a string and triggers the actual import):
# Find and load a module given its string name, "math",
# then assign it to a local variable called math.
math = __import__("math")
It prints the string "before function_a".
It executes the def block, creating a function object, then assigning that function object to a variable called function_a.
It prints the string "before function_b".
It executes the second def block, creating another function object, then assigning it to a variable called function_b.
It prints the string "before __name__ guard".
Only When Your Module Is the Main Program
It sees that __name__ was indeed set to "__main__" and it calls the two functions, printing the strings "Function A" and "Function B 10.0".
Only When Your Module Is Imported by Another
__name__ will be "foo", not "__main__", and it'll skip the body of the if statement.
Always
It prints the string "after __name__ guard" in both situations.
Summary
In summary, here's what'd be printed in the two cases:
# What gets printed if foo is the main program
before import
before function_a
before function_b
before __name__ guard
Function A
Function B 10.0
after __name__ guard
# What gets printed if foo is imported as a regular module
before import
before function_a
before function_b
before __name__ guard
after __name__ guard
You might naturally wonder why anybody would want this. Well, sometimes you want to write a .py file that can be both used by other programs and/or modules as a module, and can also be run as the main program itself. Examples:
Your module is a library, but you want to have a script mode where it runs some unit tests or a demo.
Your module is only used as a main program, but it has some unit tests, and the testing framework works by importing .py files like your script and running special test functions. You don't want it to try running the script just because it's importing the module.
Your module is mostly used as a main program, but it also provides a programmer-friendly API for advanced users.
Beyond those examples, it's elegant that running a script in Python is just setting up a few magic variables and importing the script. "Running" the script is a side effect of importing the script's module.
Question: Can I have multiple __name__ checking blocks? Answer: it's strange to do so, but the language won't stop you.
Suppose the following is in foo2.py. What happens if you say python foo2.py on the command-line? Why?
# Suppose this is foo2.py.
import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters
def function_a():
print("a1")
from foo2 import function_b
print("a2")
function_b()
print("a3")
def function_b():
print("b")
print("t1")
if __name__ == "__main__":
print("m1")
function_a()
print("m2")
print("t2")
Now, figure out what will happen if you remove the __name__ check in foo3.py:
# Suppose this is foo3.py.
import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters
def function_a():
print("a1")
from foo3 import function_b
print("a2")
function_b()
print("a3")
def function_b():
print("b")
print("t1")
print("m1")
function_a()
print("m2")
print("t2")
One more twist: what does foo4.py print, given that it assigns __name__ itself?
# Suppose this is in foo4.py
__name__ = "__main__"
def bar():
print("bar")
print("before __name__ guard")
if __name__ == "__main__":
bar()
print("after __name__ guard")
To understand what yield does, you must understand what generators are. And before you can understand generators, you must understand iterables.
When you create a list, you can read its items one by one. Reading its items one by one is called iteration:
>>> mylist = [1, 2, 3]
>>> for i in mylist:
... print(i)
1
2
3
mylist is an iterable. When you use a list comprehension, you create a list, and so an iterable:
>>> mylist = [x*x for x in range(3)]
>>> for i in mylist:
... print(i)
0
1
4
Everything you can use "for... in..." on is an iterable: lists, strings, files...
These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values.
Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly:
>>> mygenerator = (x*x for x in range(3))
>>> for i in mygenerator:
... print(i)
0
1
4
It is just the same except you used () instead of []. BUT, you cannot perform for i in mygenerator a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end calculating 4, one by one.
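A quick sketch of that single-use behavior:
>>> mygenerator = (x*x for x in range(3))
>>> list(mygenerator)
[0, 1, 4]
>>> list(mygenerator)  # already exhausted: nothing left to produce
[]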
yield is a keyword that is used like return, except the function will return a generator.
>>> def create_generator():
... mylist = range(3)
... for i in mylist:
... yield i*i
...
>>> mygenerator = create_generator() # create a generator
>>> print(mygenerator) # mygenerator is an object!
<generator object create_generator at 0xb7555c34>
>>> for i in mygenerator:
... print(i)
0
1
4
Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once.
To master yield, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object; this is a bit tricky.
Then, your code will continue from where it left off each time for uses the generator.
Now the hard part:
The first time the for calls the generator object created from your function, it will run the code in your function from the beginning until it hits yield, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting yield. That can be because the loop has come to an end, or because you no longer satisfy an "if/else".
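Here is that flow made visible, stepping a small generator by hand with next() (reusing the function from above):
>>> def create_generator():
...     for i in range(3):
...         yield i*i
...
>>> mygenerator = create_generator()  # no body code has run yet
>>> next(mygenerator)                 # runs until the first yield
0
>>> next(mygenerator)                 # resumes right after the yield
1
>>> next(mygenerator)
4
>>> next(mygenerator)                 # the loop is over: the generator is empty
Traceback (most recent call last):
  ...
StopIteration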
Generator:
# Here you create the method of the node object that will return the generator
def _get_child_candidates(self, distance, min_dist, max_dist):
# Here is the code that will be called each time you use the generator object:
# If there is still a child of the node object on its left
# AND if the distance is ok, return the next child
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
# If there is still a child of the node object on its right
# AND if the distance is ok, return the next child
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
# If the function arrives here, the generator will be considered empty
# there are no more than two values: the left and the right children
Caller:
# Create an empty list and a list with the current object reference
result, candidates = list(), [self]
# Loop on candidates (they contain only one element at the beginning)
while candidates:
# Get the last candidate and remove it from the list
node = candidates.pop()
# Get the distance between obj and the candidate
distance = node._get_dist(obj)
# If the distance is ok, then you can fill in the result
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
# Add the children of the candidate to the candidate's list
# so the loop will keep running until it has looked
# at all the children of the children of the children, etc. of the candidate
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
This code contains several smart parts:
The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all these nested data even if it's a bit dangerous since you can end up with an infinite loop. In this case, candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) exhausts all the values of the generator, but while keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node.
The extend() method is a list object method that expects an iterable and adds its values to the list.
Usually, we pass a list to it:
>>> a = [1, 2]
>>> b = [3, 4]
>>> a.extend(b)
>>> print(a)
[1, 2, 3, 4]
But in your code, it gets a generator, which is good because:
You don't need to read the values twice.
You may have a lot of children and you don't want them all stored in memory.
And it works because Python does not care if the argument of a method is a list or not. Python expects iterables, so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question...
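For instance, a tiny sketch of extend() consuming a generator directly:
>>> a = [1, 2]
>>> a.extend(x*x for x in range(3))  # any iterable works, not just lists
>>> print(a)
[1, 2, 0, 1, 4]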
You can stop here, or read a little bit to see an advanced use of a generator:
>>> class Bank(): # Let's create a bank, building ATMs
... crisis = False
... def create_atm(self):
... while not self.crisis:
... yield "$100"
>>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want
>>> corner_street_atm = hsbc.create_atm()
>>> print(corner_street_atm.next())
$100
>>> print(corner_street_atm.next())
$100
>>> print([corner_street_atm.next() for cash in range(5)])
['$100', '$100', '$100', '$100', '$100']
>>> hsbc.crisis = True # Crisis is coming, no more money!
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs
>>> print(wall_street_atm.next())
<type 'exceptions.StopIteration'>
>>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business
>>> for cash in brand_new_atm:
... print cash
$100
$100
$100
$100
$100
$100
$100
$100
$100
...
Note: for Python 3, use print(corner_street_atm.__next__()) or print(next(corner_street_atm)).
It can be useful for various things like controlling access to a resource.
The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator?
Chain two generators? Group values in a nested list with a one-liner? Map / Zip without creating another list?
Then just import itertools.
An example? Let's see the possible orders of arrival for a four-horse race:
>>> import itertools
>>> horses = [1, 2, 3, 4]
>>> races = itertools.permutations(horses)
>>> print(races)
<itertools.permutations object at 0xb754f1dc>
>>> print(list(itertools.permutations(horses)))
[(1, 2, 3, 4),
(1, 2, 4, 3),
(1, 3, 2, 4),
(1, 3, 4, 2),
(1, 4, 2, 3),
(1, 4, 3, 2),
(2, 1, 3, 4),
(2, 1, 4, 3),
(2, 3, 1, 4),
(2, 3, 4, 1),
(2, 4, 1, 3),
(2, 4, 3, 1),
(3, 1, 2, 4),
(3, 1, 4, 2),
(3, 2, 1, 4),
(3, 2, 4, 1),
(3, 4, 1, 2),
(3, 4, 2, 1),
(4, 1, 2, 3),
(4, 1, 3, 2),
(4, 2, 1, 3),
(4, 2, 3, 1),
(4, 3, 1, 2),
(4, 3, 2, 1)]
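And the helpers teased above, in a short sketch (continuing the session, with itertools already imported):
>>> list(itertools.chain((x for x in 'ab'), (y for y in 'cd')))  # chain two generators
['a', 'b', 'c', 'd']
>>> g1, g2 = itertools.tee(x*x for x in range(3))  # "duplicate" a generator
>>> list(g1), list(g2)
([0, 1, 4], [0, 1, 4])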
Iteration is a process implying iterables (implementing the __iter__() method) and iterators (implementing the __next__() method).
Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables.
There is more about it in this article about how for loops work.
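A short sketch of that protocol in action:
>>> mylist = [1, 2]
>>> iterator = iter(mylist)  # the iterable hands out an iterator via __iter__
>>> next(iterator)           # the iterator advances via __next__
1
>>> next(iterator)
2
>>> next(iterator)           # an exhausted iterator raises StopIteration
Traceback (most recent call last):
  ...
StopIteration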
No, you're doing everything correctly, as the documentation shows; I'm unable to get it working either. On the other hand, pytest has the tmpdir fixture, which does what you need in a test: it gives you a unique temporary directory for your test.
Here is an example of a test:
def test_one(tmpdir):
test_file = tmpdir.join('dir', 'file').ensure(file=True)
test_file.write('test\ntest\n')
assert test_file.read().rsplit() == ['test', 'test']
pytest also uses py.path, which has the py.path.local object; with tempfile.mkdtemp you can create a temporary dir object yourself:
import py
import tempfile

tempdir = py.path.local(tempfile.mkdtemp())