Best Python code snippet using pytest
Source: __init__.py
# ... (excerpt; surrounding code omitted)
            return res

    util._reprcompare = callbinrepr

    if item.ihook.pytest_assertion_pass.get_hookimpls():

        def call_assertion_pass_hook(lineno, expl, orig):
            item.ihook.pytest_assertion_pass(
                item=item, lineno=lineno, orig=orig, expl=expl
            )

        util._assertion_pass = call_assertion_pass_hook


def pytest_runtest_teardown(item):
    util._reprcompare = None
    util._assertion_pass = None


def pytest_sessionfinish(session):
    assertstate = getattr(session.config, "_assertstate", None)
    if assertstate:
        if assertstate.hook is not None:
            assertstate.hook.set_session(None)

# Expose this plugin's implementation for the pytest_assertrepr_compare hook ...
How to condense if/else into one line in Python?
Python 3.7 anaconda environment - import _ssl DLL load fail error
How to set argparse arguments from python script
What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
What does the "yield" keyword do?
log messages appearing twice with Python Logging
Does TensorFlow have cross validation implemented for its users?
How do I get the full path of the current file's directory?
What are metaclasses in Python?
Python Pytest: show full variables values on error (no minimization)
An example of Python's way of doing "ternary" expressions:
i = 5 if a > 7 else 0
translates into
if a > 7:
    i = 5
else:
    i = 0
This actually comes in handy when using list comprehensions, or sometimes in return statements, otherwise I'm not sure it helps that much in creating readable code.
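For instance, a quick sketch of the comprehension case (the values here are made up for illustration):
# clamp negative numbers to zero with a conditional expression
values = [3, -1, 4, -1, 5]
clamped = [v if v >= 0 else 0 for v in values]
print(clamped)  # [3, 0, 4, 0, 5]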
The readability issue was discussed at length in the recent SO question "better way than using if-else statement in python".
It also contains various other clever (and somewhat obfuscated) ways to accomplish the same task. It's worth a read just based on those posts.
I have answered this here; to my understanding, this error is caused by the libcrypto file being missing from (or misplaced in) the anaconda3/DLLs folder.
From anaconda3\Library\bin, copy the files below and paste them into anaconda3\DLLs:
- libcrypto-1_1-x64.dll
- libssl-1_1-x64.dll
The argparse module actually reads input variables from a special variable called ARGV (short for ARGument Vector). This variable is usually accessed by reading sys.argv from the sys module.
It is an ordinary list, so you can append your command-line parameters to it like this:
import sys
sys.argv.extend(['-a', SOME_VALUE])
main()
However, messing with sys.argv at runtime is not a good way of testing. A much cleaner way to replace sys.argv for some limited scope is using the unittest.mock.patch context manager, like this (note that argv[0] is the program name, which parse_args skips):
import unittest.mock
with unittest.mock.patch('sys.argv', ['prog', '-a', SOME_VALUE]):
    main()
Read more about unittest.mock.patch in the documentation. Also, check this related SO question.
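Putting it together, a minimal sketch (the main() function, the -a flag, and the values are illustrative, not from the original question):
import argparse
import sys
from unittest import mock

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-a', dest='alpha')
    args = parser.parse_args()  # parses sys.argv[1:] by default
    print(args.alpha)

# patch sys.argv for the duration of the call; argv[0] is the program name
with mock.patch.object(sys, 'argv', ['prog', '-a', 'some-value']):
    main()  # prints: some-value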
*args and **kwargs are a common idiom to allow an arbitrary number of arguments to functions, as described in the section More on Defining Functions in the Python documentation.
*args will give you all extra positional arguments as a tuple:
def foo(*args):
    for a in args:
        print(a)
foo(1)
# 1
foo(1,2,3)
# 1
# 2
# 3
**kwargs will give you all keyword arguments, except for those corresponding to a formal parameter, as a dictionary:
def bar(**kwargs):
    for a in kwargs:
        print(a, kwargs[a])
bar(name='one', age=27)
# name one
# age 27
Both idioms can be mixed with normal arguments to allow a set of fixed and some variable arguments:
def foo(kind, *args, **kwargs):
    pass
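For example, calling such a function (the values are illustrative):
def foo(kind, *args, **kwargs):
    print(kind, args, kwargs)
foo('demo', 1, 2, flag=True)
# demo (1, 2) {'flag': True}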
It is also possible to use this the other way around:
def foo(a, b, c):
    print(a, b, c)
obj = {'b':10, 'c':'lee'}
foo(100,**obj)
# 100 10 lee
Another usage of the *l idiom is to unpack argument lists when calling a function.
def foo(bar, lee):
    print(bar, lee)
l = [1,2]
foo(*l)
# 1 2
In Python 3 it is possible to use *l on the left side of an assignment (Extended Iterable Unpacking), though it gives a list instead of a tuple in this context:
first, *rest = [1,2,3,4]
first, *l, last = [1,2,3,4]
Python 3 also adds a new semantic (see PEP 3102):
def func(arg1, arg2, arg3, *, kwarg1, kwarg2):
    pass
Such a function accepts only three positional arguments, and everything after * can only be passed as a keyword argument.
As another example, the unpacking generalizations of PEP 448 work in Python 3.5+ but not in Python 2:
>>> x = [1, 2]
>>> [*x]
[1, 2]
>>> [*x, 3, 4]
[1, 2, 3, 4]
>>> x = {1:1, 2:2}
>>> x
{1: 1, 2: 2}
>>> {**x, 3:3, 4:4}
{1: 1, 2: 2, 3: 3, 4: 4}
Note that dicts, semantically used for keyword argument passing, are arbitrarily ordered. However, in Python 3.6, keyword arguments are guaranteed to remember insertion order: "The order of elements in **kwargs now corresponds to the order in which keyword arguments were passed to the function." (What's New In Python 3.6)

To understand what yield does, you must understand what generators are. And before you can understand generators, you must understand iterables.
When you create a list, you can read its items one by one. Reading its items one by one is called iteration:
>>> mylist = [1, 2, 3]
>>> for i in mylist:
...     print(i)
1
2
3
mylist is an iterable. When you use a list comprehension, you create a list, and so an iterable:
>>> mylist = [x*x for x in range(3)]
>>> for i in mylist:
...     print(i)
0
1
4
Everything you can use "for... in..." on is an iterable: lists, strings, files... These iterables are handy because you can read them as much as you wish, but you store all the values in memory, and this is not always what you want when you have a lot of values.
Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly:
>>> mygenerator = (x*x for x in range(3))
>>> for i in mygenerator:
...     print(i)
0
1
4
It is just the same except you used () instead of []. BUT, you cannot perform for i in mygenerator a second time, since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end after calculating 4, one by one.
yield is a keyword that is used like return, except the function will return a generator.
>>> def create_generator():
...     mylist = range(3)
...     for i in mylist:
...         yield i*i
...
>>> mygenerator = create_generator() # create a generator
>>> print(mygenerator) # mygenerator is an object!
<generator object create_generator at 0xb7555c34>
>>> for i in mygenerator:
...     print(i)
0
1
4
Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once.
To master yield, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object; this is a bit tricky. Then, your code will continue from where it left off each time for uses the generator.
Now the hard part:
The first time the for calls the generator object created from your function, it will run the code in your function from the beginning until it hits yield, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting yield. That can be because the loop has come to an end, or because you no longer satisfy an "if/else".
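A minimal sketch of that protocol (not from the original answer), driving a generator by hand with next():
>>> def gen():
...     yield 1
...     yield 2
...
>>> g = gen()
>>> next(g)  # runs the body until the first yield
1
>>> next(g)  # resumes right after the first yield
2
>>> next(g)  # the body finishes without hitting yield
Traceback (most recent call last):
  ...
StopIteration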
Generator:
# Here you create the method of the node object that will return the generator
def _get_child_candidates(self, distance, min_dist, max_dist):
    # Here is the code that will be called each time you use the generator object:

    # If there is still a child of the node object on its left
    # AND if the distance is ok, return the next child
    if self._leftchild and distance - max_dist < self._median:
        yield self._leftchild

    # If there is still a child of the node object on its right
    # AND if the distance is ok, return the next child
    if self._rightchild and distance + max_dist >= self._median:
        yield self._rightchild

    # If the function arrives here, the generator will be considered empty:
    # there are no more than two values, the left and the right children
Caller:
# Create an empty list and a list with the current object reference
result, candidates = list(), [self]

# Loop on candidates (they contain only one element at the beginning)
while candidates:
    # Get the last candidate and remove it from the list
    node = candidates.pop()

    # Get the distance between obj and the candidate
    distance = node._get_dist(obj)

    # If the distance is ok, then you can fill in the result
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)

    # Add the children of the candidate to the candidate's list
    # so the loop will keep running until it has looked
    # at all the children of the children of the children, etc. of the candidate
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))

return result
This code contains several smart parts: the loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all this nested data, even if it's a bit dangerous since you can end up with an infinite loop. In this case, candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) exhausts all the values of the generator, but while keeps creating new generator objects, which will produce different values from the previous ones since they are not applied on the same node.
The extend() method is a list object method that expects an iterable and adds its values to the list.
Usually, we pass a list to it:
>>> a = [1, 2]
>>> b = [3, 4]
>>> a.extend(b)
>>> print(a)
[1, 2, 3, 4]
But in your code, it gets a generator, which is good because you don't need to read the values twice, and you may have a lot of children you don't want stored in memory at the same time.
And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question...
You can stop here, or read a little bit more to see an advanced use of a generator:
>>> class Bank(): # Let's create a bank, building ATMs
...     crisis = False
...     def create_atm(self):
...         while not self.crisis:
...             yield "$100"
>>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want
>>> corner_street_atm = hsbc.create_atm()
>>> print(corner_street_atm.next())
$100
>>> print(corner_street_atm.next())
$100
>>> print([corner_street_atm.next() for cash in range(5)])
['$100', '$100', '$100', '$100', '$100']
>>> hsbc.crisis = True # Crisis is coming, no more money!
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs
>>> print(wall_street_atm.next())
<type 'exceptions.StopIteration'>
>>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business
>>> for cash in brand_new_atm:
...     print cash
$100
$100
$100
$100
$100
$100
$100
$100
$100
...
Note: For Python 3, use print(corner_street_atm.__next__()) or print(next(corner_street_atm)).
It can be useful for various things like controlling access to a resource.
The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? Map / zip without creating another list? Then just import itertools.
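For instance, a quick sketch of duplicating and chaining (standard library only; the generator is made up):
>>> import itertools
>>> gen = (x*x for x in range(3))
>>> a, b = itertools.tee(gen)        # "duplicate" the generator into two iterators
>>> list(itertools.chain(a, b))      # chain them back to back
[0, 1, 4, 0, 1, 4]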
An example? Let's see the possible orders of arrival for a four-horse race:
>>> import itertools
>>> horses = [1, 2, 3, 4]
>>> races = itertools.permutations(horses)
>>> print(races)
<itertools.permutations object at 0xb754f1dc>
>>> print(list(itertools.permutations(horses)))
[(1, 2, 3, 4),
(1, 2, 4, 3),
(1, 3, 2, 4),
(1, 3, 4, 2),
(1, 4, 2, 3),
(1, 4, 3, 2),
(2, 1, 3, 4),
(2, 1, 4, 3),
(2, 3, 1, 4),
(2, 3, 4, 1),
(2, 4, 1, 3),
(2, 4, 3, 1),
(3, 1, 2, 4),
(3, 1, 4, 2),
(3, 2, 1, 4),
(3, 2, 4, 1),
(3, 4, 1, 2),
(3, 4, 2, 1),
(4, 1, 2, 3),
(4, 1, 3, 2),
(4, 2, 1, 3),
(4, 2, 3, 1),
(4, 3, 1, 2),
(4, 3, 2, 1)]
Iteration is a process involving iterables (implementing the __iter__() method) and iterators (implementing the __next__() method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables.
There is more about it in this article about how for loops work.
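As a minimal sketch of those two methods (the Countdown class is made up for illustration):
class Countdown:
    def __init__(self, start):
        self.current = start
    def __iter__(self):   # being iterable: return an iterator
        return self
    def __next__(self):   # being an iterator: produce the next value
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

for n in Countdown(3):
    print(n)  # 3, then 2, then 1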
You are calling configure_logging twice (maybe in the __init__ method of Boy): getLogger will return the same object, but addHandler does not check whether a similar handler has already been added to the logger.
Try tracing calls to that method and eliminating one of them. Or set up a flag logging_initialized, initialized to False in the __init__ method of Boy, and change configure_logging to do nothing if logging_initialized is True, and to set it to True after you've initialized the logger.
If your program creates several Boy instances, you'll have to change the way you do things: use a global configure_logging function that adds the handlers, and have the Boy.configure_logging method only initialize the self.logger attribute.
Another way of solving this is by checking the handlers attribute of your logger:
import logging

logger = logging.getLogger('my_logger')
if not logger.handlers:
    # create the handlers and call logger.addHandler(logging_handler)
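A sketch of that idempotent setup (handler and format choices here are illustrative):
import logging

def configure_logging():
    logger = logging.getLogger('my_logger')
    if not logger.handlers:  # only add a handler on the first call
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

configure_logging().info('hello')  # logged once
configure_logging().info('hello')  # still one handler, so no duplicate lines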
As already discussed, TensorFlow doesn't provide its own way to cross-validate a model. The recommended way is to use KFold from scikit-learn. It's a bit tedious, but doable. Here's a complete example of cross-validating an MNIST model with tensorflow (the 1.x API) and KFold:
from sklearn.model_selection import KFold
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Parameters
learning_rate = 0.01
batch_size = 500
# TF graph
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
pred = tf.nn.softmax(tf.matmul(x, W) + b)
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.global_variables_initializer()
mnist = input_data.read_data_sets("data/mnist-tf", one_hot=True)
train_x_all = mnist.train.images
train_y_all = mnist.train.labels
test_x = mnist.test.images
test_y = mnist.test.labels
def run_train(session, train_x, train_y):
    print("\nStart training")
    session.run(init)
    for epoch in range(10):
        total_batch = int(train_x.shape[0] / batch_size)
        for i in range(total_batch):
            batch_x = train_x[i*batch_size:(i+1)*batch_size]
            batch_y = train_y[i*batch_size:(i+1)*batch_size]
            _, c = session.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
            if i % 50 == 0:
                print("Epoch #%d step=%d cost=%f" % (epoch, i, c))

def cross_validate(session, split_size=5):
    results = []
    kf = KFold(n_splits=split_size)
    for train_idx, val_idx in kf.split(train_x_all, train_y_all):
        train_x = train_x_all[train_idx]
        train_y = train_y_all[train_idx]
        val_x = train_x_all[val_idx]
        val_y = train_y_all[val_idx]
        run_train(session, train_x, train_y)
        results.append(session.run(accuracy, feed_dict={x: val_x, y: val_y}))
    return results

with tf.Session() as session:
    result = cross_validate(session)
    print("Cross-validation result: %s" % result)
    print("Test accuracy: %f" % session.run(accuracy, feed_dict={x: test_x, y: test_y}))
The special variable __file__ contains the path to the current file. From that, we can get the directory using either pathlib or the os.path module.
For the directory of the script being run:
import pathlib
pathlib.Path(__file__).parent.resolve()
For the current working directory:
import pathlib
pathlib.Path().resolve()
For the directory of the script being run:
import os
os.path.dirname(os.path.abspath(__file__))
If you mean the current working directory:
import os
os.path.abspath(os.getcwd())
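A minimal script combining both, for comparison (run it from a file, since __file__ is only set in that case):
import pathlib

script_dir = pathlib.Path(__file__).parent.resolve()
cwd = pathlib.Path().resolve()
print("script directory: ", script_dir)
print("working directory:", cwd)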
Note that __file__ has two underscores before and after, not just one.
Also note that if you are running interactively, or have loaded code from something other than a file (e.g. a database or an online resource), __file__ may not be set, since there is no notion of a "current file". The above assumes the most common scenario of running a Python script that lives in a file.
Before understanding metaclasses, you need to master classes in Python. And Python has a very peculiar idea of what classes are, borrowed from the Smalltalk language.
In most languages, classes are just pieces of code that describe how to produce an object. That's kinda true in Python too:
>>> class ObjectCreator(object):
...     pass
...
>>> my_object = ObjectCreator()
>>> print(my_object)
<__main__.ObjectCreator object at 0x8974f2c>
But classes are more than that in Python. Classes are objects too.
Yes, objects.
As soon as you use the keyword class, Python executes it and creates an object. The instruction
>>> class ObjectCreator(object):
...     pass
...
creates in memory an object with the name ObjectCreator.
This object (the class) is itself capable of creating objects (the instances), and this is why it's a class.
But still, it's an object, and therefore:
- you can assign it to a variable
- you can copy it
- you can add attributes to it
- you can pass it as a function parameter
e.g.:
>>> print(ObjectCreator) # you can print a class because it's an object
<class '__main__.ObjectCreator'>
>>> def echo(o):
... print(o)
...
>>> echo(ObjectCreator) # you can pass a class as a parameter
<class '__main__.ObjectCreator'>
>>> print(hasattr(ObjectCreator, 'new_attribute'))
False
>>> ObjectCreator.new_attribute = 'foo' # you can add attributes to a class
>>> print(hasattr(ObjectCreator, 'new_attribute'))
True
>>> print(ObjectCreator.new_attribute)
foo
>>> ObjectCreatorMirror = ObjectCreator # you can assign a class to a variable
>>> print(ObjectCreatorMirror.new_attribute)
foo
>>> print(ObjectCreatorMirror())
<__main__.ObjectCreator object at 0x8997b4c>
Since classes are objects, you can create them on the fly, like any object.
First, you can create a class in a function using class:
>>> def choose_class(name):
...     if name == 'foo':
...         class Foo(object):
...             pass
...         return Foo # return the class, not an instance
...     else:
...         class Bar(object):
...             pass
...         return Bar
...
>>> MyClass = choose_class('foo')
>>> print(MyClass) # the function returns a class, not an instance
<class '__main__.Foo'>
>>> print(MyClass()) # you can create an object from this class
<__main__.Foo object at 0x89c6d4c>
But it's not so dynamic, since you still have to write the whole class yourself.
Since classes are objects, they must be generated by something. When you use the class keyword, Python creates this object automatically. But as with most things in Python, it gives you a way to do it manually.
Remember the function type? The good old function that lets you know what type an object is:
>>> print(type(1))
<type 'int'>
>>> print(type("1"))
<type 'str'>
>>> print(type(ObjectCreator))
<type 'type'>
>>> print(type(ObjectCreator()))
<class '__main__.ObjectCreator'>
Well, type also has a completely different ability: it can create classes on the fly. type can take the description of a class as parameters and return a class.
(I know, it's silly that the same function can have two completely different uses depending on the parameters you pass to it. It's an issue due to backward compatibility in Python.)
type works this way:
type(name, bases, attrs)
Where:
- name: name of the class
- bases: tuple of the parent classes (for inheritance; can be empty)
- attrs: dictionary containing attribute names and values
e.g.:
>>> class MyShinyClass(object):
...     pass
can be created manually this way:
>>> MyShinyClass = type('MyShinyClass', (), {}) # returns a class object
>>> print(MyShinyClass)
<class '__main__.MyShinyClass'>
>>> print(MyShinyClass()) # create an instance with the class
<__main__.MyShinyClass object at 0x8997cec>
You'll notice that we use MyShinyClass as the name of the class and as the variable to hold the class reference. They can be different, but there is no reason to complicate things.
type accepts a dictionary to define the attributes of the class. So:
>>> class Foo(object):
...     bar = True
Can be translated to:
>>> Foo = type('Foo', (), {'bar':True})
And used as a normal class:
>>> print(Foo)
<class '__main__.Foo'>
>>> print(Foo.bar)
True
>>> f = Foo()
>>> print(f)
<__main__.Foo object at 0x8a9b84c>
>>> print(f.bar)
True
And of course, you can inherit from it, so:
>>> class FooChild(Foo):
...     pass
would be:
>>> FooChild = type('FooChild', (Foo,), {})
>>> print(FooChild)
<class '__main__.FooChild'>
>>> print(FooChild.bar) # bar is inherited from Foo
True
Eventually, you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute.
>>> def echo_bar(self):
...     print(self.bar)
...
>>> FooChild = type('FooChild', (Foo,), {'echo_bar': echo_bar})
>>> hasattr(Foo, 'echo_bar')
False
>>> hasattr(FooChild, 'echo_bar')
True
>>> my_foo = FooChild()
>>> my_foo.echo_bar()
True
And you can add even more methods after you dynamically create the class, just like adding methods to a normally created class object.
>>> def echo_bar_more(self):
...     print('yet another method')
...
>>> FooChild.echo_bar_more = echo_bar_more
>>> hasattr(FooChild, 'echo_bar_more')
True
You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.
This is what Python does when you use the keyword class, and it does so by using a metaclass.
Metaclasses are the 'stuff' that creates classes.
You define classes in order to create objects, right?
But we learned that Python classes are objects.
Well, metaclasses are what create these objects. They are the classes' classes, you can picture them this way:
MyClass = MetaClass()
my_object = MyClass()
You've seen that type lets you do something like this:
MyClass = type('MyClass', (), {})
It's because the function type is in fact a metaclass. type is the metaclass Python uses to create all classes behind the scenes.
Now you wonder, "why the heck is it written in lowercase, and not Type?"
Well, I guess it's a matter of consistency with str, the class that creates string objects, and int, the class that creates integer objects. type is just the class that creates class objects.
You can see that by checking the __class__ attribute.
Everything, and I mean everything, is an object in Python. That includes integers, strings, functions and classes. All of them are objects. And all of them have been created from a class:
>>> age = 35
>>> age.__class__
<type 'int'>
>>> name = 'bob'
>>> name.__class__
<type 'str'>
>>> def foo(): pass
>>> foo.__class__
<type 'function'>
>>> class Bar(object): pass
>>> b = Bar()
>>> b.__class__
<class '__main__.Bar'>
Now, what is the __class__ of any __class__?
>>> age.__class__.__class__
<type 'type'>
>>> name.__class__.__class__
<type 'type'>
>>> foo.__class__.__class__
<type 'type'>
>>> b.__class__.__class__
<type 'type'>
So, a metaclass is just the stuff that creates class objects. You can call it a 'class factory' if you wish.
type is the built-in metaclass Python uses, but of course, you can create your own metaclass.
The __metaclass__ attribute
In Python 2, you can add a __metaclass__ attribute when you write a class (see the next section for the Python 3 syntax):
class Foo(object):
    __metaclass__ = something...
    [...]
If you do so, Python will use the metaclass to create the class Foo.
Careful, it's tricky. You write class Foo(object) first, but the class object Foo is not created in memory yet.
Python will look for __metaclass__ in the class definition. If it finds it, it will use it to create the object class Foo. If it doesn't, it will use type to create the class.
Read that several times.
When you do:
class Foo(Bar):
    pass
Python does the following:
Is there a __metaclass__ attribute in Foo? If yes, Python creates in memory a class object (I said a class object, stay with me here) with the name Foo, by using what is in __metaclass__.
If Python can't find __metaclass__, it will look for a __metaclass__ at the MODULE level, and try to do the same (but only for classes that don't inherit anything, basically old-style classes).
Then, if it can't find any __metaclass__ at all, it will use Bar's (the first parent's) own metaclass (which might be the default type) to create the class object.
Be careful here: the __metaclass__ attribute will not be inherited; the metaclass of the parent (Bar.__class__) will be. If Bar used a __metaclass__ attribute that created Bar with type() (and not type.__new__()), the subclasses will not inherit that behavior.
Now the big question is: what can you put in __metaclass__?
The answer is: something that can create a class.
And what can create a class? type, or anything that subclasses or uses it.
The syntax to set the metaclass has been changed in Python 3:
class Foo(object, metaclass=something):
    ...
i.e. the __metaclass__ attribute is no longer used, in favor of a keyword argument in the list of base classes. The behavior of metaclasses, however, stays largely the same.
One thing added to metaclasses in Python 3 is that you can also pass attributes as keyword-arguments into a metaclass, like so:
class Foo(object, metaclass=something, kwarg1=value1, kwarg2=value2):
    ...
Read the section below for how Python handles this.
The main purpose of a metaclass is to change the class automatically, when it's created.
You usually do this for APIs, where you want to create classes matching the current context.
Imagine a stupid example, where you decide that all classes in your module should have their attributes written in uppercase. There are several ways to do this, but one way is to set __metaclass__ at the module level.
This way, all classes of this module will be created using this metaclass, and we just have to tell the metaclass to turn all attributes to uppercase.
Luckily, __metaclass__ can actually be any callable; it doesn't need to be a formal class (I know, something with 'class' in its name doesn't need to be a class, go figure... but it's helpful). So we will start with a simple example, by using a function.
# the metaclass will automatically get passed the same arguments
# that you usually pass to `type`
def upper_attr(future_class_name, future_class_parents, future_class_attrs):
    """
    Return a class object, with its attributes turned
    into uppercase.
    """
    # pick up any attribute that doesn't start with '__' and uppercase it
    uppercase_attrs = {
        attr if attr.startswith("__") else attr.upper(): v
        for attr, v in future_class_attrs.items()
    }
    # let `type` do the class creation
    return type(future_class_name, future_class_parents, uppercase_attrs)

__metaclass__ = upper_attr # this will affect all classes in the module

class Foo(): # global __metaclass__ won't work with "object" though
    # but we can define __metaclass__ here instead to affect only this class
    # and this will work with "object" children
    bar = 'bip'
Let's check:
>>> hasattr(Foo, 'bar')
False
>>> hasattr(Foo, 'BAR')
True
>>> Foo.BAR
'bip'
Now, let's do exactly the same, but using a real class for a metaclass:
# remember that `type` is actually a class like `str` and `int`
# so you can inherit from it
class UpperAttrMetaclass(type):
    # __new__ is the method called before __init__
    # it's the method that creates the object and returns it
    # while __init__ just initializes the object passed as parameter
    # you rarely use __new__, except when you want to control how the object
    # is created.
    # here the created object is the class, and we want to customize it
    # so we override __new__
    # you can do some stuff in __init__ too if you wish
    # some advanced use involves overriding __call__ as well, but we won't
    # see this
    def __new__(upperattr_metaclass, future_class_name,
                future_class_parents, future_class_attrs):
        uppercase_attrs = {
            attr if attr.startswith("__") else attr.upper(): v
            for attr, v in future_class_attrs.items()
        }
        return type(future_class_name, future_class_parents, uppercase_attrs)
Let's rewrite the above, but with shorter and more realistic variable names now that we know what they mean:
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, attrs):
        uppercase_attrs = {
            attr if attr.startswith("__") else attr.upper(): v
            for attr, v in attrs.items()
        }
        return type(clsname, bases, uppercase_attrs)
You may have noticed the extra argument cls. There is nothing special about it: __new__ always receives the class it's defined in as the first parameter, just like you have self for ordinary methods (which receive the instance as the first parameter), or the defining class for class methods.
But this is not proper OOP. We are calling type directly and we aren't overriding or calling the parent's __new__. Let's do that instead:
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, attrs):
        uppercase_attrs = {
            attr if attr.startswith("__") else attr.upper(): v
            for attr, v in attrs.items()
        }
        return type.__new__(cls, clsname, bases, uppercase_attrs)
We can make it even cleaner by using super, which will ease inheritance (because yes, you can have metaclasses, inheriting from metaclasses, inheriting from type):
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, attrs):
        uppercase_attrs = {
            attr if attr.startswith("__") else attr.upper(): v
            for attr, v in attrs.items()
        }
        # Python 2 requires passing arguments to super explicitly:
        # return super(UpperAttrMetaclass, cls).__new__(
        #     cls, clsname, bases, uppercase_attrs)
        # Python 3 can use no-arg super(), which infers them:
        return super().__new__(cls, clsname, bases, uppercase_attrs)
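To see it in action with the Python 3 syntax (a quick sketch, not part of the original answer):
class Foo(metaclass=UpperAttrMetaclass):
    bar = 'bip'

print(hasattr(Foo, 'bar'))  # False
print(Foo.BAR)              # bip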
Oh, and in Python 3, if you do this call with keyword arguments, like this:
class Foo(object, metaclass=MyMetaclass, kwarg1=value1):
    ...
it translates to this in the metaclass:
class MyMetaclass(type):
    def __new__(cls, clsname, bases, dct, kwarg1=default):
        ...
That's it. There is really nothing more to metaclasses.
The reason behind the complexity of code using metaclasses is not metaclasses themselves; it's because you usually use metaclasses to do twisted stuff relying on introspection, manipulating inheritance, vars such as __dict__, etc.
Indeed, metaclasses are especially useful for black magic, and therefore complicated stuff. But by themselves, they are simple:
- intercept a class creation
- modify the class
- return the modified class
Since __metaclass__ can accept any callable, why would you use a class, since it's obviously more complicated? There are several reasons to do so:
- The intention is clear: when you read UpperAttrMetaclass(type), you know what's going to follow.
- You can use OOP: a metaclass can inherit from a metaclass and override parent methods.
- You can structure your code better: you never use metaclasses for something as trivial as the example above; it's usually for something complicated, and grouping several methods in one class makes the code easier to read.
- You can hook on __new__, __init__ and __call__, which will allow you to do different stuff, even if usually you can do it all in __new__; some people are just more comfortable using __init__.
Now the big question: why would you use some obscure, error-prone feature?
Well, usually you don't:
"Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why)."
- Python Guru Tim Peters
The main use case for a metaclass is creating an API. A typical example of this is the Django ORM. It allows you to define something like this:
class Person(models.Model):
    name = models.CharField(max_length=30)
    age = models.IntegerField()
But if you do this:
person = Person(name='bob', age='35')
print(person.age)
It won't return an IntegerField object. It will return an int, and can even take it directly from the database.
This is possible because models.Model defines __metaclass__, and it uses some magic that will turn the Person you just defined with simple statements into a complex hook to a database field.
Django makes something complex look simple by exposing a simple API and using metaclasses, recreating code from this API to do the real job behind the scenes.
First, you know that classes are objects that can create instances.
Well, in fact, classes are themselves instances. Of metaclasses.
>>> class Foo(object): pass
>>> id(Foo)
142630324
Everything is an object in Python, and they are all either instances of classes or instances of metaclasses.
Except for type. type is actually its own metaclass. This is not something you could reproduce in pure Python; it is done by cheating a little bit at the implementation level.
Secondly, metaclasses are complicated. You may not want to use them for very simple class alterations. You can change classes by using two different techniques:
- monkey patching
- class decorators
99% of the time you need class alteration, you are better off using these (a class-decorator sketch follows below). But 98% of the time, you don't need class alteration at all.
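For instance, the same uppercase transformation written as a class decorator (a sketch, not from the original answer):
def upper_attrs(cls):
    # rebuild the class with uppercased attribute names
    attrs = {
        name if name.startswith("__") else name.upper(): value
        for name, value in vars(cls).items()
        if name not in ('__dict__', '__weakref__')  # these can't be passed to type()
    }
    return type(cls.__name__, cls.__bases__, attrs)

@upper_attrs
class Foo:
    bar = 'bip'

print(hasattr(Foo, 'bar'))  # False
print(Foo.BAR)              # bip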
The --showlocals option
If you just need the variables printed in full, you can use the --showlocals option combined with verbose output. Example:
$ pytest -vvl
==================================== test session starts ====================================
platform darwin -- Python 3.6.6, pytest-3.9.1, py-1.5.4, pluggy-0.7.1
cachedir: .pytest_cache
rootdir: /Users/hoefling/projects/private/stackoverflow, inifile:
collected 1 item
test_spam.py::test_in FAILED [100%]
========================================= FAILURES ==========================================
__________________________________________ test_in __________________________________________
def test_in():
d = {key: key for key in range(500)} # makes d sufficiently big to trigger minimization of output
> assert 1000 in d # this will fail
E assert 1000 in {0: 0, 1: 1, 2: 2, 3: 3, ...}
d = {0: 0,
1: 1,
2: 2,
3: 3,
4: 4,
5: 5,
6: 6,
7: 7,
8: 8,
9: 9,
10: 10,
etc.
The pytest_assertrepr_compare hook
Another option is to redefine the specific assertion messages via the pytest_assertrepr_compare hook. In the example below, I redefine the assertion representation for el in dict checks. This will be automatically applied to all matching assertions in all tests:
# conftest.py
def pytest_assertrepr_compare(op, left, right):
    if op == 'in' and isinstance(right, dict):
        return [f'{left} in {right}']
Running pytest -vv will now yield the full dict, as no truncation is done in the custom hook:
$ pytest -vv
================================== test session starts ===================================
platform darwin -- Python 3.6.6, pytest-3.9.1, py-1.5.4, pluggy-0.7.1
cachedir: .pytest_cache
rootdir: /Users/hoefling/projects/private/stackoverflow, inifile:
collected 1 item
test_spam.py::test_in FAILED [100%]
======================================== FAILURES ========================================
________________________________________ test_in _________________________________________
def test_in():
d = {key: key for key in range(500)} # makes d sufficiently big to trigger minimization of output
> assert 1000 in d # this will fail
E assert 1000 in {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10, 11: 11, 12: 12, 13: 13, 14: 14, 15: 15, 16: 16, 17: 17, 18: 18, 19: 19, 20: 20, 21: 21, 22: 22, 23: 23, 24: 24, 25: 25, 26: 26, 27: 27, 28: 28, 29: 29, 30: 30, 31: 31, 32: 32, 33: 33, 34: 34, 35: 35, 36: 36, 37: 37, 38: 38, 39: 39, 40: 40, 41: 41, 42: 42, 43: 43, 44: 44, 45: 45, 46: 46, 47: 47, 48: 48, 49: 49, 50: 50, 51: 51, 52: 52, 53: 53, 54: 54, 55: 55, 56: 56, 57: 57, 58: 58, 59: 59, 60: 60, 61: 61, 62: 62, 63: 63, 64: 64, 65: 65, 66: 66, 67: 67, 68: 68, 69: 69, 70: 70, 71: 71, 72: 72, 73: 73, 74: 74, 75: 75, 76: 76, 77: 77, 78: 78, 79: 79, 80: 80, 81: 81, 82: 82, 83: 83, 84: 84, 85: 85, 86: 86, 87: 87, 88: 88, 89: 89, 90: 90, 91: 91, 92: 92, 93: 93, 94: 94, 95: 95, 96: 96, 97: 97, 98: 98, 99: 99, 100: 100, 101: 101, 102: 102, 103: 103, 104: 104, 105: 105, 106: 106, 107: 107, 108: 108, 109: 109, 110: 110, 111: 111, 112: 112, 113: 113, 114: 114, 115: 115, 116: 116, 117: 117, 118: 118, 119: 119, 120: 120, 121: 121, 122: 122, 123: 123, 124: 124, 125: 125, 126: 126, 127: 127, 128: 128, 129: 129, 130: 130, 131: 131, 132: 132, 133: 133, 134: 134, 135: 135, 136: 136, 137: 137, 138: 138, 139: 139, 140: 140, 141: 141, 142: 142, 143: 143, 144: 144, 145: 145, 146: 146, 147: 147, 148: 148, 149: 149, 150: 150, 151: 151, 152: 152, 153: 153, 154: 154, 155: 155, 156: 156, 157: 157, 158: 158, 159: 159, 160: 160, 161: 161, 162: 162, 163: 163, 164: 164, 165: 165, 166: 166, 167: 167, 168: 168, 169: 169, 170: 170, 171: 171, 172: 172, 173: 173, 174: 174, 175: 175, 176: 176, 177: 177, 178: 178, 179: 179, 180: 180, 181: 181, 182: 182, 183: 183, 184: 184, 185: 185, 186: 186, 187: 187, 188: 188, 189: 189, 190: 190, 191: 191, 192: 192, 193: 193, 194: 194, 195: 195, 196: 196, 197: 197, 198: 198, 199: 199, 200: 200, 201: 201, 202: 202, 203: 203, 204: 204, 205: 205, 206: 206, 207: 207, 208: 208, 209: 209, 210: 210, 211: 211, 212: 212, 213: 213, 214: 214, 215: 215, 216: 216, 217: 217, 218: 218, 219: 219, 220: 220, 221: 221, 222: 222, 223: 223, 224: 224, 225: 225, 226: 226, 227: 227, 228: 228, 229: 229, 230: 230, 231: 231, 232: 232, 233: 233, 234: 234, 235: 235, 236: 236, 237: 237, 238: 238, 239: 239, 240: 240, 241: 241, 242: 242, 243: 243, 244: 244, 245: 245, 246: 246, 247: 247, 248: 248, 249: 249, 250: 250, 251: 251, 252: 252, 253: 253, 254: 254, 255: 255, 256: 256, 257: 257, 258: 258, 259: 259, 260: 260, 261: 261, 262: 262, 263: 263, 264: 264, 265: 265, 266: 266, 267: 267, 268: 268, 269: 269, 270: 270, 271: 271, 272: 272, 273: 273, 274: 274, 275: 275, 276: 276, 277: 277, 278: 278, 279: 279, 280: 280, 281: 281, 282: 282, 283: 283, 284: 284, 285: 285, 286: 286, 287: 287, 288: 288, 289: 289, 290: 290, 291: 291, 292: 292, 293: 293, 294: 294, 295: 295, 296: 296, 297: 297, 298: 298, 299: 299, 300: 300, 301: 301, 302: 302, 303: 303, 304: 304, 305: 305, 306: 306, 307: 307, 308: 308, 309: 309, 310: 310, 311: 311, 312: 312, 313: 313, 314: 314, 315: 315, 316: 316, 317: 317, 318: 318, 319: 319, 320: 320, 321: 321, 322: 322, 323: 323, 324: 324, 325: 325, 326: 326, 327: 327, 328: 328, 329: 329, 330: 330, 331: 331, 332: 332, 333: 333, 334: 334, 335: 335, 336: 336, 337: 337, 338: 338, 339: 339, 340: 340, 341: 341, 342: 342, 343: 343, 344: 344, 345: 345, 346: 346, 347: 347, 348: 348, 349: 349, 350: 350, 351: 351, 352: 352, 353: 353, 354: 354, 355: 355, 356: 356, 357: 357, 358: 358, 359: 359, 360: 360, 361: 361, 362: 362, 363: 363, 364: 364, 365: 365, 366: 366, 367: 367, 368: 368, 369: 369, 370: 370, 371: 371, 372: 372, 373: 373, 374: 374, 375: 
375, 376: 376, 377: 377, 378: 378, 379: 379, 380: 380, 381: 381, 382: 382, 383: 383, 384: 384, 385: 385, 386: 386, 387: 387, 388: 388, 389: 389, 390: 390, 391: 391, 392: 392, 393: 393, 394: 394, 395: 395, 396: 396, 397: 397, 398: 398, 399: 399, 400: 400, 401: 401, 402: 402, 403: 403, 404: 404, 405: 405, 406: 406, 407: 407, 408: 408, 409: 409, 410: 410, 411: 411, 412: 412, 413: 413, 414: 414, 415: 415, 416: 416, 417: 417, 418: 418, 419: 419, 420: 420, 421: 421, 422: 422, 423: 423, 424: 424, 425: 425, 426: 426, 427: 427, 428: 428, 429: 429, 430: 430, 431: 431, 432: 432, 433: 433, 434: 434, 435: 435, 436: 436, 437: 437, 438: 438, 439: 439, 440: 440, 441: 441, 442: 442, 443: 443, 444: 444, 445: 445, 446: 446, 447: 447, 448: 448, 449: 449, 450: 450, 451: 451, 452: 452, 453: 453, 454: 454, 455: 455, 456: 456, 457: 457, 458: 458, 459: 459, 460: 460, 461: 461, 462: 462, 463: 463, 464: 464, 465: 465, 466: 466, 467: 467, 468: 468, 469: 469, 470: 470, 471: 471, 472: 472, 473: 473, 474: 474, 475: 475, 476: 476, 477: 477, 478: 478, 479: 479, 480: 480, 481: 481, 482: 482, 483: 483, 484: 484, 485: 485, 486: 486, 487: 487, 488: 488, 489: 489, 490: 490, 491: 491, 492: 492, 493: 493, 494: 494, 495: 495, 496: 496, 497: 497, 498: 498, 499: 499}
test_spam.py:3: AssertionError
================================ 1 failed in 0.08 seconds ================================