This article explains the new features in Python 3.2 as compared to 3.1. It
focuses on a few highlights and gives a few examples. For full details, see the
Misc/NEWS file.
New, Improved, and Deprecated Modules
Python’s standard library has undergone significant maintenance efforts and
quality improvements.
The biggest news for Python 3.2 is that the email package, mailbox
module, and nntplib module now work correctly with the bytes/text model
in Python 3. For the first time, there is correct handling of messages with
mixed encodings.
Throughout the standard library, there has been more careful attention to
encodings and text versus bytes issues. In particular, interactions with the
operating system are now better able to exchange non-ASCII data using the
Windows MBCS encoding, locale-aware encodings, or UTF-8.
Another significant win is the addition of substantially better support for
SSL connections and security certificates.
In addition, more classes now implement a context manager to support
convenient and reliable resource clean-up using a with statement.
email
The usability of the email package in Python 3 has been mostly fixed by
the extensive efforts of R. David Murray. The problem was that emails are
typically read and stored in the form of bytes rather than str
text, and they may contain multiple encodings within a single email. So, the
email package had to be extended to parse and generate email messages in bytes
format.
New functions message_from_bytes() and
message_from_binary_file(), and new classes
BytesFeedParser and BytesParser
allow binary message data to be parsed into model objects.
Given bytes input to the model, get_payload()
will by default decode a message body that has a
Content-Transfer-Encoding of 8bit using the charset
specified in the MIME headers and return the resulting string.
Given bytes input to the model, Generator will
convert message bodies that have a Content-Transfer-Encoding of
8bit to instead have a 7bit Content-Transfer-Encoding.
Headers with unencoded non-ASCII bytes are deemed to be RFC 2047-encoded
using the unknown-8bit character set.
A new class BytesGenerator produces bytes as output,
preserving any unchanged non-ASCII data that was present in the input used to
build the model, including message bodies with a
Content-Transfer-Encoding of 8bit.
The smtplib SMTP class now accepts a byte string
for the msg argument to the sendmail() method,
and a new method, send_message(), accepts a
Message object and can optionally obtain the
from_addr and to_addrs addresses directly from the object.
(Proposed and implemented by R. David Murray, bpo-4661 and bpo-10321.)
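A short sketch of the new bytes-oriented parsing (the message data below is made up for illustration):

```python
from email import message_from_bytes

# A raw message whose body is 8bit-encoded UTF-8 text
raw = (b"Subject: greeting\r\n"
       b"Content-Type: text/plain; charset=utf-8\r\n"
       b"Content-Transfer-Encoding: 8bit\r\n"
       b"\r\n"
       b"caf\xc3\xa9\r\n")

msg = message_from_bytes(raw)          # parse binary data into a Message object
print(msg['Subject'])                  # greeting
print(msg.get_payload(decode=True))    # the body as bytes, decoded from its transfer encoding
```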
elementtree
The xml.etree.ElementTree package and its xml.etree.cElementTree
counterpart have been updated to version 1.3.
Several new and useful functions and methods have been added.
Two methods have been deprecated:
- xml.etree.ElementTree.getchildren(): use list(elem) instead.
- xml.etree.ElementTree.getiterator(): use Element.iter instead.
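The replacements for the deprecated methods can be sketched as:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring('<root><a/><b><c/></b></root>')

# list(elem) replaces the deprecated getchildren()
print([child.tag for child in root])        # ['a', 'b']

# Element.iter() replaces the deprecated getiterator()
print([elem.tag for elem in root.iter()])   # ['root', 'a', 'b', 'c']
```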
For details of the update, see Introducing ElementTree on Fredrik Lundh’s website.
(Contributed by Florent Xicluna and Fredrik Lundh, bpo-6472.)
collections
The collections.Counter class now has two forms of in-place
subtraction, the existing -= operator for saturating subtraction and the new
subtract() method for regular subtraction. The
former is suitable for multisets, which have only positive counts,
and the latter is more suitable for use cases
that allow negative counts:
>>> from collections import Counter
>>> tally = Counter(dogs=5, cats=3)
>>> tally -= Counter(dogs=2, cats=8) # saturating subtraction
>>> tally
Counter({'dogs': 3})
>>> tally = Counter(dogs=5, cats=3)
>>> tally.subtract(dogs=2, cats=8) # regular subtraction
>>> tally
Counter({'dogs': 3, 'cats': -5})
(Contributed by Raymond Hettinger.)
The collections.OrderedDict class has a new method
move_to_end() which takes an existing key and
moves it to either the first or last position in the ordered sequence.
The default is to move an item to the last position. This is equivalent to
renewing an entry with od[k] = od.pop(k).
A fast move-to-end operation is useful for resequencing entries. For example,
an ordered dictionary can be used to track order of access by aging entries
from the oldest to the most recently accessed.
>>> from collections import OrderedDict
>>> d = OrderedDict.fromkeys(['a', 'b', 'X', 'd', 'e'])
>>> list(d)
['a', 'b', 'X', 'd', 'e']
>>> d.move_to_end('X')
>>> list(d)
['a', 'b', 'd', 'e', 'X']
(Contributed by Raymond Hettinger.)
The collections.deque class grew two new methods,
count() and reverse(), that
make it more substitutable for list objects:
>>> from collections import deque
>>> d = deque('simsalabim')
>>> d.count('s')
2
>>> d.reverse()
>>> d
deque(['m', 'i', 'b', 'a', 'l', 'a', 's', 'm', 'i', 's'])
(Contributed by Raymond Hettinger.)
threading
The threading module has a new Barrier
synchronization class for making multiple threads wait until all of them have
reached a common barrier point. Barriers are useful for making sure that a task
with multiple preconditions does not run until all of the predecessor tasks are
complete.
Barriers can work with an arbitrary number of threads. This is a generalization
of a Rendezvous which
is defined for only two threads.
Implemented as a two-phase cyclic barrier, Barrier objects
are suitable for use in loops. The separate filling and draining phases
assure that all threads get released (drained) before any one of them can loop
back and re-enter the barrier. The barrier fully resets after each cycle.
Example of using barriers:
from threading import Barrier, Thread
def get_votes(site):
    ballots = conduct_election(site)
    all_polls_closed.wait()        # do not count until all polls are closed
    totals = summarize(ballots)
    publish(site, totals)

all_polls_closed = Barrier(len(sites))
for site in sites:
    Thread(target=get_votes, args=(site,)).start()
In this example, the barrier enforces a rule that votes cannot be counted at any
polling site until all polls are closed. Notice how a solution with a barrier
is similar to one with threading.Thread.join(), but the threads stay alive
and continue to do work (summarizing ballots) after the barrier point is
crossed.
If any of the predecessor tasks can hang or be delayed, a barrier can be created
with an optional timeout parameter. Then if the timeout period elapses before
all the predecessor tasks reach the barrier point, all waiting threads are
released and a BrokenBarrierError exception is raised:
def get_votes(site):
    ballots = conduct_election(site)
    try:
        all_polls_closed.wait(timeout=midnight - time.now())
    except BrokenBarrierError:
        lockbox = seal_ballots(ballots)
        queue.put(lockbox)
    else:
        totals = summarize(ballots)
        publish(site, totals)
In this example, the barrier enforces a more robust rule. If some election
sites do not finish before midnight, the barrier times out and the ballots are
sealed and deposited in a queue for later handling.
See Barrier Synchronization Patterns for
more examples of how barriers can be used in parallel computing. Also, there is
a simple but thorough explanation of barriers in The Little Book of Semaphores, section 3.6.
(Contributed by Kristján Valur Jónsson with an API review by Jeffrey Yasskin in
bpo-8777.)
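The election examples above depend on undefined helpers, so here is a minimal runnable sketch of the same pattern with three hypothetical worker threads:

```python
from threading import Barrier, Thread

results = []
barrier = Barrier(3)          # wait for three parties

def worker(n):
    barrier.wait()            # block until all three threads arrive
    results.append(n)

threads = [Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))        # [0, 1, 2]
```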
datetime and time
The datetime module has a new type timezone that
implements the tzinfo interface by returning a fixed UTC
offset and timezone name. This makes it easier to create timezone-aware
datetime objects:
>>> from datetime import datetime, timezone
>>> datetime.now(timezone.utc)
datetime.datetime(2010, 12, 8, 21, 4, 2, 923754, tzinfo=datetime.timezone.utc)
>>> datetime.strptime("01/01/2000 12:00 +0000", "%m/%d/%Y %H:%M %z")
datetime.datetime(2000, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)
Also, timedelta objects can now be multiplied by
float and divided by float and int objects.
And timedelta objects can now divide one another.
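The new timedelta arithmetic can be sketched as:

```python
from datetime import timedelta

week = timedelta(days=7)
print(week * 1.5)                 # multiplication by a float: 10 days, 12:00:00
print(week / 2)                   # division by an int: 3 days, 12:00:00
print(week / timedelta(days=1))   # one timedelta divided by another: 7.0
```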
The datetime.date.strftime() method is no longer restricted to years
after 1900. The new supported year range is from 1000 to 9999 inclusive.
Whenever a two-digit year is used in a time tuple, the interpretation has been
governed by time.accept2dyear. The default is True which means that
for a two-digit year, the century is guessed according to the POSIX rules
governing the %y strptime format.
Starting with Py3.2, use of the century guessing heuristic will emit a
DeprecationWarning. Instead, it is recommended that
time.accept2dyear be set to False so that large date ranges
can be used without guesswork:
>>> import time, warnings
>>> warnings.resetwarnings() # remove the default warning filters
>>> time.accept2dyear = True # guess whether 11 means 11 or 2011
>>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0))
Warning (from warnings module):
...
DeprecationWarning: Century info guessed for a 2-digit year.
'Fri Jan 1 12:34:56 2011'
>>> time.accept2dyear = False # use the full range of allowable dates
>>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0))
'Fri Jan 1 12:34:56 11'
Several functions now have significantly expanded date ranges. When
time.accept2dyear is false, the time.asctime() function will
accept any year that fits in a C int, while the time.mktime() and
time.strftime() functions will accept the full range supported by the
corresponding operating system functions.
(Contributed by Alexander Belopolsky and Victor Stinner in bpo-1289118,
bpo-5094, bpo-6641, bpo-2706, bpo-1777412, bpo-8013,
and bpo-10827.)
math
The math module has been updated with six new functions inspired by the
C99 standard.
The isfinite() function provides a reliable and fast way to detect
special values. It returns True for regular numbers and False for NaN or
Infinity:
>>> from math import isfinite
>>> [isfinite(x) for x in (123, 4.56, float('Nan'), float('Inf'))]
[True, True, False, False]
The expm1() function computes e**x-1 for small values of x
without incurring the loss of precision that usually accompanies the subtraction
of nearly equal quantities:
>>> from math import expm1
>>> expm1(0.013671875) # more accurate way to compute e**x-1 for a small x
0.013765762467652909
The erf() function computes a probability integral or Gaussian
error function. The
complementary error function, erfc(), is 1 - erf(x):
>>> from math import erf, erfc, sqrt
>>> erf(1.0/sqrt(2.0)) # portion of normal distribution within 1 standard deviation
0.682689492137086
>>> erfc(1.0/sqrt(2.0)) # portion of normal distribution outside 1 standard deviation
0.31731050786291404
>>> erf(1.0/sqrt(2.0)) + erfc(1.0/sqrt(2.0))
1.0
The gamma() function is a continuous extension of the factorial
function. See https://en.wikipedia.org/wiki/Gamma_function for details. Because
the function is related to factorials, it grows large even for small values of
x, so there is also a lgamma() function for computing the natural
logarithm of the gamma function:
>>> from math import gamma, lgamma
>>> gamma(7.0) # six factorial
720.0
>>> lgamma(801.0) # log(800 factorial)
4551.950730698041
(Contributed by Mark Dickinson.)
io
The io.BytesIO class has a new method, getbuffer(), which
provides functionality similar to memoryview(). It creates an editable
view of the data without making a copy. The buffer’s random access and support
for slice notation are well-suited to in-place editing:
>>> REC_LEN, LOC_START, LOC_LEN = 34, 7, 11
>>> def change_location(buffer, record_number, location):
...     start = record_number * REC_LEN + LOC_START
...     buffer[start: start+LOC_LEN] = location
...
>>> import io
>>> byte_stream = io.BytesIO(
...     b'G3805  storeroom  Main chassis    '
...     b'X7899  shipping   Reserve cog     '
...     b'L6988  receiving  Primary sprocket'
... )
>>> buffer = byte_stream.getbuffer()
>>> change_location(buffer, 1, b'warehouse  ')
>>> change_location(buffer, 0, b'showroom   ')
>>> print(byte_stream.getvalue())
b'G3805  showroom   Main chassis    X7899  warehouse  Reserve cog     L6988  receiving  Primary sprocket'
(Contributed by Antoine Pitrou in bpo-5506.)
reprlib
When writing a __repr__() method for a custom container, it is easy to
forget to handle the case where a member refers back to the container itself.
Python’s builtin objects such as list and set handle
self-reference by displaying “…” in the recursive part of the representation
string.
To help write such __repr__() methods, the reprlib module has a new
decorator, recursive_repr(), for detecting recursive calls to
__repr__() and substituting a placeholder string instead:
>>> from reprlib import recursive_repr
>>> class MyList(list):
...     @recursive_repr()
...     def __repr__(self):
...         return '<' + '|'.join(map(repr, self)) + '>'
...
>>> m = MyList('abc')
>>> m.append(m)
>>> m.append('x')
>>> print(m)
<'a'|'b'|'c'|...|'x'>
(Contributed by Raymond Hettinger in bpo-9826 and bpo-9840.)
logging
In addition to dictionary-based configuration described above, the
logging package has many other improvements.
The logging documentation has been augmented by a basic tutorial, an advanced tutorial, and a cookbook of
logging recipes. These documents are the fastest way to learn about logging.
The logging.basicConfig() set-up function gained a style argument to
support three different types of string formatting. It defaults to “%” for
traditional %-formatting, can be set to “{” for the new str.format() style, or
can be set to “$” for the shell-style formatting provided by
string.Template. The following three configurations are equivalent:
>>> from logging import basicConfig
>>> basicConfig(style='%', format="%(name)s -> %(levelname)s: %(message)s")
>>> basicConfig(style='{', format="{name} -> {levelname}: {message}")
>>> basicConfig(style='$', format="$name -> $levelname: $message")
If no configuration is set-up before a logging event occurs, there is now a
default configuration using a StreamHandler directed to
sys.stderr for events of WARNING level or higher. Formerly, an
event occurring before a configuration was set-up would either raise an
exception or silently drop the event depending on the value of
logging.raiseExceptions. The new default handler is stored in
logging.lastResort.
The use of filters has been simplified. Instead of creating a
Filter object, the predicate can be any Python callable that
returns True or False.
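A minimal sketch of a callable filter, using a hypothetical list-collecting handler to make the effect visible:

```python
import logging

class ListHandler(logging.Handler):
    """Collect formatted messages in a list (for demonstration only)."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(record.getMessage())

logger = logging.getLogger('filter_demo')
logger.setLevel(logging.INFO)
handler = ListHandler()
# Any callable that returns True or False can now act as a filter
handler.addFilter(lambda record: 'secret' not in record.getMessage())
logger.addHandler(handler)

logger.info('public message')
logger.info('secret message')       # dropped by the filter
print(handler.records)              # ['public message']
```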
There were a number of other improvements that add flexibility and simplify
configuration. See the module documentation for a full listing of changes in
Python 3.2.
csv
The csv module now supports a new dialect, unix_dialect,
which applies quoting to all fields and uses a traditional Unix style with '\n' as
the line terminator. The registered dialect name is unix.
The csv.DictWriter has a new method,
writeheader() for writing-out an initial row to document
the field names:
>>> import csv, sys
>>> w = csv.DictWriter(sys.stdout, ['name', 'dept'], dialect='unix')
>>> w.writeheader()
"name","dept"
>>> w.writerows([
...     {'name': 'tom', 'dept': 'accounting'},
...     {'name': 'susan', 'dept': 'sales'}])
"tom","accounting"
"susan","sales"
(New dialect suggested by Jay Talbot in bpo-5975, and the new method
suggested by Ed Abraham in bpo-1537721.)
contextlib
There is a new and slightly mind-blowing tool
ContextDecorator that is helpful for creating a
context manager that does double duty as a function decorator.
As a convenience, this new functionality is used by
contextmanager() so that no extra effort is needed to support
both roles.
The basic idea is that both context managers and function decorators can be used
for pre-action and post-action wrappers. Context managers wrap a group of
statements using a with statement, and function decorators wrap a
group of statements enclosed in a function. So, occasionally there is a need to
write a pre-action or post-action wrapper that can be used in either role.
For example, it is sometimes useful to wrap functions or groups of statements
with a logger that can track the time of entry and time of exit. Rather than
writing both a function decorator and a context manager for the task, the
contextmanager() provides both capabilities in a single
definition:
from contextlib import contextmanager
import logging
logging.basicConfig(level=logging.INFO)
@contextmanager
def track_entry_and_exit(name):
    logging.info('Entering: %s', name)
    yield
    logging.info('Exiting: %s', name)
Formerly, this would have only been usable as a context manager:
with track_entry_and_exit('widget loader'):
    print('Some time consuming activity goes here')
    load_widget()
Now, it can be used as a decorator as well:
@track_entry_and_exit('widget loader')
def activity():
    print('Some time consuming activity goes here')
    load_widget()
Trying to fulfill two roles at once places some limitations on the technique.
Context managers normally have the flexibility to return an argument usable by
a with statement, but there is no parallel for function decorators.
In the above example, there is not a clean way for the track_entry_and_exit
context manager to return a logging instance for use in the body of enclosed
statements.
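For cases needing more control, ContextDecorator can also be subclassed directly. A sketch with a made-up event log to show both roles:

```python
from contextlib import ContextDecorator

class track(ContextDecorator):
    """Record entry and exit events (a minimal sketch)."""
    events = []
    def __enter__(self):
        self.events.append('enter')
        return self
    def __exit__(self, *exc):
        self.events.append('exit')
        return False              # do not suppress exceptions

@track()                          # used as a decorator...
def activity():
    track.events.append('working')

activity()
with track():                     # ...and as a context manager
    track.events.append('working')
print(track.events)   # ['enter', 'working', 'exit', 'enter', 'working', 'exit']
```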
(Contributed by Michael Foord in bpo-9110.)
decimal and fractions
Mark Dickinson crafted an elegant and efficient scheme for assuring that
different numeric datatypes will have the same hash value whenever their actual
values are equal (bpo-8188):
assert hash(Fraction(3, 2)) == hash(1.5) == \
hash(Decimal("1.5")) == hash(complex(1.5, 0))
Some of the hashing details are exposed through a new attribute,
sys.hash_info, which describes the bit width of the hash value, the
prime modulus, the hash values for infinity and nan, and the multiplier
used for the imaginary part of a number:
>>> import sys
>>> sys.hash_info
sys.hash_info(width=64, modulus=2305843009213693951, inf=314159, nan=0, imag=1000003)
An early decision to limit the inter-operability of various numeric types has
been relaxed. It is still unsupported (and ill-advised) to have implicit
mixing in arithmetic expressions such as Decimal('1.1') + float('1.1')
because the latter loses information in the process of constructing the binary
float. However, since an existing floating-point value can be converted losslessly
to either a decimal or rational representation, it makes sense to add them to
the constructor and to support mixed-type comparisons.
Similar changes were made to fractions.Fraction so that the
from_float() and from_decimal()
methods are no longer needed (bpo-8294):
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> Fraction(1.1)
Fraction(2476979795053773, 2251799813685248)
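The newly supported mixed-type comparisons can be sketched as:

```python
from decimal import Decimal
from fractions import Fraction

# Comparisons now work across the numeric tower
print(Decimal('1.5') == Fraction(3, 2))   # True
print(Decimal('0.5') < 0.625)             # True -- 0.625 is exact in binary
print(Fraction(1, 4) == 0.25)             # True
```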
Another useful change for the decimal module is that the
Context.clamp attribute is now public. This is useful in creating
contexts that correspond to the decimal interchange formats specified in IEEE
754 (see bpo-8540).
(Contributed by Mark Dickinson and Raymond Hettinger.)
ftp
The ftplib.FTP class now supports the context management protocol to
unconditionally consume socket.error exceptions and to close the FTP
connection when done:
>>> from ftplib import FTP
>>> with FTP("ftp1.at.proftpd.org") as ftp:
...     ftp.login()
...     ftp.dir()
'230 Anonymous login ok, restrictions apply.'
dr-xr-xr-x 9 ftp ftp 154 May 6 10:43 .
dr-xr-xr-x 9 ftp ftp 154 May 6 10:43 ..
dr-xr-xr-x 5 ftp ftp 4096 May 6 10:43 CentOS
dr-xr-xr-x 3 ftp ftp 18 Jul 10 2008 Fedora
Other file-like objects such as mmap.mmap and fileinput.input()
also grew auto-closing context managers:
with fileinput.input(files=('log1.txt', 'log2.txt')) as f:
    for line in f:
        process(line)
(Contributed by Tarek Ziadé and Giampaolo Rodolà in bpo-4972, and
by Georg Brandl in bpo-8046 and bpo-1286.)
The FTP_TLS class now accepts a context parameter, which is an
ssl.SSLContext object that allows bundling SSL configuration options,
certificates, and private keys into a single (potentially long-lived) structure.
(Contributed by Giampaolo Rodolà; bpo-8806.)
select
The select module now exposes a new, constant attribute,
PIPE_BUF, which gives the minimum number of bytes which are
guaranteed not to block when select.select() says a pipe is ready
for writing.
>>> import select
>>> select.PIPE_BUF
512
(Available on Unix systems. Patch by Sébastien Sablé in bpo-9862)
gzip and zipfile
gzip.GzipFile now implements the io.BufferedIOBase
abstract base class (except for truncate()). It also has a
peek() method and supports unseekable as well as
zero-padded file objects.
The gzip module also gains the compress() and
decompress() functions for easier in-memory compression and
decompression. Keep in mind that text needs to be encoded as bytes
before compressing and decompressing:
>>> import gzip
>>> s = 'Three shall be the number thou shalt count, '
>>> s += 'and the number of the counting shall be three'
>>> b = s.encode() # convert to utf-8
>>> len(b)
89
>>> c = gzip.compress(b)
>>> len(c)
77
>>> gzip.decompress(c).decode()[:42] # decompress and convert to text
'Three shall be the number thou shalt count'
(Contributed by Anand B. Pillai in bpo-3488; and by Antoine Pitrou, Nir
Aides and Brian Curtin in bpo-9962, bpo-1675951, bpo-7471 and
bpo-2846.)
Also, the zipfile.ZipExtFile class was reworked internally to represent
files stored inside an archive. The new implementation is significantly faster
and can be wrapped in an io.BufferedReader object for more speedups. It
also solves an issue where interleaved calls to read and readline gave the
wrong results.
(Patch submitted by Nir Aides in bpo-7610.)
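The wrapping mentioned above can be sketched with an in-memory archive:

```python
import io, zipfile

# Build a small zip archive entirely in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('data.txt', 'line1\nline2\n')

# ZipExtFile implements io.BufferedIOBase, so it can be wrapped for buffered reads
with zipfile.ZipFile(buf) as zf:
    with zf.open('data.txt') as member:
        reader = io.BufferedReader(member)
        first = reader.readline()
        second = reader.readline()

print(first, second)    # b'line1\n' b'line2\n'
```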
tarfile
The TarFile class can now be used as a context manager. In
addition, its add() method has a new option, filter,
that controls which files are added to the archive and allows the file metadata
to be edited.
The new filter option replaces the older, less flexible exclude parameter
which is now deprecated. If specified, the optional filter parameter needs to
be a keyword argument. The user-supplied filter function accepts a
TarInfo object and returns an updated
TarInfo object, or if it wants the file to be excluded, the
function can return None:
>>> import tarfile, glob
>>> def myfilter(tarinfo):
...     if tarinfo.isfile():             # only save real files
...         tarinfo.uname = 'monty'      # redact the user name
...         return tarinfo
...
>>> with tarfile.open(name='myarchive.tar.gz', mode='w:gz') as tf:
...     for filename in glob.glob('*.txt'):
...         tf.add(filename, filter=myfilter)
...     tf.list()
-rw-r--r-- monty/501 902 2011-01-26 17:59:11 annotations.txt
-rw-r--r-- monty/501 123 2011-01-26 17:59:11 general_questions.txt
-rw-r--r-- monty/501 3514 2011-01-26 17:59:11 prion.txt
-rw-r--r-- monty/501 124 2011-01-26 17:59:11 py_todo.txt
-rw-r--r-- monty/501 1399 2011-01-26 17:59:11 semaphore_notes.txt
(Proposed by Tarek Ziadé and implemented by Lars Gustäbel in bpo-6856.)
hashlib
The hashlib module has two new constant attributes listing the hashing
algorithms guaranteed to be present in all implementations and those available
on the current implementation:
>>> import hashlib
>>> hashlib.algorithms_guaranteed
{'sha1', 'sha224', 'sha384', 'sha256', 'sha512', 'md5'}
>>> hashlib.algorithms_available
{'md2', 'SHA256', 'SHA512', 'dsaWithSHA', 'mdc2', 'SHA224', 'MD4', 'sha256',
'sha512', 'ripemd160', 'SHA1', 'MDC2', 'SHA', 'SHA384', 'MD2',
'ecdsa-with-SHA1','md4', 'md5', 'sha1', 'DSA-SHA', 'sha224',
'dsaEncryption', 'DSA', 'RIPEMD160', 'sha', 'MD5', 'sha384'}
(Suggested by Carl Chenet in bpo-7418.)
ast
The ast module has a wonderful general-purpose tool for safely
evaluating expression strings using the Python literal
syntax. The ast.literal_eval() function serves as a secure alternative to
the builtin eval() function which is easily abused. Python 3.2 adds
bytes and set literals to the list of supported types:
strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None.
>>> from ast import literal_eval
>>> request = "{'req': 3, 'func': 'pow', 'args': (2, 0.5)}"
>>> literal_eval(request)
{'args': (2, 0.5), 'req': 3, 'func': 'pow'}
>>> request = "os.system('do something harmful')"
>>> literal_eval(request)
Traceback (most recent call last):
...
ValueError: malformed node or string: <_ast.Call object at 0x101739a10>
(Implemented by Benjamin Peterson and Georg Brandl.)
os
Different operating systems use various encodings for filenames and environment
variables. The os module provides two new functions,
fsencode() and fsdecode(), for encoding and decoding
filenames:
>>> import os
>>> filename = 'Sehenswürdigkeiten'
>>> os.fsencode(filename)
b'Sehensw\xc3\xbcrdigkeiten'
Some operating systems allow direct access to encoded bytes in the
environment. If so, the os.supports_bytes_environ constant will be
true.
For direct access to encoded environment variables (if available),
use the new os.getenvb() function or use os.environb
which is a bytes version of os.environ.
(Contributed by Victor Stinner.)
shutil
The shutil.copytree() function has two new options:
- ignore_dangling_symlinks: when symlinks=False (so that the function copies
the file pointed to by a symlink, not the symlink itself), this option
silences the error raised if the referenced file doesn't exist.
- copy_function: is a callable that will be used to copy files.
shutil.copy2() is used by default.
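A minimal sketch of the copy_function hook, using a temporary directory and a hypothetical logging wrapper:

```python
import os, shutil, tempfile

# Set up a throwaway source tree with one file
src = tempfile.mkdtemp()
dst = os.path.join(tempfile.mkdtemp(), 'copy')
with open(os.path.join(src, 'a.txt'), 'w') as f:
    f.write('hello')

copied = []
def logging_copy(s, d):
    """Record each file copied, then delegate to the default copier."""
    copied.append(os.path.basename(s))
    return shutil.copy2(s, d)

shutil.copytree(src, dst, copy_function=logging_copy)
print(copied)   # ['a.txt']
```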
(Contributed by Tarek Ziadé.)
In addition, the shutil module now supports archiving operations for zipfiles, uncompressed tarfiles, gzipped tarfiles,
and bzipped tarfiles. And there are functions for registering additional
archiving file formats (such as xz compressed tarfiles or custom formats).
The principal functions are make_archive() and
unpack_archive(). By default, both operate on the current
directory (which can be set by os.chdir()) and on any sub-directories.
The archive filename needs to be specified with a full pathname. The archiving
step is non-destructive (the original files are left unchanged).
>>> import os, shutil, pprint
>>> os.chdir('mydata') # change to the source directory
>>> f = shutil.make_archive('/var/backup/mydata',
... 'zip') # archive the current directory
>>> f # show the name of archive
'/var/backup/mydata.zip'
>>> os.chdir('tmp') # change to an unpacking directory
>>> shutil.unpack_archive('/var/backup/mydata.zip') # recover the data
>>> pprint.pprint(shutil.get_archive_formats()) # display known formats
[('bztar', "bzip2'ed tar-file"),
('gztar', "gzip'ed tar-file"),
('tar', 'uncompressed tar file'),
('zip', 'ZIP file')]
>>> shutil.register_archive_format( # register a new archive format
...     name='xz',
...     function=xz.compress, # callable archiving function
...     extra_args=[('level', 8)], # arguments to the function
...     description='xz compression'
... )
(Contributed by Tarek Ziadé.)
sqlite3
The sqlite3 module was updated to pysqlite version 2.6.0. It has two new
capabilities: the sqlite3.Connection.in_transaction attribute is now true if a
transaction is active (and false otherwise), and the new
sqlite3.Connection.enable_load_extension() and
sqlite3.Connection.load_extension() methods allow loading SQLite extensions
from ".so" files.
(Contributed by R. David Murray and Shashwat Anand; bpo-8845.)
html
A new html module was introduced with only a single function,
escape(), which is used for escaping reserved characters from HTML
markup:
>>> import html
>>> html.escape('x > 2 && x < 7')
'x &gt; 2 &amp;&amp; x &lt; 7'
socket
The socket module has two new improvements.
- Socket objects now have a
detach() method which puts
the socket into closed state without actually closing the underlying file
descriptor. The latter can then be reused for other purposes.
(Added by Antoine Pitrou; bpo-8524.)
- socket.create_connection() now supports the context management protocol
to unconditionally consume socket.error exceptions and to close the
socket when done.
(Contributed by Giampaolo Rodolà; bpo-9794.)
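The detach() behavior can be sketched as:

```python
import os, socket

s = socket.socket()
fd = s.detach()           # the socket object is closed, but fd stays valid
print(s.fileno())         # -1 -- the object no longer owns a descriptor
os.close(fd)              # the caller is now responsible for the raw descriptor
```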
ssl
The ssl module added a number of features to satisfy common requirements
for secure (encrypted, authenticated) internet connections:
- A new class,
SSLContext, serves as a container for persistent
SSL data, such as protocol settings, certificates, private keys, and various
other options. It includes a wrap_socket() method for creating
an SSL socket from an SSL context.
- A new function,
ssl.match_hostname(), supports server identity
verification for higher-level protocols by implementing the rules of HTTPS
(from RFC 2818) which are also suitable for other protocols.
- The
ssl.wrap_socket() constructor function now takes a ciphers
argument. The ciphers string lists the allowed encryption algorithms using
the format described in the OpenSSL documentation.
- When linked against recent versions of OpenSSL, the
ssl module now
supports the Server Name Indication extension to the TLS protocol, allowing
multiple “virtual hosts” using different certificates on a single IP port.
This extension is only supported in client mode, and is activated by passing
the server_hostname argument to ssl.SSLContext.wrap_socket().
- Various options have been added to the
ssl module, such as
OP_NO_SSLv2 which disables the insecure and obsolete SSLv2
protocol.
- The extension now loads all the OpenSSL ciphers and digest algorithms. If
some SSL certificates cannot be verified, they are reported as an “unknown
algorithm” error.
- The version of OpenSSL being used is now accessible using the module
attributes
ssl.OPENSSL_VERSION (a string),
ssl.OPENSSL_VERSION_INFO (a 5-tuple), and
ssl.OPENSSL_VERSION_NUMBER (an integer).
(Contributed by Antoine Pitrou in bpo-8850, bpo-1589, bpo-8322,
bpo-5639, bpo-4870, bpo-8484, and bpo-8321.)
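A small offline sketch of the context and version features (no network connection is made):

```python
import ssl

# An SSLContext bundles protocol settings, certificates, and keys for reuse
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.options |= ssl.OP_NO_SSLv2        # refuse the obsolete SSLv2 protocol

# The linked OpenSSL version is introspectable
print(ssl.OPENSSL_VERSION)            # version string
print(ssl.OPENSSL_VERSION_INFO)       # 5-tuple
```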
nntp
The nntplib module has a revamped implementation with better bytes and
text semantics as well as more practical APIs. These improvements break
compatibility with the nntplib version in Python 3.1, which was partly
dysfunctional in itself.
Support for secure connections through both implicit (using
nntplib.NNTP_SSL) and explicit (using nntplib.NNTP.starttls())
TLS has also been added.
(Contributed by Antoine Pitrou in bpo-9360 and Andrew Vant in bpo-1926.)
imaplib
Support for explicit TLS on standard IMAP4 connections has been added through
the new imaplib.IMAP4.starttls method.
(Contributed by Lorenzo M. Catucci and Antoine Pitrou, bpo-4471.)
http.client
There were a number of small API improvements in the http.client module.
The old-style HTTP 0.9 simple responses are no longer supported and the strict
parameter is deprecated in all classes.
The HTTPConnection and
HTTPSConnection classes now have a source_address
parameter for a (host, port) tuple indicating where the HTTP connection is made
from.
Support for certificate checking and HTTPS virtual hosts were added to
HTTPSConnection.
The request() method on connection objects
allowed an optional body argument so that a file object could be used
to supply the content of the request. Conveniently, the body argument now
also accepts an iterable object so long as it includes an explicit
Content-Length header. This extended interface is much more flexible than
before.
To establish an HTTPS connection through a proxy server, there is a new
set_tunnel() method that sets the host and
port for HTTP Connect tunneling.
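A sketch of the proxy setup (the host names and port are placeholders, and no traffic occurs until request() is called):

```python
import http.client

# Configure a connection that will tunnel through a proxy via HTTP CONNECT
conn = http.client.HTTPSConnection('proxy.example.com', 8080)
conn.set_tunnel('www.python.org', 443)
# conn.request('GET', '/') would now connect to the proxy and tunnel onward
```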
To match the behavior of http.server, the HTTP client library now also
encodes headers with ISO-8859-1 (Latin-1) encoding. It was already doing that
for incoming headers, so now the behavior is consistent for both incoming and
outgoing traffic. (See work by Armin Ronacher in bpo-10980.)
unittest
The unittest module has a number of improvements supporting test discovery for
packages, easier experimentation at the interactive prompt, new testcase
methods, improved diagnostic messages for test failures, and better method
names.
The command-line call python -m unittest can now accept file paths
instead of module names for running specific tests (bpo-10620). The new
test discovery can find tests within packages, locating any test importable
from the top-level directory. The top-level directory can be specified with
the -t option, a pattern for matching files with -p, and a directory to
start discovery with -s:
$ python -m unittest discover -s my_proj_dir -p _test.py
(Contributed by Michael Foord.)
Experimentation at the interactive prompt is now easier because the
unittest.case.TestCase class can now be instantiated without
arguments:
>>> from unittest import TestCase
>>> TestCase().assertEqual(pow(2, 3), 8)
(Contributed by Michael Foord.)
The unittest module has two new methods,
assertWarns() and
assertWarnsRegex() to verify that a given warning type
is triggered by the code under test:
with self.assertWarns(DeprecationWarning):
    legacy_function('XYZ')
(Contributed by Antoine Pitrou, bpo-9754.)
Another new method, assertCountEqual(), is used to
compare two iterables to determine whether their element counts are equal
(that is, whether the same elements are present with the same number of
occurrences, regardless of order):
def test_anagram(self):
self.assertCountEqual('algorithm', 'logarithm')
(Contributed by Raymond Hettinger.)
A principal feature of the unittest module is an effort to produce meaningful
diagnostics when a test fails. When possible, the failure is recorded along
with a diff of the output. This is especially helpful for analyzing log files
of failed test runs. However, since diffs can sometimes be voluminous, there is
a new maxDiff attribute that sets the maximum length of
diffs displayed.
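The effect of the cap can be observed directly; this sketch (class and data invented for the demo) triggers a failure whose diff exceeds the limit, so the message is truncated with a hint naming the attribute:

```python
import unittest

class DiffDemo(unittest.TestCase):
    maxDiff = 640   # the default cap on rendered diff length

    def runTest(self):
        pass

tc = DiffDemo()
message = ''
try:
    # Two long, almost-equal lists produce a diff far larger than
    # 640 characters, so the failure message is truncated.
    tc.assertEqual(list(range(200)), list(range(199)) + [-1])
except AssertionError as exc:
    message = str(exc)

print('maxDiff' in message)   # True: the hint names the attribute
```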
In addition, the method names in the module have undergone a number of clean-ups.
For example, assertRegex() is the new name for
assertRegexpMatches() which was misnamed because the
test uses re.search(), not re.match(). Other methods using
regular expressions are now named using the short form “Regex” in preference to
“Regexp” – this matches the names used in other unittest implementations,
matches Python’s old name for the re module, and it has unambiguous
camel-casing.
(Contributed by Raymond Hettinger and implemented by Ezio Melotti.)
To improve consistency, some long-standing method aliases are being
deprecated in favor of the preferred names.
Likewise, the TestCase.fail* methods deprecated in Python 3.1 are expected
to be removed in Python 3.3. Also see the Deprecated aliases section in
the unittest documentation.
(Contributed by Ezio Melotti; bpo-9424.)
The assertDictContainsSubset() method was deprecated
because it was misimplemented with the arguments in the wrong order. This
created hard-to-debug optical illusions where tests like
TestCase().assertDictContainsSubset({'a':1, 'b':2}, {'a':1}) would fail.
(Contributed by Raymond Hettinger.)
random
The integer methods in the random module now do a better job of producing
uniform distributions. Previously, they computed selections with
int(n*random()) which had a slight bias whenever n was not a power of two.
Now, multiple selections are made from a range up to the next power of two and a
selection is kept only when it falls within the range 0 <= x < n. The
functions and methods affected are randrange(),
randint(), choice(), shuffle() and
sample().
(Contributed by Raymond Hettinger; bpo-9025.)
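The rejection-sampling idea can be sketched in pure Python; this is an illustration of the technique described above, not the actual implementation (the function name is invented):

```python
import random
from collections import Counter

def unbiased_randrange(n, rng=random):
    # Draw k random bits, where 2**k is the smallest power of two >= n,
    # and keep the draw only when it falls in [0, n); otherwise retry.
    k = n.bit_length()
    r = rng.getrandbits(k)
    while r >= n:
        r = rng.getrandbits(k)
    return r

rng = random.Random(42)
counts = Counter(unbiased_randrange(6, rng) for _ in range(60000))
print(sorted(counts))   # [0, 1, 2, 3, 4, 5]
# Each value lands close to the expected 10000 occurrences.
print(all(9000 < counts[i] < 11000 for i in range(6)))   # True
```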
poplib
The POP3_SSL class now accepts a context parameter, which is an
ssl.SSLContext object that bundles SSL configuration options,
certificates, and private keys into a single (potentially long-lived)
structure.
(Contributed by Giampaolo Rodolà; bpo-8807.)
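A minimal sketch of the pattern, using the modern ssl.create_default_context() helper; mail.example.com is a placeholder, so the connection call itself is left commented out:

```python
import ssl

# Build one SSLContext holding certificates and verification settings;
# it can then be shared across many POP3_SSL connections.
ctx = ssl.create_default_context()

# The connection would be opened like this:
# import poplib
# pop = poplib.POP3_SSL('mail.example.com', context=ctx)

print(isinstance(ctx, ssl.SSLContext))   # True
```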
asyncore
asyncore.dispatcher now provides a
handle_accepted(sock, addr) method
that is called when a connection has actually
been established with a new remote endpoint. It is intended as a
replacement for the old handle_accept() method and saves
the user from having to call accept() directly.
(Contributed by Giampaolo Rodolà; bpo-6706.)
tempfile
The tempfile module has a new context manager,
TemporaryDirectory, which provides easy deterministic
cleanup of temporary directories:
with tempfile.TemporaryDirectory() as tmpdirname:
print('created temporary dir:', tmpdirname)
(Contributed by Neil Schemenauer and Nick Coghlan; bpo-5178.)
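The deterministic cleanup can be verified directly; a short sketch (file names invented for the demo):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmpdirname:
    # Scratch files created inside the directory live only as long
    # as the with-block does.
    path = os.path.join(tmpdirname, 'scratch.txt')
    with open(path, 'w') as f:
        f.write('temporary data')
    existed = os.path.exists(path)

print(existed)                       # True while inside the block
print(os.path.exists(tmpdirname))    # False: tree removed on exit
```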
inspect
The inspect module has a new function
getgeneratorstate() to easily identify the current state of a
generator-iterator:
>>> from inspect import getgeneratorstate
>>> def gen():
... yield 'demo'
>>> g = gen()
>>> getgeneratorstate(g)
'GEN_CREATED'
>>> next(g)
'demo'
>>> getgeneratorstate(g)
'GEN_SUSPENDED'
>>> next(g, None)
>>> getgeneratorstate(g)
'GEN_CLOSED'
(Contributed by Rodolpho Eckhardt and Nick Coghlan, bpo-10220.)
To support lookups without the possibility of activating a dynamic attribute,
the inspect module has a new function, getattr_static().
Unlike hasattr(), this is a true read-only search, guaranteed not to
change state while it is searching:
>>> class A:
... @property
... def f(self):
... print('Running')
... return 10
...
>>> a = A()
>>> getattr(a, 'f')
Running
10
>>> inspect.getattr_static(a, 'f')
<property object at 0x1022bd788>
(Contributed by Michael Foord.)
pydoc
The pydoc module now provides a much-improved web server interface, as
well as a new command-line option -b to automatically open a browser window
to display that server.
(Contributed by Ron Adam; bpo-2001.)
dis
The dis module gained two new functions for inspecting code,
code_info() and show_code(). Both provide detailed code
object information for the supplied function, method, source code string or code
object. The former returns a string and the latter prints it:
>>> import dis, random
>>> dis.show_code(random.choice)
Name: choice
Filename: /Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/random.py
Argument count: 2
Kw-only arguments: 0
Number of locals: 3
Stack size: 11
Flags: OPTIMIZED, NEWLOCALS, NOFREE
Constants:
0: 'Choose a random element from a non-empty sequence.'
1: 'Cannot choose from an empty sequence'
Names:
0: _randbelow
1: len
2: ValueError
3: IndexError
Variable names:
0: self
1: seq
2: i
In addition, the dis() function now accepts string arguments
so that the common idiom dis(compile(s, '', 'eval')) can be shortened
to dis(s):
>>> dis('3*x+1 if x%2==1 else x//2')
1 0 LOAD_NAME 0 (x)
3 LOAD_CONST 0 (2)
6 BINARY_MODULO
7 LOAD_CONST 1 (1)
10 COMPARE_OP 2 (==)
13 POP_JUMP_IF_FALSE 28
16 LOAD_CONST 2 (3)
19 LOAD_NAME 0 (x)
22 BINARY_MULTIPLY
23 LOAD_CONST 1 (1)
26 BINARY_ADD
27 RETURN_VALUE
>> 28 LOAD_NAME 0 (x)
31 LOAD_CONST 0 (2)
34 BINARY_FLOOR_DIVIDE
35 RETURN_VALUE
Taken together, these improvements make it easier to explore how CPython is
implemented and to see for yourself what the language syntax does
under the hood.
(Contributed by Nick Coghlan in bpo-9147.)
dbm
All database modules now support the get() and setdefault() methods.
(Suggested by Ray Allen in bpo-9523.)
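A short sketch with the pure-Python dbm.dumb backend (file names invented for the demo):

```python
import os
import tempfile
from dbm import dumb   # pure-Python backend, available everywhere

with tempfile.TemporaryDirectory() as tmp:
    db = dumb.open(os.path.join(tmp, 'demo'), 'c')
    db[b'color'] = b'blue'

    color = db.get(b'color')                # b'blue'
    missing = db.get(b'shape', b'unknown')  # default instead of KeyError
    size = db.setdefault(b'size', b'10')    # stores and returns the default
    stored = db[b'size']                    # b'10'
    db.close()

print(color, missing, size, stored)
```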
site
The site module has three new functions useful for reporting on the
details of a given Python installation.
>>> import site
>>> site.getsitepackages()
['/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packages',
'/Library/Frameworks/Python.framework/Versions/3.2/lib/site-python',
'/Library/Python/3.2/site-packages']
>>> site.getuserbase()
'/Users/raymondhettinger/Library/Python/3.2'
>>> site.getusersitepackages()
'/Users/raymondhettinger/Library/Python/3.2/lib/python/site-packages'
Conveniently, some of site’s functionality is accessible directly from the
command-line:
$ python -m site --user-base
/Users/raymondhettinger/.local
$ python -m site --user-site
/Users/raymondhettinger/.local/lib/python3.2/site-packages
(Contributed by Tarek Ziadé in bpo-6693.)
sysconfig
The new sysconfig module makes it straightforward to discover
installation paths and configuration variables that vary across platforms and
installations.
The module offers simple access functions for platform and version
information.
It also provides access to the paths and variables corresponding to one of
seven named schemes used by distutils. Those include posix_prefix,
posix_home, posix_user, nt, nt_user, os2, os2_home:
- get_paths() makes a dictionary containing installation paths
for the current installation scheme.
- get_config_vars() returns a dictionary of platform-specific
variables.
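These accessors can be exercised on any installation (the exact values printed depend on the platform):

```python
import sysconfig

# Version and platform accessors
print(sysconfig.get_python_version())   # e.g. '3.2'
print(sysconfig.get_platform())         # e.g. 'linux-x86_64' or 'win32'

# get_paths(): installation paths for the current scheme
paths = sysconfig.get_paths()
print('stdlib' in paths and 'purelib' in paths)   # True

# get_config_vars(): platform configuration variables
config = sysconfig.get_config_vars()
print('prefix' in config)                         # True
```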
There is also a convenient command-line interface:
C:\Python32>python -m sysconfig
Platform: "win32"
Python version: "3.2"
Current installation scheme: "nt"
Paths:
data = "C:\Python32"
include = "C:\Python32\Include"
platinclude = "C:\Python32\Include"
platlib = "C:\Python32\Lib\site-packages"
platstdlib = "C:\Python32\Lib"
purelib = "C:\Python32\Lib\site-packages"
scripts = "C:\Python32\Scripts"
stdlib = "C:\Python32\Lib"
Variables:
BINDIR = "C:\Python32"
BINLIBDEST = "C:\Python32\Lib"
EXE = ".exe"
INCLUDEPY = "C:\Python32\Include"
LIBDEST = "C:\Python32\Lib"
SO = ".pyd"
VERSION = "32"
abiflags = ""
base = "C:\Python32"
exec_prefix = "C:\Python32"
platbase = "C:\Python32"
prefix = "C:\Python32"
projectbase = "C:\Python32"
py_version = "3.2"
py_version_nodot = "32"
py_version_short = "3.2"
srcdir = "C:\Python32"
userbase = "C:\Documents and Settings\Raymond\Application Data\Python"
(Moved out of Distutils by Tarek Ziadé.)
pdb
The pdb debugger module gained a number of usability improvements:
- pdb.py now has a -c option that executes commands as given in a
.pdbrc script file.
- A .pdbrc script file can contain continue and next commands
that continue debugging.
- The Pdb class constructor now accepts a nosigint argument.
- New commands: l(list), ll(long list) and source for
listing source code.
- New commands: display and undisplay for showing or hiding
the value of an expression if it has changed.
- New command: interact for starting an interactive interpreter containing
the global and local names found in the current scope.
- Breakpoints can be cleared by breakpoint number.
(Contributed by Georg Brandl, Antonio Cuni and Ilya Sandler.)
configparser
The configparser module was modified to improve usability and
predictability of the default parser and its supported INI syntax. The old
ConfigParser class was removed in favor of SafeConfigParser
which has in turn been renamed to ConfigParser. Support
for inline comments is now turned off by default and section or option
duplicates are not allowed in a single configuration source.
Config parsers gained a new API based on the mapping protocol:
>>> parser = ConfigParser()
>>> parser.read_string("""
... [DEFAULT]
... location = upper left
... visible = yes
... editable = no
... color = blue
...
... [main]
... title = Main Menu
... color = green
...
... [options]
... title = Options
... """)
>>> parser['main']['color']
'green'
>>> parser['main']['editable']
'no'
>>> section = parser['options']
>>> section['title']
'Options'
>>> section['title'] = 'Options (editable: %(editable)s)'
>>> section['title']
'Options (editable: no)'
The new API is implemented on top of the classical API, so custom parser
subclasses should be able to use it without modifications.
The INI file structure accepted by config parsers can now be customized. Users
can specify alternative option/value delimiters and comment prefixes, change the
name of the DEFAULT section or switch the interpolation syntax.
There is support for pluggable interpolation including an additional interpolation
handler ExtendedInterpolation:
>>> parser = ConfigParser(interpolation=ExtendedInterpolation())
>>> parser.read_dict({'buildout': {'directory': '/home/ambv/zope9'},
... 'custom': {'prefix': '/usr/local'}})
>>> parser.read_string("""
... [buildout]
... parts =
... zope9
... instance
... find-links =
... ${buildout:directory}/downloads/dist
...
... [zope9]
... recipe = plone.recipe.zope9install
... location = /opt/zope
...
... [instance]
... recipe = plone.recipe.zope9instance
... zope9-location = ${zope9:location}
... zope-conf = ${custom:prefix}/etc/zope.conf
... """)
>>> parser['buildout']['find-links']
'\n/home/ambv/zope9/downloads/dist'
>>> parser['instance']['zope-conf']
'/usr/local/etc/zope.conf'
>>> instance = parser['instance']
>>> instance['zope-conf']
'/usr/local/etc/zope.conf'
>>> instance['zope9-location']
'/opt/zope'
A number of smaller features were also introduced, like support for specifying
encoding in read operations, specifying fallback values for get-functions, or
reading directly from dictionaries and strings.
(All changes contributed by Łukasz Langa.)
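The fallback and dictionary-reading features can be combined in a short sketch (section and option names invented for the demo):

```python
from configparser import ConfigParser

parser = ConfigParser()

# read_dict() parses configuration without touching disk
parser.read_dict({'server': {'port': '8080'}})

# get-functions accept a fallback for missing options
host = parser.get('server', 'host', fallback='localhost')
port = parser.getint('server', 'port', fallback=80)
print(host, port)   # localhost 8080
```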
urllib.parse
A number of usability improvements were made for the urllib.parse module.
The urlparse() function now supports IPv6 addresses as described in RFC 2732:
>>> import urllib.parse
>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
ParseResult(scheme='http',
netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',
path='/foo/',
params='',
query='',
fragment='')
The urldefrag() function now returns a named tuple:
>>> r = urllib.parse.urldefrag('http://python.org/about/#target')
>>> r
DefragResult(url='http://python.org/about/', fragment='target')
>>> r[0]
'http://python.org/about/'
>>> r.fragment
'target'
And, the urlencode() function is now much more flexible,
accepting either a string or bytes type for the query argument. If it is a
string, then the safe, encoding, and errors parameters are sent to
quote_plus() for encoding:
>>> urllib.parse.urlencode([
... ('type', 'telenovela'),
... ('name', '¿Dónde Está Elisa?')],
... encoding='latin-1')
'type=telenovela&name=%BFD%F3nde+Est%E1+Elisa%3F'
As detailed in Parsing ASCII Encoded Bytes, all the urllib.parse
functions now accept ASCII-encoded byte strings as input, so long as they are
not mixed with regular strings. If ASCII-encoded byte strings are given as
parameters, the return types will also be ASCII-encoded byte strings:
>>> urllib.parse.urlparse(b'http://www.python.org:80/about/')
ParseResultBytes(scheme=b'http', netloc=b'www.python.org:80',
path=b'/about/', params=b'', query=b'', fragment=b'')
(Work by Nick Coghlan, Dan Mahn, and Senthil Kumaran in bpo-2987,
bpo-5468, and bpo-9873.)
mailbox
Thanks to a concerted effort by R. David Murray, the mailbox module has
been fixed for Python 3.2. The challenge was that mailbox had been originally
designed with a text interface, but email messages are best represented with
bytes because various parts of a message may have different encodings.
The solution harnessed the email package’s binary support for parsing
arbitrary email messages. In addition, the solution required a number of API
changes.
As expected, the add() method for
mailbox.Mailbox objects now accepts binary input.
StringIO and text file input are deprecated. Also, string input
will fail early if non-ASCII characters are used. Previously it would fail when
the email was processed in a later step.
There is also support for binary output. The get_file()
method now returns a file in binary mode (where it used to incorrectly set
the file to text mode). There is also a new get_bytes()
method that returns a bytes representation of a message corresponding
to a given key.
It is still possible to get non-binary output using the old API’s
get_string() method, but that approach
is not very useful. Instead, it is best to extract messages from
a Message object or to load them from binary input.
(Contributed by R. David Murray, with efforts from Steffen Daode Nurpmeso and an
initial patch by Victor Stinner in bpo-9124.)
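The binary round trip can be sketched with a throwaway mbox file (paths and message contents invented for the demo):

```python
import mailbox
import os
import tempfile

raw = b'From: author@example.com\nSubject: hello\n\nmessage body\n'

with tempfile.TemporaryDirectory() as tmp:
    box = mailbox.mbox(os.path.join(tmp, 'sample.mbox'))
    key = box.add(raw)                # binary input is accepted
    data = box.get_bytes(key)         # bytes back out
    subject = box.get_message(key)['Subject']
    box.close()

print(data.startswith(b'From: author@example.com'))   # True
print(subject)                                        # hello
```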
turtledemo
The demonstration code for the turtle module was moved from the Demo
directory to the main library. It includes over a dozen sample scripts with
lively displays. Being on sys.path, it can now be run directly
from the command-line:
$ python -m turtledemo
(Moved from the Demo directory by Alexander Belopolsky in bpo-10199.)