This module is a generic place used to hold little helper functions and classes until a better place in the distribution is found.
The dict cursors allow access to the retrieved records through an interface similar to Python dictionaries instead of tuples. You can use them either by passing DictConnection as the connection_factory argument to the connect() function, or by passing DictCursor as the cursor_factory argument to the cursor() method of a regular connection.
>>> dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
>>> dict_cur.execute("INSERT INTO test (num, data) VALUES(%s, %s)",
... (100, "abc'def"))
>>> dict_cur.execute("SELECT * FROM test")
>>> rec = dict_cur.fetchone()
>>> rec['id']
1
>>> rec['num']
100
>>> rec['data']
"abc'def"
The records still support indexing, as the original tuples do:
>>> rec[2]
"abc'def"
A cursor that uses a real dict as the base type for rows.
Note that this cursor is extremely specialized and does not allow the normal access (using integer indices) to fetched data. If you need to access database rows both as a dictionary and a list, then use the generic DictCursor instead of RealDictCursor.
A connection that logs all queries to a file or logger object.
Initialize the connection to log to logobj.
The logobj parameter can be an open file object or a Logger instance from the standard logging module.
Filter the query before logging it.
This is the method to override to filter unwanted queries out of the log or to add extra data to the output. The default implementation just does nothing.
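As an illustration of the kind of logic such an override might contain, here is a standalone sketch of a query filter that skips noisy session-setup statements and tags the rest. The function name and the tag are assumptions for the example, not part of the psycopg2 API:

```python
def filter_query(msg):
    """Sketch of filtering logic for logged queries.

    Returning None here stands for "do not log this query";
    any other return value is what would be written to the log.
    """
    # Skip session-setup noise such as SET/SHOW statements.
    if msg.strip().upper().startswith(("SET ", "SHOW ")):
        return None
    # Tag the remaining queries so they are easy to grep for.
    return "app-query: " + msg


print(filter_query("SET search_path TO public"))  # None: filtered out
print(filter_query("SELECT * FROM test"))         # tagged log entry
```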
A connection that logs queries based on execution time.
This is just an example of how to subclass LoggingConnection to provide some extra filtering for the logged queries. Both the initialize() and filter() methods are overridden to make sure that only queries executing for more than mintime ms are logged.
Note that this connection uses the specialized cursor MinTimeLoggingCursor.
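The threshold check itself is simple: a query is logged only when its elapsed time exceeds mintime milliseconds. A standalone sketch of that check (the function name and formatting are illustrative, not the psycopg2 internals):

```python
def filter_by_time(msg, elapsed_ms, mintime_ms=100):
    """Return a log entry only for queries slower than mintime_ms.

    None means the query is fast enough and is not logged.
    """
    if elapsed_ms <= mintime_ms:
        return None                       # fast query: not logged
    return "%s (%d ms)" % (msg, elapsed_ms)


print(filter_by_time("SELECT 1", 3))              # None: too fast to log
print(filter_by_time("SELECT pg_sleep(1)", 1004)) # slow query: logged
```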
New in version 2.0.9.
Changed in version 2.0.13: added UUID array support.
>>> psycopg2.extras.register_uuid()
<psycopg2._psycopg.type object at 0x...>
>>> # Python UUID can be used in SQL queries
>>> import uuid
>>> my_uuid = uuid.UUID('{12345678-1234-5678-1234-567812345678}')
>>> psycopg2.extensions.adapt(my_uuid).getquoted()
"'12345678-1234-5678-1234-567812345678'::uuid"
>>> # PostgreSQL UUIDs are converted into Python UUID objects.
>>> cur.execute("SELECT 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid")
>>> cur.fetchone()[0]
UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')
New in version 2.0.9.
>>> psycopg2.extras.register_inet()
<psycopg2._psycopg.type object at 0x...>
>>> cur.mogrify("SELECT %s", (Inet('127.0.0.1/32'),))
"SELECT E'127.0.0.1/32'::inet"
>>> cur.execute("SELECT '192.168.0.1/24'::inet")
>>> cur.fetchone()[0].addr
'192.168.0.1/24'
Wrap a string to allow for correct SQL-quoting of inet values.
Note that this adapter does NOT check the passed value to make sure it really is an inet-compatible address but DOES call adapt() on it to make sure it is impossible to execute an SQL-injection by passing an evil value to the initializer.
Register alternate type caster for TIMESTAMP WITH TIME ZONE.
The Python datetime module cannot handle time zones with seconds in the UTC offset. There are, however, historical “time zones” which contain such offsets, e.g. “Asia/Calcutta”. In many cases those offsets represent true local time.
If you encounter “unable to parse time” on a perfectly valid timestamp you likely want to try this type caster. It truncates the seconds from the time zone data and retries casting the timestamp. Note that this will generate timestamps which are inaccurate by the number of seconds truncated (unless the seconds were 00).
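The truncation step described above can be sketched in plain Python. The helper name and regular expression below are assumptions for illustration, not the caster's actual code; the point is that dropping the trailing seconds from the offset makes the timestamp parseable, at the cost of the accuracy noted above:

```python
import re
from datetime import datetime


def truncate_tz_seconds(ts):
    """Drop the seconds field from a UTC offset such as '+05:21:10'.

    Mirrors what the type caster described above does before retrying
    the cast. The result is inaccurate by the seconds dropped.
    """
    return re.sub(r"([+-]\d{2}:\d{2}):\d{2}$", r"\1", ts)


raw = "2009-05-03 10:30:45+05:21:10"   # historical-style offset with seconds
clean = truncate_tz_seconds(raw)        # '2009-05-03 10:30:45+05:21'
dt = datetime.strptime(clean, "%Y-%m-%d %H:%M:%S%z")
print(dt.utcoffset())                   # offset now parses cleanly
```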
New in version 2.0.9.