Keyword arguments that are specific to the SQLAlchemy psycopg2 dialect may be passed to _sa.create_engine(), and include the following:
isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect. This option sets the default isolation level for the connection that is set immediately upon connection to the database before the connection is pooled. This option is generally superseded by the more modern :paramref:`_engine.Connection.execution_options.isolation_level` execution option, detailed at :ref:`dbapi_autocommit`.
client_encoding: sets the client encoding in a libpq-agnostic way, using psycopg2's set_client_encoding() method.
use_native_unicode: Under Python 2 only, this can be set to False to disable the use of psycopg2's native Unicode support.
executemany_mode, executemany_batch_page_size, executemany_values_page_size: Allows use of psycopg2 extensions for optimizing "executemany"-style queries. See the referenced section below for details.
Tip
The above keyword arguments are dialect keyword arguments, meaning that they are passed as explicit keyword arguments to _sa.create_engine():
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="SERIALIZABLE",
)
These should not be confused with DBAPI connect arguments, which are passed as part of the :paramref:`_sa.create_engine.connect_args` dictionary and/or are passed in the URL query string, as detailed in the section :ref:`custom_dbapi_args`.
The psycopg2 module has a connection argument named sslmode for controlling its behavior regarding secure (SSL) connections. The default is sslmode=prefer; it will attempt an SSL connection and if that fails it will fall back to an unencrypted connection. sslmode=require may be used to ensure that only secure connections are established. Consult the psycopg2 / libpq documentation for further options that are available.
Note that sslmode is specific to psycopg2 so it is included in the connection URI:
engine = sa.create_engine(
    "postgresql+psycopg2://scott:tiger@192.168.0.199:5432/test?sslmode=require"
)
psycopg2 supports connecting via Unix domain sockets. When the host portion of the URL is omitted, SQLAlchemy passes None to psycopg2, which specifies Unix-domain communication rather than TCP/IP communication:
create_engine("postgresql+psycopg2://user:password@/dbname")
By default, psycopg2 connects to a Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:
create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")
psycopg2 supports multiple connection points in the connection string. When the host parameter is used multiple times in the query section of the URL, SQLAlchemy combines the host and port information provided into a single connection string that is passed to psycopg2:
create_engine( "postgresql+psycopg2://user:password@/dbname?host=HostA:port1&host=HostB&host=HostC" )
A connection to each host is then attempted until either a connection is successful or all connections are unsuccessful, in which case an error is raised.
The psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the libpq client library, which by default indicates to connect to a localhost PostgreSQL database that is open for "trust" connections. This behavior can be further tailored using a particular set of environment variables which are prefixed with PG (such as PGHOST or PGUSER), which are consumed by libpq to take the place of any or all elements of the connection string.
For this form, the URL can be passed without any elements other than the initial scheme:
engine = create_engine('postgresql+psycopg2://')
In the above form, a blank "dsn" string is passed to the psycopg2.connect() function which in turn represents an empty DSN passed to libpq.
See Also
Environment Variables - PostgreSQL documentation on how to use PG-prefixed environment variables for connections.
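For illustration, the standard libpq environment variables can be set before the engine is created; the host, database, and credential values below are placeholders only:

```python
import os

# libpq consults these standard environment variables when connection
# string elements are absent.  Values here are examples, not defaults.
os.environ["PGHOST"] = "localhost"
os.environ["PGPORT"] = "5432"
os.environ["PGDATABASE"] = "test"
os.environ["PGUSER"] = "scott"
os.environ["PGPASSWORD"] = "tiger"

# With the variables above in place, the URL needs only the scheme:
# engine = create_engine("postgresql+psycopg2://")
```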
The following DBAPI-specific options are respected when used with _engine.Connection.execution_options, .Executable.execution_options, and _query.Query.execution_options, in addition to those not specific to DBAPIs:
isolation_level - Set the transaction isolation level for the lifespan of a _engine.Connection (can only be set on a connection, not a statement or query). See :ref:`psycopg2_isolation_level`.
stream_results - Enable or disable usage of psycopg2 server side cursors - this feature makes use of "named" cursors in combination with special result handling methods so that result rows are not fully buffered. Defaults to False, meaning cursors are buffered by default.
max_row_buffer - when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time. This is interpreted by the .BufferedRowCursorResult, and if omitted the buffer will grow to ultimately store 1000 rows at a time.
Changed in version 1.4: The max_row_buffer size can now be greater than 1000, and the buffer will grow to that size.
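The buffering idea behind max_row_buffer can be pictured with a simplified sketch; this is not SQLAlchemy's actual BufferedRowCursorResult implementation, and the initial size and growth policy below are assumptions for illustration only:

```python
def buffered_rows(cursor, max_row_buffer=1000, initial=10, growth_factor=5):
    """Yield rows from a DBAPI-style cursor, fetching in growing batches.

    Illustrative only: the real BufferedRowCursorResult manages its
    buffer internally; the growth policy here is a stand-in.
    """
    size = initial
    while True:
        batch = cursor.fetchmany(size)
        if not batch:
            break
        yield from batch
        # grow the fetch size toward the configured ceiling
        size = min(size * growth_factor, max_row_buffer)


# Usage with a stand-in cursor object (no database required):
class FakeCursor:
    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, n):
        batch, self._rows = self._rows[:n], self._rows[n:]
        return batch


rows = list(buffered_rows(FakeCursor(range(25)), max_row_buffer=20))
```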
Modern versions of psycopg2 include a feature known as Fast Execution Helpers, which have been shown in benchmarking to improve psycopg2's executemany() performance, primarily with INSERT statements, by multiple orders of magnitude. SQLAlchemy internally makes use of these extensions for executemany() style calls, which correspond to lists of parameters being passed to _engine.Connection.execute as detailed in :ref:`multiple parameter sets <execute_multiple>`. The ORM also uses this mode internally whenever possible.
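The DBAPI-level executemany() call shape that these helpers optimize can be sketched as follows; the stdlib sqlite3 DBAPI is used here only so that the example is self-contained, but the pattern of passing a list of parameter sets is the same one psycopg2 receives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# A list of parameter sets triggers the DBAPI's executemany() path,
# which is what the psycopg2 fast execution helpers accelerate.
params = [(1, "spongebob"), (2, "sandy"), (3, "patrick")]
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)", params)

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```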
The two available extensions on the psycopg2 side are the execute_values() and execute_batch() functions. The psycopg2 dialect defaults to using the execute_values() extension for all qualifying INSERT statements.
The use of these extensions is controlled by the executemany_mode flag which may be passed to _sa.create_engine:
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values_plus_batch',
)
Possible options for executemany_mode include:
'values_only' - this is the default value. The psycopg2 execute_values() extension is used for qualifying INSERT statements, which rewrites the INSERT to include multiple VALUES clauses so that many parameter sets can be inserted with one statement.
New in version 1.4: Added "values_only" setting for executemany_mode which is also now the default.
None - No psycopg2 extensions are used, and the usual cursor.executemany() method is used when invoking statements with multiple parameter sets.
'batch' - Uses psycopg2.extras.execute_batch for all qualifying INSERT, UPDATE and DELETE statements, so that multiple copies of a SQL query, each one corresponding to a parameter set passed to executemany(), are joined into a single SQL string separated by a semicolon. When using this mode, the _engine.CursorResult.rowcount attribute will not contain a value for executemany-style executions.
'values_plus_batch' - execute_values() is used for qualifying INSERT statements, execute_batch() is used for UPDATE and DELETE. When using this mode, the _engine.CursorResult.rowcount attribute will not contain a value for executemany-style executions against UPDATE and DELETE statements.
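Conceptually, the two extensions rewrite the SQL differently. The following pure-Python functions are a rough illustration of the resulting SQL strings only; psycopg2's actual implementations also handle quoting, escaping, and paging:

```python
def values_style(insert_stmt, params):
    """execute_values-style: one INSERT with multiple VALUES groups.

    Simplified sketch; real parameter rendering is done by psycopg2.
    """
    groups = ", ".join(
        "(%s)" % ", ".join(str(v) for v in row) for row in params
    )
    return insert_stmt.replace("VALUES %s", "VALUES " + groups)


def batch_style(stmt_template, params):
    """execute_batch-style: semicolon-joined copies of the statement."""
    return "; ".join(stmt_template % row for row in params)


sql_values = values_style("INSERT INTO t (a, b) VALUES %s", [(1, 10), (2, 20)])
# "INSERT INTO t (a, b) VALUES (1, 10), (2, 20)"

sql_batch = batch_style("UPDATE t SET b = %s WHERE a = %s", [(10, 1), (20, 2)])
# "UPDATE t SET b = 10 WHERE a = 1; UPDATE t SET b = 20 WHERE a = 2"
```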
By "qualifying statements", we mean that the statement being executed must be a Core _expression.insert, _expression.update or _expression.delete construct, and not a plain textual SQL string or one constructed using _expression.text. When using the ORM, all insert/update/delete statements used by the ORM flush process are qualifying.
The "page size" for the "values" and "batch" strategies can be affected by using the executemany_batch_page_size and executemany_values_page_size engine parameters. These control how many parameter sets should be represented in each execution. The "values" page size defaults to 1000, which is different from psycopg2's default. The "batch" page size defaults to 100. These can be affected by passing new values to _engine.create_engine:
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values_only',
    executemany_values_page_size=10000,
    executemany_batch_page_size=500,
)
See Also
:ref:`execute_multiple` - General information on using the _engine.Connection object to execute statements in such a way as to make use of the DBAPI .executemany() method.
The psycopg2 DBAPI driver supports Unicode data transparently. Under Python 2 only, the SQLAlchemy psycopg2 dialect will enable the psycopg2.extensions.UNICODE extension by default to ensure Unicode is handled properly; under Python 3, this is psycopg2's default behavior.
The client character encoding can be controlled for the psycopg2 dialect in the following ways:
For PostgreSQL 9.1 and above, the client_encoding parameter may be passed in the database URL; this parameter is consumed by the underlying libpq PostgreSQL client library:
engine = create_engine("postgresql+psycopg2://user:pass@host/dbname?client_encoding=utf8")
Alternatively, the above client_encoding value may be passed using :paramref:`_sa.create_engine.connect_args` for programmatic establishment with libpq:
engine = create_engine(
    "postgresql+psycopg2://user:pass@host/dbname",
    connect_args={'client_encoding': 'utf8'},
)
For all PostgreSQL versions, psycopg2 supports a client-side encoding value that will be passed to database connections when they are first established. The SQLAlchemy psycopg2 dialect supports this using the client_encoding parameter passed to _sa.create_engine:
engine = create_engine(
    "postgresql+psycopg2://user:pass@host/dbname",
    client_encoding="utf8",
)
Tip
The above client_encoding parameter admittedly is very similar in appearance to usage of the parameter within the :paramref:`_sa.create_engine.connect_args` dictionary; the difference above is that the parameter is consumed by psycopg2 and is passed to the database connection using SET client_encoding TO 'utf8'; in the previously mentioned style, the parameter is instead passed through psycopg2 and consumed by the libpq library.
A common way to set up client encoding with PostgreSQL databases is to ensure it is configured within the server-side postgresql.conf file; this is the recommended way to set encoding for a server that is consistently of one encoding in all databases:
# postgresql.conf file

# client_encoding = sql_ascii  # actually, defaults to database encoding
client_encoding = utf8
Under Python 2 only, SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don't fully support unicode directly. Passing use_native_unicode=False to _sa.create_engine will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the _sa.create_engine encoding parameter, which defaults to utf-8.
SQLAlchemy's own unicode encode/decode functionality is steadily becoming
obsolete as most DBAPIs now support unicode fully.
The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.
As discussed in :ref:`postgresql_isolation_level`, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to _sa.create_engine, as well as the isolation_level argument used by _engine.Connection.execution_options. When using the psycopg2 dialect, these options make use of psycopg2's set_isolation_level() connection method, rather than emitting a PostgreSQL directive; this is because psycopg2's API-level setting is always emitted at the start of each transaction in any case.
The psycopg2 dialect supports these constants for isolation level:

READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT
The psycopg2 dialect will log PostgreSQL NOTICE messages via the sqlalchemy.dialects.postgresql logger. When this logger is set to the logging.INFO level, notice messages will be logged:
import logging

logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
Above, it is assumed that logging is configured externally. If this is not the case, configuration such as logging.basicConfig() must be utilized:
import logging

logging.basicConfig()  # log messages to stdout
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
See Also
Logging HOWTO - on the python.org website
The psycopg2 DBAPI includes an extension to natively handle marshalling of the HSTORE type. The SQLAlchemy psycopg2 dialect will enable this extension by default when psycopg2 version 2.4 or greater is used, and it is detected that the target database has the HSTORE type set up for use. In other words, when the dialect makes the first connection, a sequence like the following is performed:

Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function will not be called if the version of psycopg2 installed is less than version 2.4.

If the use_native_hstore flag is at its default of True, and we've detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.
The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:
engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test", use_native_hstore=False)
The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used. It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy's own marshalling logic, and not that of psycopg2 which may be more performant.
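As a rough sketch of what that marshalling involves, the textual HSTORE format pairs quoted keys and values with => separators. The minimal encoder below is illustrative only, and ignores the escaping and NULL-handling edge cases that psycopg2 and SQLAlchemy handle properly:

```python
def dict_to_hstore(d):
    """Render a Python dict in textual HSTORE form, e.g. '"k"=>"v"'.

    Simplified sketch: real marshalling also escapes quotes and
    backslashes and represents None values as NULL.
    """
    return ", ".join('"%s"=>"%s"' % (k, v) for k, v in d.items())


hstore_text = dict_to_hstore({"pet": "dog", "color": "brown"})
```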