Connect a Dash App to an SQL Database

This documentation is for Dash Enterprise.

Managing Connection Drivers and Libraries

Dash Apps can use open source libraries to query external databases and datastores in callbacks or job queues.

The Python DB-API standardizes the interface for most Python database access modules. By following this standardized API, we can
write Dash apps whose database code can be adapted with little effort to different database types and dialects.
For more information on the Python DB-API, see PEP-0249.
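As a minimal illustration of the DB-API shape, the standard library’s sqlite3 module exposes the same connect()/cursor()/execute() interface that other DB-API drivers follow:

```python
import sqlite3

# Every DB-API 2.0 module exposes the same connect()/cursor()/execute()
# pattern, so switching databases mostly means changing the connect() call
# and the SQL dialect.
connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
cursor.execute("SELECT 1")
row = cursor.fetchone()  # → (1,)
connection.close()
```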

To query databases, you’ll need to install the Python database packages and, in many cases, system-level dependencies.
For complete examples, see the relevant “Database Connections” section in Sample Apps & Templates:

Working with Connection Objects in a Dash App

Many of the database apps follow a similar pattern:

  1. Install the database driver & system-level dependencies. This will vary by database type, and we recommend viewing
    the examples that we’ve provided first. Many of the database drivers are more easily installed in Linux environments,
    so we recommend developing these apps in a Dash Enterprise Workspace.

  2. Create a connection. We recommend using SQLAlchemy if available for your database.

  3. Store your connection’s database password as an environment variable in your App Settings in the Dash Enterprise
    App Manager instead of storing it within code.

  4. Test the connection. Verify that the database server has started successfully, that it is accepting connections, and that the
    credentials and connection string are valid. We can create a simple try_connection function that sends a basic query to the
    database and checks whether it succeeds. If it fails, the connection is retried after an exponentially increasing delay.
    This lets us distinguish errors arising from issues with the callback logic from errors caused by database configuration
    or connectivity problems. E.g.:

from sqlalchemy import text
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(wait=wait_exponential(multiplier=2, min=1, max=10), stop=stop_after_attempt(5))
def try_connection():
    try:
        with postgres_engine.connect() as connection:
            stmt = text("SELECT 1")
            connection.execute(stmt)
        print("Connection to database successful.")
    except Exception as e:
        print("Connection to database failed, retrying.")
        raise Exception("Database connection failed.") from e
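Steps 2 and 3 can be combined by reading the password from the environment when building the connection URL. A minimal sketch, assuming the variable is named DATABASE_PASSWORD (the variable name, user, host, and database names below are illustrative):

```python
import os

# The password is set as an environment variable in App Settings in the
# Dash Enterprise App Manager; the fallback here is only for illustration.
password = os.getenv("DATABASE_PASSWORD", "example-password")

# Hypothetical user, host, and database names.
database_url = f"postgresql+psycopg2://dash_user:{password}@db-host:5432/dash_db"
```

The resulting database_url can then be passed to sqlalchemy.create_engine.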
  5. Consider connection pooling.
    In the examples above, we don’t use database connection pools. Connection pools have tricky implications when used
    alongside celery or gunicorn’s --preload option. If you aren’t using --preload or celery, then you can improve
    your query performance by using a connection pool. With a connection pool, connections are shared instead of recreated and discarded.
    It is possible to use --preload and celery with a connection pool, but it requires extra caution. For more details, see:

To disable connection pooling, you will use the NullPool:

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# database_url is your database's connection URL
engine = create_engine(database_url, poolclass=NullPool)
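Conversely, when pooling is safe for your deployment, the pool can be tuned when creating the engine. A sketch with illustrative settings; the in-memory SQLite URL only stands in for a real database URL:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# QueuePool is SQLAlchemy's default pool; the settings below are illustrative.
engine = create_engine(
    "sqlite://",          # stand-in URL; use your real database URL
    poolclass=QueuePool,
    pool_size=5,          # connections kept open in the pool
    max_overflow=10,      # extra connections allowed under load
    pool_recycle=3600,    # recycle connections older than an hour (seconds)
)
```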
  6. Construct your SQL query. You have several options here:

    • Use raw SQL commands. This is the low-level option and great if you want full control & transparency and are
      comfortable with SQL. We recommend using the text
      function in sqlalchemy and parameterized queries. This will avoid SQL injection issues:

      import pandas as pd
      from sqlalchemy import text

      t = text("SELECT * FROM users WHERE id=:user_id")
      result = pd.read_sql(t, engine, params={"user_id": 12})
    • Use the Pythonic classes available with sqlalchemy. For examples, see SQLAlchemy — Python Tutorial

    • Use ibis, a library that uses SQLAlchemy under the hood but has a more Pandas-like interface.

  7. Open a connection when you need to perform an operation and close it after. In sqlalchemy, this can be done with the with engine.connect() clause.
    While a NullPool does this for us by default, we can also use the Engine object to implicitly open and close connections
    on an ad hoc basis for this and other pooling implementations. Enclosed in the with context statement, the
    connection object generated by the Engine.connect() method is automatically closed by Connection.close() at the
    end of the code block, e.g.

with mssql_engine.connect() as connection:
    countries = pd.read_sql('SELECT DISTINCT location FROM data', connection)
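The open-query-close pattern above can be exercised end to end with an in-memory SQLite database standing in for a real server; the table name and rows below are illustrative:

```python
import pandas as pd
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for a real database server here.
engine = create_engine("sqlite://")

# Seed an illustrative table; engine.begin() commits on exit.
with engine.begin() as connection:
    connection.execute(text("CREATE TABLE data (location TEXT)"))
    connection.execute(
        text("INSERT INTO data (location) VALUES ('France'), ('France'), ('Spain')")
    )

def distinct_locations():
    # The connection is opened for this query only and closed
    # automatically when the `with` block exits.
    with engine.connect() as connection:
        return pd.read_sql(text("SELECT DISTINCT location FROM data"), connection)

df = distinct_locations()
```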

Database URLs

As we saw earlier when creating our SQLAlchemy Engine object, a database URL must be supplied to specify the
dialect and driver we will use to connect to our database. It can be passed to the pool, or directly to the Engine
object to use a default pool.


# For PostgreSQL
engine = create_engine("postgresql+psycopg2://username:password@host:port/database")
For more information on dialects and drivers, see here.
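Database URLs follow the form dialect+driver://username:password@host:port/database. SQLAlchemy’s make_url helper can parse a URL into its components without connecting; the credentials and host below are illustrative:

```python
from sqlalchemy.engine import make_url

# Parse an illustrative URL into its parts; no database connection is made.
url = make_url("postgresql+psycopg2://scott:tiger@localhost:5432/mydb")

print(url.drivername)  # "postgresql+psycopg2": dialect plus DBAPI driver
print(url.host, url.port, url.database)
```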